MikeKk Posted July 22, 2014

Hi, I'm currently building a model that involves a very large queue of entities being read into the model and dynamically searched. I've attempted to limit the number of entities in the model through several techniques: reading in only relevant entities at any one time (from a CSV file), limiting the search ranges to distinct queues (rather than searching entire entity populations), and offsetting the reading and searching intervals (so they don't fall within the same time-stamps and slow the model). Despite this, the model still runs rather slowly and (depending on variability) may still breach the 20,000-entities-in-system mark.

I'm curious how anyone else has handled large numbers of entities in the past. Is there a way to disable the 20k entities-in-system warning? Are there more efficient ways of generating and searching entities? Any feedback would be great.

Michael
dsturrock Posted July 22, 2014

The 20,000 entities message, like the "maximum number in system" message, is just a warning, not a limit. It is there to let you know of a potential problem that could cause an out-of-memory crash. If it is expected behavior and your computer's memory can handle it, then simply check "Don't show me this again" and press "Continue running", and you will never see it again in that model.

Regarding model speed: how many entities are you searching, how complex is the search, and how often are you repeating it? Frequent complex searches across lots of entities will certainly slow execution. Are there other ways to approach your problem? Perhaps use arrays and Find instead of Search?

You also mention "reading". Reading from a file (or worse, Excel or a database) during a run can be painfully slow. If possible, replace that with data in a table that is "read" from memory rather than requiring a slow disk access and possibly a context shift.
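To illustrate the array/Find idea in plain Python (this is not Simio syntax, and the entity fields and criteria below are hypothetical): instead of scanning the whole population every time a match is needed, group entities by their match criteria once, so each subsequent lookup is a cheap dictionary access rather than a full linear search.

```python
from collections import defaultdict

# Hypothetical entity population: 20,000 records with two match criteria.
entities = [{"id": i, "type": i % 5, "priority": i % 3} for i in range(20_000)]

def linear_search(pool, etype, prio):
    """Repeated full scan: O(n) per lookup, like searching a whole population."""
    candidates = [e for e in pool if e["type"] == etype and e["priority"] == prio]
    return min(candidates, key=lambda e: e["id"]) if candidates else None

# Build the index once; entities within a bucket keep their arrival order.
index = defaultdict(list)
for e in entities:
    index[(e["type"], e["priority"])].append(e)

def indexed_search(etype, prio):
    """Constant-time bucket lookup instead of a scan over all entities."""
    bucket = index.get((etype, prio))
    return bucket[0] if bucket else None
```

Both functions return the same first match; the indexed version just avoids re-scanning 20,000 entities on every query, which is where frequent complex searches lose their time.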
MikeKk Posted July 22, 2014 Author

Entity information is read in from the CSV file in batches of 300, after an initial read-in of 600, whenever the queue within the model falls to a certain level (150); this lowers the number of simultaneous read-in cycles. The rate at which this queue depopulates is a key property of the model and may vary greatly, as does the order in which entities are processed.

There are several searches of this entity queue, both on match criteria and ordered by a search expression. These searches are conducted frequently but are offset by small periods to prevent them from occurring simultaneously. Searches that fail to find suitable candidates wait for queue-entered events (which are also offset after the triggering event) before re-searching the queue.

A large queue is necessary because the matching criteria may vary, and without a large population in the model many searches would return no results.
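In plain Python (not Simio syntax; the batch sizes mirror those described above, while the data source and function names are hypothetical stand-ins), the threshold-based replenishment logic might be sketched as:

```python
from collections import deque

INITIAL_READ = 600   # initial read-in
BATCH_SIZE = 300     # subsequent batch size
LOW_WATERMARK = 150  # refill trigger level

source = iter(range(5_000))  # stand-in for rows streamed from the CSV file

def read_batch(n):
    """Pull up to n records from the source (one read-in cycle)."""
    return [row for _, row in zip(range(n), source)]

queue = deque(read_batch(INITIAL_READ))

def pop_and_replenish():
    """Remove one entity; trigger a batch read only when the queue runs low."""
    entity = queue.popleft()
    if len(queue) < LOW_WATERMARK:
        queue.extend(read_batch(BATCH_SIZE))
    return entity
```

The point of the low watermark is that the expensive read happens once per 300 departures rather than once per entity, so read-in cycles stay infrequent even when the depopulation rate varies.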