c4brian - Posted November 3, 2015

I would like to hear your best practices for ensuring simulation efficiency. I'm open to all ideas; the profiler in my tool is a nightmare to make sense of. I'm especially interested in:

- good coding practices
- checks for incorrect garbage collection, etc. (if such a thing exists)
- etc.

Thanks!
apfitch - Posted November 5, 2015

I was just going to say "use the profiler" :-) The things that have caught us out are:

- repeated construction of sequence items (as opposed to repeatedly randomising one item and cloning it)
- silly constraints (e.g. writing a foreach constraint to set every element of a 2048-element array to its index; it's much faster to do that procedurally in post_randomize() - see the sketch after this post)
- too much logging (of course... sorry if this is obvious). I now set regressions to UVM_LOW as a matter of course.
- queues/mailboxes that grow and never shrink

Generally we've used the profiler to track down problems. Even if you can't get sensible results from call trees etc., sometimes just capacity profiling is good enough to give you a clue.

regards
Alan

P.S. It seems that even though the 80-20 "rule" is really a heuristic, it's generally true.
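To make the first two bullets concrete, here is a minimal sketch. The class, field, and sequence names (bus_item, payload, bus_seq) are hypothetical; it contrasts a per-element foreach constraint with a procedural fill in post_randomize(), and reuses one item across a sequence loop instead of calling type_id::create() every iteration.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical sequence item.
class bus_item extends uvm_sequence_item;
  rand bit [31:0] addr;
  bit [7:0]       payload [2048];   // not rand: filled procedurally below

  `uvm_object_utils(bus_item)

  function new(string name = "bus_item");
    super.new(name);
  endfunction

  // Slow style: 2048 individual constraints for the solver to chew on.
  // constraint c_payload { foreach (payload[i]) payload[i] == i; }

  // Faster style: keep the array out of the solver and fill it here.
  function void post_randomize();
    foreach (payload[i]) payload[i] = i;
  endfunction

  // Manual do_copy() keeps clone() cheap without field-automation macros.
  function void do_copy(uvm_object rhs);
    bus_item rhs_;
    super.do_copy(rhs);
    $cast(rhs_, rhs);
    addr    = rhs_.addr;
    payload = rhs_.payload;
  endfunction
endclass

// Hypothetical sequence: one item, randomized repeatedly, instead of
// constructing a fresh item on every iteration.
class bus_seq extends uvm_sequence #(bus_item);
  `uvm_object_utils(bus_seq)

  function new(string name = "bus_seq");
    super.new(name);
  endfunction

  task body();
    bus_item item;
    item = bus_item::type_id::create("item");   // created once, outside the loop
    repeat (100) begin
      start_item(item);
      if (!item.randomize())
        `uvm_error(get_type_name(), "randomize failed")
      finish_item(item);
      // Anything downstream that stores this handle must clone() its copy,
      // because the same object is re-randomized on the next iteration.
    end
  endtask
endclass
```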
c4brian - Posted November 5, 2015 (Author)

Thanks for the reply.

> repeated construction of sequence items (as opposed to repeatedly randomising one item and cloning it)

This sounds like a good practice for my sequences, and also for my monitors. Then, if a scoreboard needs a copy, let it clone it. Agree?

About some of the other topics: have you been able to quantify your performance gains? For example, you set verbosity from UVM_DEBUG to UVM_LOW and run a regression suite... Short of timing it with a stopwatch, do you SEE the gain somewhere: "ah, the profiler said my simulation took 5B fewer cycles", or "the tool informed me the simulation took 5 minutes less wall-clock time to run"? That black-and-white comparison is what I'm going for, because I'd really like to flesh out how small changes affect my performance (which I should be able to see if I discover any "offenders" of the 80/20).
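A minimal sketch of the monitor/scoreboard side of that idea, reusing the hypothetical bus_item from the earlier sketch: the monitor keeps one item and re-populates it, and any subscriber that wants to keep the transaction clones it in its own write() method. All class and helper names here are illustrative, not from the thread.

```systemverilog
// Hypothetical monitor: one item, reused; subscribers copy what they keep.
class bus_monitor extends uvm_monitor;
  `uvm_component_utils(bus_monitor)

  uvm_analysis_port #(bus_item) ap;
  bus_item item;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    item = bus_item::type_id::create("item");   // constructed once
  endfunction

  // Placeholder for real bus sampling; assumed to block until a
  // transaction completes on the interface.
  task collect_transaction(bus_item tr);
    #10ns;
    tr.addr = $urandom;
  endtask

  task run_phase(uvm_phase phase);
    forever begin
      collect_transaction(item);   // overwrite the same object each time
      ap.write(item);              // subscribers clone if they need to keep it
    end
  endtask
endclass

// Hypothetical scoreboard: clone in write() so the stored copy is not
// corrupted when the monitor re-populates its single item.
class bus_scoreboard extends uvm_subscriber #(bus_item);
  `uvm_component_utils(bus_scoreboard)

  bus_item expected[$];

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void write(bus_item t);
    bus_item copy;
    $cast(copy, t.clone());
    expected.push_back(copy);
  endfunction
endclass
```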
c4brian - Posted November 5, 2015 (Author)

Also, my simulation gets slower the longer it runs. Is this a sign I'm doing something wrong?
apfitch - Posted November 6, 2015

Hi, yes, if you repeatedly randomize the same object you need to be careful to clone it if you want to keep it. But cloning (which invokes copy()) must be more efficient than type_id::create() (I hope!).

I have done stopwatch-style measurements but didn't write them down, other than noticing that logging has an effect. We've also noticed in our setup that Questa's transcript window actually seems to be a bit faster than running on the command line and logging to the terminal - but that could be an artifact of our setup. And of course we only log to the transcript window when doing interactive debugging.

If your simulation gets slower the longer it runs, it might be worth just watching the process in the OS - on Unix you can use vmstat or top to see what's happening. When I've had simulations that got slower as they went on, I've normally found some kind of growth in memory usage, which could be due to stupid constraints, growing queues, or a genuine tool bug.

Sorry it's all a bit wishy-washy...

Alan
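One common source of the kind of memory growth mentioned above is a mailbox that a producer fills faster than the consumer drains. A bounded mailbox applies back-pressure instead of growing without limit; the sketch below is illustrative only (names, bound, and delays are made up), not something from the thread.

```systemverilog
// Minimal sketch: a bounded mailbox blocks the producer when it is full,
// so memory use stays flat instead of growing for the whole run.
module bounded_mailbox_demo;
  mailbox #(int) mbx = new(16);     // bound of 16 entries; new() with no argument is unbounded

  initial begin : producer
    for (int i = 0; i < 1000; i++) begin
      mbx.put(i);                   // blocks once 16 items are pending
    end
  end

  initial begin : consumer
    int v;
    repeat (1000) begin
      mbx.get(v);
      #10;                          // model a slow consumer
    end
    $finish;
  end
endmodule
```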
kirloy - Posted November 9, 2015

> Also, my simulation gets slower the longer it runs. Is this a sign I'm doing something wrong?

Have you tried adding uvm_resource_options::turn_off_auditing() before run_test()? UVM collects a lot of audit data that isn't needed when you're not debugging. In certain cases - when you have lots of uvm_resource database accesses - it can consume a lot of memory.

Your design may also start some activity later in the test, which can slow the simulator down simply because it has more to simulate - e.g. the first part of the test does all the configuration and only then starts transmitting data.

Anyway, the profiler results should give you some clues; you're just blind without them. For example, run up to the point where the simulation is still fast and see what is at the top of the profiler output, then do a full run and see if anything changed. If you have trouble analysing the profiler reports, you can always consult your tool's technical support. Finally, it could be a tool bug; in that case, too, only support can help you.
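A minimal sketch of where that call typically sits - in the top-level initial block, before run_test(). Only the placement is being illustrated; the module name is hypothetical.

```systemverilog
// Disable resource-database auditing before run_test() so UVM stops
// recording per-access audit data during the run.
module tb_top;
  import uvm_pkg::*;
  `include "uvm_macros.svh"

  initial begin
    uvm_resource_options::turn_off_auditing();
    run_test();   // test name taken from +UVM_TESTNAME as usual
  end
endmodule
```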
srivatsa9 - Posted November 12, 2015

Do you have a lot of config_db accesses? If you do, remember the lookup is string-based, which is very expensive. Find a way to minimize your config_db lookups.
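One common way to cut down on lookups is to bundle settings into a single configuration object and fetch it once in build_phase, caching the handle, rather than doing a string-based get() per scalar (and never inside run_phase loops). A minimal sketch, with hypothetical class and field names:

```systemverilog
// Hypothetical config object: one uvm_config_db lookup covers all settings.
class env_cfg extends uvm_object;
  `uvm_object_utils(env_cfg)
  int unsigned num_agents      = 2;
  bit          enable_coverage = 1;
  function new(string name = "env_cfg");
    super.new(name);
  endfunction
endclass

class my_env extends uvm_env;
  `uvm_component_utils(my_env)

  env_cfg cfg;   // handle cached here; the rest of the env just reads fields

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // Single lookup instead of one get() per scalar setting.
    if (!uvm_config_db #(env_cfg)::get(this, "", "cfg", cfg))
      `uvm_fatal(get_type_name(), "no env_cfg found in config_db")
  endfunction
endclass
```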
c4brian - Posted November 13, 2015 (Author)

Anyone know a slick way to determine the number of database accesses that have occurred in a given sim run? I'd love it to print at the end of the sim.