cliffc
Posted January 3, 2014

What is the best way to measure simulation time using Cadence Incisive? I would like to run some same-tool benchmarks to measure the simulation performance improvements gained from different coding styles or tricks, always using the same simulator. I do not intend to do cross-tool benchmarking; only same-tool benchmarking. I will check whether techniques that help with one tool also cause similar performance improvements with other tools, but I will not report actual speed differences between the tools (in accordance with my tool-usage agreements with multiple vendors).

I am trying to identify best-performance coding practices. I do not have a current Incisive license, so I will ask colleagues to run the simulations for me.

Regards - Cliff Cummings
Verilog & SystemVerilog Guru
georg-glaeser
Posted January 7, 2014

Hi Cliff,

INCISIV has an integrated profiler. You can switch it on by passing the -prof option to irun; it will then generate a file called ncprof.out containing performance information about your simulation. The profiler does degrade the speed of your simulation somewhat, but for comparing different approaches in the same simulator it should be fine.

Regards
Georg
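As a minimal sketch of Georg's suggestion (the testbench file name below is a placeholder, not from the thread; only the -prof option and the ncprof.out file name come from his post):

    # Compile, elaborate, and simulate with the integrated profiler enabled;
    # profiling results are written to ncprof.out in the run directory.
    irun -prof tb_top.sv

    # Inspect the report; the most time-consuming constructs should appear
    # near the top.
    head -40 ncprof.out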
uwes
Posted January 7, 2014

hi,

there are various options to:

- learn and track heap memory (tcl: heap -help, or via the GUI)
- understand profiling information (source, memory, randomization) to find "hotspots" (irun -helpsubject profile)
- get the traditional total time and memory consumption for each of the processing steps (irun -status)
- understand algorithm complexity via code coverage, or algorithm quality via functional coverage

(A short sketch of these command-line options is given after this post.)

The issue I see with same-tool benchmarking and making a recommendation out of it is that:

- it is heavily tool-version dependent
- I would assume that weak spots in performance are addressed by the tool and its internal optimizations asap
- code rewrites at the source level to optimize speed might limit optimization options for the tool in the future
- commonly accepted "good" code should run fast on all simulators
- "fast" code doesn't necessarily mean "good" code
- code quality metrics (as in software) are getting more and more important
- the "run fast" vs. "create/debug/maintain" trade-off needs to be considered

/uwe
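A sketch of the command-line options Uwe mentions (the testbench file name is again a placeholder; the -status and -helpsubject flags are taken from his post):

    # Report total time and memory consumption for each processing
    # step (compile, elaborate, simulate).
    irun -status tb_top.sv

    # Show the documentation for the profiling-related options.
    irun -helpsubject profile

    # Heap tracking is available from the simulator's Tcl prompt via
    # "heap -help", or from the GUI, as noted in Uwe's first list.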