
sas73 (Members, Content Count: 21)

Everything posted by sas73

  1. While trying to learn UVM, and more specifically the use of uvm_sequence_item, I keep running into the same example, which looks something like this:

         class bus_transaction extends uvm_sequence_item;
           rand enum {READ, WRITE} operation;
           rand bit [31:0] addr;
           rand bit [31:0] write_data;
           bit [31:0] read_data;
           ...

     This design pattern raises a number of questions:

     1. To me, read transactions and write transactions are two separate "things" that deserve their own classes (single responsibility principle). The more operations we add to a transaction class, the more time we will also spend randomizing variables not needed by all operations (write_data for READ operations in this case). Tutorials often point out that read_data shouldn't be randomized for that efficiency reason. Read and write are similar transactions, so it may not be a big issue here, but packet protocols can have many vastly different packet types. What is the rationale for this design pattern?

     2. One can also argue that the read response is a separate transaction. I found an explanation saying that input and output data should be in the same class because it enables reuse of the same monitor for both the input and the output of a DUT. This didn't convince me. Agents are configured as active or passive. Why can't that configuration be used by the monitor to determine what type of transactions to publish?

     3. If all interface operations are put in a single class, it's likely to become specific to that interface/driver and there will be no reuse. If there is no reuse, it feels like separating the abstract transaction from the specific interface pin wiggling loses its purpose. Why not let the transaction class own the pin-wiggling knowledge and let the driver be a simple, reusable executor of that knowledge?

     4. If the idea of reusable transactions holds, I would expect to find reusable transaction class libraries providing simple read, write, reset, etc. types of transactions, and that many providers of verification components would support them in addition to their own more specialized transactions. All I found is TLM 2.0, which has read and write (in a single class, just like the example above). Are there any other well supported libraries?
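     What I had in mind for question 1 is something along these lines (an illustrative sketch only; the class and field names are my own, not from any standard library): a base class per interface and one subclass per operation, so each operation randomizes only the fields it needs.

     ```systemverilog
     // Sketch: split read and write into their own sequence-item classes.
     // Assumes uvm_pkg is imported and uvm_macros.svh is included.
     class bus_base_transaction extends uvm_sequence_item;
       rand bit [31:0] addr;              // common to all operations
       `uvm_object_utils(bus_base_transaction)
       function new(string name = "bus_base_transaction");
         super.new(name);
       endfunction
     endclass

     class bus_write_transaction extends bus_base_transaction;
       rand bit [31:0] write_data;        // only writes carry write_data
       `uvm_object_utils(bus_write_transaction)
       function new(string name = "bus_write_transaction");
         super.new(name);
       endfunction
     endclass

     class bus_read_transaction extends bus_base_transaction;
       bit [31:0] read_data;              // response field; never randomized
       `uvm_object_utils(bus_read_transaction)
       function new(string name = "bus_read_transaction");
         super.new(name);
       endfunction
     endclass
     ```

     The driver would then decide what to drive by inspecting the concrete type (e.g. via $cast) instead of switching on an operation enum.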
  2. The list of open source UVM repositories can also provide an answer to question 4. I couldn't find any well supported library of sequence items (a project with many stars). There are many verification components for various bus protocols, but they all have a single sequence item class tailored specifically to that bus. This leads me back to question 3...
  3. The blog post also touches on my third question, although it does not provide an answer.
  4. Looking at open source UVM repositories, I think it is safe to say that keeping all transaction types within the same sequence item class is the most common design pattern (the only pattern I found when looking through many of these repos). Luckily, I also found an interesting blog post about type handles that addresses what I'm looking for. So there is a way of doing what I'm looking for, although standard practice seems to be something else. I guess that keeping everything in one packet is more convenient. I think this answers my first question.
  5. I found an example from the UVM Primer that has dedicated transactions, as well as dedicated monitors, for input and output, so I guess that answers my second question.
  6. I'm not looking for complete answers. Any clues that you can provide are appreciated.
  7. @David Black I did stumble upon an accellera/uvm repository on GitHub. It seems to be what I was looking for, although it has been dead since UVM 1.2. Why keep it there and not use that platform?
  8. Hi, I just downloaded the UVM library but I couldn't find any tests verifying its functionality. Are such tests available? Also, is the git repository from which the library was released open? Thanks
  9. While trying out UVM/SystemVerilog with a very simple transaction/constraint class:

         class transaction extends uvm_sequence_item;
           rand int value;
           constraint c { value inside {[10:20]}; }
         endclass

     I found that there is a significant performance difference between randomizing a transaction like this:

         void'(t.randomize());

     and doing:

         t.value = $urandom_range(20, 10);

     The first approach is 15x to 25x slower, depending on the simulator! In my case I have a dummy testbench consisting only of a producer sending a large number of transactions to a consumer. Will this also be a concern when scaling up to real-life designs? Are there situations where it's recommended to use $urandom for simple constraints? If not, why?
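     For reference, the skeleton of my comparison looked roughly like this (a sketch; the loop count is illustrative, and wall-clock time was measured outside the simulation, e.g. with the simulator's profiling or the shell's time command):

     ```systemverilog
     // Sketch of the micro-benchmark structure: same value range,
     // solver-based randomization vs. a direct $urandom_range call.
     module tb;
       class transaction;
         rand int value;
         constraint c { value inside {[10:20]}; }
       endclass

       initial begin
         transaction t = new();

         // Approach 1: constraint-solver path
         repeat (1_000_000) void'(t.randomize());

         // Approach 2: procedural path; note the argument order is
         // $urandom_range(maxval, minval)
         repeat (1_000_000) t.value = $urandom_range(20, 10);
       end
     endmodule
     ```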
  10. Dave, I'm not trying to replace the constraint solver but rather to find the situations where the constraint is simple enough, and performance important enough, to favor the procedural approach. I'm still learning SV constraints, but I think your example of selecting 8 unique values between 10 and 20 can be expressed as:

          class transaction;
            rand int arr[8];
            constraint c {
              foreach (arr[i]) arr[i] inside {[10:20]};
              unique {arr};
            }
          endclass

      A procedural implementation of this based on $urandom_range will indeed result in extra calls when the unique constraint is added, but only a factor of 1.6x on average in this particular case. The execution time in my test increases a bit more, 1.8x, due to the overhead of determining whether an extra call is needed. What is interesting is that the constraint solver in the only simulator I tested became 5x slower when the unique constraint was added. What's even more interesting is that the procedural approach was 37x faster without the unique constraint and 104x faster with that constraint included. Declarative constraints are very compact and elegant, but it seems there will be cases where the procedural approach is worth the extra effort.
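      The procedural version I benchmarked worked roughly like this (a sketch; the function name is my own): redraw on collision, since with 8 picks from an 11-value range collisions are common.

      ```systemverilog
      // Sketch: procedural alternative to the unique constraint.
      // Re-rolls $urandom_range until the drawn value is unused.
      function automatic void pick_unique(ref int arr[8]);
        bit used[int];   // associative array marking values already taken
        foreach (arr[i]) begin
          int v;
          do v = $urandom_range(20, 10); while (used.exists(v));
          used[v] = 1;
          arr[i]  = v;
        end
      endfunction
      ```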
  11. Thanks for your reply, David. The two approaches behave differently, but is there a reason why one is better than the other, provided I'm aware of these differences and know how and when code changes affect my ability to repeat a test? I tried several different simulators and the performance hit was between 15x and 25x. This seems to be a problem inherent to all constraint solvers. If that is the case, wouldn't I be better off using the randomization system calls when possible?
  12. Hi, Are there any openly available (template) scripts to parse log files to find non-UVM error messages? Thanks
  13. What is the preferred way to verify that a verification component calls uvm_error when it's supposed to, without that error causing a failure in my test? I know about SVUnit and its mocking capabilities, but is there a way to do this within UVM?
  14. Thanks @dave_59. That would be the easiest way, but in my case I want to verify that the message fields are what I expect, not only that a report has been produced. I ended up pushing the report message onto a queue and then popping it in my test to verify the fields.
  15. Thanks @kirloy369! Looks more UVM idiomatic. That doesn't necessarily make it a better solution than what has been proposed before, but I wanted to start by trying out how it was intended to be done.
  16. Thanks for the reply, David. This is similar to what has been done in SVUnit. They provide their own UVM reporter and redefine `uvm_error to call that instead. I was hoping that UVM already had a more built-in mechanism for intercepting messages. If so, I could simply put the intercepted messages in a queue, verify that the queue contains the messages I expect, and then pass/fail the test accordingly. I have to dig deeper and learn more, but I was hoping there would be a UVM action I could use, or maybe uvm_report_catcher, which by the name of it sounds related. Maybe these concepts can't be used for what I'm trying to do?
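     What I'm imagining with uvm_report_catcher is something like this (an untested sketch; the message id "MY_VC_ERR" is a placeholder for whatever id the verification component uses):

     ```systemverilog
     // Sketch: intercept an expected UVM_ERROR so it doesn't fail the test,
     // and record its message text for later checking.
     class expected_error_catcher extends uvm_report_catcher;
       string caught_msgs[$];   // messages captured for the test to inspect

       function new(string name = "expected_error_catcher");
         super.new(name);
       endfunction

       virtual function action_e catch();
         if (get_severity() == UVM_ERROR && get_id() == "MY_VC_ERR") begin
           caught_msgs.push_back(get_message());
           return CAUGHT;   // swallow the report so the test doesn't fail
         end
         return THROW;      // let all other reports through unchanged
       endfunction
     endclass

     // In the test:
     //   expected_error_catcher c = new();
     //   uvm_report_cb::add(null, c);
     //   ... stimulate the error ...
     //   check c.caught_msgs against the expected message fields
     ```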
  17. Log File Parsing

      Thanks @dave_59. I did a quick test with the commercial simulators available at EDA Playground with the following error scenarios:

      1. Assert with $error
      2. Null object dereferencing

      Riviera-PRO did a normal exit in both scenarios, Cadence has a non-zero return code for both, and Synopsys returns non-zero for the null object dereferencing but not for the assert. I didn't check whether there are option flags that would change this behavior, but it seems there are different opinions on how the return code should be used. Personally, I would fail a test with a non-zero exit regardless of simulator strategy and log file contents. I would also prefer that the tools use the return code for the errors they know about, at least as an option. Let's say I have a smaller project just using SystemVerilog with $error, or VHDL asserts with error severity. The return code would give me pass/fail status if the assertions fail or if there is another problem like a null object dereference. If the test passed I don't care about the logs, and if it failed they are small enough to be read manually. If I have a project using less specific error mechanisms, like Verilog $display, I would need parsing, but the scope of parsing is reduced. If I have many long log files I may need scripting to mine them for the interesting events, but in that case I'd rather have more machine-readable formats that are well supported by scripting languages, XML or JSON for example. It would make the scripting easier and less error prone.
  18. @David Black What would you say is standard practice for parsing logs to figure out pass/fail status? Find standard patterns for UVM errors + find simulator-specific patterns for other errors + make sure there is a UVM report summary to catch silent errors stopping the simulation? Do people ever look at the return code? Testing at EDA Playground I see that some simulators return a non-zero code for some, but not all, errors. Why isn't this used more? It would keep me from having to know simulator-specific error messages and make the script more portable.
  19. I'm trying to find existing libraries of reusable assert macros that use UVM messaging. For example, checking equality and, if the check fails, generating a message with the expected and received values using uvm_error. What are my options?
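      For example, something along the lines of this home-grown sketch is what I have in mind (the macro name is my own, not from any existing library):

      ```systemverilog
      // Sketch: equality-check macro reporting via `uvm_error.
      // `"ACTUAL`" stringifies the argument text for the message.
      `define CHECK_EQ(ACTUAL, EXPECTED) \
        if ((ACTUAL) !== (EXPECTED)) \
          `uvm_error("CHECK_EQ", \
                     $sformatf("%s: expected 'h%0h, got 'h%0h", \
                               `"ACTUAL`", (EXPECTED), (ACTUAL)))

      // Usage:
      //   `CHECK_EQ(rsp.read_data, 32'hDEAD_BEEF);
      ```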
  20. Thanks David. It sounds like no such tests are available. Open source projects in general are not always good at providing their test suites, but I find it a bit odd that an open source library for verification doesn't provide the test suites showing how the library itself is verified. It would be easier for people to suggest improvements if they could verify whether or not a modification breaks something else.
  21. Thanks David! I'll look into that.