Everything posted by sas73

  1. While trying to learn UVM, and more specifically the use of uvm_sequence_item, I keep running into the same example, which looks something like this:

     class bus_transaction extends uvm_sequence_item;
       rand enum {READ, WRITE} operation;
       rand bit [31:0] addr;
       rand bit [31:0] write_data;
       bit [31:0] read_data;
       ...

     This design pattern raises a number of questions:

     1. To me, read transactions and write transactions are two separate "things" that deserve their own classes (single responsibility principle). The more operations we add to a transaction class, the more time we also spend randomizing variables not needed by every operation (write_data for READ operations in this case). Tutorials often point out that read_data shouldn't be randomized, for exactly that efficiency reason. Read and write are similar transactions, so it may not be a big issue, but packet protocols can have many vastly different packet types. What is the rationale for this design pattern? (See the sketch at the end of this post for the kind of split I have in mind.)

     2. One can also argue that the read response is a separate transaction. I found an explanation for this saying that input and output data should be in the same class because it enables reuse of the same monitor for both the input and the output of a DUT. This didn't convince me. Agents are configured as active or passive. Why can't that configuration be used by the monitor to determine what type of transactions to publish?

     3. If all interface operations are put in a single class, it is likely to become specific to that interface/driver and there will be no reuse. If there is no reuse, the separation of the abstract transaction from the specific interface pin wiggling loses its purpose. Why not let the transaction class own the pin-wiggling knowledge and let the driver be a simple, reusable executor of that knowledge?

     4. If the idea of reusable transactions holds, I would expect to find reusable transaction class libraries providing simple read, write, reset, ... types of transactions, and that many providers of verification components would support them in addition to their own more specialized transactions. All I found is TLM 2.0, which has read and write (in a single class, just like the example above). Are there any other well-supported libraries?
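     To make question 1 concrete, here is roughly the split I have in mind. This is only a sketch with made-up names (read_txn/write_txn), not code from any library:

     class read_txn extends uvm_sequence_item;
       `uvm_object_utils(read_txn)
       rand bit [31:0] addr;
       bit [31:0] data;  // filled in from the response, never randomized
       function new(string name = "read_txn"); super.new(name); endfunction
     endclass

     class write_txn extends uvm_sequence_item;
       `uvm_object_utils(write_txn)
       rand bit [31:0] addr;
       rand bit [31:0] write_data;
       function new(string name = "write_txn"); super.new(name); endfunction
     endclass

     Each class carries, and randomizes, only the fields its operation actually needs.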
  2. Reading a bit further I found the concept of API sequences that can be provided by the agent developer. For example, a write sequence that hides the details I gave in the example above. The write sequence can then be used in a higher-layer sequence (Mentor calls this a worker sequence). The write sequence also provides a write method to start itself, and the worker sequence calls that method with a specific address and data (a rough sketch follows at the end of this post). Note that this approach completely overrides randomization of sequence items and moves that responsibility to the sequences.
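     A rough sketch of what I understand such an API sequence to look like, assuming the bus_transaction item from my first post (with factory registration); the write() task and its signature are my own invention, not a standard API:

     class bus_write_seq extends uvm_sequence #(bus_transaction);
       `uvm_object_utils(bus_write_seq)
       rand bit [31:0] addr;
       rand bit [31:0] data;

       function new(string name = "bus_write_seq"); super.new(name); endfunction

       // Convenience entry point so a worker sequence never touches the item.
       task write(uvm_sequencer_base sqr, bit [31:0] addr, bit [31:0] data);
         this.addr = addr;
         this.data = data;
         start(sqr);
       endtask

       virtual task body();
         req = bus_transaction::type_id::create("req");
         start_item(req);
         if (!req.randomize() with { operation == WRITE;
                                     addr == local::addr;
                                     write_data == local::data; })
           `uvm_error("RNDFAIL", "randomize failed")
         finish_item(req);
       endtask
     endclass

     A worker sequence would then just do something like wr.write(m_sequencer, 'h1000, 'hdead_beef).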
  3. As a sequence writer I should ideally not be exposed to the implementation details of the driver that have been discussed in this thread: for example, the structure of the sequence item(s), or whether get/put, get_next_item/item_done, or something else is used. I would like the driver to provide an API that hides all of that. Something similar to this, callable from the sequence: write(<constraint on address and data>). Does SV allow you to pass a constraint as an argument, or is there another way of doing that? (The sketch at the end of this post shows the closest thing I've found so far.)
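     As far as I can tell, SV doesn't let you pass a constraint block as an argument, but an inline constraint at the randomize() call gets close. A hypothetical helper declared in a reusable base sequence (write_in_range and its arguments are made-up names, and I assume the bus_transaction item from my first post) could look like:

     // The caller only supplies value ranges; the item type and the
     // start_item/finish_item handshake stay hidden inside the task.
     task write_in_range(bit [31:0] addr_lo, bit [31:0] addr_hi);
       bus_transaction t = bus_transaction::type_id::create("t");
       start_item(t);
       if (!t.randomize() with { operation == WRITE;
                                 addr inside {[addr_lo:addr_hi]}; })
         `uvm_error("RNDFAIL", "randomize failed")
       finish_item(t);
     endtask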
  4. Note that it's not only transactions you can reuse. You can also reuse the handling of transactions (the visit method in the visitor) in some cases, for example when you have a transaction for a time delay. You can "inherit" several such handlers using mixins. Thanks for this mixin post, @tudor.timi
  5. That's true. If I want to fully randomize I need to add some extra code in my sequence. It seems that it also causes a performance hit, making that solution slower despite less randomization. However, if I want to do a write-delay-read sequence with random address and data, I can express that more explicitly instead of constraining variables to a fixed value (see the sketch at the end of this post). In this case the solution with separate transactions becomes faster. In these tests I used randomize() everywhere and the differences are in the percentage range. I'm more concerned about the difference between randomize() and $urandom, which can be a factor of 100x.
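     For reference, the write-delay-read case with separate transactions would look roughly like this. This is only a sketch: write_txn/read_txn are from my earlier sketch, and a delay_txn with a cycles field is assumed.

     class write_delay_read_seq extends uvm_sequence #(uvm_sequence_item);
       `uvm_object_utils(write_delay_read_seq)
       rand bit [31:0] addr;
       rand bit [31:0] data;

       function new(string name = "write_delay_read_seq"); super.new(name); endfunction

       virtual task body();
         write_txn wr;
         delay_txn dly;
         read_txn  rd;
         // The same random addr/data is reused across the three items instead
         // of constraining an all-in-one item's fields to fixed values.
         `uvm_do_with(wr,  { addr == local::addr; write_data == local::data; })
         `uvm_do_with(dly, { cycles inside {[1:10]}; })
         `uvm_do_with(rd,  { addr == local::addr; })
       endtask
     endclass

     The test randomizes the sequence itself before starting it, so addr and data get their random values there.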
  6. When I failed to see examples of transaction reuse, I thought that maybe people put their reuse effort elsewhere, for example by moving the pin-wiggling functionality (which comes in different flavors) to the transaction class so that the driver becomes more generic. I agree that transactions are data classes, and I do want to reuse them, so moving the pin wiggling into these classes is not something I want. The visitor pattern is also a way to create a more generic driver while not destroying the potential for transaction reuse. The visitor would remove the need for a $cast (sketched at the end of this post).
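     A rough sketch of how I picture the visitor approach (all names are mine; this isn't from any existing library). Using an interface class lets the driver extend uvm_driver and still implement the visitor methods:

     typedef class read_txn;
     typedef class delay_txn;

     // Driver-side interface: one visit method per supported transaction type.
     interface class bus_visitor;
       pure virtual task visit_read(read_txn t);
       pure virtual task visit_delay(delay_txn t);
     endclass

     // Common base so the driver can be a uvm_driver #(base_txn).
     virtual class base_txn extends uvm_sequence_item;
       function new(string name = "base_txn"); super.new(name); endfunction
       pure virtual task accept(bus_visitor v);
     endclass

     class read_txn extends base_txn;
       `uvm_object_utils(read_txn)
       rand bit [31:0] addr;
       bit [31:0] data;
       function new(string name = "read_txn"); super.new(name); endfunction
       virtual task accept(bus_visitor v); v.visit_read(this); endtask
     endclass

     class delay_txn extends base_txn;
       `uvm_object_utils(delay_txn)
       rand int unsigned cycles;
       function new(string name = "delay_txn"); super.new(name); endfunction
       virtual task accept(bus_visitor v); v.visit_delay(this); endtask
     endclass

     The driver (extends uvm_driver #(base_txn) implements bus_visitor) then has a run loop like:

     //   forever begin
     //     seq_item_port.get_next_item(req);
     //     req.accept(this);   // dispatches to visit_read/visit_delay, no $cast
     //     seq_item_port.item_done();
     //   end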
  7. Being able to do simple reads/writes is indeed a reusable abstraction. I've seen the TLM generic payload, but that's also the only attempt at a reusable transaction I've seen. Are there others? To verify a bus interface you need to be concerned with the details, but when the focus of your testbench is to verify the functionality that the bus interface is configuring, you get far with simple reads/writes. A driver could support both reusable simple read/write, reset, and delay transactions and the specialized transactions used when fully verifying such an interface. I like to think of the driver as a composition of supported services: some are reused, some are new.
  8. Thanks for your answers @tudor.timi Looking at the examples out there, it seems like both the single and double data field approaches are popular. What people prefer depends on their main concerns. You're concerned with the number of if statements, but Mentor, who takes the double data field approach (https://verificationacademy.com/cookbook/sequences/items), expresses other concerns.

     I'm also concerned about randomization performance (http://forums.accellera.org/topic/6275-constrained-random-performance), but splitting into two data fields doesn't improve performance. You still have unnecessary randomization of write_data for read requests. All they've done is avoid making it worse by also randomizing read_data. The corruption risk is related to the shared-memory approach of the get_next_item/item_done pattern. They avoid that risk by not sharing the data field, but I feel that not sharing request and response objects and using the get/put pattern would be a better approach (a sketch of what I mean follows at the end of this post). UVM supports it, but maybe there is a good reason why we shouldn't use it?

     Since one of my concerns is performance, I don't like too many randomized fields that aren't applicable to all commands. The read/write example may not represent a "too many" scenario; it's just a common example where such a problem exists. This gets worse as you add more commands. The address and data fields would, for example, be completely irrelevant for a reset command. A reset command is also an example of a transaction that would be very reusable if available in isolation. A randomized sec_mode is a property relevant to both read and write, so that would not be a reason for splitting. A delay field is also relevant to both reads and writes, but it's also reusable, so I can see a reason to have it in a separate transaction anyway.

     Summary: I'm not looking for the "best" solution to the read/write example. People have different concerns and I accept that. What I wanted to find out was whether people are concerned about performance and reuse in such a way that they would consider alternatives to the all-in-one sequence item. If I understand you correctly, you wouldn't use the all-in-one pattern for heterogeneous protocols (a "too many" scenario)?
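     For completeness, the get/put style I'm referring to would look roughly like this in the driver. The names bus_request/bus_response are made up; the key point is that request and response are separate objects, so nothing is updated in place:

     class bus_driver extends uvm_driver #(bus_request, bus_response);
       `uvm_component_utils(bus_driver)

       function new(string name, uvm_component parent); super.new(name, parent); endfunction

       virtual task run_phase(uvm_phase phase);
         forever begin
           seq_item_port.get(req);               // blocking get; no item_done needed
           rsp = bus_response::type_id::create("rsp");
           rsp.set_id_info(req);                 // so the sequence can match the response
           // ... drive the pins and fill in rsp from the bus ...
           seq_item_port.put(rsp);               // separate response object back to the sequence
         end
       endtask
     endclass

     The sequence side then picks the response up with get_response().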
  9. The list of open source UVM repositories can also provide an answer to question 4. I couldn't find any well-supported (a project with many stars) library of sequence items. There are many verification components for various bus protocols, but they all have a single sequence item tailored specifically for that bus. This leads me back to question 3...
  10. The blog post also touches on my third question, although it doesn't provide an answer.
  11. Looking at open source UVM repositories, I think it's safe to say that keeping all transaction types within the same sequence item class is the most common design pattern (the only pattern I found when looking at many of these repos). Luckily, I also found an interesting blog post about type handles that addresses what I'm looking for. So there is a way of doing it, although standard practice seems to be something else. I guess that keeping everything in one packet is more convenient. I think this answers my first question.
  12. I found an example from the UVM Primer that has dedicated transactions as well as monitors for input and output, so I guess that answers my second question.
  13. I'm not looking for complete answers. Any clues that you can provide are appreciated.
  14. @David Black I did stumble upon an accellera/uvm repository on GitHub. It seems to be what I was looking for, although it has been dead since UVM 1.2. Why have it there and not use that platform?
  15. Hi, I just downloaded the UVM library but I couldn't find any tests verifying its functionality. Are such tests available? Also, is the git repository from which the library was released open? Thanks
  16. While trying out UVM/SystemVerilog with a very simple transaction/constraint class:

     class transaction extends uvm_sequence_item;
       rand int value;
       constraint c { value inside {[10:20]}; }
     endclass

     I found that there is a significant performance difference between randomizing a transaction like this:

     void'(t.randomize());

     and doing this:

     t.value = $urandom_range(20, 10);

     The first approach is 15x to 25x slower depending on the simulator! In my case I have a dummy testbench consisting only of a producer sending a large number of transactions to a consumer. Will this also be a concern when scaling up to real-life designs? Are there situations where it's recommended to use $urandom for simple constraints? If not, why?
  17. Dave, I'm not trying to replace the constraint solver, but rather to find the situations where the constraint is simple enough, and performance important enough, to favor the procedural approach. I'm learning SV constraints, but I think your example of selecting 8 unique values between 10 and 20 can be expressed as:

     class transaction;
       rand int arr[8];
       constraint c {
         foreach (arr[i]) arr[i] inside {[10:20]};
         unique {arr};
       }
     endclass

     A procedural implementation of this based on $urandom_range (sketched at the end of this post) will indeed result in extra calls when the unique constraint is added, but only a factor of 1.6x on average in this particular case. The execution time in my test increased a bit more, 1.8x, due to the overhead of determining whether an extra call is needed. What is interesting is that the constraint solver in the only simulator I tested became 5x slower when the unique constraint was added. What's even more interesting is that the procedural approach was 37x faster without the unique constraint and 104x faster with that constraint included. Declarative constraints are very compact and elegant, but it seems there will be cases where the procedural approach is worth the extra effort.
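     The procedural version I'm comparing against looks roughly like this (a sketch, not the exact benchmark code): redraw on collision until all eight values are unique.

     class transaction_proc;
       int arr[8];

       function void randomize_proc();
         bit used[int];                       // associative array used as a set
         foreach (arr[i]) begin
           int v;
           do v = $urandom_range(20, 10);     // same [10:20] range as the constraint
           while (used.exists(v));            // redraw on collision -> the "extra calls"
           used[v] = 1;
           arr[i] = v;
         end
       endfunction
     endclass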
  18. Thanks for your reply David. The two approaches behave differently, but is there a reason why one is better than the other, provided I'm aware of these differences and know how and when code changes affect my ability to repeat a test? I tried several different simulators and the performance hit was between 15x and 25x, so this seems to be a problem inherent to all constraint solvers. If that is the case, wouldn't I be better off using the randomization system calls when possible?
  19. Hi, Are there any openly available (template) scripts to parse log files to find non-UVM error messages? Thanks
  20. What is the preferred way to verify that a verification component calls uvm_error when it's supposed to, without that error failing my test? I know about SVUnit and its mocking capabilities, but is there a way to do this within UVM?
  21. Thanks @dave_59. That would be the easiest way, but in my case I want to verify that the message fields are what I expect, not just that a report has been produced. I ended up pushing the report message onto a queue and then popping it in my test to verify the fields.
  22. Thanks @kirloy369! Looks more UVM idiomatic. That doesn't necessarily make it a better solution than what has been proposed before, but I wanted to start by trying out how it was intended to be done.
  23. Thanks for the reply David. This is similar to what has been done in SVUnit. They provide their own UVM reporter and redefine `uvm_error to call that instead. I was hoping that UVM already had a more built-in mechanism for intercepting messages. If so, I could simply put the intercepted messages in a queue, verify that the queue contains the messages I expect, and then pass/fail the test accordingly. I have to dig deeper and learn more, but I was hoping there would be a UVM action I can use, or maybe uvm_report_catcher, which by the name of it sounds related (a sketch of what I'm imagining follows at the end of this post). Maybe these concepts can't be used for what I'm trying to do?
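     Something like the following is what I'm imagining, assuming uvm_report_catcher works the way its name suggests (the message id "MY_VC_ERR" and the class name are made up):

     class expected_error_catcher extends uvm_report_catcher;
       string caught_msgs[$];                 // queue the test can inspect afterwards

       function new(string name = "expected_error_catcher");
         super.new(name);
       endfunction

       virtual function action_e catch();
         if (get_severity() == UVM_ERROR && get_id() == "MY_VC_ERR") begin
           caught_msgs.push_back(get_message());
           return CAUGHT;                     // swallow it so it doesn't fail the test
         end
         return THROW;                        // everything else passes through unchanged
       endfunction
     endclass

     The test would register it with uvm_report_cb::add(null, catcher) and check caught_msgs at the end.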
  24. Log File Parsing

     Thanks @dave_59. I did a quick test with the commercial simulators available at EDA Playground with the following error scenarios: 1. an assert with $error, 2. a null object dereference. Riviera-PRO does a normal exit in both scenarios, Cadence has a non-zero return code for both, and Synopsys returns non-zero for the null object dereference but not for the assert. I didn't check whether there are option flags that would change this behavior, but it seems that there are different opinions on how the return code should be used.

     Personally, I would fail a test with a non-zero exit code regardless of simulator strategy and log file contents. I would also prefer that the tools use the return code for the errors they know about, at least as an option. Let's say I have a smaller project just using SystemVerilog with $error or VHDL asserts with error severity. The return code would give me pass/fail status if the assertions fail or if there is another problem like a null object dereference. If it passed I don't care about the logs, and if it failed they are small enough to be read manually. If I have a project using less specific error mechanisms like Verilog $display, I would need parsing, but the scope of the parsing is reduced. If I have many long log files I may need scripting to mine them for the interesting events, but in that case I'd rather have machine-readable formats that are well supported by scripting languages, XML or JSON for example. That would make the scripting easier and less error prone.
  25. Log File Parsing

     @David Black What would you say is standard practice for parsing logs to figure out pass/fail status? Find standard patterns for UVM errors + find simulator-specific patterns for other errors + make sure there is a UVM report to catch silent errors that stop the simulation? Do people ever look at the return code? Testing at EDA Playground, I see that some simulators return a non-zero code for some, but not all, errors. Why isn't this used more? It would keep me from having to know simulator-specific error messages and would make the script more portable.