  1. Reading a bit further, I found the concept of API sequences that can be provided by the agent developer. For example, a write sequence that hides the details I gave in the example above. The write sequence can then be used in a higher-layer sequence (Mentor calls this a worker sequence). The write sequence also provides a write method to start itself, and the worker sequence calls that method with a specific address and data. Note that this approach completely overrides randomization of sequence items and moves that responsibility to the sequences.
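A minimal sketch of such a layered API sequence, loosely following the Verification Academy pattern (the names `bus_write_seq` and `bus_item`, and the item's fields, are hypothetical):

```systemverilog
// Hypothetical reusable API sequence: hides the sequence-item details
// behind a write() task that a higher-layer "worker" sequence can call.
class bus_write_seq extends uvm_sequence #(bus_item);
  `uvm_object_utils(bus_write_seq)

  bit [31:0] addr;
  bit [31:0] data;

  function new(string name = "bus_write_seq");
    super.new(name);
  endfunction

  virtual task body();
    bus_item item = bus_item::type_id::create("item");
    start_item(item);
    // Directed, not randomized: the API sequence has taken over
    // responsibility for the item's contents.
    item.operation = bus_item::WRITE;
    item.addr      = addr;
    item.data      = data;
    finish_item(item);
  endtask

  // Convenience entry point called by worker sequences.
  task write(uvm_sequencer_base sqr, bit [31:0] addr, bit [31:0] data);
    this.addr = addr;
    this.data = data;
    start(sqr);
  endtask
endclass
```

A worker sequence would then just call something like `seq.write(m_sequencer, 32'h100, 32'hCAFE_F00D);` without ever seeing the item structure or the start_item/finish_item handshake.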
  2. As a sequence writer I should ideally not be exposed to the implementation details of the driver that have been discussed in this thread, for example the structure of the sequence item(s), or whether get/put, get_next_item/item_done, or something else is used. I would like the driver to provide an API excluding all of that; something like write(<constraint on address and data>) that could be called from the sequence. Does SV allow you to pass a constraint as an argument, or is there another way of doing that?
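SystemVerilog has no way to pass a constraint block itself as an ordinary argument, but the usual approximation is an inline constraint at the call site via `randomize() with {...}`; wrapping that inside the API task gets close to the write(<constraint>) idea. A sketch, assuming the hypothetical `bus_item` from the earlier example and plain range arguments instead of an arbitrary constraint:

```systemverilog
// One common approximation: the API task accepts ranges and applies
// them as an inline constraint when randomizing the item.
task write_in_range(bit [31:0] addr_lo, bit [31:0] addr_hi);
  bus_item item = bus_item::type_id::create("item");
  start_item(item);
  if (!item.randomize() with { operation == WRITE;
                               addr inside {[addr_lo:addr_hi]}; })
    `uvm_error("RAND", "randomize() failed")
  finish_item(item);
endtask
```

For arbitrary constraints, the closest SV idiom is a constraint policy class: the caller passes an object whose constraint block references the item, which the task randomizes together with the item. That requires agreeing on a policy base class up front rather than passing free-form constraint syntax.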
  3. Note that it's not only transactions you can reuse. You can also reuse the handling of transactions (the visit method in the visitor) in some cases, for example when you have a transaction for a time delay. You can "inherit" several such handlers using mixins. Thanks for this mixin post, @tudor.timi.
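In SystemVerilog a mixin is typically a parameterized class that extends its own type parameter. A sketch of inheriting a reusable delay handler this way (all names, and the `delay_ns` field, are hypothetical; constructor forwarding is elided):

```systemverilog
// Hypothetical mixin: layers delay-transaction handling onto any
// visitor-like base class B.
class delay_handler_mixin #(type B = base_visitor) extends B;
  // Reusable handling of a pure time-delay transaction; assumes a
  // delay_transaction class with a delay_ns field.
  virtual task visit_delay(delay_transaction t);
    #(t.delay_ns * 1ns);
  endtask
endclass

// Several such handlers can be stacked by chaining mixins:
// typedef delay_handler_mixin #(reset_handler_mixin #(bus_visitor)) my_visitor;
```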
  4. That's true. If I want to fully randomize I need to add some extra code in my sequence. It also seems to cause a performance hit, making that solution slower despite doing less randomization. However, if I want to do a write-delay-read sequence with random address and data, I can express that more explicitly instead of constraining variables to a fixed value. In this case the solution with separate transactions becomes faster. In these tests I used randomize() everywhere and the differences are in the percentage range. I'm more concerned about the difference between randomize() and $urandom, which can be a factor of 100x.
  5. When I failed to see examples of transaction reuse, I thought that maybe people put their reuse effort elsewhere, for example by moving the pin-wiggling functionality (which comes in different flavors) into the transaction class so that the driver becomes more generic. I agree that transactions are data classes, and since I do want to reuse them, moving the pin wiggling into these classes is not something I want. The visitor pattern is also a way to create a more generic driver without destroying the potential for transaction reuse. The visitor would remove the need for a $cast.
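A minimal sketch of how the visitor's double dispatch replaces the `$cast` in the driver (all class and method names are hypothetical):

```systemverilog
typedef class bus_visitor; // forward declaration

// Abstract transaction: each concrete type knows how to dispatch itself.
virtual class base_item extends uvm_sequence_item;
  function new(string name = "base_item"); super.new(name); endfunction
  pure virtual task accept(bus_visitor v);
endclass

class write_item extends base_item;
  bit [31:0] addr, data;
  function new(string name = "write_item"); super.new(name); endfunction
  virtual task accept(bus_visitor v);
    v.visit_write(this); // double dispatch: the driver never needs $cast
  endtask
endclass

// Visitor interface: one visit task per transaction type.
virtual class bus_visitor;
  pure virtual task visit_write(write_item t);
endclass

// The driver implements bus_visitor and simply calls item.accept(this);
// the concrete item type selects the right visit_* handler.
```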
  6. Being able to do simple reads/writes is indeed a reusable abstraction. I've seen the TLM generic payload but that's also the only attempt for a reusable transaction I've seen. Are there others? To verify a bus interface you need to be concerned about the details but when the focus of your testbench is to verify the functionality which the bus interface is configuring you get far with the simple read/writes. A driver could support both reusable simple read/write, reset, delay and specialized transactions used when fully verifying such an interface. I like to think of the driver as a composition of supported services, some are reused, some are new.
  7. Thanks for your answers @tudor.timi. Looking at the examples out there, it seems like both the single and double data field approaches are popular; what people prefer depends on their main concerns. You're concerned with the number of if statements, but Mentor, who takes the double data field approach (https://verificationacademy.com/cookbook/sequences/items), expresses other concerns.

I'm also concerned about randomization performance (http://forums.accellera.org/topic/6275-constrained-random-performance), but splitting into two data fields doesn't improve performance: you still have unnecessary randomization of write_data for read requests. All they've done is avoid making it worse by also randomizing read_data. The corruption risk is related to the shared-memory approach of the get_next_item/item_done pattern. They avoid that risk by not sharing the data field, but I feel that not sharing request and response objects at all, and using the get/put pattern instead, would be a better approach. UVM supports it, but maybe there is a good reason why we shouldn't use it?

Since one of my concerns is performance, I don't like having many randomized fields that aren't applicable to all commands. The read/write example may not represent a "too many" scenario; it's just a common example where such a problem exists, and it gets worse as you add more commands. The address and data fields would, for example, be completely irrelevant for a reset command. A reset command is also an example of a transaction that would be very reusable if available in isolation. A randomized sec_mode is a property relevant to both reads and writes, so that would not be a reason for splitting. A delay field is also relevant to both reads and writes, but since it's reusable I can see a reason to have it in a separate transaction anyway.

Summary: I'm not looking for the "best" solution to the read/write example. People have different concerns and I accept that. What I wanted to find out was whether people are concerned about performance and reuse in such a way that they would consider alternatives to the all-in-one sequence item. If I understand you correctly, you wouldn't use the all-in-one pattern for heterogeneous protocols (a "too many" scenario)?
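For reference, the get/put handshake with separate request and response objects looks roughly like this on the driver side (the classes `bus_req`/`bus_rsp` are hypothetical; `seq_item_port.get`/`put` and `set_id_info` are standard UVM API):

```systemverilog
// get() completes the request handshake immediately (unlike
// get_next_item/item_done), so sequence and driver never hold a shared
// live item; the response travels back as a separate object.
class bus_driver extends uvm_driver #(bus_req, bus_rsp);
  `uvm_component_utils(bus_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual task run_phase(uvm_phase phase);
    bus_req req;
    bus_rsp rsp;
    forever begin
      seq_item_port.get(req);       // request handshake done here
      rsp = bus_rsp::type_id::create("rsp");
      rsp.set_id_info(req);         // route rsp back to the right sequence
      // ... drive the pins, fill in rsp.read_data ...
      seq_item_port.put(rsp);       // hand back a distinct response object
    end
  endtask
endclass
```

On the sequence side the pattern is `finish_item(req);` followed by `get_response(rsp);`, so request and response never alias the same memory.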
  8. The list of open source UVM repositories can also provide an answer to question 4. I couldn't find any well supported library of sequence items (a project with many stars). There are many verification components for various bus protocols, but they all have a single sequence item tailored specifically for that bus. This leads me back to question 3...
  9. The blog post also touches on my third question, although it doesn't provide an answer.
  10. Looking at open source UVM repositories, I think it's safe to say that keeping all transaction types within the same sequence item class is the most common design pattern (it's the only pattern I found when looking at many of these repos). Luckily, I also found an interesting blog post about type handles that addresses what I'm looking for. So there is a way of doing what I'm looking for, although standard practice seems to be something else; I guess that keeping everything in one packet is more convenient. I think this answers my first question.
  11. I found an example from UVM Primer that has dedicated transactions as well as monitors for input and output so I guess that answers my second question.
  12. I'm not looking for complete answers. Any clues that you can provide are appreciated.
  13. While trying to learn UVM, and more specifically the use of uvm_sequence_item, I keep running into the same example, which looks something like this:

class bus_transaction extends uvm_sequence_item;
  rand enum {READ, WRITE} operation;
  rand bit [31:0] addr;
  rand bit [31:0] write_data;
  bit [31:0] read_data;
  ...

This design pattern raises a number of questions:

1. To me, read transactions and write transactions are two separate "things" that deserve their own classes (single responsibility principle). The more operations we add to a transaction class, the more time we will also spend randomizing variables not needed by all operations (write_data for READ operations in this case). Tutorials often point out that read_data shouldn't be randomized for that efficiency reason. Read and write are similar transactions, so it may not be a big issue here, but packet protocols can have many vastly different packet types. What is the rationale for this design pattern?

2. One can also argue that the read response is a separate transaction. I found an explanation for this saying that input and output data should be in the same class because it enables reuse of the same monitor for both the input and the output of a DUT. This didn't convince me. Agents are configured as active or passive; why can't that configuration be used by the monitor to determine what type of transactions to publish?

3. If all interface operations are put in a single class, it's likely to become specific to that interface/driver and there will be no reuse. If there is no reuse, it feels like the separation of the abstract transaction from the interface-specific pin wiggling loses its purpose. Why not let the transaction class own the pin-wiggling knowledge and let the driver be a simple, reusable executor of that knowledge?

4. If the idea of reusable transactions holds, I would expect to find reusable transaction class libraries providing simple read, write, reset, etc. types of transactions, and that many providers of verification components would support them in addition to their own more specialized transactions. All I found is TLM 2.0, which has read and write (in a single class, just like the example above). Are there any other well supported libraries?
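To make question 1 concrete, a hypothetical split of the all-in-one item into per-operation classes might look like this (a sketch, not a recommendation from any library):

```systemverilog
// Hypothetical split: each operation gets its own class, so a read
// never pays for randomizing write_data (single responsibility).
class read_transaction extends uvm_sequence_item;
  `uvm_object_utils(read_transaction)
  rand bit [31:0] addr;
  bit [31:0] read_data; // response data, deliberately not randomized
  function new(string name = "read_transaction"); super.new(name); endfunction
endclass

class write_transaction extends uvm_sequence_item;
  `uvm_object_utils(write_transaction)
  rand bit [31:0] addr;
  rand bit [31:0] write_data;
  function new(string name = "write_transaction"); super.new(name); endfunction
endclass
```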
  14. @David Black I did stumble upon an accellera/uvm repository on GitHub. It seems to be what I was looking for, although it has been dead since UVM 1.2. Why have it there and not use that platform?
  15. Dave, I'm not trying to replace the constraint solver, but rather to find the situations where the constraint is simple enough, and performance important enough, to favor the procedural approach. I'm still learning SV constraints, but I think your example of selecting 8 unique values between 10 and 20 can be expressed as:

class transaction;
  rand int arr[8];
  constraint c {
    foreach (arr[i]) arr[i] inside {[10:20]};
    unique {arr};
  }
endclass

A procedural implementation of this based on $urandom_range will indeed result in extra calls when the unique constraint is added, but only a factor of 1.6x on average in this particular case. The execution time in my test increased a bit more, 1.8x, due to the overhead of determining whether an extra call is needed. What is interesting is that the constraint solver in the only simulator I tested became 5x slower when the unique constraint was added. What's even more interesting is that the procedural approach was 37x faster without the unique constraint and 104x faster with the constraint included. Declarative constraints are very compact and elegant, but it seems there will be cases where the procedural approach is worth the extra effort.
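The procedural code measured above isn't shown; a plausible sketch of the rejection-sampling approach it describes (drawing with $urandom_range and retrying on duplicates) would be:

```systemverilog
// Procedural equivalent of "8 unique values inside [10:20]":
// draw with $urandom_range and retry on duplicates (rejection sampling).
function automatic void randomize_unique(ref int arr[8]);
  bit seen[int]; // associative array used as a set of drawn values
  foreach (arr[i]) begin
    int v;
    do v = $urandom_range(20, 10); while (seen.exists(v));
    seen[v] = 1;
    arr[i] = v;
  end
endfunction
```

With 11 possible values and 8 draws the retry overhead stays modest, which is consistent with the roughly 1.6x extra-call factor reported; for tighter ranges a shuffle of the candidate list would avoid retries entirely.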