sas73 · Members · Content Count: 34 · Days Won: 2

Everything posted by sas73

  1. @Taichi Ishitani That would certainly narrow the scope a bit, but I was hoping for something that would pinpoint the wait statement. Is there an instrumentation mechanism in place? It would require me to add code manually, but it would also be useful in other scenarios, for example to locate a loop that is stuck.
  2. 2 days! That's a fast response. Exactly! If you're not open in the design/pre-release phase you're likely to miss use cases, and if the members have committed themselves to solutions and switched their focus to other tasks, I imagine there will be an unwillingness to go back and redo things even if important new insights have been revealed. I think most users would like a code base they can build upon, not one that needs adaptations to make it work. Being fully transparent about the code in the making will reduce the risk of such adaptations. What I'm suggesting is free and efficient access to the collective intelligence of the entire community at the point in the development cycle where it makes the most difference. I'm not suggesting a shift in the rights to make the final decisions; those are exclusive to the paying members. What's preventing this from happening within Accellera?
  3. When a testbench hangs such that the UVM timeout is triggered, I get a message like this:

     reporter [PH_TIMEOUT] Explicit timeout of 10 hit, indicating a probable testbench issue

     Are there ways in which UVM can help me identify the cause of this timeout? For example, special wait statements that would notify me if they are still blocking when the timeout hits.
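In the absence of such a feature, one manual instrumentation approach can be sketched. This is only an illustration, not a UVM API: the task name `watched_wait` and the `WAIT_TRACE` message id are invented names.

```systemverilog
// Sketch: wrap each suspicious blocking wait in a helper that logs entry
// and exit. When PH_TIMEOUT fires, any "waiting on" message without a
// matching "resumed" message points at a still-blocked wait.
task automatic watched_wait(ref bit cond, input string label);
  `uvm_info("WAIT_TRACE", {"waiting on: ", label}, UVM_DEBUG)
  wait (cond);
  `uvm_info("WAIT_TRACE", {"resumed: ", label}, UVM_DEBUG)
endtask
```

The same bracketing idea also helps with the stuck-loop scenario mentioned in post 1: a loop that logs on entry but never logs its exit is a suspect.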
  4. Unfortunately I'm not with a member company. I was hoping that I'd have read permissions regardless of my current affiliation. As a user I'd like to see the connection between discussions in the official forum, the issues reported to the issue management system, and the code being developed in response to them. I'd also like the ability to immediately test that code and give feedback as code comments or a pull request, much like on GitHub, GitLab and other platforms. It seems to me that this would be a more efficient way to give and get user feedback.
  5. Are there any plans to continue using the GitHub repository for such releases, or has it been discontinued?
  6. The GitHub repositories hold the actively developed code for Accellera's reference implementation (SourceForge was made read-only when GitHub was spun up). That said, GitHub stores the active development of the reference implementation, not the standard itself. The class reference (i.e. the "Standard"), as well as the Accellera reference implementation, are officially published on accellera.org:
     The UVM 1.2 Standard: http://accellera.org/images/downloads/standards/uvm/UVM_Class_Reference_Manual_1.2.pdf
     The UVM 1.2 Reference Implementation: http://accellera.org/images/downloads/standards/uvm/uvm-1.2.tar.gz
  7. Reading a bit further, I found the concept of API sequences that can be provided by the agent developer, for example a write sequence that hides the details I gave in the example above. The write sequence can then be used in a higher-layer sequence (Mentor calls this a worker sequence). The write sequence also provides a write method to start itself, and the worker sequence calls that method with a specific address and data. Note that this approach completely overrides randomization of sequence items and moves that responsibility to the sequences.
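A minimal sketch of such an API sequence. All names here (`bus_write_seq`, `bus_transaction`, the `write` method) are illustrative, not taken from any specific library:

```systemverilog
// API sequence: hides the sequence item details behind a write() method
// that a higher-layer "worker" sequence can call directly.
class bus_write_seq extends uvm_sequence #(bus_transaction);
  `uvm_object_utils(bus_write_seq)

  bit [31:0] addr;
  bit [31:0] data;

  function new(string name = "bus_write_seq");
    super.new(name);
  endfunction

  // Convenience entry point: set the fields and start this sequence.
  task write(uvm_sequencer_base sqr, bit [31:0] addr, bit [31:0] data);
    this.addr = addr;
    this.data = data;
    this.start(sqr);
  endtask

  virtual task body();
    bus_transaction req;
    req = bus_transaction::type_id::create("req");
    start_item(req);
    // Fixed values: randomization responsibility has moved to the sequence.
    req.operation   = bus_transaction::WRITE;
    req.addr        = addr;
    req.write_data  = data;
    finish_item(req);
  endtask
endclass
```

A worker sequence would then simply call `wr_seq.write(sqr, 32'h1000, 32'hDEAD_BEEF)` without ever touching a sequence item.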
  8. As a sequence writer, I should ideally not be exposed to the implementation details of the driver that have been discussed in this thread, for example the structure of the sequence item(s) or whether get/put, get_next_item/item_done or something else is used. I would like the driver to provide an API excluding all of that, something similar to write(<constraint on address and data>) that could be called from the sequence. Does SV allow you to pass a constraint as an argument, or is there another way of doing that?
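SystemVerilog does not allow passing a constraint block as an argument, but inline constraints via `randomize() with` come close to the write(<constraint>) idea: the caller supplies the constraint at the call site. A sketch, reusing the field names from the earlier bus_transaction example:

```systemverilog
// Inline constraint at the call site: the sequence, not the item class,
// decides the legal address/data ranges for this particular request.
bus_transaction req;
req = bus_transaction::type_id::create("req");
start_item(req);
if (!req.randomize() with { addr inside {[32'h1000:32'h1FFF]};
                            write_data != 0; })
  `uvm_error("RANDFAIL", "inline constraint randomization failed")
finish_item(req);
```

An API sequence could accept plain address/data arguments and apply them in its own `randomize() with` call, keeping the sequence writer away from the item structure entirely.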
  9. Note that it's not only transactions you can reuse. You can also reuse the handling of transactions (the visit method in the visitor) in some cases, for example when you have a transaction for a time delay. You can "inherit" several such handlers using mixins. Thanks for the mixin post, @tudor.timi!
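A minimal sketch of the mixin idea: a class parameterized on its base type, so that independent handlers can be stacked. All names here are illustrative:

```systemverilog
// A mixin is a class parameterized on the type it extends: each handler
// can be layered on top of any visitor base class.
class delay_handling #(type BASE = uvm_object) extends BASE;
  // Adds handling of a time-delay transaction to whatever BASE provides.
  virtual function void visit_delay(int unsigned cycles);
    `uvm_info("VISIT", $sformatf("delay of %0d cycles", cycles), UVM_DEBUG)
  endfunction
endclass

// "Inherit" several handlers by stacking mixins (reset_handling and
// base_visitor are likewise hypothetical):
// class full_visitor extends delay_handling #(reset_handling #(base_visitor));
```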
  10. That's true. If I want to fully randomize, I need to add some extra code in my sequence. It seems that this also causes a performance hit, making that solution slower despite less randomization. However, if I want to do a write-delay-read sequence with random address and data, I can express that more explicitly instead of constraining variables to a fixed value. In this case the solution with separate transactions becomes faster. In these tests I used randomize() everywhere and the differences are in the percentage range. I'm more concerned about the difference between randomize() and $urandom, which can be a factor of 100x.
  11. When I failed to find examples of transaction reuse, I thought that maybe people put their reuse effort elsewhere, for example by moving the pin-wiggling functionality (which comes in different flavors) into the transaction class so that the driver becomes more generic. I agree that transactions are data classes, and I do want to reuse them, so moving the pin wiggling into these classes is not something I want. The visitor pattern is also a way to create a more generic driver without destroying the potential for transaction reuse. The visitor would remove the need for a $cast.
  12. Being able to do simple reads/writes is indeed a reusable abstraction. I've seen the TLM generic payload, but that's also the only attempt at a reusable transaction I've seen. Are there others? To verify a bus interface you need to be concerned about the details, but when the focus of your testbench is to verify the functionality that the bus interface is configuring, you get far with simple reads/writes. A driver could support reusable simple read/write, reset and delay transactions alongside the specialized transactions used when fully verifying such an interface. I like to think of the driver as a composition of supported services: some reused, some new.
  13. Thanks for your answers @tudor.timi Looking at the examples out there, it seems like both the single and the double data field approaches are popular; what people prefer depends on their main concerns. You're concerned with the number of if statements, but Mentor, who take the double data field approach (https://verificationacademy.com/cookbook/sequences/items), express other concerns. I'm also concerned about randomization performance (http://forums.accellera.org/topic/6275-constrained-random-performance), but splitting into two data fields doesn't improve performance: you still have unnecessary randomization of write_data for read requests. All they've done is avoid making it worse by also randomizing read_data. The corruption risk is related to the shared-memory approach of the get_next_item/item_done pattern. They avoid that risk by not sharing the data field, but I feel that not sharing request and response objects at all, and using the get/put pattern instead, would be a better approach. UVM supports it, but maybe there is a good reason why we shouldn't use it?
      Since one of my concerns is performance, I don't like having many randomized fields that aren't applicable to all commands. The read/write example may not represent a "too many" scenario; it's just a common example where such a problem exists, and it gets worse as you add more commands. The address and data fields would, for example, be completely irrelevant for a reset command. A reset command is also an example of a transaction that would be very reusable if available in isolation. A randomized sec_mode is a property relevant to both read and write, so that would not be a reason for splitting. A delay field is also relevant to both reads and writes, but it's reusable, so I can see a reason to have it in a separate transaction anyway.
      Summary: I'm not looking for the "best" solution to the read/write example. People have different concerns and I accept that. What I wanted to find out was whether people are concerned about performance and reuse in such a way that they would consider alternatives to the all-in-one sequence item. If I understand you correctly, you wouldn't use the all-in-one pattern for heterogeneous protocols (a "too many" scenario)?
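For reference, a sketch of the get/put pattern in a driver. The class names (`bus_driver`, `bus_request`, `bus_response`) are illustrative; the point is that get() consumes the request and put() returns a separate response object, so the sequence's request is never mutated in place:

```systemverilog
// get/put driver: separate request and response objects instead of the
// shared-memory get_next_item/item_done pattern.
class bus_driver extends uvm_driver #(bus_request, bus_response);
  `uvm_component_utils(bus_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  virtual task run_phase(uvm_phase phase);
    bus_request  req;
    bus_response rsp;
    forever begin
      seq_item_port.get(req);        // consumes the item; no item_done needed
      rsp = bus_response::type_id::create("rsp");
      rsp.set_id_info(req);          // route the response back to the sequence
      // drive the pins here and fill in rsp (placeholder below)
      rsp.read_data = '0;
      seq_item_port.put(rsp);        // hand back a *separate* response object
    end
  endtask
endclass
```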
  14. The list of open source UVM repositories can also provide an answer to question 4. I couldn't find any well supported (many-star) library of sequence items. There are many verification components for various bus protocols, but they all have a single sequence item tailored specifically for that bus. This leads me back to question 3...
  15. The blog post also touches on my third question, although it doesn't provide an answer.
  16. Looking at open source UVM repositories, I think it's safe to say that keeping all transaction types within the same sequence item class is the most common design pattern (the only pattern I found when looking at many of these repos). Luckily, I also found an interesting blog post about type handles that addresses what I'm looking for. So there is a way of doing what I'm looking for, although standard practice seems to be something else. I guess that keeping everything in one packet is more convenient. I think this answers my first question.
  17. I found an example from UVM Primer that has dedicated transactions as well as monitors for input and output so I guess that answers my second question.
  18. I'm not looking for complete answers. Any clues that you can provide are appreciated.
  19. While trying to learn UVM, and more specifically the use of uvm_sequence_item, I keep running into the same example, which looks something like this:

      class bus_transaction extends uvm_sequence_item;
        rand enum {READ, WRITE} operation;
        rand bit [31:0] addr;
        rand bit [31:0] write_data;
        bit [31:0] read_data;
        ...

      This design pattern raises a number of questions:
      1. To me, read transactions and write transactions are two separate "things" that deserve their own classes (single responsibility principle). The more operations we add to a transaction class, the more time we will spend randomizing variables not needed by all operations (write_data for the READ operation in this case). Tutorials often point out that read_data shouldn't be randomized for that efficiency reason. Read and write are similar transactions, so it may not be a big issue, but packet protocols can have many vastly different packet types. What is the rationale for this design pattern?
      2. One can also argue that the read response is a separate transaction. I found an explanation saying that input and output data should be in the same class because it enables reuse of the same monitor for both the input and the output of a DUT. This didn't convince me. Agents are configured as active or passive; why can't that configuration be used by the monitor to determine what type of transactions to publish?
      3. If all interface operations are put in a single class, it's likely to become specific to that interface/driver and there will be no reuse. If there is no reuse, the separation of the abstract transaction from the specific interface pin wiggling seems to lose its purpose. Why not let the transaction class own the pin-wiggling knowledge and let the driver be a simple, reusable executor of that knowledge?
      4. If the idea of reusable transactions holds, I would expect to find reusable transaction class libraries providing simple read, write, reset, etc. types of transactions, and that many providers of verification components would support them in addition to their own more specialized transactions. All I found is TLM 2.0, which has read and write (in a single class, just like the example above). Are there any other well supported libraries?
  20. @David Black I did stumble upon an accellera/uvm repository on GitHub. It seems to be what I was looking for, although it has been dead since UVM 1.2. Why have it there and not use that platform?
  21. Dave, I'm not trying to replace the constraint solver, but rather to find the situations where the constraint is simple enough and performance important enough to favor the procedural approach. I'm still learning SV constraints, but I think your example of selecting 8 unique values between 10 and 20 can be expressed as:

      class transaction;
        rand int arr[8];
        constraint c {
          foreach (arr[i]) arr[i] inside {[10:20]};
          unique {arr};
        }
      endclass

      A procedural implementation of this based on $urandom_range will indeed result in extra calls when the unique constraint is added, but only a factor of 1.6x on average in this particular case. The execution time in my test increased a bit more, 1.8x, due to the overhead of determining whether an extra call is needed. What is interesting is that the constraint solver in the one simulator I tested became 5x slower when the unique constraint was added. What's even more interesting is that the procedural approach was 37x faster without the unique constraint and 104x faster with it. Declarative constraints are very compact and elegant, but it seems there will be cases where the procedural approach is worth the extra effort.
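A sketch of such a procedural implementation (an illustration, not necessarily the exact code behind the measurements above; the retry loop is what causes the extra calls when uniqueness is enforced):

```systemverilog
// Procedural alternative to the declarative unique constraint: draw 8
// unique values in [10:20] with $urandom_range, retrying on duplicates.
function automatic void pick_unique(output int arr[8]);
  bit used [int];  // associative array marking values already drawn
  int v;
  foreach (arr[i]) begin
    do begin
      v = $urandom_range(20, 10);  // extra calls only when a duplicate is hit
    end while (used.exists(v));
    used[v] = 1;
    arr[i] = v;
  end
endfunction
```

Note that 8 draws from an 11-value range leave duplicates fairly likely toward the end, which matches the measured 1.6x average call overhead.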
  22. Thanks for your reply, David. The two approaches behave differently, but is there a reason why one is better than the other, provided I'm aware of these differences and know how and when code changes affect my ability to repeat a test? I tried several different simulators and the performance hit was between 15x and 25x, so this seems to be a problem inherent to all constraint solvers. If that is the case, wouldn't I be better off using the randomization system calls when possible?
  23. While trying out UVM/SystemVerilog with a very simple transaction/constraint:

      class transaction extends uvm_sequence_item;
        rand int value;
        constraint c { value inside {[10:20]}; }
      endclass

      I found that there is a significant performance difference if I randomize a transaction like this:

      void'(t.randomize());

      or if I do:

      t.value = $urandom_range(20, 10);

      The first approach is 15x to 25x slower depending on the simulator! In my case I have a dummy testbench consisting only of a producer sending a large number of transactions to a consumer. Will this also be a concern when scaling up to real-life designs? Are there situations where it's recommended to use $urandom for simple constraints? If not, why?
  24. Thanks @dave_59. That would be the easiest way but in my case I want to verify that the message fields are what I expect, not only that a report has been produced. I ended up pushing the report message to a queue and then pop it in my test to verify the fields.
  25. Thanks @kirloy369! Looks more UVM-idiomatic. That doesn't necessarily make it a better solution than what has been proposed before, but I wanted to start by trying out how it was intended to be done.