tudor.timi

About tudor.timi


Contact Methods

  • Website URL
    http://blog.verificationgentleman.com

Profile Information

  • Gender
    Male
  • Location
    Germany

  1. Seems like a tool limitation. Create a variable of type event, assign the result of the function call to it and use this variable in the @(...) statement. Note: a better place for this kind of question is https://stackoverflow.com/. Use the system-verilog tag when posting a question.
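     A minimal sketch of the workaround, assuming a hypothetical object with a get_event() function:

         task wait_for_trigger(some_object obj);
           event e;
           e = obj.get_event();  // hypothetical function returning an event
           @(e);                 // wait via the variable instead of @(obj.get_event())
         endtask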
  2. Regarding point 4: If you want reusable abstractions, one of them is "register/memory accesses". Most hardware blocks use bus transactions to update/query special function registers or memory locations, and this is also an abstraction that software/firmware engineers understand. You should look into that. There is so much variation in bus protocols that it's difficult to talk about a universal abstraction; it's also mostly pointless, because when you're verifying bus-level aspects, you're interested in the details of that particular bus protocol.
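     A rough sketch of what such a bus-agnostic register access abstraction could look like (names and widths are assumptions, not from any standard library):

         interface class reg_accessor;
           pure virtual task write_reg(bit [31:0] addr, bit [31:0] data);
           pure virtual task read_reg(bit [31:0] addr, output bit [31:0] data);
         endclass

         // Each bus agent supplies its own implementation, while sequences
         // and firmware-like test code are written only against reg_accessor.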
  3. The blog post you quoted w.r.t. working with types is correct regarding "The correct thing to do from an OOP perspective is to create a virtual function", but not regarding the further points. In that case, where a protocol uses heterogeneous transaction types (i.e. different kinds have different properties), you're better off using the visitor pattern. The transactions would have a virtual accept function.
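     A minimal sketch of the visitor pattern in SystemVerilog (class and method names are illustrative):

         typedef class read_transaction;
         typedef class write_transaction;

         // One visit method per concrete transaction kind
         class transaction_visitor;
           virtual function void visit_read(read_transaction t); endfunction
           virtual function void visit_write(write_transaction t); endfunction
         endclass

         virtual class transaction;
           pure virtual function void accept(transaction_visitor v);
         endclass

         class read_transaction extends transaction;
           virtual function void accept(transaction_visitor v);
             v.visit_read(this);  // double dispatch: the concrete type picks the visit method
           endfunction
         endclass

         class write_transaction extends transaction;
           virtual function void accept(transaction_visitor v);
             v.visit_write(this);
           endfunction
         endclass

     A driver or coverage collector would extend transaction_visitor, override the visit_* methods and call trans.accept(this), without any casting.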
  4. Regarding point number 3: I don't see why the coupling between transaction, driver and monitor is a bad thing. If you treat transactions as mere data classes, the behavior based on this data has to be implemented in a different class. Should a transaction know how to drive itself and how to monitor itself? Should it also know how to cover itself? What if you have to add another operation, like tracing itself in a waveform viewer? Do you add that to the transaction class too? That would violate the single responsibility principle.
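     A sketch of the split I have in mind (class names are illustrative):

         typedef enum { READ, WRITE } direction_e;

         // The transaction stays a plain data class...
         class transaction;
           rand direction_e dir;
           rand bit [31:0] addr;
           rand bit [31:0] data;
         endclass

         // ...and each operation on that data lives in its own class.
         class driver;
           task drive(transaction t);
             // wiggle pins according to t
           endtask
         endclass

         class coverage_collector;
           function void sample(transaction t);
             // sample covergroups using t
           endfunction
         endclass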
  5. Regarding point number 1: Transactions aren't supposed to model traditional classes (not sure what the correct term for such classes is), which contain behavior (i.e. methods) and make use of polymorphism. Transactions are data classes, where you bundle information together to pass around, similar to plain old structs. Contrast the following examples:

         // Bad design: using tag classes, where a "tag" field controls
         // the behavior of methods, is a code smell
         class service;
           direction_e dir;

           function void do_stuff();
             if (dir == READ)
               do_read();
             else
               do_write();
           endfunction
         endclass

         // Better design: have two classes
         interface class service;
           pure virtual function void do_stuff();
         endclass

         class read_service implements service;
           virtual function void do_stuff();
             // do read stuff
           endfunction
         endclass

         class write_service implements service;
           // ...
         endclass

     In the case above, it makes sense to create different classes for handling reads and writes, because you have a common abstraction (doing stuff) which comes in two different flavors. How would you handle processing different kinds of transactions in a driver (for example) if you had different classes for read and for write? You'd need to cast, which is very frowned upon (at least in the software world).

     My point about transactions being data classes isn't strictly true w.r.t. how they are currently used in the industry. Transactions are also used for randomization, which is a polymorphic operation. Even here, though, assuming you want to generate a list of transactions where some of them are reads and some are writes, it is impossible to do this in a single step if you build your class hierarchy such that you have a 'read_transaction' class and a 'write_transaction' class. This is because you can't choose an object's type (from the point of view of the compiler) via randomization; see the sketch below.

     Finally, why is 'direction' the field you choose to specialize on? Assuming your transaction class also had a field called 'sec_mode', which could be either 'SECURE' or 'NONSECURE', would you be inclined to say that you need a 'secure_transaction' and a 'non_secure_transaction' because they are different things? Since you also chose to specialize based on direction, would you have 'secure_read_transaction', 'secure_write_transaction', 'nonsecure_read_transaction' and 'nonsecure_write_transaction'? What would happen if you added another field called 'privilege_mode', which could be 'PRIVILEGED' or 'UNPRIVILEGED'?
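     A minimal sketch of the single-step generation that a flat transaction class allows (names are illustrative):

         typedef enum { READ, WRITE } direction_e;

         class transaction;
           rand direction_e dir;
           rand bit [31:0] addr;
           rand bit [31:0] data;
         endclass

         module tb;
           transaction items[10];
           initial
             foreach (items[i]) begin
               items[i] = new();
               // Each item randomly comes out as a READ or a WRITE in one
               // step; with separate read/write subclasses the type would
               // have to be chosen procedurally before construction.
               void'(items[i].randomize());
             end
         endmodule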
  6. Regarding point number 2: Having both a write_data and a read_data field in the transaction is bad design. A single field called data would be sufficient; it would contain the data being transmitted in that transaction, whether it is a read or a write (i.e. regardless of which direction that data flows). The direction field tells you whether you're dealing with read data or with write data. Having both fields makes for a pretty difficult-to-use API if you want to do things irrespective of the direction:

         if (trans.direction == READ)
           do_stuff(trans.read_data);
         else
           do_stuff(trans.write_data);

     You'll find your code repeating these conditional statements all over. Contrast this to the case where you only have data:

         do_stuff(trans.data);
  7. You're using 'var' as a variable name, but this is an SV keyword. Try naming your variable something different:

         fork
           automatic int idx = i;  // renamed from 'var', which is reserved
           `uvm_do_on(eseq_inst[idx], p_sequencer.master_sequencer[idx])
         join_none
  8. Cryptographic applications generally use long-number arithmetic. I get what you mean: it's possible to split a key, for example, into an array of words of bus width, since the key has to get into a cryptographic accelerator somehow. In a constrained-random TB this would be done via a series of bus transfers, and in an SoC context via a series of STORE instructions.
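     A sketch of that splitting, assuming a 256-bit key and a 32-bit bus:

         bit [255:0] key;
         bit [31:0]  words[8];

         foreach (words[i])
           words[i] = key[i*32 +: 32];  // slice the key into bus-width chunks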
  9. Sizing at generation time was what I had in mind. I didn't notice that this was possible.
  10. Not just reduction methods, but also methods like 'has(...)' that take a condition. Even in SV, looking for an item that satisfies a condition takes multiple lines (declare a queue for the result, call 'find(...)' on the array, use the result inside an 'if' block).
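      For illustration, this is what that looks like in SV (the array and the condition are made up):

          int values[10];
          int matches[$];

          matches = values.find(x) with (x > 42);
          if (matches.size() > 0) begin
            // use matches[0] ...
          end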
  11. It's easy to see from the examples that the C++ snippets are much longer than the DSL examples. I don't really buy the argument that some people are more accustomed to C++ and would find it easier to work with.

      When the announcement was made that a DSL was going to be developed, I wasn't a big fan of the idea: "yet another verification language". I was put off by the time it would take to develop it, and now that the deed is done, I fear it will take more time to move the standard forward with both languages.

      I would buy into it more if it were possible to specify stuff in C++ and the standard provided the required library to link into my PSS model, so that everything could be compiled with GCC to get the model in executable form. Back-end tools could take this model and generate verification infrastructure from it. They could also provide a front-end to directly read C++, so that it's easier to set things up. The idea behind using GCC is that it would be easy to build extensions to tools, since the entire model would be traversable in user code. I could also build my own little back-end tools without having to build an entire front-end (lexing, parsing, compiling, etc.) if I wanted to experiment with something new. There were some blog posts out there (can't remember where exactly) that mentioned this won't be the case: it won't be possible to just take a C++ PSS model and put it through GCC, which is kind of a missed opportunity.

      The PSS library doesn't have to be part of the standard. The current specifications in the standard would suffice, and vendors could provide their own libraries that are optimized for traversal speed or whatever else. I would welcome an open-source implementation, though, similar to how UVM has the Accellera SV-BCL. Without this basic ecosystem, I don't see much sense in supporting both languages.
  12. The chapter on HSI is pretty empty. There were questions as to whether it's even going to make it into the first release of the PSS. HSI seems like a really cool idea and its benefits could extend further than just verification: it could, for example, also be used for firmware development. PSS is a verification-oriented standard and will have that reputation, so I assume it's going to be difficult to market the HSI sub-set to firmware folks. Having HSI as its own standard would allow it to evolve to better serve both needs (and many more, if identified).
  13. The second class definition in the example is unclear:

          import class ext : base {
            void ext_method();
          }

      Is base a class that is imported? The syntax doesn't suggest this, since the import comes before class ext. Why not force the user to import base separately and inherit from it in a separate declaration?
  14. Why artificially restrict the length of integral expressions at the language boundary to 64 bits? Why not also provide (separately, if preferred) an extra type/function qualifier on the PSS side and the corresponding header on the C/SV/foreign-language side to allow passing of arbitrary data lengths?