Everything posted by tudor.timi

  1. Seems like a tool limitation. Create a variable of type event, assign the result of the function call to it, and use this variable in the @(...) statement. Note: a better place for this type of question is https://stackoverflow.com/. Use the system-verilog tag when posting a question.
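    A minimal sketch of the workaround, assuming a hypothetical function get_event() that stands in for the call the tool rejects inside @(...):

    ```systemverilog
    class some_component;
      // Hypothetical function returning an event handle
      function event get_event();
        // ...
      endfunction

      task wait_for_it();
        event e;
        e = get_event();  // store the function result in a variable first
        @(e);             // then wait on the variable instead of the call
      endtask
    endclass
    ```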
  2. Transaction Design Patterns

    Regarding point 4: If you want reusable abstractions, one of them is "register/memory accesses". Most hardware blocks use bus transactions to update/query special function registers or memory locations. This is also an abstraction that software/firmware engineers understand. You should look into that. There is so much variation in bus protocols that it's difficult to talk about a universal abstraction. It's also mostly pointless, as when you're talking about verifying bus level aspects, you're interested in the details of that bus protocol.
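    A sketch of such an abstraction (all names are illustrative, not from any standard library): a generic register/memory access that bus-specific agents can translate into their own protocol's transactions:

    ```systemverilog
    typedef enum { REG_READ, REG_WRITE } reg_access_e;

    // Generic register/memory access, independent of any particular bus
    // protocol; a bus agent maps this onto its own transaction type
    class reg_access;
      rand reg_access_e kind;
      rand bit [31:0]   address;
      rand bit [31:0]   data;
    endclass
    ```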
  3. Transaction Design Patterns

    The blog post you quoted w.r.t. working with types is correct regarding "The correct thing to do from an OOP perspective is to create a virtual function", but not regarding the further points. In that case, where a protocol uses heterogeneous transaction types (i.e. different kinds have different properties), you're better off using the visitor pattern. The transactions would have a virtual accept function.
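    A minimal sketch of the visitor pattern in SystemVerilog (class names are illustrative):

    ```systemverilog
    typedef class tx_visitor;  // forward declaration

    virtual class transaction;
      pure virtual function void accept(tx_visitor v);
    endclass

    class read_tx extends transaction;
      virtual function void accept(tx_visitor v);
        v.visit_read(this);  // double dispatch: no casting needed
      endfunction
    endclass

    class write_tx extends transaction;
      virtual function void accept(tx_visitor v);
        v.visit_write(this);
      endfunction
    endclass

    // Each operation (driving, monitoring, coverage) becomes a visitor,
    // so a mixed stream of transactions can be processed without $cast
    virtual class tx_visitor;
      pure virtual function void visit_read(read_tx t);
      pure virtual function void visit_write(write_tx t);
    endclass
    ```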
  4. Transaction Design Patterns

    Regarding point number 3, I don't see why the coupling between transaction, driver and monitor is a bad thing. If you treat transactions as mere data classes, the behavior based on this data will have to be implemented in a different class. Should a transaction know how to drive itself and how to monitor itself? Should it also know how to cover itself? What if you have to add another operation, like tracing itself in a waveform viewer? Do you add that to the transaction class too? This violates the single responsibility principle.
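    The alternative being argued for, a pure data class with each operation living in the component responsible for it, might look like this sketch (names are illustrative):

    ```systemverilog
    // Pure data class: no behavior, just the bundled information
    class bus_tx;
      rand bit [31:0] addr;
      rand bit [31:0] data;
    endclass

    // Driving lives in the driver...
    class bus_driver;
      task drive(bus_tx t);
        // wiggle pins according to t.addr / t.data
      endtask
    endclass

    // ...coverage lives in the coverage collector; adding a new
    // operation (e.g. waveform tracing) means a new class, not a
    // change to the transaction
    class bus_coverage;
      function void sample(bus_tx t);
        // sample functional coverage on t
      endfunction
    endclass
    ```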
  5. Transaction Design Patterns

    Regarding point number 1: Transactions aren't supposed to model traditional classes (not sure what the correct term for such classes is), which contain behavior (i.e. methods) and make use of polymorphism. Transactions are data classes, where you bundle information together to pass around, similar to plain old structs. Contrast the following examples:

    ```systemverilog
    // Bad design: using tag classes, where a "tag" field controls
    // the behavior of methods, is a code smell
    class service;
      direction_e dir;
      function void do_stuff();
        if (dir == READ)
          do_read();
        else
          do_write();
      endfunction
    endclass

    // Better design: have two classes
    interface class service;
      pure virtual function void do_stuff();
    endclass

    class read_service implements service;
      virtual function void do_stuff();
        // do read stuff
      endfunction
    endclass

    class write_service implements service;
      // ...
    endclass
    ```

    In the case above, it makes sense to create different classes for handling reads and writes, because you have a common abstraction (doing stuff) which comes in two different flavors. But how would you handle processing different kinds of transactions in a driver (for example) if you had different classes for read and for write? You'd need to cast, which is very frowned upon (at least in the software world).

    My point about transactions being data classes isn't strictly true w.r.t. how they are currently used in the industry. Transactions are also used for randomization, which is a polymorphic operation. Even here, though, assuming you want to generate a list of transactions where some of them are reads and some are writes, it is impossible to do this in a single step if you build up your class hierarchy such that you have a 'read_transaction' class and a 'write_transaction' class. This is because you can't choose an object's type (from the compiler's point of view) via randomization.

    Finally, why is 'direction' the field you choose to specialize on? Assuming your transaction class also had a field called 'sec_mode', which could be either 'SECURE' or 'NONSECURE', would you be inclined to say that you need a 'secure_transaction' and a 'non_secure_transaction' because they are different things? Because you also chose to specialize based on direction, would you have 'secure_read_transaction', 'secure_write_transaction', 'nonsecure_read_transaction' and 'nonsecure_write_transaction'? What would happen if you added another field called 'privilege_mode', which could be 'PRIVILEGED' or 'UNPRIVILEGED'?
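    A sketch of the single-class alternative, where direction is just a rand field, so a mixed list of reads and writes falls out of plain randomization (to be placed inside a module or program block):

    ```systemverilog
    typedef enum { READ, WRITE } direction_e;

    class transaction;
      rand direction_e dir;   // direction is data, not a subtype
      rand bit [31:0]  addr;
      rand bit [31:0]  data;
    endclass

    // Usage sketch: each randomize() call decides that item's direction,
    // so no up-front type selection is needed to get a mixed list
    transaction items[10];
    initial begin
      foreach (items[i]) begin
        items[i] = new();
        void'(items[i].randomize());
      end
    end
    ```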
  6. Transaction Design Patterns

    Regarding point number 2: Having both a write_data and a read_data field in the transaction is bad design. A single field called data would be sufficient; it would contain the data being transmitted in that transaction, whether it is a read or a write (i.e. regardless of which direction the data flows). The direction field tells you whether you're dealing with read data or with write data. Having both fields makes for a pretty difficult to use API if you want to do things irrespective of the direction:

    ```systemverilog
    if (trans.direction == READ)
      do_stuff(trans.read_data);
    else
      do_stuff(trans.write_data);
    ```

    You'll find your code repeating these conditional statements all over. Contrast this to the case where you only have data:

    ```systemverilog
    do_stuff(trans.data);
    ```
  7. The section on arrays doesn't mention any array pseudo-methods, similar to what SystemVerilog has. These are particularly useful in constraints for performing more involved operations that either aren't possible using a foreach or require a lot of support code. The way e implemented list methods inside constraints was even more powerful, so one could also look there for inspiration.
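    For reference, a sketch of what such methods enable in SystemVerilog constraints, here distributing a total budget across an array in a single constraint:

    ```systemverilog
    class packet;
      rand bit [7:0] lengths[4];

      // Array reduction method inside a constraint: the lengths must
      // add up to exactly 100. The int cast avoids 8-bit overflow in
      // the sum; expressing this with foreach alone would be awkward.
      constraint total_c {
        lengths.sum() with (int'(item)) == 100;
      }
    endclass
    ```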
  8. Here's example 57:

     ```pss
     component top {
       resource R {}
       pool[4] R R_pool;
       bind R_pool *;

       action A { lock R r; }
       action B {}
       action C {}
       action D { lock R r; }

       action my_test {
         activity {
           schedule {
             { do A; do B; }
             { do C; do D; }
           }
         }
       }
     }
     ```

     The description states that the following execution orders are valid: If there were only one resource R, then A, B in parallel with C, D would become illegal, due to the lock. Would an execution where A, B and C run in parallel, followed by D, also be legal? Theoretically this could happen, since there is no locking conflict between those actions. If it's not legal from a PSS semantics point of view, is this because the schedule statement keeps scheduling dependencies within the sets? If so, an explicit note of this fact would be helpful. Also, what would be a compact way to allow the behavior I described?
  9. You're using 'var' as a variable name, but var is an SV keyword. Try naming your variable something different, e.g. idx:

     ```systemverilog
     fork
       automatic int idx = i;  // capture the current loop index per iteration
       `uvm_do_on(eseq_inst[idx], p_sequencer.master_sequencer[idx])
     join_none
     ```
  10. Why artificially restrict the length of integral expressions at the language boundary to 64 bits? Why not also provide (separately, if preferred) an extra type/function qualifier on the PSS side and a corresponding header on the C/SV/foreign-language side to allow passing data of arbitrary length?
  11. Cryptographic applications generally use long number arithmetic. I get what you mean, that it's possible to split a key, for example, into an array of words of bus width, since the key has to get into a cryptographic accelerator somehow. In a constrained random TB it would be done via a series of bus transfers and in an SoC context via a series of STORE instructions.
  12. Sizing at generation time was what I had in mind. I didn't notice that this was possible.
  13. I'm not sure if it's possible to declare an array whose size is variable, but fixed at generation time. This is really useful when building generic code (e.g. 4 or 8 responses after a request, etc.).
  14. Array pseudo-methods missing

    Not just reduction methods, but also methods like 'has(...)' that take a condition. Even in SV, looking for an item that satisfies a condition takes multiple lines (declare a queue for the result, call 'find(...)' on the array, use the result inside an 'if' block).
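    The SV boilerplate being referred to, sketched as a helper that checks whether any element exceeds a threshold (function name is illustrative):

    ```systemverilog
    function bit has_item_above(int values[], int threshold);
      int matches[$];
      // Declare a queue, call a find method, test the result:
      // three steps for what e expresses as a single 'has(...)' call
      matches = values.find_first with (item > threshold);
      return matches.size() > 0;
    endfunction
    ```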
  15. The definition given for the term coverage describes an a posteriori measurement. Coverage, in the context of covspecs, is used in the document to describe test goals instead of a posteriori measurements. The definition should be updated to reflect this.
  16. I don't see the benefit in mentioning the detail namespace at all, since this is an implementation artifact. It's interesting for a user to know which functions an API consists of, but I don't see the benefit in knowing that scope inherits from detail::ScopeBase, for example.
  17. Having all of these C++ declarations in parallel to the DSL really ruins the flow. I'm not talking about the examples; there, I guess, it's not so much of an issue. I'm talking about things like the PSS_ENUM macro definition, which is provided inline instead of in an annex. Full declarations of C++ classes for PSS constructs are also provided, instead of stripped-down versions.
  18. SystemVerilog has a cool feature where it is possible to specify a sequence of enum literals:

     ```systemverilog
     typedef enum { SOME_LITERAL[10] } some_enum;
     ```

     This generates an enum with the literals SOME_LITERAL0, SOME_LITERAL1, ..., SOME_LITERAL9. It is pretty useful for building generic code that can deal with a configurable number of states, transfers, whatever. I noticed this feature is missing in PSS. It could easily be supported in the DSL, but it's probably more difficult (if at all possible) to implement in C++. Is this the reason it's not in there? Other sections in the document specifically state that some feature is not available in the DSL/C++ version, so adding it to the DSL and not to C++ would be consistent.
  19. Is it possible to nest if_then_else statements in PSS/C++? The C++ declaration of it is:

     ```cpp
     class if_then_else : public detail::SharedExpr {
     public:
       /// Declare if-then-else activity statement
       if_then_else(const detail::AlgebExpr&    cond,
                    const detail::ActivityStmt& true_expr,
                    const detail::ActivityStmt& false_expr);
     };
     ```

     Is an if_then_else compatible with a detail::ActivityStmt?
  20. I noticed that this document also promotes antiquated coding practices, in this case the misguided use of Hungarian notation to suffix identifiers with their types. In the age of IDEs, I don't see the benefit of this. In the even more specific case of examples that are a couple of lines long, any benefit one could come up with is even less relevant. It's also used inconsistently throughout the document, which is even worse: for example, the _s suffix is used for both structs and states. There is one example where an identifier is called power_state_s, which is simply redundant and reads strangely.
  21. The use of the term struct for both the PSS construct and in the C++ examples for declaring classes leads to confusion. Let users use struct in their code all they want, but at least make the examples clearer by using class and public.
  22. When I first read example 22, I was confused about the temporal relation between setup and traffic. The semantics are explained later, but a description here would be beneficial, along with a link to the relevant sections. One thing I wondered about is whether traffic and setup could ever be scheduled at the same time, which would lead to the configuration changing during transmission (this would be bad). I think further sections clarify that this isn't possible (one would have to put them in a parallel block for that).
  23. The current implementation of labeling is unintuitive, due to the fact that it allows parts of the labeled hierarchy to go unlabeled. Path expressions like foo.bar.goo could have many more scopes in between, which makes it difficult to reason about their relation to some_scope.some_other_scope.yet_another_scope. Why not restrict labeling to parts of the hierarchy that are already labeled?
  24. It's mentioned that constraints and randomization mimic SystemVerilog. Randomization is used as a term everywhere, yet the hooks are called pre_solve()/post_solve(). Why not just call them pre_randomize()/post_randomize()? The main audience for this standard will be users with SystemVerilog experience.
  25. Here's example 102:

     ```pss
     component top {
       buffer mem_obj {
         int val;
         constraint val % 2 == 0; // val must be even
       }

       action write1 {
         output mem_obj out_obj;
         constraint out_obj.val inside [1..5];
       }

       action write2 {
         output mem_obj out_obj;
         constraint out_obj.val inside [6..10];
       }

       action read {
         input mem_obj in_obj;
         constraint in_obj.val inside [8..12];
       }

       action test {
         activity {
           do write1;
           do read;
         }
       }
     }
     ```

     The description states that an action such as write2 can be introduced to satisfy read's input constraint. Do tools start to introduce actions just to be able to satisfy preconditions? Do I just specify "do this" and the tool figures out how to get there? If so, this is a really cool feature, which isn't immediately obvious from the PSS document or from the DVCon Europe tutorial I attended. I would insist on it and give it a prominent place in the specification. The Breker motto of "beginning with the end in mind" comes to mind here. The follow-up question is: can a tool generate all possible ways to reach a certain outcome?