sas73

Transaction Design Patterns


While trying to learn UVM, and more specifically the use of uvm_sequence_item, I keep running into the same example, which looks something like this:
 
class bus_transaction extends uvm_sequence_item;
  rand enum {READ, WRITE} operation;
  rand bit [31:0] addr;
  rand bit [31:0] write_data;

  bit [31:0] read_data;
  ...
 
This design pattern raises a number of questions:
 
1. To me, read transactions and write transactions are two separate "things" that deserve their own classes (single responsibility principle). The more operations we add to a transaction class, the more time we spend randomizing variables not needed by all operations (write_data for READ operations in this case). Tutorials often point out that read_data shouldn't be randomized for exactly that efficiency reason. Read and write are similar transactions, so it may not be a big issue here, but packet protocols can have many vastly different packet types. What is the rationale for this design pattern?
2. One can also argue that the read response is a separate transaction. I found an explanation saying that input and output data should be in the same class because it enables reuse of the same monitor for both the input and the output of a DUT. This didn't convince me. Agents are configured as active or passive. Why can't the monitor use that configuration to determine what type of transactions to publish?
3. If all interface operations are put in a single class, it's likely to become specific to that interface/driver and there will be no reuse. If there is no reuse, the separation of the abstract transaction from the interface-specific pin wiggling seems to lose its purpose. Why not let the transaction class own the pin-wiggling knowledge and let the driver be a simple, reusable executor of that knowledge?
4. If the idea of reusable transactions holds, I would expect to find reusable transaction class libraries providing simple read, write, reset, etc. types of transactions, and that many providers of verification components would support them in addition to their own more specialized transactions. All I found is TLM 2.0, which has read and write (in a single class, just like the example above). Are there any other well-supported libraries?


Looking at open source UVM repositories, I think it's safe to say that keeping all transaction types within the same sequence item class is the most common design pattern (it was the only pattern I found when looking through many of these repos). Luckily, I also found an interesting blog post about type handles that addresses what I'm looking for.

Quote

As an example, consider a protocol with different packet types.  Each type of packet has different number of bytes, and the way each is mapped to the bus is different.  We want to build a driver that can easily deal with the differences in packet types.  The logical thing is to create a set of sequence items to represent each of the packet types.

There is a way of doing what I'm looking for, although standard practice seems to be something else. I guess that keeping everything in one packet is more convenient. I think this answers my first question.
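
A minimal sketch of what the blog post suggests, i.e. one sequence item subclass per packet type. The class and field names here are my own assumptions, not taken from the post:

class base_packet extends uvm_sequence_item;
  `uvm_object_utils(base_packet)

  function new(string name = "base_packet");
    super.new(name);
  endfunction
endclass

// A short configuration packet with its own fields and constraints
class config_packet extends base_packet;
  `uvm_object_utils(config_packet)

  rand bit [7:0]  reg_addr;
  rand bit [15:0] reg_value;

  function new(string name = "config_packet");
    super.new(name);
  endfunction
endclass

// A variable-length data packet; nothing shared with config_packet
// needs to be randomized when generating one of these
class data_packet extends base_packet;
  `uvm_object_utils(data_packet)

  rand bit [7:0] payload[];
  constraint c_len { payload.size() inside {[1:64]}; }

  function new(string name = "data_packet");
    super.new(name);
  endfunction
endclass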


The blog post also touches on my third question, although it doesn't provide an answer:

Quote

The correct thing to do from an OOP perspective is to create a virtual function (or set of virtual functions) whose role is to map the data to the bus.  Each subclass will have a different implementation that knows how the packet is organized and how it should be mapped to the bus.  The problem is that this may require knowledge of the bus or the bus protocol that you do not want to code into the sequence item.  Only the transactor has knowledge of the bus protocol details.

 


The list of open source UVM repositories can also provide an answer to question 4. I couldn't find any well-supported (i.e. with many stars) library of sequence items. There are many verification components for various bus protocols, but they all have a single sequence item tailored specifically for that bus. This leads me back to question 3...


Regarding point number 2:

Having both a write_data and a read_data field in the transaction is bad design. A single field called data would be sufficient; it would contain the data being transmitted in that transaction, whether it is a read or a write (i.e. regardless of which direction the data flows). The direction field tells you whether you're dealing with read data or with write data. Having both fields makes for a pretty difficult-to-use API if you want to do things irrespective of the direction:

if (trans.direction == READ)
  do_stuff(trans.read_data);
else
  do_stuff(trans.write_data);

You'll find your code repeating these conditional statements all over.

Contrast this to the case where you only have data:

do_stuff(trans.data);
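
For reference, a sketch of the single-data-field transaction being argued for (field and type names assumed):

typedef enum {READ, WRITE} direction_e;

class bus_transaction extends uvm_sequence_item;
  rand direction_e direction;
  rand bit [31:0]  addr;
  // one field serves both purposes: request payload for writes,
  // response payload for reads
  rand bit [31:0]  data;
  ...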

 


Regarding point number 1:

Transactions aren't supposed to model traditional classes (I'm not sure what the correct term for such classes is), which contain behavior (i.e. methods) and make use of polymorphism. Transactions are data classes, where you bundle information together to pass around, similar to plain old structs.

Contrast the following examples:

// Bad design

// Using tag classes, where a "tag" field controls the behavior of methods is a code smell
class service;

  direction_e dir;

  function void do_stuff();
    if (dir == READ)
      do_read();
    else
      do_write();
  endfunction

endclass


// Better design, have two classes

interface class service;
  pure virtual function void do_stuff();
endclass

class read_service implements service;
  
  virtual function void do_stuff();
    // do read stuff
  endfunction

endclass

class write_service implements service;
  // ...
endclass

In the case above, it makes sense to create different classes for handling reads and writes, because you have a common abstraction (doing stuff), which comes in two different flavors.

How would you handle processing different kinds of transactions in a driver (for example) if you had different classes for read and for write? You'd need to cast, which is heavily frowned upon (at least in the software world).
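
For illustration, the casting this refers to would look something like the following in the driver (class and method names are assumed):

task drive_transfer(uvm_sequence_item item);
  read_transaction  rd;
  write_transaction wr;

  // type-test each candidate subclass in turn -- this is the
  // pattern that gets frowned upon
  if ($cast(rd, item))
    drive_read(rd);
  else if ($cast(wr, item))
    drive_write(wr);
  else
    `uvm_error("DRV", "unknown transaction type")
endtask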

My point about transactions being data classes isn't strictly true w.r.t. how they are currently used in the industry. Transactions are also used for randomization, which is a polymorphic operation. Even here, though, assuming you want to generate a list of transactions where some of them are reads and some of them are writes, it will be impossible to do this in a single step if you build your class hierarchy in such a way that you have a 'read_transaction' class and a 'write_transaction' class. This is because you can't choose an object's type (from the compiler's point of view, that is) via randomization.
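
Sketch of the two-step generation this forces, assuming a common base_transaction class:

base_transaction items[10];

foreach (items[i]) begin
  // step 1: pick the type procedurally; randomize() cannot do this,
  // since an object's type is fixed at construction
  if ($urandom_range(1))
    items[i] = read_transaction::type_id::create($sformatf("item_%0d", i));
  else
    items[i] = write_transaction::type_id::create($sformatf("item_%0d", i));

  // step 2: randomize the fields of the chosen object
  if (!items[i].randomize())
    `uvm_error("SEQ", "randomize failed")
end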

Finally, why is 'direction' the field you choose to specialize on? Assuming you also had another field in your transaction class called 'sec_mode', which could be either 'SECURE' or 'NONSECURE', would you be inclined to say that you need to create a 'secure_transaction' and a 'non_secure_transaction' because they are different things? Because you also chose to specialize based on direction, would you have 'secure_read_transaction', 'secure_write_transaction', 'nonsecure_read_transaction' and 'nonsecure_write_transaction'? What would happen if you added another field called 'privilege_mode', which could be 'PRIVILEGED' or 'UNPRIVILEGED'?


Regarding point number 3, I don't see why the coupling between transaction, driver and monitor is a bad thing. If you treat transactions as mere data classes, the behavior based on this data has to be implemented in a different class. Should a transaction know how to drive itself and how to monitor itself? Should it also know how to cover itself? What if you have to add another operation, like tracing itself in a waveform viewer? Would you add that to the transaction class too? That would violate the single responsibility principle.


The blog post you quoted w.r.t. working with types is correct that "The correct thing to do from an OOP perspective is to create a virtual function", but not regarding the further points. In that case, where a protocol uses heterogeneous transaction types (i.e. different kinds have different properties), you're better off using the visitor pattern. The transactions would have a virtual accept() function.
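
A sketch of the visitor approach (all names assumed; base_packet is assumed to declare a virtual accept() that subclasses override):

typedef class read_packet;
typedef class write_packet;

interface class packet_visitor;
  pure virtual function void visit_read(read_packet p);
  pure virtual function void visit_write(write_packet p);
endclass

class read_packet extends base_packet;
  virtual function void accept(packet_visitor v);
    v.visit_read(this); // double dispatch: the driver never needs $cast
  endfunction
endclass

class write_packet extends base_packet;
  virtual function void accept(packet_visitor v);
    v.visit_write(this);
  endfunction
endclass

The driver then implements packet_visitor, and processing a transaction is just pkt.accept(this); each subclass routes itself to the matching visit_* method, so protocol knowledge stays in the driver, not in the sequence items.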


Regarding point 4:

If you want reusable abstractions, one of them is "register/memory accesses". Most hardware blocks use bus transactions to update/query special function registers or memory locations. This is also an abstraction that software/firmware engineers understand. You should look into that.
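
In UVM this abstraction already exists as the register layer (uvm_reg), where a register model plus a bus adapter hides the concrete protocol. A frontdoor access looks roughly like this (regmodel and ctrl_reg are assumed names from a generated register model):

uvm_status_e   status;
uvm_reg_data_t value;

// write a control register; the agent's adapter translates this
// into whatever bus transaction the protocol uses
regmodel.ctrl_reg.write(status, 32'h1);

// read it back through the same bus-agnostic API
regmodel.ctrl_reg.read(status, value);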

There is so much variation in bus protocols that it's difficult to talk about a universal abstraction. It's also mostly pointless, as when you're talking about verifying bus level aspects, you're interested in the details of that bus protocol.


Thanks for your answers @tudor.timi

Quote

Having both a write_data and a read_data field in the transaction is bad design.


Looking at the examples out there, it seems like both the single and double data field approaches are popular. What people prefer depends on their main concerns. You're concerned with the number of if statements, but Mentor, who takes the double data field approach (https://verificationacademy.com/cookbook/sequences/items), expresses other concerns:

Quote

As sequence_items are used for both request and response traffic and a good convention to follow is that request properties should be rand, and that response properties should not be rand. This optimizes the randomization process and also ensures that any collected response information is not corrupted by any randomization that might take place.


I'm also concerned about randomization performance (http://forums.accellera.org/topic/6275-constrained-random-performance), but splitting into two data fields doesn't improve performance. You still have unnecessary randomization of write_data for read requests. All they've done is avoid making it worse by also randomizing read_data.

The corruption risk is related to the shared memory approach of the get_next_item/item_done pattern. They avoid that risk by not sharing the data field, but I feel that not sharing request and response objects and instead using the get/put pattern would be a better approach. UVM supports it, but maybe there is a good reason why we shouldn't use it?

Since one of my concerns is performance, I don't like too many randomized fields that aren't applicable to all commands. The read/write example may not represent a "too many" scenario; it's just a common example where such a problem exists. This gets worse as you add more commands. The address and data fields would, for example, be completely irrelevant for a reset command. A reset command is also an example of a transaction that would be very reusable if available in isolation.
 

Quote

Finally, why is 'direction' the field you choose to specialize on? Assuming you would also have another field in your transaction class called 'sec_mode', which could be either 'SECURE' or 'NONSECURE', would you be inclined to say that you need to create a 'secure_transaction' and a 'non_secure_transaction' because they are different things?


A randomized sec_mode is a property relevant to both read and write, so that would not be a reason for splitting. A delay field is also something relevant to both reads and writes, but it's also reusable, so I can see a reason to have that in a separate transaction anyway.

Summary: I'm not looking for the "best" solution to the read/write example. People have different concerns and I accept that. What I wanted to find out was whether people are concerned about performance and reuse in such a way that they would consider alternatives to the all-in-one sequence item. If I understand you correctly, you wouldn't use the all-in-one pattern for heterogeneous protocols (a "too many" scenario)?

Quote

If you want reusable abstractions, one of them is "register/memory accesses". Most hardware blocks use bus transactions to update/query special function registers or memory locations. This is also an abstraction that software/firmware engineers understand. You should look into that.


Being able to do simple reads/writes is indeed a reusable abstraction. I've seen the TLM generic payload, but that's also the only attempt at a reusable transaction I've seen. Are there others?
 

Quote

There is so much variation in bus protocols that it's difficult to talk about a universal abstraction. It's also mostly pointless, as when you're talking about verifying bus level aspects, you're interested in the details of that bus protocol.


To verify a bus interface you need to be concerned with the details, but when the focus of your testbench is to verify the functionality that the bus interface is configuring, you get far with simple reads/writes. A driver could support both reusable simple read/write, reset and delay transactions and the specialized transactions used when fully verifying such an interface. I like to think of the driver as a composition of supported services: some are reused, some are new.

Quote

In the case above, it makes sense to create different classes for handling reads and writes, because you have a common abstraction (doing stuff), which comes in two different flavors.


When I failed to see examples of transaction reuse, I thought that maybe people put their reuse effort elsewhere, for example by moving the pin wiggling functionality (which comes in different flavors) into the transaction class so that the driver becomes more generic.
 

Quote

Transactions are data classes, where you bundle information together to pass around, similar to plain old structs.


I agree that transactions are data classes, and I do want to reuse them, so moving the pin wiggling into these classes is not something I want. The visitor pattern is also a way to create a more generic driver while not destroying the potential for transaction reuse.
 

Quote

How would you handle processing different kinds of transactions in a driver (for example) if you had different classes for read and for write? You'd need to cast, which is very frowned upon (at least in the software world).


The visitor would remove the need for a $cast.

Quote

Even here though, assuming you want to generate a list of transactions, where some of them are reads, some of them are writes, it will be impossible to do this in a single step if you build up your class hierarchy in such a way that you have a 'read_transaction' class and a 'write_transaction' class.


That's true. If I want to fully randomize, I need to add some extra code in my sequence. It seems that this also causes a performance hit, making that solution slower despite doing less randomization. However, if I want to do a write-delay-read sequence with random address and data, I can express that more explicitly instead of constraining variables to a fixed value. In that case, the solution with separate transactions becomes faster. In these tests I used randomize() everywhere, and the differences are in the percentage range. I'm more concerned about the difference between randomize() and $urandom, which can be a factor of 100x.


Note that it's not only transactions you can reuse. You can also reuse the handling of transactions (the visit method in the visitor) in some cases, for example when you have a transaction for a time delay. You can "inherit" several such handlers using mixins. Thanks for the mixin post, @tudor.timi.
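
A sketch of the mixin idea, i.e. a parameterized class that extends its own type parameter (all class names here are assumptions for illustration):

class delay_handler_mixin #(type BASE = uvm_component) extends BASE;

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // reusable handler for the delay transaction, shared by any
  // driver that mixes this class into its inheritance chain
  virtual function void visit_delay(delay_transaction t);
    // consume the requested number of cycles
  endfunction

endclass

// Composition example: layer two reusable handlers onto a driver base
// class my_driver extends delay_handler_mixin #(reset_handler_mixin #(base_driver));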


As a sequence writer, I should ideally not be exposed to the implementation details of the driver that have been discussed in this thread: for example, the structure of the sequence item(s), or whether get/put, get_next_item/item_done or something else is used. I would like the driver to provide an API excluding all of that, something similar to this that could be called from the sequence:

write(<constraint on address and data>)

Does SV allow you to pass a constraint as an argument, or is there another way of doing that?
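
For reference, SystemVerilog doesn't let you pass a constraint block as an argument, but inline constraints on the randomize() call come close (sketch, using the bus_transaction from the first post):

bus_transaction tr = bus_transaction::type_id::create("tr");

// the caller adds constraints at the call site, on top of
// whatever constraints the class itself defines
if (!tr.randomize() with { addr inside {[32'h1000:32'h1FFF]};
                           write_data == 32'hDEAD_BEEF; })
  `uvm_error("SEQ", "randomize failed")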


Reading a bit further I found the concept of API sequences that can be provided by the agent developer. For example a write sequence that hides the details I gave in the example above. The write sequence can then be used in a higher layer sequence (Mentor calls this a worker sequence). The write sequence also provides a write method to start itself and the worker sequence calls that method with a specific address and data. Note that this approach completely overrides randomization of sequence items and moves that responsibility to the sequences.
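
A sketch of such an API sequence, built on the bus_transaction from the first post (details such as enum visibility are assumed):

class write_seq extends uvm_sequence #(bus_transaction);
  `uvm_object_utils(write_seq)

  bit [31:0] addr;
  bit [31:0] data;

  function new(string name = "write_seq");
    super.new(name);
  endfunction

  // convenience method called from a higher-level "worker" sequence
  task write(uvm_sequencer_base sqr, bit [31:0] addr, bit [31:0] data);
    this.addr = addr;
    this.data = data;
    start(sqr);
  endtask

  virtual task body();
    req = bus_transaction::type_id::create("req");
    start_item(req);
    // no randomization: values come straight from the caller
    req.operation  = WRITE; // assumes the enum literal is visible here
    req.addr       = addr;
    req.write_data = data;
    finish_item(req);
  endtask
endclass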

 

