UVM verification environment structure


Hi,

I am trying to build a UVM verification environment for a small project.

Specifications:

- an image is located in a memory;

- the image has several formats: YCC_420, YCC_422, YCC_444;

- each component of the image (Y, Cb, Cr) can be in a different memory location, starting from a base address;

- this image is injected into the DUT according to a protocol (the DUT requests to read data from an address, component by component);

- the received image is processed by the DUT;

My approach is this:

- I have a base class for the image (uvm_sequence_item);

- I have an agent, that contains a driver, a sequencer and a monitor;

- the image class has methods for building the image per component and getting data from the image;

- the driver: resets the DUT, configures the registers (image width, height, ycc_format), and calls a function to generate the image (memory allocation, data generation). At the end the image is destroyed;

- the monitor verifies that the protocol is respected;

- a scoreboard will be used to check that DUT data is the expected one.

The problem is that the sequencer class and sequence classes are not used in my structure.

In the UVM tutorials, I saw that the driver uses the methods below to get data from the uvm_sequence_item class:

seq_item_port.get_next_item(uvm_sequence_item);

seq_item_port.item_done();
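For context, the tutorial pattern usually looks roughly like the sketch below. This is a minimal illustration, not code from this thread: my_driver and my_item are placeholder names, req is the built-in request handle of uvm_driver, and the actual pin wiggling is omitted.

```systemverilog
class my_driver extends uvm_driver #(my_item);
  `uvm_component_utils(my_driver)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req); // blocks until a sequence provides an item
      // ... drive req onto the virtual interface here ...
      seq_item_port.item_done();        // tells the sequence the item was consumed
    end
  endtask
endclass
```

The key point is that the driver only pulls items; it never decides what the items contain, that is the sequence's job.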

In my case the uvm_sequence_item class has several properties/fields (width, height, y data array, cb data array, cr data array, ycc_format, img_type).

In order to get the data that needs to be injected on the bus, a method of the uvm_sequence_item class is called directly.

Is my approach correct, or should the UVM structure be different?

I want to build a reusable structure, so in case the driver protocol changes, to be able to reuse the UVM structure.

The structure looks like this (implementation details omitted; they are not relevant).

class mem_image extends uvm_sequence_item;

typedef enum {YCC_420, YCC_422, YCC_444 } img_ycc_format_t;

typedef enum {RANDOM, BLACK, WHITE, INCREMENTAL, PATTERN_A5, EXTERNAL_FILE} img_type_t;

// class properties

rand bit [12:0] img_width;

rand bit [12:0] img_height;

rand bit [7:0] img_y_data_planar[];

rand bit [7:0] img_cb_data_planar[];

rand bit [7:0] img_cr_data_planar[];

rand img_ycc_format_t img_ycc_format;

rand img_type_t img_type;

// class methods

compute_y_size_planar();

compute_c_size_planar();

generate_image();

memory_allocation();

data_generation();

destroy_image();

get_byte(address, data_array[]);

endclass:mem_image

class driver extends uvm_driver #(mem_image);

// virtual interfaces needed are declared and used here

task run();

// declare/create a seq_item of mem_image class type

mem_image img = new("img");

uvm_report_info(get_full_name(),"Run", UVM_LOW);

fork

rst_if(); // reset interface

upd_img_cfg(img); // update image configuration: sets width, height, img_ycc_format; generates the image and destroys it at the end

drive_controls(); // drive control signals

drive_data(img); // drive data: using the get_byte method of mem_image, data is fetched from the requested address and put on the bus according to the protocol

join

endtask:run

endclass:driver


hi aapafi,

from reading your post some areas are not completely clear to me, such as:

- how does data get into memory? via a bus or via dual port memory or simply via (simulation) backdoor?

- how does your dut send back the result of its operation?

anyway, i'll just make a few assumptions here; it should be simple to modify some of the requirements once you stick to a "standard" structure.

i would do the following:

1. create a picture class deriving from uvm_sequence_item, holding the picture data as well as the high-level properties (YUV etc.). making it a child of uvm_sequence_item allows you to generate specific sequences of pictures easily. sequences of pictures are produced by the picture_sequencer.

2. create a bus (or backdoor memory) driver, that is, a "full" uvc with io transaction, driver, sequencer, etc. this uvc should be able to drive transactions on the bus, via the backdoor, or directly to the memory, depending on your infrastructure.

3. layer #1 over #2, which allows you to store your picture in the memory using the custom driver of #2

(lets assume the dut stores the picture in some other memory area).

the verification env would consist of a virtual sequencer which would run the following sequence.

1. randomize some picture properties (format/size/etc)

2. program the dut via bus uvc#2

3. write picture into memory using picture sequencer+bus uvc

4. start computation

5. wait for end of computation

6. read the dut results via the bus driver

7. compare the dut results with the expected results.

(well this is just a short/initial suggestion, more details/constraints from your side may change the approach).
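the seven steps above could be captured in a virtual sequence along these lines. this is only a rough sketch: all sequencer handles, sub-sequence names and helper tasks (start_computation, wait_for_done) are assumptions, not an existing api.

```systemverilog
class picture_test_vseq extends uvm_sequence;
  // handles assigned by the virtual sequencer (hypothetical names)
  bus_sequencer     bus_sqr;
  picture_sequencer pic_sqr;

  virtual task body();
    dut_cfg_seq       cfg_seq;  // 1+2: randomize picture properties, program the dut
    picture_write_seq wr_seq;   // 3:   write the picture into memory via the bus uvc
    dut_readback_seq  rd_seq;   // 6:   read the dut results via the bus driver

    `uvm_do_on(cfg_seq, bus_sqr)
    `uvm_do_on(wr_seq,  pic_sqr)
    start_computation();        // 4: hypothetical helper, e.g. writes a "go" register
    wait_for_done();            // 5: hypothetical helper, polls a status register
    `uvm_do_on(rd_seq,  bus_sqr)
    // 7: the scoreboard compares the read-back data with the expected results
  endtask
endclass
```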

if you have questions about how to structure verification environments with SystemVerilog and uvm, you may want to check out the uvm reference flow http://www.uvmworld.org/uvm-reference-flow.php which has several verification components and approaches captured in documentation and code.

regards

/uwe


Hi Uwe,

Thank you for the fast response.

Let me try to answer your questions:

[uwe] how does data get into memory? via a bus or via dual port memory or simply via (simulation) backdoor?

- the data is already in the memory; the driver will read it from there (from an address) and send it to the DUT according to a protocol;

[uwe] how does your dut send back the result of its operation?

- the DUT just reads the data from the memory over one protocol and sends it out on a different protocol;

- in case the data in the memory is stored in planar mode (Y components are stored together in the memory from base address A, Cb components from base address B, Cr components from base address C), the DUT just reads it and sends it out on a different protocol;

- in case the data in the memory is stored in interleaved mode (the Y, Cb, Cr components are stored in the memory starting from base address A, interleaved: YCbCrYCbCr ...), the DUT does some processing (component extraction) and sends it out on a different protocol;

[uwe] 1. create a picture class deriving from uvm_sequence_item, holding the picture data as well as the high-level properties (YUV etc.). making it a child of uvm_sequence_item allows you to generate specific sequences of pictures easily. sequences of pictures are produced by the picture_sequencer.

- I agree with this; the picture class defined by me is mem_image and it is derived from uvm_sequence_item;

- For the moment I do not have a picture_sequencer.

- I did something like this:

- After the DUT registers are configured (width, height, img_ycc_format, mem_format, etc.), an enable is asserted to the DUT to start processing.

- At this point, in the bus driver class I have a method used to update the picture class properties and to generate the actual data (in zero time):

upd_img_cfg(img); // update image configuration - sets width, height, img_ycc_format, generates_image

- After these properties are configured, a method of the picture class is called to generate the image:

generate_image(), calls two other methods:

- memory_allocation();

- data_generation();

- I understand from you that this task must be done by a picture_sequencer, and not directly by the bus driver.

- I will modify this. I will also add support for some sequences, in order to generate specific sequences of pictures easily.

[uwe] 2. create a bus (or backdoor memory) driver, that is, a "full" uvc with io transaction, driver, sequencer, etc. this uvc should be able to drive transactions on the bus, via the backdoor, or directly to the memory, depending on your infrastructure.

- I agree with this; the driver defined by me is: driver extends uvm_driver #(mem_image);

- the driver will just:

- drive the controls according to the protocol, AND will

- drive the data;

- in order to drive the data, it needs to get it from the memory (picture class) at a requested address, then pack it and put it on the bus;

- for this, in the picture class I have a method get_byte(req_address), which returns a byte from a requested address.
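For illustration only, such a get_byte() could be a lookup into the planar arrays, assuming the address space maps linearly onto per-component base addresses. Y_BASE, CB_BASE and CR_BASE are my assumptions, not names from this thread.

```systemverilog
// inside mem_image: return the byte stored at req_address,
// assuming planar layout behind hypothetical base addresses
// Y_BASE < CB_BASE < CR_BASE
function bit [7:0] get_byte(int unsigned req_address);
  if (req_address >= CR_BASE)
    return img_cr_data_planar[req_address - CR_BASE];
  else if (req_address >= CB_BASE)
    return img_cb_data_planar[req_address - CB_BASE];
  else
    return img_y_data_planar[req_address - Y_BASE];
endfunction
```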

So the env will look like this:

- PICTURE class (uvm_sequence_item);

- AGENT_CFG class

- PICTURE_DRIVER_CFG, parameterizable with the PICTURE class (generates the DUT configuration and puts it on the bus; no protocol here, it just puts the values on the DUT inputs);

- PICTURE_SEQUENCER_CFG (after the DUT is configured, in the run() task, creates the picture in the memory by making an instance of the PICTURE class: calling new, updating the class properties, allocating memory and generating data);

- AGENT_MEM class

- DRIVER_MEM, parameterizable with the PICTURE class (drives controls and the read data according to the MEM protocol); the read data comes from the PICTURE class by calling the get_byte(req_address) method;

- one question here: the PICTURE class is created in the PICTURE_SEQUENCER_CFG class (by calling new and all the rest). Do I need to make a connection in order to access the get_byte() method, or is it enough that the DRIVER_MEM class has the PICTURE class as a parameter?

- maybe a SEQUENCER_MEM class?

What is not clear to me is:

- in order to read data from the memory the DRIVER_MEM calls get_byte(req_address) method;

- In the UVM tutorials, I saw that the driver uses the methods below to get data from the uvm_sequence_item class:

- seq_item_port.get_next_item(uvm_sequence_item);

- seq_item_port.item_done();

Should I not use these functions? (Maybe override them somehow, to call them inside get_byte(req_address)?)

I think that for this I will need a sequencer (SEQUENCER_MEM) connected to the driver inside the AGENT_MEM.

And maybe the DRIVER_MEM class should not be parameterized with the PICTURE class, but only with a data element (a byte).

What do you think?

Thanks again for all the inputs and advice.

Regards,

Adi


hello adi,

still it is not clear to me what you mean by "the data is already in the memory". do you mean a memory (like a memory instance in your design/tb), or do you have it accessible in your class/tb? is the tb actively driving it into the dut, or is the dut grabbing the data?

normally i would generate & randomize the picture data in the picture_sequence_item class and then drive it into the dut. in this scenario no external memory is involved, and the picture properties as well as the contents can be randomized in the sequence via constraints.

let me add some more clarifications:

- i would separate the abstract picture handling from the dut/bus transaction handling. this gives you a picture_sequence_item class together with a picture sequencer (which is actually just a 3-line declaration). then i would create a uvc with driver, monitor, sequence_item and sequencer for the bus (or the communication protocol with the DUT). this uvc should be able to handle the complete IO of the dut, but only in the sense of sending or receiving "something"; it should not know anything about the picture.

- a sequencer is sort of an "abstract" component and ONLY provides the feature of "producing" a stream of sequence_items. the items, their order and their properties are ONLY determined by the sequence the sequencer is executing. i haven't seen cases which require add-on behaviour being declared in a sequencer.

- a driver takes an abstract sequence item (or transaction) and converts it into lower-level transactions (normally bit-wiggling). this is the place where one has to "interpret" the transaction and perform the appropriate actions at the lower abstraction level.

- very often it's best to separate abstraction levels. here it's best (or i would recommend) to separate "picture" and "bus". therefore i would have the following pieces:

picture uvc:

class picture_sequence_item extends uvm_sequence_item;

class picture_sequencer extends uvm_sequencer#(picture_sequence_item); // no need for a driver+monitor, as we do not map this into lower-level transactions

bus uvc:

class bus_sequence_item extends uvm_sequence_item; // this probably has address, data, ...

class bus_sequencer extends uvm_sequencer#(bus_sequence_item);

class bus_monitor extends uvm_component;

class bus_driver extends uvm_driver#(bus_sequence_item);

the bus_driver implements the get_next_item loop and always gets a bus_sequence_item.

i think most of your questions relate to the fact that it is not clear (at least to me) what you mean by your "memory where the picture is stored". it would be great if you could clarify this (or sketch a small picture of it).

btw: under normal conditions the pattern between sequencer and driver is fixed ("the get-next-item loop"), as is the pattern between sequence_item and sequencer.

regards

/uwe

ps: feel free to contact me via private email under uwes@cadence.com


Hi Uwe,

Thank you for your responses.

[uwe] still it is not clear to me what you mean by "the data is already in the memory"?

In my test-bench there is no memory instantiated.

The "memory" data is generated by the picture class.

In an old Verilog verification environment this can be seen as:

DUT interfaces:

- CFG interface

- width, height, ycc_format ...etc

- MEM interface

- req

- ack

- address

- burst_data

The Verilog driver:

- shares the MEM interface with the DUT;

- has 3 input files (one for Y, one for Cb, one for Cr) - I considered only the case where each component is stored in a separate memory zone;

- when the DUT requests data from the "memory", the driver reads data from the correct file at the requested address (seeking to the correct position with fseek) and puts it on the bus.
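That legacy behaviour can be sketched with plain file I/O. The file name, data width and the use of a single Y-component file here are assumptions for illustration; the Cb and Cr files would work the same way.

```systemverilog
// sketch of the old Verilog driver's read path: seek to the
// requested address in the Y-component file and return one byte
integer y_fd;

initial y_fd = $fopen("y_plane.bin", "rb"); // hypothetical file name

function [7:0] read_y_byte(input integer address);
  reg [7:0] b;
  begin
    void'($fseek(y_fd, address, 0)); // 0 = seek relative to start of file
    void'($fread(b, y_fd));
    read_y_byte = b;
  end
endfunction
```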

Thanks for the extra clarifications.

I think I've got the idea.

I will need to break my UVM environment into smaller pieces, as you suggested.

In this way I would add re-usability to it, and the "bus uvc" can be reused on future projects by changing only the "bus_driver" protocol.

Now it remains only to start the implementation of the new structure :)

All the best,

Adi


Hi Uwe,

I am not clear on the phrases below:

-....

- then i would create a uvc with driver, monitor, sequence_item and sequencer for the bus (or the communication protocol with the DUT).

- this uvc should be able to handle the complete IO of the dut, but only in the sense of sending or receiving "something"; it should not know anything about the picture.

I do not understand how the bus uvc can know nothing about the picture uvc.

The bus uvc needs to get data from the picture uvc class.

I expect a bus_sequence that is registered with the bus_sequencer (and connected with the bus_driver) to have:

- a bus_sequence_item parameter and

- a picture_sequence_item parameter;

Am I wrong?

Regards,

Adi


hi adi,

picture uvc and bus uvc should be independent. the connection between the two is made via sequence layering (see the uvm reference guide on layering); only this small piece of code should know how to "translate" a picture into bus transactions.

>Am I wrong?

i don't think so. attached is some code. it creates a "special" bus sequence which pulls a picture from the higher-layer sequence, translates it, and sends the resulting bus transfers via the bus.

class uvm_layered_sequence #(type SEQR=uvm_sequencer,
       type UREQ=uvm_sequence_item,
       type URSP=UREQ,
       type LREQ=uvm_sequence_item, 
       type LRSP=LREQ) 
   extends uvm_sequence #(LREQ,LRSP);

   protected SEQR upper_sequencer;
   function new(string name="uvm_layered_sequence");
       super.new(name);
   endfunction 

   task pre_body();
       uvm_object temp;
       uvm_sequencer_base s = get_sequencer();

       if(!s.get_config_object("upper_layer_sequencer",temp,0))
           uvm_report_fatal("LAYER", "no upper layer sequencer pointer", UVM_NONE);

       if((temp==null) || !$cast(upper_sequencer,temp))
           uvm_report_fatal("LAYER", "incompatible upper layer sequencer or null", UVM_NONE);

   endtask 
endclass

class uvc2touvc1_seq extends uvm_layered_sequence #(picture_sequencer,picture_trans,picture_trans,bus_trans);
   `uvm_sequence_utils(uvc2touvc1_seq,bus_sequencer)

   LREQ this_trans;

   // Constructor
   function new(string name="uvc2touvc1_seq");
       super.new(name);
   endfunction

   // Sequence body
   virtual task body();
       UREQ t;
       forever begin
           upper_sequencer.get_next_item(t);
           uvm_report_info("LAYER", "conversion of upper to lower (picture to bus_trans)", UVM_MEDIUM);

           // FIXME translate picture into bus transfers
           //`uvm_do(this_trans)
           upper_sequencer.item_done();
       end 
   endtask

endclass : uvc2touvc1_seq
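The FIXME in the code above might, for example, expand each picture plane into byte-wide bus transfers. This is only a sketch of one possibility: the field names (img_y_data_planar on the picture, addr/data on bus_trans) and the base address argument are assumptions, not fields defined in this thread.

```systemverilog
// sketch of what could replace the FIXME inside body():
// turn one picture into a stream of byte-wide bus writes
task translate_picture(picture_trans pic, bit [31:0] base_addr);
  bus_trans tr;
  foreach (pic.img_y_data_planar[i]) begin
    `uvm_create(tr)            // create the bus item on this sequence's sequencer
    tr.addr = base_addr + i;   // linear placement of the Y plane
    tr.data = pic.img_y_data_planar[i];
    `uvm_send(tr)              // hand it to the bus driver
  end
  // the Cb and Cr planes would be sent analogously from their own base addresses
endtask
```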


  • 10 months later...

The get_next_item() and item_done() methods of the sequencer are not documented, and comments in the implementation state that they are not part of the standard. Is this because there are drawbacks to accessing sequencers directly from within a sequence, instead of layering sequences through ports?

