Good technique to generate a random delay?



I have a sequence sending in commands to my DUT.  Following each command, I would like to wait for a random delay.

 

Two possible techniques:

 

1) Use a virtual interface to my system interface; pass a random number to a wait_clk method, which in turn uses a clocking block.

uint32_t delay;                   // uint32_t: a typedef for int unsigned
delay = $urandom_range(0, 1000);
vif.wait_clk(delay);
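
(For reference, wait_clk inside the interface is roughly the sketch below -- just a repeat over a clocking block; the exact names are illustrative.)

   // inside the system interface
   clocking cb @(posedge clk);
   endclocking

   // wait n clock cycles, synchronized to the clocking block
   task wait_clk(input int unsigned n);
      repeat (n) @(cb);
   endtask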

2) Perhaps option 1 is overkill (it requires a virtual interface, which adds a dependency and is perhaps less reusable).  Perhaps something more like...

  uint32_t delay;
  delay = $urandom_range(0, 1000);
  //#delay ns;        // not correct syntax!
  //#(delay * 1ns);   // this form is legal, but it ties the sequence to absolute time

3) Pull in the configuration of the agent, using m_sequencer, and use its methods... maybe this is the most appropriate.

  virtual task body();

      if (!uvm_config_db#(plb_agent_configuration)::get(m_sequencer, "", "AGENT_CONFIG", cfg))
        `uvm_fatal(report_id, "cannot find resource plb config")

      // issue command left out

      begin
        uint32_t num;
        num = $urandom_range(1000, 0);
        cfg.wait_clk(num);
      end

  endtask

Share with me your favorite technique


My favorite technique...

 

1) delay_gen: a class that can provide delays.  We'll create one of these for each channel/interface that needs delays.  (A rough sketch of this class and the container below appears after this list.)

   next - function that returns an int, which can be used as a delay

   set_mode - function used to select one of multiple constraints in delay_gen, to control the values/flavor of ints that next should return

2) delay_gen_container: a class that holds many delay_gens

   add_controller - function whose string input is the name of the delay_gen to create; adds a delay_gen to an associative array, accessible by that string name

   next - function whose string input (the name of a delay_gen) is used to index the associative array and call the next function of the specified delay_gen

   other - other functions that set delays for all members of the delay_gen associative array.

                ex: set all to max-throughput mode, where all next calls return 0, so we don't have any delays

3) env: the environment in the build_phase creates delay_gen_container, then calls add_controller for each channel that will need delays.  

                ex:   


               m_dg = delay_gen_container::type_id::create("m_dg", this);
               my_config::set_dg_handle(m_dg);  // set handle in config so others (such as sequences) can access it
               m_dg.add_controller("channel1");
               m_dg.add_controller("channel2");
               m_dg.add_controller("channel3");

4) seq: Whenever a sequence needs a delay, it calls next for the channel it refers to.  (If you have an array of the same channel, that string name just has a number suffix to refer to the specific channel.)  

               ex:


    // Inside a sequence **1
    virtual task body();
      int gap;
      gap = my_config::m_dg.next("channel2");   // call next to get the delay for channel2  **2
      tick(gap);                                // wait gap clk cycles

      `uvm_create(req)
      start_item(req);
      ....  // here we set the req data to something meaningful (or just randomize it)
      finish_item(req);
    endtask : body

5) tick(n) is just a task in the package for the environment that anyone uses to access 'time'.  It uses a vif as you do in your #1.



   task tick(input int number);
      repeat (number) @(my_config::m_env.m_abcdef_agent.m_vif.smp_cb);
   endtask
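
For concreteness, here is a rough sketch of delay_gen and delay_gen_container.  (This is a paraphrase of the description above, not my production code; the mode names are made up.)

   class delay_gen extends uvm_object;
      `uvm_object_utils(delay_gen)

      // hypothetical modes selected by set_mode
      typedef enum { MAX_THROUGHPUT, SHORT_GAPS, LONG_GAPS } mode_e;
      mode_e m_mode = SHORT_GAPS;

      rand int unsigned m_val;
      constraint short_c { (m_mode == SHORT_GAPS) -> m_val inside {[0:10]};   }
      constraint long_c  { (m_mode == LONG_GAPS)  -> m_val inside {[0:1000]}; }

      function new(string name = "delay_gen");
         super.new(name);
      endfunction

      // select which constraint flavor next() should obey
      function void set_mode(mode_e mode);
         m_mode = mode;
      endfunction

      // return the next delay value
      function int next();
         if (m_mode == MAX_THROUGHPUT) return 0;
         void'(randomize(m_val));
         return m_val;
      endfunction
   endclass : delay_gen

   class delay_gen_container extends uvm_object;
      `uvm_object_utils(delay_gen_container)

      delay_gen m_gens[string];   // associative array of delay_gens, keyed by channel name

      function new(string name = "delay_gen_container");
         super.new(name);
      endfunction

      // create a delay_gen and store it under the given name
      function void add_controller(string name);
         m_gens[name] = delay_gen::type_id::create(name);
      endfunction

      // look up the named delay_gen and return its next delay
      function int next(string name);
         return m_gens[name].next();
      endfunction

      // example "other" function: put every channel in max-throughput mode
      function void set_all_max_throughput();
         foreach (m_gens[s]) m_gens[s].set_mode(delay_gen::MAX_THROUGHPUT);
      endfunction
   endclass : delay_gen_container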


 

I don't like to clutter my code with calls to uvm_config_db and checks for return values, so I try to minimize them or restrict them to locations where they don't distract too much from the main code.  I reduce this as much as possible by using the scope resolution operator, ::, with my_config (a singleton), as in my_config::blahblahblah, to get what I want.  I am pretty sure this is verboten in reuse methodology.  But I am not sure why, as it would just mean that the calls to uvm_config_db could/would live in my_config.  (That said, I need to learn more about the suggested use of configuration objects.)  I'd love some feedback about this technique, but I don't want to hijack this thread.

 

 

Aside: I typically like to perform the wait before selecting the data/transaction that will be sent.  I think this is system-specific, but for the last few modules I worked on, when a transaction's data is set, it is set with the current system state in mind.  If 100 clk cycles elapse before it is input to the DUT, the state may/will have changed and the transaction may be illegal or not useful.

Even if this is not the case, I try to stick to this ordering:

 

1) perform delay

2) set transaction data

3) send transaction

rather than swapping 1 and 2.  (Certainly, some info about the transaction might need to be decided before a wait range is selected.)

I realize this all depends on the type of DUT.

 

If you're local to Si Valley, come by here sometime.  I believe we shared thoughts on this a few months ago in a meeting.


 

all the best,


Funny you should ask this. I have a book coming out before the end of the year which has a bunch of UVM recipes, with one on this topic. It's similar to Linc's solution above, except that it is a randomizable class instantiated in the environment that either sequencers or drivers can use.

 

Here's what it looks like. For brevity, I'll just post the class here and not all of the explanatory text that comes with it.

class rand_delays_c extends uvm_object;
   //----------------------------------------------------------------------------------------
   // Group: Types
   
   // enum: traffic_type_e
   // Allows for a variety of delay types
   typedef enum { 
      FAST_AS_YOU_CAN, 
      REGULAR,
      BURSTY
   } traffic_type_e;
   
   typedef int unsigned delay_t;

   `uvm_object_utils_begin(rand_delays_c)
      `uvm_field_enum(traffic_type_e, traffic_type, UVM_DEFAULT)
      `uvm_field_int(min_delay,                     UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(max_delay,                     UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_on_min,                  UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_on_max,                  UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_off_min,                 UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_off_max,                 UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(wait_timescale,                UVM_DEFAULT | UVM_DEC)
   `uvm_object_utils_end

   //----------------------------------------------------------------------------------------
   // Group: Random Fields

   // var: traffic_type
   rand traffic_type_e traffic_type;
   
   // var: min_delay, max_delay
   // Delays used for REGULAR traffic types
   rand delay_t min_delay, max_delay;
         
   // var: burst_on_min, burst_on_max
   // Knobs that control the random length of bursty traffic
   rand delay_t burst_on_min, burst_on_max;

   // var: burst_off_min, burst_off_max
   // Knobs that control how long a burst will be off
   rand delay_t burst_off_min, burst_off_max;
   
   // var: wait_timescale
   // The timescale to use when wait_next_delay is called
   time wait_timescale = 1ns;

   //----------------------------------------------------------------------------------------
   // Group: Local Fields
   
   // var: burst_on_time
   // When non-zero, burst mode is currently on for this many more calls
   delay_t burst_on_time = 1;

   //----------------------------------------------------------------------------------------
   // Group: Constraints
   
   // constraint: delay_L0_cnstr
   // Keep min knobs <= max knobs
   constraint delay_L0_cnstr {
      traffic_type == REGULAR -> (min_delay <= max_delay);
      traffic_type == BURSTY -> (burst_on_min <= burst_on_max) && (burst_off_min <= burst_off_max);
   }

   // constraint: delay_L1_cnstr
   // Safe delays
   constraint delay_L1_cnstr {
      max_delay <= 500;
      burst_on_max <= 500;
      burst_off_max <= 500;
   }
   
   //----------------------------------------------------------------------------------------
   // Group: Methods
   
   ////////////////////////////////////////////
   // func: new
   function new(string name="rand_delay");
      super.new(name);
   endfunction : new
   
   ////////////////////////////////////////////
   // func: get_next_delay
   // Return the length of the next delay
   virtual function delay_t get_next_delay();
      case(traffic_type)
         FAST_AS_YOU_CAN: get_next_delay = 0;
         REGULAR: begin
            std::randomize(get_next_delay) with {
               get_next_delay inside {[min_delay:max_delay]};
            };
         end
         BURSTY: begin
            if(burst_on_time) begin
               burst_on_time -= 1;
               get_next_delay = 0;
            end else begin
               std::randomize(get_next_delay) with {
                  get_next_delay inside {[burst_off_min:burst_off_max]};
               };
               std::randomize(burst_on_time) with {
                  burst_on_time inside {[burst_on_min:burst_on_max]};
               };
            end
         end
      endcase      
   endfunction : get_next_delay

   ////////////////////////////////////////////
   // func: wait_next_delay
   // Wait for the next random period of time, based on the timescale provided
   virtual task wait_next_delay();
      delay_t delay = get_next_delay();
      #(delay * wait_timescale);
   endtask : wait_next_delay  
endclass : rand_delays_c
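
For context (this part isn't from the book, and the hierarchy names are placeholders), a minimal usage sketch:

   // in the env's build_phase: create, randomize, and publish the knobs
   rand_delays_c rand_delays = rand_delays_c::type_id::create("rand_delays");
   if (!rand_delays.randomize() with { traffic_type == BURSTY; })
      `uvm_fatal("RNDFAIL", "could not randomize rand_delays")
   uvm_config_db#(rand_delays_c)::set(this, "agnt.drv", "rand_delays", rand_delays);

   // in a driver, after driving each item: delay in clock cycles
   repeat (rand_delays.get_next_delay()) @(vif.cb);

   // or, from a sequence: delay in absolute time
   rand_delays.wait_next_delay();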

Wow, two great responses. :D  I hadn't thought to use an OO approach yet; I wanted to get some basic techniques down first.  I like the idea of creating a "delay factory" to produce a particular traffic profile/distribution (like BURSTY, my favorite).  You could come up with lots of other interesting profiles too (INTERMITTENT, or something).  These techniques seem a little heavy-handed at first blush, but they are OO.

 

Linc,

These two solutions seem to mesh well together.  I'll put more of the delay_gen discussion in my response to bhunter so that here I can ask you about the config object; don't think of it so much as hijacking the post, but more as fork/join_none'ing a new topic.  (You can use that joke in your next meeting; I'm not responsible for any looks you'll get.)

1.  Verboten... fantastic word.

2.  tudor has mentioned this before, but "singleton" just means... there's one, right?  I have a configuration object for my design, but I use the config_db, and I never call it a singleton.  I'm trying to learn the context.

It appears the advantage here is that you don't have to "pull" it in from anywhere; you simply "scope" into the methods wherever/whenever you want.  What's the disadvantage?  It seems pretty sweet.  Do you actually "create" it somewhere, like a test base?  Maybe you could flesh this out a bit more.

 

I'm in Alabama; if I ever make it over there for any of the conventions I'll look you up.

 

bhunter,

Sounds like a solid delay generator; good recipe for your book!  Perhaps an option to use a VIF clocking block, in addition to a straight time delay, would be nice.  The reason I mention this (and perhaps I have set myself up for failure by doing this) is that I use BFMs to drive bus activity (instead of driving it directly from the driver); these BFMs of course use clocking blocks.  If, anywhere in my sequence, I were to simply #1ns (or whatever), simulation time would move off of the clock edge ("out of sync"), and any clocking-block calls to drive data will fail until I re-sync to the clocking block (@cb).  Failure here means the data just never goes out on the bus, as if it never happened.  (inhale.....)  So I have to pay close attention to sync to the clocking block once, at the beginning of simulation time, and never call a straight time delay.  This may be an error with my tool.  Don't try to solve this.  I may just go back and code my BFMs to always wait for @cb before sending out a burst of data; that would give sequences the freedom to #wait all day with no worry about the low-level mechanics of the BFM.  You know what, I just talked myself into it.

Anyway, all that having been said, it still might be nice to optionally provide a VIF handle and simply wait delay cycles.  Maybe not; maybe it's too cluttered, or it should be in an alternate delay_gen, like a clock delay_gen; or separate the waiting from the delay_gen (say, its job is simply to provide unsigned integers), a little like Linc's.  Food for thought.



Well, like I said, for brevity's sake I excluded the text from the book. But in short:  you can use the wait_next_delay() within a sequence to wait a period of time, or you can embed this delay object in your driver and wait for a number of clocks. 

 

Maybe I'm being too idealistic, but you never want to tie your sequences to a number of "clocks". Sequences have no notion of clocks or interfaces. Drivers see clocks, sequences see time. There are good reasons for this, but that would be a longer conversation.

 

Your issue with regard to falling off of the clock boundary in your driver is a known one, so the book will show you how to avoid it. In short, your driver should try to fetch the next item; if there isn't one, it should call get_next_item() and then wait for a clock cycle. Then it drives. This allows back-to-back items if the sequencer is pushing quickly. After driving an item, you would call get_next_delay() and wait that number of clock cycles.
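
Roughly, that driver loop looks like this (a sketch, not the book's exact code; vif.cb, drive_item(), and the rand_delays handle are placeholders):

   // driver run_phase: back-to-back when items are ready,
   // re-sync to the clocking block only when we had to block
   forever begin
      seq_item_port.try_next_item(req);
      if (req == null) begin
         seq_item_port.get_next_item(req);   // block until an item arrives
         @(vif.cb);                          // get back onto the clock edge
      end
      drive_item(req);
      seq_item_port.item_done();
      repeat (rand_delays.get_next_delay()) @(vif.cb);
   end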

 

This has probably gone on too long. The bigger answer to your question that I think Linc and I would agree upon is to create a reusable OO solution.


Maybe I'm being too idealistic, but you never want to tie your sequences to a number of "clocks". Sequences have no notion of clocks or interfaces. Drivers see clocks, sequences see time. There are good reasons for this, but that would be a longer conversation.

 

I love ideals, and that makes perfect sense.

 

Your issue with regard to falling off of the clock boundary in your driver is a known one, so the book will show you how to avoid it. In short, your driver should try to fetch the next item; if there isn't one, it should call get_next_item() and then wait for a clock cycle. Then it drives. This allows back-to-back items if the sequencer is pushing quickly. After driving an item, you would call get_next_delay() and wait that number of clock cycles.

 

 

Wow... that was easy.  These few lines just solved a problem I've struggled with since I started SV three years ago: how to get back-to-back behavior while staying synced to the clock.  THANK YOU :lol:  My sequence just waited 1ns, and my driver handled its business.


Regarding c4brian's comment:


"It appears the advantage here is that you don't have to "pull" it in from anywhere, you simply "scope" in to the methods wherever/whenever you want.  What's the disadvantage; seems pretty sweet.  Do you actually "create" it somewhere, like a test base?  Maybe you could flesh this out a bit more."


You summed it up very well.  All of the 'pulling' would happen in one centralized file.   I'll start a new thread about this so as not to change the topic here, where I think you might add other comments and thoughts from our side discussion.  (new thread: http://forums.accellera.org/topic/5234-uvm-config-db-and-hierarchical-access/ )

 

---

"tudor has mentioned this before, but "singleton" just means... there's 1, right?"

Yup.**  (There could be some code inside the class to restrict instantiation to one object (as I recall), but I just instantiate it once.)
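
For illustration, one way my_config could be written so that both my_config::set_dg_handle() and my_config::m_dg work the way I used them above (a sketch, not my actual code):

   class my_config extends uvm_object;
      `uvm_object_utils(my_config)

      // static handle: there is exactly one, shared by anything that
      // references it through the scope operator (my_config::m_dg)
      static delay_gen_container m_dg;

      function new(string name = "my_config");
         super.new(name);
      endfunction

      // called once, from the env's build_phase
      static function void set_dg_handle(delay_gen_container dg);
         m_dg = dg;
      endfunction
   endclass : my_config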

 

---

Regarding bhunter1972's comment:

"you never want to tie your sequences to a number of "clocks"."



I've heard (and read, I think) that sequences should not have any concept of time.  This is something I need to explore more.

 

---

I've totally mixed you two Brians up in the past on this forum.  It's nice to have this thread appear as a way to straighten things out in my mind.

 

 

**Tudor also got me thinking about composition vs. inheritance more.


I've heard (and read, I think) that sequences should not have any concept of time.  This is something I need to explore more.

 

 

Mentor UVM Cookbook, Sequences/API: Coding Guidelines (p. 202): "Sequence code should not explicitly consume time by using delay statements.  They should only consume time by virtue of the process of sending sequence_items to a driver."

 

Shrug; I still use delays in my sequences.  They seem like a natural fit there.


  • 6 months later...

Mentor UVM Cookbook, Sequences/API: Coding Guidelines (p. 202): "Sequence code should not explicitly consume time by using delay statements.  They should only consume time by virtue of the process of sending sequence_items to a driver."

 

Shrug; I still use delays in my sequences.  They seem like a natural fit there.

 

Well, a bit late replying, but...  Mentor only says this because they want you to write everything so it could maybe run on Veloce.  It's not realistic, though.  A simple example would be multiple sequences running in parallel, generating traffic at different rates and making use of the arbitration algorithm (possibly custom) in the sequencer.  You can't push these delays off to the driver without seriously impacting how your system operates.  In most situations I've seen, you want late generation (in other words, don't generate until you are ready to send, so you can take the current state of the system into account).  If you try to push your delays off to the driver, this can succeed with a single sequence, but you quickly run into problems with multiple sequences running in parallel.
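
For example, something like this in a virtual sequence (the sequence names here are made up):

   // two traffic streams at different rates, arbitrated by one sequencer
   fast_traffic_seq fast_s = fast_traffic_seq::type_id::create("fast_s");
   slow_traffic_seq slow_s = slow_traffic_seq::type_id::create("slow_s");
   fork
      fast_s.start(m_sequencer);   // back-to-back items, gap = 0
      slow_s.start(m_sequencer);   // random gap of many cycles between items
   join

Each sequence owns its own gap and decides its item contents right before sending, and the sequencer arbitration interleaves them; a single delay knob in the driver can't reproduce that.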

 

You could still do the above and be Veloce-compatible by using events, but it's not really a natural way to code, so....


  • 1 year later...
  • 4 years later...

10 hours ago, DopplerEffect96 said:

I am very new to this forum.

 

I generally use a delay task ->

  task delay_loop(bit [31:0] delay_val);
    for (int i = 0; i < delay_val; i++) begin
      #1ns;
    end
  endtask

 

Your for-loop approach looks very much like RTL.  In any case, the for-loop will create a lot of context switches and slow down the simulation.  It is much better to use #(delay_val * 1ns);
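
i.e., something like this (the task name here is made up):

  task delay_ns(input bit [31:0] delay_val);
    #(delay_val * 1ns);   // one blocking delay instead of delay_val separate wake-ups
  endtask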

 

 

