Everything posted by bhunter1972

  1. Who can provide a summary of what is new and what has changed?
  2. UVM requires that the sequencer first stop its sequences, and that the driver then be certain not to call item_done on any outstanding sequence items. To get this working correctly, there's a simple recipe in this paper from a few years ago, and also in the book: http://sunburst-design.com/papers/HunterSNUGSV_UVM_Resets_paper.pdf
  3. Only if recording is enabled for that transaction. To do that, you need to call the enable_recording() function and give it a stream.
  4. I can do even better. You can go here and press Click Inside to look at it yourself: http://tinyurl.com/h7nfgbz
  5. I find it to be a perfectly reasonable alternative to writing your own comparator. I'm not sure why something from 2012 was considered old. But it does only do one thing: if you need any more advanced comparisons, then it's not that useful. Implementing do_compare in your class is a pretty simple thing to do, and once done, it gives you the ability to create your own custom comparators that provide more flexibility.
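For what it's worth, here is a minimal sketch of what a do_compare override can look like. The pkt_c class and its addr/data fields are made up for illustration:

```systemverilog
// Hypothetical sketch: a sequence item with a custom do_compare.
class pkt_c extends uvm_sequence_item;
   `uvm_object_utils(pkt_c)

   rand bit [31:0] addr;
   rand byte       data[];

   function new(string name="pkt");
      super.new(name);
   endfunction : new

   virtual function bit do_compare(uvm_object rhs, uvm_comparer comparer);
      pkt_c rhs_pkt;
      if(!$cast(rhs_pkt, rhs))
         return 0;
      do_compare = super.do_compare(rhs, comparer);
      do_compare &= comparer.compare_field("addr", addr, rhs_pkt.addr, 32);
      // custom rule: only compare payload lengths, not contents
      do_compare &= (data.size() == rhs_pkt.data.size());
   endfunction : do_compare
endclass : pkt_c
```

With that in place, obj_a.compare(obj_b) picks up your custom rules automatically.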
  6. Your sequence has a handle to the sequencer it runs on, called p_sequencer. You can create a TLM imp there, and a TLM port in your block_cfg_mngr component, and then write the data from the port to the imp. Your sequence can then see the value using its handle to the sequencer.
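As a sketch (the cfg_sqr_c, my_item_c, and last_cfg_value names are invented for illustration), assuming the block_cfg_mngr broadcasts an int through an analysis port:

```systemverilog
// Hypothetical sketch: a sequencer with an analysis imp that receives
// values written by a block_cfg_mngr component's analysis port.
class cfg_sqr_c extends uvm_sequencer #(my_item_c);
   `uvm_component_utils(cfg_sqr_c)

   uvm_analysis_imp #(int, cfg_sqr_c) cfg_imp;
   int last_cfg_value;  // sequences read this through p_sequencer

   function new(string name, uvm_component parent);
      super.new(name, parent);
      cfg_imp = new("cfg_imp", this);
   endfunction : new

   // called whenever block_cfg_mngr writes to its analysis port
   virtual function void write(int t);
      last_cfg_value = t;
   endfunction : write
endclass : cfg_sqr_c

// In the environment's connect_phase:
//    block_cfg_mngr.cfg_port.connect(my_sqr.cfg_imp);
// In a sequence running on cfg_sqr_c (with `uvm_declare_p_sequencer):
//    my_value = p_sequencer.last_cfg_value;
```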
  7. Agree, although I cannot speak for everyone's design or algorithm that they are trying to verify. There are generally two types of predictors: "white models" and "scoreboards." There are plenty of other names for the same things, but generally they boil down to those two. The difference between them is that the former tries to emulate exactly what the RTL does and perform a cycle-by-cycle comparison, whereas the latter sees stimulus input and predicts what the design will do at some point in the future. Of course, there are varying degrees to which these descriptions hold true. For example, a white model might not handle error cases very cleanly, but generally gets everything else correct.

     The advantage of the white model is that an error is generated very quickly once the device gets out of sync. The disadvantage, as you said, is that as the implementation matures the white model must be re-coded every time. This, in my experience, is a fatal flaw. But again, it depends on what you're trying to verify.

     The scoreboard approach, meanwhile, provides flexibility for the design to change over time. It also usually has less "granularity" to its predictions. For example, if a series of memory writes to contiguous addresses are expected, followed by a "DONE" cycle, a less granular scoreboard would not predict each and every memory write transaction and the done transaction. Instead, it would model the memory and only look at the final results once the DONE is seen. This approach gives the design the flexibility to chop up these writes as it sees fit and makes things easier for us. Especially in a constrained random environment, because the variety of stimulus can be arbitrary and outrageous, the white model is generally unacceptable.
  8. So, it was fixed? Was a new version of UVM 1.2 released? Or is it part of UVM 1.3?
  9. At Cavium, we hand-edited our version of UVM (egads!) to explicitly allow periods, although we felt that asterisks and a few others were probably a bad idea. I know I'll probably get hate mail for abusing the standard like that, but we've got chips to make. Here is a patch that you can apply:

```diff
Index: 1_2/src/base/uvm_resource.svh
===================================================================
--- 1_2/src/base/uvm_resource.svh	(revision 329382)
+++ 1_2/src/base/uvm_resource.svh	(working copy)
@@ -1412,7 +1412,8 @@
 `ifndef UVM_NO_DEPRECATED
   begin
     for(int i=0;i<name.len();i++) begin
-      if(name.getc(i) inside {".","/","[","*","{"}) begin
+      // CAVM: Permit periods inside instance names
+      if(name.getc(i) inside {"/","[","*","{"}) begin
         `uvm_warning("UVM/RSRC/NOREGEX",
           $sformatf("a resource with meta characters in the field name has been created \"%s\"",name))
         break;
       end
```
  10. Come say hi at DVCon 2016 in San Jose on March 2nd at the Real Intent panel! http://dvcon.org/content/event-details?id=199-126
  11. I've been told that the explanation I have in my book is clear, so I'll just put this here. The diagram in the text helps a lot, but hopefully this will work: When initiating transactions up through a hierarchy, ports can talk to other ports just fine. Imps, though, are always the end of the line. To push transactions down through a hierarchy, TLM provides exports. Exports promote an imp to a higher level in the hierarchy. From another component’s point of view, they look exactly like an imp, but the real imp is buried someplace within the hierarchy. With exports, the external component need not know anything about the lower-level hierarchy. Knowing that I shouldn't pass up a chance to advertise, you can get the book at: http://tinyurl.com/AdvancedUVM
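A minimal sketch of that port → export → imp chain (the env_c, sub_c, and my_item_c names are hypothetical):

```systemverilog
// Hypothetical sketch: an export on env_c forwards transactions down
// to the real imp buried inside a subscriber.
class sub_c extends uvm_subscriber #(my_item_c);  // provides analysis_export (an imp)
   `uvm_component_utils(sub_c)
   function new(string name, uvm_component parent);
      super.new(name, parent);
   endfunction : new
   virtual function void write(my_item_c t);
      `uvm_info("SUB", {"Saw: ", t.convert2string()}, UVM_MEDIUM)
   endfunction : write
endclass : sub_c

class env_c extends uvm_env;
   `uvm_component_utils(env_c)
   uvm_analysis_export #(my_item_c) item_export;  // looks like an imp from outside
   sub_c sub;
   function new(string name, uvm_component parent);
      super.new(name, parent);
   endfunction : new
   virtual function void build_phase(uvm_phase phase);
      item_export = new("item_export", this);
      sub = sub_c::type_id::create("sub", this);
   endfunction : build_phase
   virtual function void connect_phase(uvm_phase phase);
      item_export.connect(sub.analysis_export);  // export forwards to the real imp
   endfunction : connect_phase
endclass : env_c
```

An external monitor can now connect its analysis port to env.item_export without knowing anything about sub_c.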
  12. The data array will be created implicitly based on the .size() constraint, if it isn't already created, whenever you randomize the class. However, if you allow the simulator to do this implicitly, then you lose the ability to constrain any of the array's elements, which may or may not be something you care about. If you do, then here's how I do this. First, in my class's new function, I new the array to the largest possible size. This is important, and perhaps a little wasteful. Second, I constrain the data.size() as you have above. Third, I can then constrain the data bytes, usually using a foreach constraint. When the class is randomized, the data will automatically re-size itself and constrain the elements appropriately. So, like this:

```systemverilog
class pushk_c extends uvm_object;
   rand int unsigned data_len;
   constraint data_len_cnstr {
      data_len < MAX_DATA_LEN;  // some parameter or constant
   }

   rand byte data[];
   constraint data_cnstr {
      data.size() == data_len;
      foreach(data[idx]) {
         data[idx] == idx;
      }
   }

   function new(string name="pushk");
      super.new(name);
      data = new[MAX_DATA_LEN];
   endfunction : new
endclass : pushk_c
```

I would prefer not to self-promote, but I will anyway. I demonstrate this a few times in my book: http://tinyurl.com/AdvancedUVM, or for the e-book: http://tinyurl.com/AdvancedUVM-ebook
  13. For your specific case, you might consider an alternative frontdoor path. With this method, you create a sequence that runs on--well, any sequencer really--and is assigned to the A & B registers using set_frontdoor. This sequence extends uvm_reg_frontdoor and contains an 'rw_info' variable that holds the information of the transaction. Based on the value of C, the registers can then be manipulated like any other. Consider:

```systemverilog
uvm_reg chosen_reg;
uvm_status_e status;

chosen_reg = (p_sequencer.reg_block.C.SELECTOR.get() == <A_SELECTED>)?
             p_sequencer.reg_block.A : p_sequencer.reg_block.B;
if(rw_info.kind == UVM_WRITE)
   chosen_reg.write(status, rw_info.value[0]);
else begin
   uvm_reg_data_t value;
   rw_info.value = new[1];
   chosen_reg.read(status, value);
   rw_info.value[0] = value;
end
```

It's something like that. Not to be too self-aggrandizing, but my book demonstrates this method more clearly: http://tinyurl.com/AdvancedUVM, or the e-book: http://tinyurl.com/AdvancedUVM-ebook.
  14. Well, like I said, for brevity's sake I excluded the text from the book. But in short: you can use the wait_next_delay() within a sequence to wait a period of time, or you can embed this delay object in your driver and wait for a number of clocks. Maybe I'm being too idealistic, but you never want to tie your sequences to a number of "clocks". Sequences have no notion of clocks or interfaces. Drivers see clocks, sequences see time. There are good reasons for this, but that would be a longer conversation. Your issue with regards to falling off of the clock boundary in your driver is a known one, so the book will show you how to avoid that. In short, your driver should call try_next_item() to fetch the next item; if there isn't one, it should call get_next_item() and then wait for a clock cycle before driving. This allows back-to-back items when the sequencer is pushing quickly. After driving an item, you would call get_next_delay() and wait that number of clock cycles. This has probably gone on too long. The bigger answer to your question, which I think Linc and I would agree upon, is to create a reusable OO solution.
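A sketch of that driver loop (the vif, drive_item, and rand_delays names are hypothetical):

```systemverilog
// Hypothetical driver run loop: accept back-to-back items when the
// sequencer is fast, otherwise re-align to the clock edge first.
virtual task run_phase(uvm_phase phase);
   forever begin
      seq_item_port.try_next_item(req);
      if(req == null) begin
         seq_item_port.get_next_item(req);  // block until an item arrives
         @(posedge vif.clk);                // re-sync to the clock boundary
      end
      drive_item(req);                      // presumed to end on a clock edge
      seq_item_port.item_done();
      // inter-item gap, measured in clocks, from the embedded delay object
      repeat(rand_delays.get_next_delay()) @(posedge vif.clk);
   end
endtask : run_phase
```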
  15. Funny you should ask this. I have a book coming out before the end of the year which has a bunch of UVM recipes, with one on this topic. It's similar to Linc's solution above, except that it is a randomizable class instantiated in the environment that either sequencers or drivers can use. Here's what it looks like. For brevity, I'll just post the class here and not all of the explanatory text that comes with it.

```systemverilog
class rand_delays_c extends uvm_object;
   //----------------------------------------------------------------------------------------
   // Group: Types

   // enum: traffic_type_e
   // Allows for a variety of delay types
   typedef enum {
      FAST_AS_YOU_CAN,
      REGULAR,
      BURSTY
   } traffic_type_e;

   typedef int unsigned delay_t;

   `uvm_object_utils_begin(rand_delays_c)
      `uvm_field_enum(traffic_type_e, traffic_type, UVM_DEFAULT)
      `uvm_field_int(min_delay,      UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(max_delay,      UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_on_min,   UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_on_max,   UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_off_min,  UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(burst_off_max,  UVM_DEFAULT | UVM_DEC)
      `uvm_field_int(wait_timescale, UVM_DEFAULT | UVM_DEC)
   `uvm_object_utils_end

   //----------------------------------------------------------------------------------------
   // Group: Random Fields

   // var: traffic_type
   rand traffic_type_e traffic_type;

   // var: min_delay, max_delay
   // Delays used for REGULAR traffic types
   rand delay_t min_delay, max_delay;

   // var: burst_on_min, burst_on_max
   // Knobs that control the random length of bursty traffic
   rand delay_t burst_on_min, burst_on_max;

   // var: burst_off_min, burst_off_max
   // Knobs that control how long a burst will be off
   rand delay_t burst_off_min, burst_off_max;

   // var: wait_timescale
   // The timescale to use when wait_next_delay is called
   time wait_timescale = 1ns;

   //----------------------------------------------------------------------------------------
   // Group: Local Fields

   // var: burst_on_time
   // When non-zero, burst mode is currently on for this many more calls
   delay_t burst_on_time = 1;

   //----------------------------------------------------------------------------------------
   // Group: Constraints

   // constraint: delay_L0_cnstr
   // Keep min knobs <= max knobs
   constraint delay_L0_cnstr {
      traffic_type == REGULAR -> (min_delay <= max_delay);
      traffic_type == BURSTY  -> (burst_on_min <= burst_on_max) &&
                                 (burst_off_min <= burst_off_max);
   }

   // constraint: delay_L1_cnstr
   // Safe delays
   constraint delay_L1_cnstr {
      max_delay <= 500;
      burst_on_max <= 500;
      burst_off_max <= 500;
   }

   //----------------------------------------------------------------------------------------
   // Group: Methods

   ////////////////////////////////////////////
   // func: new
   function new(string name="rand_delay");
      super.new(name);
   endfunction : new

   ////////////////////////////////////////////
   // func: get_next_delay
   // Return the length of the next delay
   virtual function delay_t get_next_delay();
      case(traffic_type)
         FAST_AS_YOU_CAN: get_next_delay = 0;
         REGULAR: begin
            std::randomize(get_next_delay) with {
               get_next_delay inside {[min_delay:max_delay]};
            };
         end
         BURSTY: begin
            if(burst_on_time) begin
               burst_on_time -= 1;
               get_next_delay = 0;
            end else begin
               std::randomize(get_next_delay) with {
                  get_next_delay inside {[burst_off_min:burst_off_max]};
               };
               std::randomize(burst_on_time) with {
                  burst_on_time inside {[burst_on_min:burst_on_max]};
               };
            end
         end
      endcase
   endfunction : get_next_delay

   ////////////////////////////////////////////
   // func: wait_next_delay
   // Wait for the next random period of time, based on the timescale provided
   virtual task wait_next_delay();
      delay_t delay = get_next_delay();
      #(delay * wait_timescale);
   endtask : wait_next_delay
endclass : rand_delays_c
```
  16. Some components may have a run-time phase which operates in a "Run forever and wake up when there's something to do" mode, like this simplistic driver:

```systemverilog
virtual task run_phase(uvm_phase phase);
   forever begin
      seq_item_pull_port.get(req);
      phase.raise_objection(this);
      drive_my_request(req);
      phase.drop_objection(this);
   end
endtask : run_phase
```

When there are no objections remaining, all of the tasks spawned by all of the components' run phases are killed. This is a feature, not a bug. It just wouldn't do to kill this task's run phase while it was driving a request, and UVM cannot implicitly know how long that will take. Therefore, the objection software pattern is used for all run-time phases.
  17. Here's a high-level overview: When the build phase starts, the test component is created and its build phase is run. As the test creates more components, each component registers with the UVM environment and is added to a list of components whose build phases haven't run yet. When the test's build phase is complete, UVM then goes through the list of components and runs their build phases. Each, in turn, may add more sub components to the list. Wash, rinse, repeat. When the list is empty, then all components have run through their build phases and the UVM build phase is complete. Same thing happens with the connect phase and all the rest, except that in those no more components will be added. So, you see, it's just like a car.
  18. Based on what the developers have told me, phase jumping to a pre-run phase might "kinda-sorta work so don't do it." Put more simply, phase jumping was not intended to go back to a pre-run phase, so even though the functions may get called again this will lead to issues. Probably they should have more clearly made it NOT work, but whatever.
  19. I do not believe that it is "legal" to jump back to the function phases. You can jump forward to the check phase, for example, and you can jump back to the reset phase. But you definitely cannot go back and redo the build phase.
  20. It's for this reason that I never use parameterized interfaces. I know that probably sounds a bit ridiculous, but I find that it's easier to make an interface wide enough to contain all the necessary bits for all types of instantiations. Say you have a 64-bit interface that is sometimes driven at 32 bits. In my drivers and monitors I use configurable fields to tell how wide the bus is and work from there. In the interface, I might create a field called 'bus_width' and assign it so that I can use it to control assertions or any other active logic. From there, I can plunk down the interface and let random constraints or configuration options choose how wide my interface is today. It may get a little complicated in places, but I've found this approach to be worth the effort. Most importantly, it saves me from having to type my_pkg::drv_c#(BUS_WIDTH), my_pkg::agent_c#(BUS_WIDTH), and on and on and on.
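A sketch of what that can look like (the bus_intf, MAX_WIDTH, and bus_width names are invented for illustration):

```systemverilog
// Hypothetical sketch: one max-width interface reused for any bus width.
interface bus_intf(input logic clk);
   localparam int MAX_WIDTH = 64;

   logic [MAX_WIDTH-1:0] data;
   logic                 valid;

   // set by the testbench, e.g. from a configuration object
   int unsigned bus_width = MAX_WIDTH;

   // upper lanes must stay quiet when driving a narrower bus
   property upper_lanes_idle;
      @(posedge clk) (valid && bus_width < MAX_WIDTH) |->
         (data >> bus_width) == '0;
   endproperty
   assert property(upper_lanes_idle);
endinterface : bus_intf
```

The drivers and monitors read the same bus_width knob, so the agent classes never need a width parameter.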
  21. Totally disagree with the above answer. The best is to use the macros. Religion aside for a second, it all depends on the situation: If you want to create a sequence on a sequencer, randomize it, and send it, then `uvm_do does all that for you. If you want to create a sequence, set some values inside of it, and then send it, then the combination of `uvm_create and `uvm_send is best. If you need constraints, use the _with macros. If it's on another sequencer, use the _on macros. Mix and match as appropriate. As for whether to use macros or not, I agree that it's best to know what they all do under the hood. However, macros always have two benefits: 1. They reduce the amount of code that you have to type, which reduces the number of mistakes you need to debug. 2. If the underlying code of the macros is later improved, you don't need to change any code anywhere at any time. You get the improvements for free.
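For example, mixing and matching inside a sequence's body() task (seq_c, its mode field, and other_sqr are hypothetical names):

```systemverilog
// Hypothetical examples of the sequence macro combinations described above.
task body();
   seq_c seq;

   // create + randomize + send, all in one
   `uvm_do(seq)

   // create, set fields by hand, then send un-randomized
   `uvm_create(seq)
   seq.mode = seq_c::SLOW;
   `uvm_send(seq)

   // randomize with inline constraints
   `uvm_do_with(seq, { length == 8; })

   // run on a different sequencer
   `uvm_do_on(seq, p_sequencer.other_sqr)
endtask : body
```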
  22. Not that it matters much to you, but there are a lot of things in those UVM Guidelines that I wouldn't agree with. So many, in fact, that I think that might be a very old list of guidelines that came out around the time that UVM was still just OVM with all the o's changed to u's. We rely heavily on phases here, and if they were being deprecated, there would be a considerable uproar. All in all, I think this is the typical EDA vendor posturing, where Mentor/Cadence complain about what Synopsys recommends, and vice-versa.
  23. Without seeing the code, I cannot be certain. However, I suspect that in your call to create the lower-level objects, you have provided a this pointer. In other words, I think you're doing this:

```systemverilog
my_cfg = my_cfg_c::type_id::create("my_cfg", this);
```

When you should just be doing this:

```systemverilog
my_cfg = my_cfg_c::type_id::create("my_cfg");
```

Only components get a this pointer.
  24. Yes, you can! The sequencer is basically a big arbiter that decides what to do next. Doing things in random order is essential to finding those tough bugs. You can control which sequence occurs next with priorities and other algorithms, but letting the sequencer decide is good enough to get you started. Here's how you might do it:

```systemverilog
task body();
   easy_seq_c  easy_seq;
   ugly_seq_c  ugly_seq;
   hairy_seq_c hairy_seq;

   fork
      `uvm_do(easy_seq)
      `uvm_do(ugly_seq)
      `uvm_do(hairy_seq)
   join
endtask : body
```

Of course, running lots of sequences and virtual sequences at the same time is important, so for that you would need loops and lots of other random thingies.