
c4brian

Members
  • Posts: 82
  • Joined
  • Last visited
  • Days Won: 3

c4brian last won the day on April 19 2017

c4brian had the most liked content!

Profile Information
  • Gender: Not Telling

Recent Profile Visitors
  1,297 profile views

c4brian's Achievements
  Advanced Member (2/2)

Reputation: 5
  1. Try this. The '-' means left-aligned; choose a field width larger than your maximum, for each field, as in my example:

     class Packet;
       int seed[]       = '{54445, 745, 11209};
       string out_str[] = '{"imabigstring", "tiny", "medium"};
       int out_num[]    = '{88, 353, 1};

       // print each element with left-aligned, fixed-width fields
       function void display();
         foreach (out_num[i])
           $display("TEST%04d: seed=%-8d Out_str: %-16s Out_Num: %-8d",
                    i, seed[i], out_str[i], out_num[i]);
       endfunction
     endclass

     https://www.edaplayground.com/x/2_vR
  2. In the Mentor cookbook, they create it in the test and then pass a handle to the env config object. This resource is typically used by sequences (also in the test), so the structure is fine with me. The use model is that you'll have a RAL model declared for the block, sub-system, and SoC levels. That way you can specify the offset for sub-level register blocks, HDL path slices for backdoor access (optional), etc. A rough sketch of that pattern follows.
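     A minimal sketch of that use model, assuming uvm_pkg and the UVM macros are imported and assuming hypothetical class names my_reg_block (the generated RAL block) and my_env_cfg (the env config object); the test creates and locks the register model, then hands it to the env through the config object:

     class my_test extends uvm_test;
       `uvm_component_utils(my_test)

       my_env       env;
       my_env_cfg   cfg;        // env config object (hypothetical)
       my_reg_block regmodel;   // RAL block for this level (hypothetical)

       function new(string name, uvm_component parent);
         super.new(name, parent);
       endfunction

       function void build_phase(uvm_phase phase);
         super.build_phase(phase);
         regmodel = my_reg_block::type_id::create("regmodel");
         regmodel.build();                    // build registers and sub-block offsets
         regmodel.lock_model();               // finalize the address map
         regmodel.add_hdl_path("tb_top.dut"); // optional: root HDL path for backdoor access

         cfg          = my_env_cfg::type_id::create("cfg");
         cfg.regmodel = regmodel;             // pass the handle down via the config object
         uvm_config_db #(my_env_cfg)::set(this, "env", "cfg", cfg);

         env = my_env::type_id::create("env", this);
       endfunction
     endclass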
  3. Since that is cleared up, let's say I've got the following DUT; I guess this would be called a "data-flow" DUT. The Master is initialized via control interface B, then awaits commands on the same interface. I've got block-level environments for components A, B, C, and D. I need top-level functional verification, of course, so I need to verify the behavior of the Master, which performs all kinds of low-level reads and writes on the shared peripheral bus.

     Option 1 (traditional): have a scoreboard verify every peripheral bus transaction. This sounds like a bad decision. There are countless transactions, and I'd end up with a 100% accurate Master model, which would change every time the Master software changed. I really don't care whether the Master performed a read-modify-write of a register; I just care about the final result.

     Option 2: use the RAL, or a similar technique, to verify the correctness of the data moved around the peripheral bus at particular intervals or instances. Example use case: the Master receives the command "Component Status?", talks to all components gathering status values, then writes a Status-Packet to Component A. I'm not interested in HOW it gathered the status; I just want to verify the contents of the response packet (a rough sketch of that style of check follows this post).

     Option 3... etc. Am I on the right track? Thoughts?
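     A minimal sketch of the Option 2 style of end-state checking, assuming the peripheral registers are modeled in a RAL block and that this task is called (from the scoreboard or test) once the Master indicates the command is complete; uvm_pkg is assumed to be imported:

     // Compare the final register state against the model's predicted values,
     // instead of checking every individual peripheral bus transaction.
     task check_final_reg_state(uvm_reg_block regmodel);
       uvm_status_e status;

       // mirror() reads every register (frontdoor here) and, with UVM_CHECK,
       // flags any mismatch against the mirrored/predicted value
       regmodel.mirror(status, UVM_CHECK, UVM_FRONTDOOR);

       if (status != UVM_IS_OK)
         `uvm_error("CHK", "Final register state does not match the prediction")
     endtask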
  4. Tudor, thanks for the reply. "properly it's necessary (but not sufficient) for the blocks to work properly." - What do you mean by "but not sufficient"? I understand and agree. Revisiting this simple example, I realized something so trivial it's almost embarrassing to admit: I am still getting the top-level functional checks using the block-level environments. My concern was this: using block-level environments, each model gets its input from an agent and not from a previous model's output. In my mind, that implied the input might be incorrect, possibly messed up by the RTL block. However, I can guarantee the correctness of the input to any stage because it was already checked in the previous stage. In short, I am an idiot.
  5. Let's say I have the following DUT. The UVM environment contains a chain of models/predictors. Input data flows down this chain and generates the expected CHIP output, which is compared to the actual output (a rough sketch of this arrangement follows the post). Pros: it verifies top-level functionality. Cons: it does not verify block-level functionality. A good start, but I'd also like to verify the blocks in a system setting. So I create block-level environments, then reuse them at the top level. Awesome, but wait a minute: I still need the top-level (input-to-output) verification like in the first example. However, all three of my block predictors are being used in their corresponding environments' scoreboards, hooked up to the RTL via agents. How does one do both? Surely I'm not supposed to instantiate duplicate copies of my block-level predictors to create the end-to-end model chain...
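     For reference, a minimal sketch of the first arrangement described above (the end-to-end predictor chain only), with all type names hypothetical and uvm_pkg/UVM macros assumed imported; each predictor takes transactions on analysis_export and publishes its prediction on an analysis port ap, and the end of the chain becomes the expected CHIP output:

     class top_env extends uvm_env;
       `uvm_component_utils(top_env)

       pred_a_t pred_a;        // block-level predictors (hypothetical types)
       pred_b_t pred_b;
       pred_c_t pred_c;
       chip_scoreboard scb;    // compares expected vs actual CHIP output (hypothetical)

       function new(string name, uvm_component parent);
         super.new(name, parent);
       endfunction

       function void build_phase(uvm_phase phase);
         super.build_phase(phase);
         pred_a = pred_a_t::type_id::create("pred_a", this);
         pred_b = pred_b_t::type_id::create("pred_b", this);
         pred_c = pred_c_t::type_id::create("pred_c", this);
         scb    = chip_scoreboard::type_id::create("scb", this);
       endfunction

       function void connect_phase(uvm_phase phase);
         // the input agent's monitor feeds pred_a.analysis_export (not shown)
         pred_a.ap.connect(pred_b.analysis_export);  // A's prediction feeds B
         pred_b.ap.connect(pred_c.analysis_export);  // B's prediction feeds C
         pred_c.ap.connect(scb.expected_export);     // end of chain = expected output
         // the output agent's monitor feeds scb.actual_export (not shown)
       endfunction
     endclass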
  6. class my_sequencer extends uvm_sequencer #(my_txn_type);
       `uvm_component_utils(my_sequencer)

       virtual my_if my_vif;   // the interface type name (my_if) is assumed here

       function new(string name, uvm_component parent);
         super.new(name, parent);
       endfunction
     endclass

     Populate my_vif inside the agent, after creating the sequencer (code not shown; a rough sketch follows below). Then access your testbench resource from a sequence:

     class my_seq extends uvm_sequence #(my_txn_type);
       `uvm_object_utils(my_seq)
       `uvm_declare_p_sequencer(my_sequencer)   // creates the p_sequencer handle

       function new(string name = "");
         super.new(name);
       endfunction

       virtual task body();
         p_sequencer.my_vif.wait_clock(5);   // example: call a task on the interface
       endtask
     endclass

     P.S. I've run into some opposition to using interface calls in a sequence, but anyway, you can use this technique for accessing other things too.
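     A minimal sketch of the "code not shown" step, assuming the agent pulls the virtual interface out of uvm_config_db under the key "vif" and that the interface type is named my_if (both names are assumptions):

     class my_agent extends uvm_agent;
       `uvm_component_utils(my_agent)

       my_sequencer sqr;

       function new(string name, uvm_component parent);
         super.new(name, parent);
       endfunction

       function void build_phase(uvm_phase phase);
         super.build_phase(phase);
         sqr = my_sequencer::type_id::create("sqr", this);
         // hand the virtual interface to the sequencer so sequences can reach it
         if (!uvm_config_db #(virtual my_if)::get(this, "", "vif", sqr.my_vif))
           `uvm_fatal("NOVIF", "virtual interface not found in the config db")
       endfunction
     endclass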
  7. The models we have used are untimed, behavioral models; they are not white-box models. I think what you described about the memory example is what we are looking for. For example: if we receive an "INIT" command on interface A, we should expect a series of writes to occur on interface B. The predictor can predict the final state of the memory/register space of the device hanging off interface B.

     Use case:
       Write Register 0: 0xAB
       Write Register 1: 0xCD
       Read Register 2 (for read-modify-write)
       Write Register 2: bit 7 asserted
       Write memory locations 0 - 63: 0xFF

     As the verification engineer, I don't want to worry about things like whether the designer had to do a read-modify-write, or what order he decided to write the memory in (perhaps he started at location 63 and went backwards to 0?). I want to know, at the end (like a DONE signal), that the "actual" memory reflects what I predicted. Does that sound right? (A rough sketch of that kind of check follows this post.)
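     A minimal sketch of that end-state comparison, assuming the predictor maintains an expected image of the address space and the scoreboard fills in the actual image (via backdoor reads, a register model, or a monitor) before calling the check at the DONE indication; all names are hypothetical and uvm_pkg/UVM macros are assumed imported:

     class mem_state_checker extends uvm_component;
       `uvm_component_utils(mem_state_checker)

       // expected and actual images of the device's address space
       bit [7:0] expected[int];
       bit [7:0] actual[int];

       function new(string name, uvm_component parent);
         super.new(name, parent);
       endfunction

       // called by the predictor while interpreting the "INIT" command
       function void predict_write(int addr, bit [7:0] data);
         expected[addr] = data;
       endfunction

       // called once the DONE indication is observed; write order is irrelevant,
       // only the final contents are compared
       function void check_final_state();
         foreach (expected[addr]) begin
           if (!actual.exists(addr))
             `uvm_error("MEMCHK", $sformatf("addr 0x%0h was never written", addr))
           else if (actual[addr] !== expected[addr])
             `uvm_error("MEMCHK", $sformatf("addr 0x%0h: expected 0x%0h, got 0x%0h",
                                            addr, expected[addr], actual[addr]))
         end
       endfunction
     endclass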
  8. What we've done in the past is have "phantom models" that are highly coupled to the RTL state machine. E.g., the RTL got a command and performed: Read 0x1, Read 0x2, Write 0x1, Write 0x2; our "model/predictor" would do the exact same thing. Now, if the RTL changes at all, adds an extra write, or changes the order, the model is wrong. I think our paradigm for performing checking and building models is incorrect; it's coupled too tightly to the implementation details. Agree? I'm guessing the model needs to care more about FUNCTION and less about IMPLEMENTATION. Agree?
  9. Alan, I've got +access+r specified. I'll try your break technique. I just ran with the new RP, and 85% of my CPU cycles are still dumped into "other code" in the hierarchy tab. All my testbench classes are simply listed as a list of packages, and none of them have any cycles associated with them. I'm not sure how it should be displayed, but I thought it might have a hierarchical display like test - env - children - etc. Could someone give me some kind of ballpark metric? For a single-component testbench, does the HDL eat up, say, 50% of the CPU and the testbench the other 50%, or is it more like 90/10, etc.? According to my report, my DUT is only pulling around 2% of the CPU.
  10. This is most likely some hang-up between my architecture and my tool (RivieraPro), but my profiler results basically show 84.55% of the CPU used by "other code", with no details as to what that means, and the remaining percentage distributed across my top testbench files (and all their children). Has anyone run into anything like this in RP, or another tool? Something is eating my simulation alive the longer it runs...
  11. Does anyone know a slick way to determine the number of database accesses that have occurred in a given sim run? I'd love it to print upon sim end.
  12. This will display the wall-clock simulation run time upon simulation end (thanks to Sunil at Aldec):

      set start_time [clock clicks -millisec]
      run $SIMTIME   ;# pass the sim time (you must set this variable previously)
      set run_time [expr ([clock clicks -millisec]-$start_time)/1000.]
      puts "Sim Time: $run_time sec. \n"
  13. Ah haha, sorry, that was confusing... I meant YES = an example using the config db, NO = an example not using the config db. Your new name makes a little more sense. It could probably be a uvm_object unless you need it to have a hierarchical path, which it doesn't sound like you do. I'll file this technique under "innocent until proven guilty".
  14. A coworker and I were discussing this question: in order to run a regression suite (or multiple regression suites, etc.), would you run a test, stop the simulator, log any results (pass/fail, etc.), then start the simulator again on a new test with a new random seed, OR run a single test which runs several high-level sequences, one after the other?
  15. Also, my simulation gets slower the longer it runs. Is this a sign I'm doing something wrong?