c4brian

Everything posted by c4brian

  1. Hello, I've scoured the internet and this forum for what I imagine will have a very simple solution; apparently I am not describing it sufficiently. I have a SystemC model composed of a handful of registers and a single memory bank. I would like to implement TWO blocking transport interfaces on this model. If I implement a single blocking transport (via inheritance of b_transport), the method has access to all the model resources. If I want two b_transport interfaces, I've had to move them into channels, which are then instantiated in the model. These channels do not have access to the model resources, of course, because they are declared separately from the model. They communicate transactions to the model via an event (or using a fifo). Therefore, the blocking transport must issue a wait(sc_zero_time) for the transaction to reach the model and perform some action. *Perhaps this solution is already what I should be doing. However, I like the idea of a blocking transport actually setting the value and returning (with no wait statements); but I cannot do this because I need TWO of these methods. Brian
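     A minimal sketch of one way to get two blocking transport implementations that both keep direct access to the model's resources: use two tlm_utils::simple_target_socket instances (the standard TLM-2.0 convenience socket) and register a different member function on each. The my_model class and its members here are illustrative, not from the post:

        #include <systemc>
        #include <tlm>
        #include <tlm_utils/simple_target_socket.h>

        struct my_model : sc_core::sc_module {
            tlm_utils::simple_target_socket<my_model> reg_socket;
            tlm_utils::simple_target_socket<my_model> mem_socket;

            sc_dt::sc_uint<32> regs[8];    // model resources, visible to both callbacks
            unsigned char      mem[1024];

            SC_CTOR(my_model) : reg_socket("reg_socket"), mem_socket("mem_socket") {
                // Each socket gets its own b_transport callback; no channels,
                // events, or wait(sc_zero_time) hand-off required.
                reg_socket.register_b_transport(this, &my_model::b_transport_regs);
                mem_socket.register_b_transport(this, &my_model::b_transport_mem);
            }

            void b_transport_regs(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
                // act on regs[] directly and return without waiting
                trans.set_response_status(tlm::TLM_OK_RESPONSE);
            }

            void b_transport_mem(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
                // act on mem[] directly and return without waiting
                trans.set_response_status(tlm::TLM_OK_RESPONSE);
            }
        };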
  2. I have a sequence sending commands to my DUT. Following each command, I would like to wait for a random delay. Two possible techniques:

     1) Use a virtual interface to my system interface; pass a random number to a wait_clk method, which in turn uses a clocking block.

        int unsigned delay;
        delay = $urandom_range(0, 1000);
        vif.wait_clk(delay);

     2) Perhaps 1 is overkill (it requires a virtual IF, which adds a dependency and is perhaps less reusable). Perhaps something more like...

        int unsigned delay;
        delay = $urandom_range(0, 1000);
        // #delay ns;       // not correct syntax!
        // #(delay * 1ns);  // this form compiles

     3) Pull in the configuration of the agent, using m_sequencer, and use its methods... maybe this is the most appropriate.

        virtual task body();
          if (!uvm_config_db #(plb_agent_configuration)::get(m_sequencer, "", "AGENT_CONFIG", cfg))
            `uvm_fatal(report_id, "cannot find resource plb config")
          // issue command left out
          begin
            int unsigned num;
            num = $urandom_range(1000, 0);
            cfg.wait_clk(num);
          end
        endtask

     Share with me your favorite technique.
  3. Let's say I have the following DUT. The UVM environment contains a chain of models/predictors. Input data flows down this chain and generates the expected CHIP output, which is compared to actual. Pros: verifies top-level functionality. Cons: does not verify block-level functionality. A good start, but I'd also like to verify the blocks in a system setting. So, I create block-level environments, then reuse them at the top level. Awesome, but wait a minute. I still need the top-level verification (input-to-output) like in the first example. However, all 3 of my block predictors are being used in their corresponding environments' scoreboards, hooked up to the RTL via agents. How does one do both? Surely I'm not supposed to instantiate duplicate copies of my block-level predictors to create the end-to-end model chain...
  4. Try this. The '-' means left-aligned. Choose a size larger than your max for each field, as in my example.

        class Packet;
          int seed[] = '{54445, 745, 11209};
          string out_str[] = '{"imabigstring", "tiny", "medium"};
          int out_num[] = '{88, 353, 1};
          function void display();
            foreach (out_num[i])
              $display("TEST%04d: seed=%-8d Out_str: %-16s Out_Num: %-8d",
                       i, seed[i], out_str[i], out_num[i]);
          endfunction
        endclass

     https://www.edaplayground.com/x/2_vR
  5. In the Mentor cookbook, they create it in the test and then pass a handle to the env config object. This resource is typically used by the sequence (also in the test), so the structure is fine with me. The use model is that you'll have a RAL declared at the block, sub-system, and SoC levels. This way you can specify offsets for sub-level register blocks, HDL path slices for backdoor access (optional), etc.
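     A hedged sketch of that use model (class and instance names are illustrative, not from the cookbook): a sub-system RAL block that places a block-level RAL model at an offset in its map, using the standard uvm_reg_block API:

        class subsys_reg_block extends uvm_reg_block;
          `uvm_object_utils(subsys_reg_block)
          block_reg_block blk; // hypothetical block-level register model

          function new(string name = "subsys_reg_block");
            super.new(name, UVM_NO_COVERAGE);
          endfunction

          virtual function void build();
            default_map = create_map("default_map", 'h0, 4, UVM_LITTLE_ENDIAN);
            blk = block_reg_block::type_id::create("blk");
            blk.configure(this);
            blk.build();
            // place the block-level map at its sub-system offset
            default_map.add_submap(blk.default_map, 'h1000);
          endfunction
        endclass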
  6. Since that is cleared up, let's say I've got the following DUT. I guess this would be called a "data-flow" DUT. The master is initialized via control interface B, then the master awaits commands on the same interface. I've got block-level environments for components A, B, C, D. I need top-level functional verification, of course, so I need to verify the behavior of the Master. The Master performs all kinds of low-level reads and writes on the shared peripheral bus. Option 1 (traditional): Have a scoreboard verify every peripheral bus transaction. This sounds like a bad decision. There are countless transactions, and I'd end up with a 100% accurate Master model which would change every time the Master software changed. I really don't care if the Master performed a read-modify-write of a register. I just care about the final result. Option 2: Use the RAL, or a similar technique, to verify the correctness of data moved around the peripheral bus, at particular intervals or instances. Example use-case: Master receives command "Component Status?". Master talks to all components, gathering status values. Master writes Status-Packet to Component A. I'm not interested in HOW it gathered the status, I just want to verify the contents of the response packet. Option 3... etc. Am I on the right track? Thoughts?
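     For Option 2, something like the following sketch could run at each checkpoint of interest, using the standard uvm_reg mirror/compare machinery (regmodel and the checkpoint hook are illustrative):

        // At a checkpoint (e.g. after the Status-Packet write completes),
        // read every register back and compare against the predicted mirror.
        task check_peripheral_state(uvm_reg_block regmodel);
          uvm_status_e status;
          uvm_reg regs[$];
          regmodel.get_registers(regs);
          foreach (regs[i])
            regs[i].mirror(status, UVM_CHECK); // reads HW, flags mismatches
        endtask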
  7. Tudor, thanks for the reply. "properly it's necessary (but not sufficient) for the blocks to work properly." - What do you mean by "but not sufficient"? I understand and agree. Revisiting this simple example, I realized something so trivial it's almost embarrassing to admit: I am still getting the top-level functional checks using the block-level environments. My concern was this: using block-level environments, each model gets its input from an agent, and not from a previous model's output. In my mind, that implied the input might be incorrect; messed up, possibly, by the RTL block. However, I can guarantee the correctness of the input to any stage because it was already checked in the previous stage. In short, I am an idiot.
  8. Put the virtual interface on the sequencer:

        class my_sequencer extends uvm_sequencer #(my_txn_type);
          `uvm_component_utils(my_sequencer)
          virtual my_if my_vif; // my_if is a placeholder for your interface type

          function new(string name, uvm_component parent);
            super.new(name, parent);
          endfunction
        endclass

     Populate my_vif inside the agent, after creating the sequencer (code not shown). Access your testbench resource from a sequence:

        class my_seq extends uvm_sequence #(my_txn_type);
          `uvm_object_utils(my_seq)
          `uvm_declare_p_sequencer(my_sequencer) // creates the p_sequencer handle

          function new(string name = "");
            super.new(name);
          endfunction

          virtual task body();
            p_sequencer.my_vif.wait_clock(5); // example
          endtask
        endclass

     P.S. I've run into some opposition to using interface calls in a sequence, but anyway, you can use this for accessing other things too.
  9. What we've done in the past is have "phantom models" that are tightly coupled to the RTL state machine. E.g., the RTL got a command and performed: Read 0x1, Read 0x2, Write 0x1, Write 0x2. Our "model/predictor" would do the exact same thing. Now, if the RTL changes at all, adds an extra write, or changes the order, the model is wrong. I think our paradigm of performing checking and building models is incorrect. It's coupled too tightly to the implementation details. Agree? I'm guessing the model needs to care more about FUNCTION, and less about IMPLEMENTATION. Agree?
  10. predictor / TLM model paradigm

      The models we have used are untimed, behavioral models. They are not white-box models. I think what you described about the memory example is what we are looking for. For example: if we receive an "INIT" command on interface A, we should expect a series of writes to occur on interface B. The predictor can predict the final state of the memory/register space of the device hanging off interface B. Use case:

        Write Register 0: 0xAB
        Write Register 1: 0xCD
        Read Register 2 (for read-modify-write)
        Write Register 2: bit 7 asserted
        Write Memory locations 0 - 63: 0xFF

      As the verification engineer, I don't want to worry about things like whether the designer had to do a read-modify-write, or what order he decided to write the memory in (perhaps he started at location 63 and went backwards to 0?). I want to know, at the end (like a DONE signal), that "actual" memory reflects what I predicted. Does that sound right?
  11. This is most likely some hang-up between my architecture and my tool (RivieraPro), but my profiler results basically show 84.55% of CPU used by "other-code", with no details as to what that means, and the remaining percentage distributed across my top testbench files (and all their children). Has anyone run into anything like this in RP, or another tool? Something is eating my simulation alive the longer it runs...
  12. Alan, I've got +access+r specified. I'll try your break technique. I just ran with the new RP, and 85% of my CPU cycles are still dumped into "other code" in the hierarchy tab. All my testbench classes are simply listed as a list of packages, and none of them have any cycles associated with them. I'm not sure how it should be displayed, but I thought perhaps it might have a hierarchical display like test - env - children - etc. Could someone give me some kind of ballpark metric? For a single-component testbench, would the HDL eat up, say, 50% of your CPU and the testbench another 50%, or is it 90/10, etc.? According to my report, my DUT is only pulling around 2% of the CPU.
  13. Anyone know a slick way to determine the number of database accesses which have occurred in a given sim run? I'd love it to print upon sim end.
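     One possible angle, as a sketch only: run with +UVM_CONFIG_DB_TRACE and pull the id counts from the report server at the end of the test. This assumes the config-db trace messages use the ids "CFGDB/GET" and "CFGDB/SET", which is a UVM-library implementation detail and may differ by version:

        class db_access_report extends uvm_component;
          `uvm_component_utils(db_access_report)
          function new(string name, uvm_component parent);
            super.new(name, parent);
          endfunction
          function void report_phase(uvm_phase phase);
            // "CFGDB/GET"/"CFGDB/SET" are an assumption about the library's trace ids
            uvm_report_server svr = uvm_report_server::get_server();
            `uvm_info("DB_STATS",
                      $sformatf("config_db gets: %0d  sets: %0d",
                                svr.get_id_count("CFGDB/GET"),
                                svr.get_id_count("CFGDB/SET")), UVM_LOW)
          endfunction
        endclass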
  14. I would like to hear your best practices for ensuring simulation efficiency. I'm open to all ideas; the profiler in my tool is a nightmare to make sense of. I'm especially interested in: good coding practices; checks for incorrect garbage collection (if such a thing exists); etc. Thanks!
  15. This will display wall-clock simulation run time upon simulation end; thanks Sunil at Aldec:

        set start_time [clock clicks -millisec]
        run $SIMTIME ; # pass sim time (you must set this previously)
        set run_time [expr ([clock clicks -millisec]-$start_time)/1000.]
        puts "Sim Time: $run_time sec. \n"
  16. Ah haha... sorry, that was confusing. I meant YES = example using the config db, NO = example not using the config db. Your new name makes a little more sense. It could probably be a uvm_object unless you need it to have a hierarchical path, which it doesn't sound like you do. I'll file this technique under "innocent until proven guilty".
  17. A coworker and I were discussing this question... In order to run a regression suite (or multiple regression suites), would you run a test, stop the simulator, log any results (pass/fail), then start the simulator again on a new test with a new random seed, OR run a single test which runs several high-level sequences, one after the other?
  18. Also, my simulation gets slower the longer it runs. Is this a sign I'm doing something wrong?
  19. Thanks for the reply. This sounds like a good idea to practice in my sequences, and also in my monitors. Then, if a scoreboard needs a copy, let it clone it. Agree? About some of the other topics... have you been able to quantify your performance gains? For example, you set verbosity from UVM_DEBUG to UVM_LOW and run a regression suite... Short of timing it with a stopwatch, do you SEE the gain somewhere: "ah, the profiler said my simulation took 5B fewer cycles" or "the tool informed me the simulation took 5 minutes less wall-clock time to run!"? That black-and-white comparison is what I'm going for, because I'd really like to flesh out how little changes affect my performance (which I should see if I discover any "offenders" of the 80/20).
  20. You create a uvm_component to store your configuration; I've never seen that, was it on purpose? So basically you have a full STATIC configuration singleton (no automatic variables, I assume). I am mainly jealous because I'm having to fool around with...

      1. NO config_db: p_sequencer (no config_db, but I still have to remember to populate my "goodie" handles)
      2. YES config_db: m_sequencer (using the config_db scope to snag agent configuration items, and then use methods there for hardware synchronization or whatever)
      3. NO config_db: in the case of transient objects like sequence items or sequences, assigning handles to configuration objects from a parent component (test -> virtual sequence, sequence), etc., for use in constraints, for example.

      I digress. Please weigh in on the singleton configuration approach.
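      For reference, a minimal sketch of the kind of singleton configuration object under discussion (names illustrative; this is the classic lazy-initialized static-handle pattern, not necessarily the poster's exact code):

        class tb_config extends uvm_object;
          static tb_config m_inst;       // the single shared instance
          int unsigned num_masters = 1;  // example knob

          function new(string name = "tb_config");
            super.new(name);
          endfunction

          static function tb_config get();
            if (m_inst == null) m_inst = new();
            return m_inst;
          endfunction
        endclass

      Any component or sequence can then call tb_config::get() directly, which matches the "scope in from anywhere, no pulling it in" behavior described above.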
  21. Mentor UVM cookbook, Sequences/API: Coding Guidelines (p. 202): "Sequence code should not explicitly consume time by using delay statements. They should only consume time by virtue of the process of sending sequence_items to a driver." Shrug; I still use delays in my sequences. They seem like a natural fit there.
  22. Array Assignment in Sequences

      Is tx_data_byte a fixed-size array? If you are just trying to randomize the elements, just declare it as rand in the transaction, and remove data_byte, the constraints, etc. If it's a dynamic array, then constrain tx_data_byte.size() == tx_num_of_bytes (inside the transaction class); then you don't even have to add it in every time you randomize the transaction. Did that make sense?
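      A sketch of the dynamic-array version (the tx_* field names come from the thread; everything else is illustrative):

        class my_txn extends uvm_sequence_item;
          `uvm_object_utils(my_txn)
          rand int unsigned tx_num_of_bytes;
          rand byte tx_data_byte[];
          constraint c_size { tx_data_byte.size() == tx_num_of_bytes;
                              tx_num_of_bytes inside {[1:64]}; } // example bound

          function new(string name = "my_txn");
            super.new(name);
          endfunction
        endclass

        // every randomize() now sizes and fills the array automatically:
        //   assert(txn.randomize());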
  23. I love ideals, and that makes perfect sense. Wow... that was easy. These few lines of code just solved a problem I've struggled with since I started SV 3 years ago: how to get back-to-back behavior but also keep the clock synced. THANK YOU. My sequence just waited 1ns, and my driver handled its business.
  24. Wow, two great responses. I hadn't thought to use an OO approach (yet) before I had some techniques down; I like the idea of creating a "delay factory" in order to create a particular "traffic profile/distribution" (like BURSTY!, my fav). You could come up with lots of other interesting profiles too (INTERMITTENT, or something). These techniques seem a little heavy-handed at first blush, but they are OO.

      Linc, these two solutions seem to mesh well together. I'll include more of the delay_gen talk in bhunter's response so I can ask you about the config object; and don't think of it so much as hi-jacking the post, but more as fork join_none'ing a new topic. (You can use that joke in your next meeting; I'm not responsible for any looks you'll get.) 1. verboten... fantastic word. 2. Tudor has mentioned this before, but "singleton" just means... there's 1, right? I have a configuration object for my design, but I use the config_db, and I never call it a singleton; I'm trying to learn the context. It appears the advantage here is that you don't have to "pull" it in from anywhere, you simply "scope" in to the methods wherever/whenever you want. What's the disadvantage? Seems pretty sweet. Do you actually "create" it somewhere, like a test base? Maybe you could flesh this out a bit more. I'm in Alabama; if I ever make it over there for any of the conventions I'll look you up.

      bhunter, sounds like a solid delay generator; good recipe for your book! Perhaps an option to use a VIF clocking block in addition to a straight delay would be nice. The reason I mention this (and perhaps I have set myself up for failure by doing this) is that I use BFMs to drive bus activity (instead of driving it directly from the driver); these BFMs of course use clocking blocks. If, anywhere in my sequence, I were to simply #1ns (or whatever), the simulation time would move off of the clock edge ("out of sync"), and any clocking block calls to drive data will fail until I "re-sync" to the clocking block (@cb). Failure here means it just never goes out on the bus, like it never happened. (Inhale...) So, I have to pay close attention to sync to the clocking block once, at the beginning of simulation time, and never call a straight time delay. This may be an error with my tool; don't try to solve this. I may just go back and code my BFMs to always call @cb before sending out a burst of data, which would then allow sequences the freedom to #wait all day and not worry about the low-level mechanics of the BFM. You know what, I just talked myself into it. Anyway, all that having been said, it still might be nice to optionally provide a VIF handle and simply wait delay cycles. Maybe not; maybe it's too cluttered, or should be in an alternate delay_gen, like a clock delay_gen; or separate the waiting from the delay_gen (say, its job is simply to provide unsigned integers), a little like Linc's. Food for thought.
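      A sketch of the "delay factory" shape being discussed (all names illustrative, and the BURSTY distribution here is just one plausible choice):

        virtual class delay_gen;
          pure virtual function int unsigned next(); // delay in clock cycles
        endclass

        class bursty_delay_gen extends delay_gen;
          virtual function int unsigned next();
            // ~80% back-to-back traffic, occasional long idle gap
            if ($urandom_range(9) <= 7) return 0;
            else return $urandom_range(100, 1000);
          endfunction
        endclass

        // in a sequence, after each item (keeps waits on the clock edge):
        //   cfg.wait_clk(dly_gen.next());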