Everything posted by gordon

  1. 1,292 downloads

    UVM Connect is a package providing complete SystemC interoperability for SystemVerilog UVM/OVM via TLM1/TLM2, making it easy to integrate models in either language. It supports any compliant simulator and works with both UVM and OVM. Donated to Accellera by Mentor Graphics.

    The UVM Connect package builds on existing standards: SystemVerilog, SystemC, and UVM, allowing TLM models in each language to communicate with each other. The package also includes an API that allows SystemC to interact with, and control the execution of, UVM testbenches. With version 2.2, the UVM Connect package supports OVM as well as UVM, preserving an easy migration path for SystemC elements when the time comes to migrate from OVM to UVM.

    Who would use UVM Connect? The package enables a variety of use models, since Verification IP developed in one language can be used by the other:
    - reuse of SystemC models as reference models in SystemVerilog
    - using SystemC virtual platforms with SystemVerilog RTL hardware descriptions
    - integration of off-the-shelf VIP in either language using TLM1, TLM2, and Analysis Ports
    - using SystemVerilog random stimulus or UVM sequences with a SystemC platform

    How is the connection implemented? UVM Connect is open and standards-based. It is implemented as a SystemVerilog package and a SystemC namespace. These contain function calls that allow transactions to be passed between the two languages using the SystemVerilog Direct Programming Interface (DPI), which enables SystemVerilog to make and accept C function calls. The API supports:
    - a one-line SV/DPI/SystemC socket connection mechanism for TLM1, TLM2, and Analysis Ports
    - easy transaction conversion support, with built-in Generic Payload support
    - a command API for interaction with, and control of, the UVM testbench

    Compatibility: The UVM Connect package is compatible with any simulator implementing the IEEE 1800 SystemVerilog and IEEE 1666 SystemC standards. It has been tested by several verification teams in the industry and can accommodate various inter-language instantiation schemes. Support, training, and documentation are available at Verification Academy.
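    As a rough illustration of the 'one-line' connection mechanism mentioned above, here is a hedged sketch of the SystemVerilog side. The producer component, its socket name, and the "stim" lookup string are assumptions for illustration, and the uvmc_tlm::connect() call is based on typical UVM Connect kit examples; check the documentation of the version you are using for the exact API.

    ```systemverilog
    // Sketch only: assumes the uvmc_pkg from the UVM Connect kit, a
    // hypothetical SV 'producer' component with a TLM2 initiator socket
    // 'out', and a matching connect() call with the same lookup string
    // on the SystemC side.
    import uvm_pkg::*;
    `include "uvm_macros.svh"
    import uvmc_pkg::*;

    class my_env extends uvm_env;
      `uvm_component_utils(my_env)
      producer prod;  // hypothetical component with socket 'out'

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        prod = producer::type_id::create("prod", this);
      endfunction

      function void connect_phase(uvm_phase phase);
        // The one-line cross-language connection: registers the SV
        // socket under a lookup string that the SystemC side also
        // registers, so UVM Connect can pair the two endpoints.
        uvmc_tlm #()::connect(prod.out, "stim");
      endfunction
    endclass
    ```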
  2. 1 & 2: Yes, exactly. You can choose whether the container has the coverage inside or outside. I would add an analysis port and put the coverage outside for a degree of decoupling (e.g. vertical reuse might want some predict/scoreboard functionality but not reuse of coverage). Not essential, though. 3: There are no 'magic' configuration mechanisms for this - just make your own controls. You would use the standard UVM mechanisms for set/get config using the config/resource database, and have control over which coverage is turned on or off, across all the coverage in your bench. Vertical reuse also benefits from this kind of config, as typically unit-level coverage would be turned off one level up. Different tests can turn on/off different coverage object / covergroup instantiations. I don't have an example handy; there is probably one to be found on Verification Academy.
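    A minimal sketch of that pattern, assuming a transaction class 'my_item' with a 'kind' field and a config knob named "cov_enable" (all illustrative names, not from the original post):

    ```systemverilog
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Coverage kept outside the container, fed by its analysis port.
    class my_coverage extends uvm_subscriber #(my_item);
      `uvm_component_utils(my_coverage)

      bit cov_enable = 1;   // default on; tests can override
      my_item item;

      covergroup cg;
        cp_kind: coverpoint item.kind;
      endgroup

      function new(string name, uvm_component parent);
        super.new(name, parent);
        cg = new();
      endfunction

      function void build_phase(uvm_phase phase);
        // Standard UVM config mechanism: a test (or an env one level
        // up, for vertical reuse) can turn this coverage off with a
        // single uvm_config_db set() call.
        void'(uvm_config_db#(bit)::get(this, "", "cov_enable", cov_enable));
      endfunction

      // Called via the analysis port on the container component.
      function void write(my_item t);
        if (cov_enable) begin
          item = t;
          cg.sample();
        end
      endfunction
    endclass
    ```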
  3. No problem, you are welcome. 1 & 2: You would need to draw a box around the {multi-stage predictors and scoreboard}, put them in a single 'container' component, and give that component an analysis port. The transaction emitted would need to be a compound of the various original traffic transactions processed by your predictors (not the 'converted' items sent to the scoreboard, but more likely the 'original' items), and would also need a handle to the register model. 3: Interesting idea. I think the way forward might be to get the software tools we use to generate the register model (which also generate the 'free' register coverage) to optionally generate user-defined design coverage from a template of some sort. You would designate which registers/fields you wanted to participate in that extra set of covergroup code. More work is required to flesh this idea out. Maintaining two sets of registers sounds like a cumbersome task too, but I see why you are thinking of that. Good luck, happy to help further.
  4. I think Adiel addressed the first part of your question - coverage for register testing - but not the main point, which is coverage for functional testing. The kind of coverage that comes 'for free' with the UVM Register model and generator tools is for register testing only. The main thing to think about for functional testing is that it is insufficient to measure what the register bits are set to, and separately measure some functional activity on an interface of your DUT. Instead, we have to combine those together, and measure (a) that the registers were set a certain way and (b) that some sufficient functional activity or traffic took place across your DUT that used that particular register setting in a meaningful way. My approach here is to attach a coverage model (that you design yourself - it is design specific) to your scoreboard, and make the register model state available to it (the handle to the model is probably already used by your scoreboard anyway) along with analysis reports of the relevant functional traffic. Hook up the scoreboard to the coverage model with an analysis port which carries a container type, which contains the transactions and the register model. This allows you to write meaningful cross coverage specific to your test plan, even though the register and various functional transactions are temporally separate. So we have two completely separate coverage models, but the UVM Register model participates in both. To manage them, use standard UVM configuration techniques: enable the register test coverage model only for register tests, and enable the functional test coverage model only for functional tests. Remember that 'free coverage' from UVM Registers is good, but you get what you pay for, and it's not the whole story :-) See my DVCon paper on Scoreboarding, which has a section on Coverage describing the technique above.
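    A hedged sketch of the container type and cross coverage described above. The traffic transaction 'bus_item', register block 'my_reg_block', and the 'ctrl.mode' field are all illustrative assumptions; only the shape of the pattern is the point.

    ```systemverilog
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // The scoreboard emits this on an analysis port once it has matched
    // functional traffic against the current register state.
    class cov_container extends uvm_object;
      `uvm_object_utils(cov_container)
      bus_item     traffic;   // the functional transaction of interest
      my_reg_block regmodel;  // handle to the UVM register model

      function new(string name = "cov_container");
        super.new(name);
      endfunction
    endclass

    // Coverage model: crosses register state with observed traffic,
    // even though the two events are temporally separate.
    class func_coverage extends uvm_subscriber #(cov_container);
      `uvm_component_utils(func_coverage)
      cov_container c;

      covergroup cg;
        // get_mirrored_value() reads the model's view of the field
        cp_mode : coverpoint c.regmodel.ctrl.mode.get_mirrored_value();
        cp_kind : coverpoint c.traffic.kind;
        x_mode_kind : cross cp_mode, cp_kind;
      endgroup

      function new(string name, uvm_component parent);
        super.new(name, parent);
        cg = new();
      endfunction

      function void write(cov_container t);
        c = t;
        cg.sample();
      endfunction
    endclass
    ```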
  5. Accellera are still developing the use models and APIs for the new UVM phasing as it relates to sequences and transactors. In the meantime, our recommendation is to wait until that work is done and the API is stable. For VIP design, code your driver and monitor to use only run_phase() and take care of reset separately - there are normally only two situations: (1) the driver/monitor is aware of reset as part of the protocol interface, and adjusts driven signals and monitored reports accordingly (to avoid the kind of signal conflict problem you refer to), or (2) the driver and monitor ignore reset, and some other part of the environment does any required cleanup of stimulus and checks (there should be no signal conflict here - one VIP per signal group). [If your particular situation dictates that you _must_ use {reset/configure/main}_phase(), then we recommend you use them only to control what the driver/monitor does in run_phase(), not to directly drive signals. Use the existing semaphore/barrier mechanisms.] Using only run_phase is backwards-compatible, as you say; but it is also forwards-compatible.
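    A simplified sketch of situation (1) above: a reset-aware driver doing all its work in run_phase(). The interface 'my_if' and its 'rst_n'/'valid' signals are illustrative assumptions, and this is not production-ready - a real driver must also decide how to signal item_done() for an item aborted mid-transfer by reset.

    ```systemverilog
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_driver extends uvm_driver #(my_item);
      `uvm_component_utils(my_driver)
      virtual my_if vif;

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      task run_phase(uvm_phase phase);
        forever begin
          // Reset is part of the protocol: hold safe values while
          // reset is asserted, then wait for it to deassert.
          if (!vif.rst_n) begin
            vif.valid <= 1'b0;
            @(posedge vif.rst_n);
          end
          fork
            begin
              seq_item_port.get_next_item(req);
              drive_item(req);
              seq_item_port.item_done();
            end
            // Abandon the current transfer if reset hits mid-item.
            @(negedge vif.rst_n);
          join_any
          disable fork;
        end
      endtask

      task drive_item(my_item item);
        // protocol-specific signal driving goes here
      endtask
    endclass
    ```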
  6. Hi, thanks for your bug report, we will take a look.
  7. The text is just referring to the fact that 'new' is not an ordinary function method; it does two entirely different things at once: (context 1) when you 'call' it, it acts as the R-value of an expression, appearing to behave like a static function method which 'returns' a handle to the new object. Its role appears to be just 'allocate some memory and return a handle'. (context 2) But under the hood, the object is created FIRST by the simulator, and then the new() method is called with that object as its 'context'. The role here is to set up the initial dynamic state of the new object, including that of its superclass hierarchy, if any.
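    The two contexts can be seen in a few lines of plain SystemVerilog (the 'packet' class is just an illustration):

    ```systemverilog
    class packet;
      int unsigned len;

      // Context 2: by the time this body runs, the simulator has
      // already allocated the object; 'this' is valid, and the body's
      // job is only to set up the initial dynamic state.
      function new(int unsigned len = 64);
        this.len = len;
      endfunction
    endclass

    module tb;
      initial begin
        // Context 1: at the call site, new acts like a static function
        // returning a handle to the freshly allocated object.
        packet p = new(128);
        $display("len = %0d", p.len);  // prints: len = 128
      end
    endmodule
    ```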
  8. Hi Mike, Are you calling uvm_top.print() too soon? build_phase() is top-down. Try calling uvm_top.print() from end_of_elaboration_phase() which will give you the complete picture!
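    For example (a minimal sketch; 'my_test' is illustrative):

    ```systemverilog
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_test extends uvm_test;
      `uvm_component_utils(my_test)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void end_of_elaboration_phase(uvm_phase phase);
        // All components exist by now, since every build_phase() has
        // completed, so this shows the full testbench hierarchy.
        uvm_top.print();
      endfunction
    endclass
    ```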
  9. I had a gut feel this might be a package problem - I've seen this `include problem before - but I was thrown off by the 5-field discrepancy. This is a common problem, and note that a simulator would not normally emit a warning for duplicate class names (that is exactly what package namespaces are intended to allow). And by $cast time the relevant information is often optimized out. I wish it were easier to spot this. My recommendation for package management on your UVC: have two packages, one for the UVC structure and one for its data.

    Data package:

        // my_data.svh
        package my_data;
          class my_item extends uvm_sequence_item;
            ...
          endclass
        endpackage

    UVC package:

        // my_uvc.sv
        `include "my_interface.svh"
        `include "my_data.svh"
        package my_uvc;
          import uvm_pkg::*;
          `include "uvm_macros.svh"
          import my_data::*;
          `include "my_config.svh"
          `include "my_sequencer.svh"
          `include "my_driver.svh"
          `include "my_monitor.svh"
          `include "my_coverage.svh"
          `include "my_agent.svh"
        endpackage

    That way you can reuse the 'my_data' package elsewhere, e.g. in a scoreboard in your env, hooked to the analysis port on your UVC. Those analysis components should import the data package, not the full UVC package. Circular dependencies can be avoided. Put further transactions or sequences which extend the base data type either in the UVC package or in a third my_seqlib package, to suit your component's needs and dependencies. But keep the my_data package to just the minimum necessary for reuse, normally a single class.
  10. Please see http://go.mentor.com/uvm1-0-questa for instructions for running Questa 10.0a with the UVM.
  11. 3) The agent's driver can be configured as a Master or a Slave, and in both cases it operates autonomously on the interface pins without taking any direction from the monitor. The monitor remains independent and aloof. Yes, that means you might have some duplicate code in the monitor and the {slave or master} driver. Shared code in configuration objects referred to by both may help, but it is important that both remain autonomous. Think of a 'Slave Driver' as just another protocol, the same as a 'Master Driver'. The monitor should be common to both, though. A driver of the Slave end of a protocol could use configuration, either at build time or dynamically via a sequence connection, to specify any knobs you have on the Slave end of your protocol (e.g. response delay or kind).
  12. The OVM/UVM factory automation macros have limitations; it is unfortunate that the presence of a macro implies that the technique you're experimenting with is useful or appropriate. I don't think parameters are the right solution here, for what is really a runtime soft-configuration requirement. The best advice I can give you is to keep your data/sequence objects un-parameterized and use soft configuration instead - they are intended to be abstractions, after all. Only parameterize your structural elements (i.e. the DUT, its interfaces, and the env/agents/monitors/drivers that connect to them), but not your data/sequences/scoreboards/coverage. In your particular case, you are using parameters to specify some behavior (e.g. a constraint on a clock high/low time period?) which you should instead set either by constraints, or by randomize..with{}, or by extending a sequence and adding a constraint, or by giving the sequence access to an env-wide configuration resource. In each case you are using a soft technique (i.e. runtime integers) to solve this problem, rather than the compile/elaboration-time overhead and complexity of parameters. Just my opinion...
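    Two of the soft-configuration alternatives above, sketched on an un-parameterized sequence. The clock high/low knobs and all class names are illustrative assumptions:

    ```systemverilog
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    // Un-parameterized sequence: the 'knobs' are plain rand integers.
    class clk_seq extends uvm_sequence #(my_item);
      `uvm_object_utils(clk_seq)
      rand int unsigned high_time;
      rand int unsigned low_time;

      constraint sane_c {
        high_time inside {[1:100]};
        low_time  inside {[1:100]};
      }

      function new(string name = "clk_seq");
        super.new(name);
      endfunction
    endclass

    // Option 1: constrain at the call site with randomize..with{}.
    //   clk_seq s = clk_seq::type_id::create("s");
    //   void'(s.randomize() with { high_time == 10; low_time == 10; });

    // Option 2: extend the sequence and add a constraint, then use the
    // factory to substitute it - no parameters, no recompilation of
    // the structural code.
    class fast_clk_seq extends clk_seq;
      `uvm_object_utils(fast_clk_seq)
      constraint fast_c { high_time <= 5; low_time <= 5; }

      function new(string name = "fast_clk_seq");
        super.new(name);
      endfunction
    endclass
    ```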