Everything posted by David Black

  1. It might have to do with the order of compilation, but your description is unclear. Please provide a minimal code example of your situation.
  2. It is worth noting that a hierarchical channel is really just an sc_module that implements an sc_interface. Its purpose is for creating complex abstract channels (e.g. PCIe or AXI). Primitive channels are for smaller (primitive) transfers such as wires (i.e. sc_signal<>).
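     A minimal sketch of the idea (the interface and channel names here are mine, chosen only for illustration):

         // A hierarchical channel is just an sc_module that also implements
         // an sc_interface, so ports can bind to it like any other channel.
         #include <systemc>
         using namespace sc_core;

         struct basic_if : virtual sc_interface {
           virtual void put(int v) = 0;
         };

         // Hierarchical channel: may contain submodules and internal channels,
         // yet exposes an abstract interface to the outside world.
         struct hier_channel : sc_module, basic_if {
           sc_fifo<int> internal_fifo;              // internal primitive channel
           SC_CTOR(hier_channel) : internal_fifo(4) {}
           void put(int v) override { internal_fifo.write(v); }  // blocking write
         };

     A port declared as sc_port<basic_if> can then be bound to an instance of hier_channel exactly as it would be bound to a primitive channel.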
  3. On the other hand, in order to use the macros correctly, you must learn what each of the macros does. Depending on whether you are using these for sequence_item's (transactions) or sub-sequences, there are only three or four basic steps in either one. Since the experts disagree, we suggest you learn both ways and only once you fully appreciate the differences make a selection. If you don't learn both ways, you may find yourself confused when you stumble over code written by somebody else.
  4. Instead of creating virtual classes, you are probably better off using interface classes, introduced in SystemVerilog 2012. You can then simply require derived classes to implement the interfaces as needed. Interface classes of course would not use any UVM machinery, since all their methods are pure virtual.
  5. I don't believe task based "Common Phases" are meant to replace sequences. Rather they are a means to provide synchronization between components. Used with domains, Common Phases can provide a very nice mechanism for synchronization for major activities (e.g. reset); however, they do have some caveats. John Aynsley wrote a really nice paper on this topic for DVCON 2015 (San Jose) titled "Run-Time Phasing in UVM: Ready for the Big Time or Dead in the Water?" discussing the best way to use these. It should appear on the Doulos website (www.doulos.com/knowhow/systemverilog/uvm) in the near future if you are unable to find it at DVCon's website. I strongly doubt the Common Phases will ever be deprecated. I also know many folks prefer to use the run_phase and other mechanisms to achieve synchronization. In any case, drivers and monitors should always use the run_phase.
  6. Usually, features are deprecated because either (A) there is a safer/better way to do the same thing (i.e. the deprecated approach duplicates existing functionality), or (B) the feature is deemed dangerous and has caused many problems. With this in mind, you can either: (A) stick with an older version of the standard, (B) update the code to not use deprecated features, or (C) use both techniques, with a version ifdef to choose between the two. Much C/C++ code takes the last approach (C) for compatibility, but over time this can lead to very hard-to-read code. Here is an example where the feature has been removed from versions after 1.1 and we need to write code that works in both:

         `ifdef UVM_POST_VERSION_1_1
           seq.set_starting_phase(phase);
         `else
           seq.starting_phase = phase;
         `endif
  7. Your 'port not bound' error is because you don't have all the ports connected. In SystemC, you generally cannot have unconnected ports. port_9 indicates it's the tenth port (since ports are numbered 0...N-1). You can get more sensible port names if you specify names in the constructor of your Queue via the initializer list. If you really want an unconnected port (e.g. a dangling output port), you will need to specify a non-default policy for each port that has this feature AND you will need to be certain not to access that port during simulation (i.e. no writes), by testing the port's size() method for validity. The easiest solution is to bind a dangling signal.
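     A sketch of both points, naming ports in the constructor and declaring one optional port that may legally stay unbound (the module and port names here are mine, not from your Queue):

         #include <systemc>
         using namespace sc_core;

         SC_MODULE(Queue) {
           // Explicitly named port: errors will say "full_port", not "port_9"
           sc_out<bool> full_port;
           // Optional port: SC_ZERO_OR_MORE_BOUND allows it to remain unbound
           sc_port<sc_signal_inout_if<bool>, 1, SC_ZERO_OR_MORE_BOUND> debug_port;

           SC_CTOR(Queue)
           : full_port("full_port")
           , debug_port("debug_port")
           {
             SC_THREAD(work);
           }

           void work() {
             full_port->write(true);
             if (debug_port.size() > 0)    // only touch the optional port if bound
               debug_port->write(true);
           }
         };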
  8. Several topics here...

     EFFICIENCY/PERFORMANCE

     I hear so often the comment that SC_THREAD is less efficient than SC_METHOD, but the claim is incorrect. It is true that in the Accellera implementation there is a small difference favoring SC_METHOD. I am aware of SystemC implementations where exactly the opposite is true. In general, there are much better things to worry about: coding efficiency (how fast can you get a correctly working model written)? Does your code work? Is this the most natural way to write the code? Is your code easy to understand? Are you writing at the correct level of abstraction (RTL is typically inappropriate)?

     If you are concerned about execution performance, you should be more concerned about the amount of context switching. In other words, how frequently are your SC_METHOD's called and how frequently do your SC_THREAD's call wait()? If you really want to focus on this, remove those clocks. Clocks have two context switches per clock period (rising and falling). You can synchronize to clock timing by treating time modulo a clock_period variable.

     The intent of SystemC is to write models and verify them (you do hopefully write unit tests) quickly. You should be able to write loosely timed models of your SoC in hours or days - not weeks or months. Approximately timed models may take a bit more time (perhaps a few days to a couple of weeks).

     SYNCHRONIZATION

     As regards synchronization, you need to completely understand the event-driven simulator mechanisms, and use proper handshaking of events or channels to accomplish your tasks. A mutex or semaphore may be appropriate, but not always. Sometimes you need to write your own custom channels. FIFOs are an excellent synchronization method, and make many modeling tasks trivial; a sketch follows.
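     A minimal producer/consumer sketch (names are mine) synchronized entirely through an sc_fifo, with no clock involved:

         #include <string>
         #include <systemc>
         using namespace sc_core;

         SC_MODULE(Producer) {
           sc_fifo_out<int> out;
           SC_CTOR(Producer) { SC_THREAD(run); }
           void run() {
             for (int i = 0; i < 4; ++i) {
               wait(10, SC_NS);              // model some processing time
               out->write(i);                // blocks only if the FIFO is full
             }
           }
         };

         SC_MODULE(Consumer) {
           sc_fifo_in<int> in;
           SC_CTOR(Consumer) { SC_THREAD(run); }
           void run() {
             for (;;) {
               int v = in->read();           // blocks until data is available
               SC_REPORT_INFO("consumer", ("got " + std::to_string(v)).c_str());
             }
           }
         };

     In the parent module, both ports bind to a single sc_fifo<int> channel; the blocking read/write calls are the only synchronization needed.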
  9. There are two premises you need to consider: Sequences are supposed to generate transactions in a manner that allows for late randomization. This allows randomization to use the current state of the system to guide/constrain the randomization. When you return from start_item(), the driver wants its transaction request to be honored ASAP. Drivers are typically slaved to hardware. If your sequence blocks (waits), then the driver needs to be written in a manner that doesn't block the way seq_item_port.get_next_item() does. In other words, sequences with waits have implications on the driver. Given the above, I would say #2 is preferable because the wait doesn't come after start_item() returns and delay the randomization. In either case, since you have a wait in the sequence, you should be certain to design the driver to be tolerant of those waits. Often this means using seq_item_port.try_next_item().
  10. Initiator sockets connect to target sockets when traversing the hierarchy horizontally. Initiator sockets can also connect to other initiator sockets if the direction is upwards and out of a hierarchy. Target sockets can be hierarchically connected in a downward manner; a sketch of all three bindings follows.
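     A sketch of the three binding directions, with module and socket names of my own choosing (the wrappers use plain TLM sockets for pass-through, the leaves use the convenience sockets):

         #include <systemc>
         #include <tlm>
         #include <tlm_utils/simple_initiator_socket.h>
         #include <tlm_utils/simple_target_socket.h>
         using namespace sc_core;

         struct Leaf_initiator : sc_module {
           tlm_utils::simple_initiator_socket<Leaf_initiator> socket;
           SC_CTOR(Leaf_initiator) : socket("socket") {}   // transport calls omitted
         };

         struct Initiator_wrapper : sc_module {
           tlm::tlm_initiator_socket<> socket;    // exposed to the outside
           Leaf_initiator leaf;
           SC_CTOR(Initiator_wrapper) : socket("socket"), leaf("leaf") {
             leaf.socket.bind(socket);            // upward: initiator-to-initiator
           }
         };

         struct Leaf_target : sc_module {
           tlm_utils::simple_target_socket<Leaf_target> socket;
           SC_CTOR(Leaf_target) : socket("socket") {
             socket.register_b_transport(this, &Leaf_target::b_transport);
           }
           void b_transport(tlm::tlm_generic_payload&, sc_time&) {}
         };

         struct Target_wrapper : sc_module {
           tlm::tlm_target_socket<> socket;       // exposed to the outside
           Leaf_target leaf;
           SC_CTOR(Target_wrapper) : socket("socket"), leaf("leaf") {
             socket.bind(leaf.socket);            // downward: target-to-target
           }
         };

         int sc_main(int, char*[]) {
           Initiator_wrapper init("init");
           Target_wrapper    targ("targ");
           init.socket.bind(targ.socket);         // horizontal: initiator-to-target
           sc_start();
           return 0;
         }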
  11. I quickly ran your program without any problems. I am using clang++ with SystemC 2.3.1 on a MacBook running OS X 10.10.2 (Yosemite). What version of SystemC are you using? What compiler? I should note, your coding style is a long way from where I usually start, but it works nonetheless.

         quick-macosx64.x

                 SystemC 2.3.1-Accellera --- Sep 20 2014 09:20:36
                 Copyright © 1996-2014 by all Contributors, ALL RIGHTS RESERVED

         0 sHello World!
         0 sHello World!
         10 nsHello World!
         20 nsHello World!
         30 nsHello World!
         40 nsHello World!
         50 nsHello World!
         60 nsHello World!
         70 nsHello World!
         80 nsHello World!
         90 nsHello World!
         ...
  12. It is carefully described in the IEEE-1666-2011 specification. Or you might consider taking a class on TLM-2.0.
  13. SC_METHOD's and SC_THREAD's are always called once at the start of simulation (to allow reset/startup behavior). If you use dont_initialize() after each registration, you can avoid that:

         SC_METHOD(my_func);
         sensitive << my_event;
         dont_initialize();

         SC_METHOD(other_func);
         sensitive << other_event;
         dont_initialize();
  14. SystemVerilog functions consume no time. The concept of a delta cycle doesn't even come into consideration. All non-task based phases (i.e. FUNCTIONS) are by definition happening in zero time. In the default setup, there are four non-task based phases that execute at time zero (i.e. directly after 'initial begin'); however, they also execute in a very specific order:

     • build_phase
     • connect_phase
     • end_of_elaboration_phase
     • start_of_simulation_phase

     For all components, their individual build_phase's all execute to completion before any connect_phase is executed. The order in which build_phases occur between components is top-down. Once build_phase is complete, there is no going backwards and the connect_phase begins. Similar to build_phase, all components execute their connect_phase before moving to the next phase.

     When all of those startup non-task based phases have completed in the stated order, then the task-based phases begin. They consume time (usually) and their order can be quite different depending on their domain. See a paper by John Aynsley presented at the recently completed San Jose DVCon 2015 on the topic for more information. You will soon be able to find and view it on the Doulos web site under the KnowHow section for SystemVerilog/UVM.
  15. @kartikkg is correct; however, which variables are missing is highly dependent on your makefile. Many SystemC designs I encounter have the variables SYSTEMC or SYSTEMC_HOME and TARGET_ARCH. Also, if you are using csh or tcsh, then the .bashrc file won't help because you need to modify the .cshrc file with something like:

         setenv SYSTEMC /path/to/your/systemc/installation/directory
         setenv TARGET_ARCH linux   ;# or linux64

     For Linux, SystemC must be compiled and installed per the instructions in the download.
  16. Wayne, which edition of SystemC: From the Ground Up are you referencing?
  17. Having successfully used SystemC 2.3.1 with VS 2013 on a 64-bit Windows 7 OS, I can assure you that it is not hard. Basic steps are:

     1. Download the latest version of SystemC from Accellera.org.
     2. Open the archive and find the MSVC80 directory where some old project solutions await you.
     3. Open the solution, ignore the complaints, and build the library.
     4. Back in your project, simply adjust the settings for C++ include files to use the src directory from your build. Also, adjust the linker to point to the recently built library files under the MSVC80/Debug directory.

     Go!
  18. I would suggest using the multi-sockets (see section 17.1.4 of IEEE-1666-2011). You did not mention how priorities are determined. If the communication carries the priority as separate signals, then I suggest you consider using an ignorable extension (sections 15.2 and 15.21); a sketch follows. If contained in the payload, then you can inspect it directly. You can find more information at doulos.com/knowhow/systemc/ and a good video tutorial on the application of extensions at http://www.accellera.org/resources/videos/tlm20extensions/
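     A minimal sketch of an ignorable extension carrying a priority value (the class name and field are my own invention, purely for illustration):

         // An ignorable generic-payload extension carrying a priority.
         // Targets that do not know about it can simply ignore it.
         #include <tlm>

         struct priority_extension : tlm::tlm_extension<priority_extension> {
           unsigned priority = 0;

           tlm::tlm_extension_base* clone() const override {
             return new priority_extension(*this);
           }
           void copy_from(const tlm::tlm_extension_base& other) override {
             priority = static_cast<const priority_extension&>(other).priority;
           }
         };

         // Initiator side: attach before sending
         //   priority_extension* ext = new priority_extension;
         //   ext->priority = 3;
         //   payload.set_extension(ext);
         //
         // Target side: inspect if present, ignore otherwise
         //   priority_extension* ext = nullptr;
         //   payload.get_extension(ext);
         //   if (ext) { /* use ext->priority */ }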
  19. Please consult Synopsys documentation or call your Synopsys support person for help on this. This forum is for UVM support and issues related to how UVM may differ on various simulators. To my knowledge $assertvacuousoff is not part of the Proof-of-Concept simulator, nor is it directly related to UVM. UVM does not address issues associated with SystemVerilog Assertions (SVA). You might want to try $assertcontrol, since $assertvacuousoff is simply a convenience task. Again, consult the vendor's documentation. In general, simulation vendors do not yet have uniform or complete support for all the features of IEEE 1800-2012.
  20. You can definitely model the problem using LT. It's just modeling and you can choose whether to block or not. Your buffer would probably have a depth (possibly infinite). You choose when implementing the b_transport what to do if the buffer is full or empty. You could return an error when getting from an empty buffer. Or you could block and wait for an internal event that you create. It's all up to you.
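     A sketch of that choice inside b_transport, with an invented single-byte payload convention (a read pops from the buffer, a write pushes) purely for illustration; here the empty-buffer case blocks on an internal event, but returning an error response would be an equally valid modeling decision:

         #include <deque>
         #include <systemc>
         #include <tlm>
         #include <tlm_utils/simple_target_socket.h>
         using namespace sc_core;

         struct Buffer_model : sc_module {
           tlm_utils::simple_target_socket<Buffer_model> socket;
           std::deque<unsigned char> buffer;     // conceptually could be unbounded
           sc_event data_available;

           SC_CTOR(Buffer_model) : socket("socket") {
             socket.register_b_transport(this, &Buffer_model::b_transport);
           }

           // Assumes the initiator calls b_transport from a thread process,
           // so blocking (wait) inside is legal.
           void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
             if (trans.is_write()) {
               buffer.push_back(*trans.get_data_ptr());   // store one byte
               data_available.notify(SC_ZERO_TIME);
             } else {                                      // read
               while (buffer.empty())
                 wait(data_available);                     // block: our modeling choice
               *trans.get_data_ptr() = buffer.front();
               buffer.pop_front();
             }
             trans.set_response_status(tlm::TLM_OK_RESPONSE);
           }
         };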
  21. bhunter1972 is exactly correct. Sequencers are designed to do arbitration and they do so exceedingly well. Yes, your driver will take one transaction at a time, but that is to be expected. If you have a driver that can bundle transactions or send them in parallel, you could do something in the driver like:

         seq_item_port.get(req1);
         seq_item_port.get(req2);

     and then handle both of them; however, this would be a very strange driver and probably not what most designs need.
  22. Cliff, No good reason that I can see. When you migrate a model to the SoC level (where driving is inappropriate), just change the agent to passive.
  23. What if you have more than one monitored type of transaction (some monitors pull out multiple types)? If you consistently name your monitor and analysis port, do you even need to have an agent analysis port (just connect directly to agent.m_monitor.ap)?
  24. A number of universities use SystemC: From the Ground Up, and I am told they are reasonably successful. I am slightly biased as I am one of the authors. I do lecture occasionally at the university level.
  25. Agreeing with Alan, but adding some more detail... SystemC simulates parallel/concurrent processes, but uses single-threaded cooperative multi-tasking to accomplish that goal, similar to what Verilog, SystemVerilog, and VHDL simulators do.
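     A tiny sketch of what that cooperative scheduling looks like: the two threads below run in a single OS thread and only hand control back to the kernel at each wait() (module and process names are mine):

         #include <systemc>
         using namespace sc_core;

         SC_MODULE(Interleave) {
           SC_CTOR(Interleave) {
             SC_THREAD(ping);
             SC_THREAD(pong);
           }
           void ping() {
             for (int i = 0; i < 3; ++i) {
               SC_REPORT_INFO("ping", sc_time_stamp().to_string().c_str());
               wait(10, SC_NS);          // yield to the scheduler
             }
           }
           void pong() {
             for (int i = 0; i < 3; ++i) {
               SC_REPORT_INFO("pong", sc_time_stamp().to_string().c_str());
               wait(10, SC_NS);          // yield to the scheduler
             }
           }
         };

         int sc_main(int, char*[]) {
           Interleave top("top");
           sc_start();                   // runs both threads, interleaved, to completion
           return 0;
         }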