Leaderboard


Popular Content

Showing content with the highest reputation since 03/06/2020 in Posts

  1. 2 points
    Eyck

    sc_clock Doubt

    sc_clock triggers itself based on the period and the (in your case default) constructor settings. The period is the default_time_unit.
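    A minimal sketch (not from the original post; names are illustrative) showing an sc_clock constructed with an explicit period, so the triggering does not depend on the constructor defaults:

        #include <systemc>
        using namespace sc_core;

        int sc_main(int, char*[]) {
            // Explicit 10 ns period with the default 50% duty cycle; the clock toggles on its own,
            // no driver process is needed.
            sc_clock clk("clk", 10, SC_NS);
            sc_start(100, SC_NS);
            return 0;
        }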
  2. 1 point
    I suggest moving to SystemC 2.3.3, if possible (the error message indicates that you are using SystemC 2.3.1). Secondly, can you show the derived class of the fifo as well, including its constructor?
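    For reference, a hypothetical sketch (the class and its name are invented, not taken from the thread) of a fifo derived from sc_core::sc_fifo whose constructor forwards the name and depth to the base class:

        #include <systemc>

        // Hypothetical specialization of sc_fifo; "my_fifo" is an invented name.
        template <typename T>
        class my_fifo : public sc_core::sc_fifo<T> {
        public:
            explicit my_fifo(const char* name, int depth = 16)
                : sc_core::sc_fifo<T>(name, depth) {}
        };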
  3. 1 point
    Your problem is the use of a POD (plain old datatype) for communication between the processes. Why? As soon as one process writes to it, all other processes see the update, so the behavior of your design depends on the scheduling of the processes: if Design::process is scheduled before Design::second_stage_process, second_stage_process sees the updates of process. There are two ways out:
      1. Use just one thread and call the functions from the output side back to the input side:
           void Design::process() { txProcess(); intermediateProcess(); rxProcess(); }
         Although this will work in your case, the approach does not scale: as soon as you cross sc_module boundaries you cannot control the order of calling.
      2. Use a primitive channel instead of a POD. In your case you might use an sc_core::sc_fifo with a depth of one, and you should use sc_vector instead of a POD array type, since the channels need proper initialization.
    Why does this help? Newly written values only become visible at the output of the fifo in the next delta cycle, so no matter in which order the threads and methods are invoked, they read the 'old' values even though 'new' values have already been written. A sketch of this is shown below. HTH
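    A minimal sketch of the fifo-based variant (module, process, and channel names are invented; only the structure follows the advice above):

        #include <systemc>
        using namespace sc_core;

        SC_MODULE(Design) {
            sc_fifo<int> stage_fifo;            // depth-1 fifo instead of a shared POD variable
            sc_vector<sc_fifo<int>> lanes;      // sc_vector instead of a POD array of channels

            SC_CTOR(Design)
            : stage_fifo("stage_fifo", 1)
            , lanes("lanes", 4, [](const char* nm, size_t) { return new sc_fifo<int>(nm, 1); })
            {
                SC_THREAD(producer);
                SC_THREAD(consumer);
            }

            void producer() {
                for (int i = 0; i < 10; ++i) {
                    stage_fifo.write(i);        // blocks while the depth-1 fifo is full
                    wait(10, SC_NS);
                }
            }

            void consumer() {
                for (;;) {
                    int v = stage_fifo.read();  // a written value is visible one delta cycle later
                    (void)v;
                }
            }
        };

        int sc_main(int, char*[]) {
            Design top("top");
            sc_start(200, SC_NS);
            return 0;
        }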
  4. 1 point
    Eyck

    Handling SystemC engine from a Qt5 thread

    Especially as an educational project, you should implement it with two threads that communicate with each other. Since they run in the same process space, you can access data safely once the simulation is in a paused state. But you cannot restart the simulation without restarting the process itself: SystemC stores the simulation state in static global variables, and those are only initialized at program start. A sketch of the two-thread setup is shown below.
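    A minimal sketch of the two-thread arrangement (no Qt code; the worker thread just stands in for a GUI event loop, and all names are invented):

        #include <systemc>
        #include <atomic>
        #include <chrono>
        #include <thread>

        using namespace sc_core;

        SC_MODULE(Ticker) {
            SC_CTOR(Ticker) { SC_THREAD(run); }
            void run() { for (;;) wait(10, SC_NS); }
        };

        int sc_main(int, char*[]) {
            Ticker top("top");
            std::atomic<bool> done{false};

            // Stand-in for the GUI thread; it could inspect shared data while the
            // simulation is paused. Restarting would still require a new process.
            std::thread gui([&] {
                while (!done) std::this_thread::sleep_for(std::chrono::milliseconds(1));
            });

            sc_start(1, SC_MS);   // the simulation itself runs in this thread
            done = true;
            gui.join();
            return 0;
        }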
  5. 1 point
    David Black

    sc_clock Doubt

    There is no default_time_unit in SystemC; however, the sc_clock default constructor does supply a default period of 1 ns. Be careful that you don't set the time resolution larger than 1 ns if you are going to rely on that default. You could of course be more explicit: sc_clock clock{ "clock", 1, SC_NS }; //< assumes C++11 or better and using namespace sc_core
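    A minimal sketch along those lines (assuming the snippet above is indeed meant to create an sc_clock object):

        #include <systemc>
        using namespace sc_core;

        int sc_main(int, char*[]) {
            sc_set_time_resolution(1, SC_PS);    // keep the resolution finer than the clock period
            sc_clock clock{ "clock", 1, SC_NS }; // explicit 1 ns period (C++11 brace initialization)
            sc_start(10, SC_NS);
            return 0;
        }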
  6. 1 point
    Eyck

    SC_METHOD/SC_THREAD Synchronization

    - There is no guarantee which method or thread is activated first, and there is no means to give priority. Why would you want to do this? In my experience, if you believe you need it, there is most likely a flaw in the reasoning behind the design.
    - Thread activation is more expensive (in terms of computing power) than a method, as the thread context needs to be saved and restored. But threads keep internal state, so they are well suited to describing state machines (see the sketch below).
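    A minimal sketch contrasting the two process types (module and process names are invented):

        #include <systemc>
        using namespace sc_core;

        SC_MODULE(Demo) {
            sc_in<bool> clk;

            SC_CTOR(Demo) {
                SC_METHOD(comb);        // re-run from the top on every trigger; keeps no local state
                sensitive << clk.pos();
                SC_THREAD(fsm);         // resumes after wait(); local variables survive, but
                sensitive << clk.pos(); // each resume costs a context switch
            }

            void comb() { /* purely reactive work */ }

            void fsm() {
                int state = 0;          // preserved across wait() calls - handy for state machines
                for (;;) {
                    wait();             // wait for the next clk.pos()
                    state = (state + 1) % 3;
                }
            }
        };

        int sc_main(int, char*[]) {
            sc_clock clk("clk", 10, SC_NS);
            Demo demo("demo");
            demo.clk(clk);
            sc_start(100, SC_NS);
            return 0;
        }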
  7. 1 point
    Ah, I see what you mean by the 64 as opposed to 0, but it's still initialized correctly:

        // Constructor implementation
        function uvm_packer::new(string name="");
          super.new(name);
          flush();
        endfunction

    The constructor is just relying on flush() for initialization, so that we don't implement the same code twice. I also agree with the base/implementation comment... that's more legacy than strict intent. It's a good enhancement request for the library though (and the standard in general)! We did it with the uvm_report_server back in UVM 1.2; it makes sense to do it for the policy classes. Finally, you may wish to make the following events errors in your contribution:
      - [un]pack_object(null)/is_null() - If an object does any of these, chances are they're going to get unexpected behavior at some point.
      - Calling pack_* after calling set_packed_bytes/ints/longints - The m_packed pointer is going to be in a bad state here.
      - Calling unpack_* without calling set_packed_* - Any subsequent get_packed_* calls are going to get unexpected behavior. Technically the problem only occurs if you call get_packed_* after calling unpack_*.
    Thanks again! -Justin
  8. 1 point
    @tymonx - Thanks again for the responses, I didn't get a notification about them, otherwise I would have responded a bit faster 🙂

    The UVM Packer is not specified as packing bits in any particular format... if a developer or end user requires a specific format, then they are free to implement their own within the standard. If you've come up with an alternative and you think it'd be useful, please post it! That said, while the format/structure of the bits isn't specified, the LRM is very clear about how packers behave... it seems that the behavior just isn't exactly what you were expecting. I can understand the frustration (FWIW: there are plenty of places wherein the library acts in a way I would consider unexpected, you're not alone there!).

    The largest disconnect here seems to be with how UVM handles "metadata", i.e. data that describes the data contained within the stream. There are 3 basic forms of metadata that Accellera's implementation is concerned with:
      - The current position of pointers in the packer stream (your magic 8 bytes)
      - The size of a string
      - The validity of an object handle

    The use of a fixed-size array is an Accellera implementation artifact. It was chosen back in the pre-UVM days, but I believe the basic reasoning for it is that accessing data within a fixed-size array tends to be faster than constantly (re)allocating data inside of a dynamic array. The reference implementation does allow the user to easily change the size of the fixed-size array, but it will still be a fixed-size array. To be clear though, this is an Accellera decision, and you are free to implement a packer which uses a dynamic array instead. Should you choose to make that a queue instead of a dynamic array, then you no longer need pointers for pack/unpack: your unpack pointer is always 0, your pack pointer is always [$].

    Strings can generally be dealt with in one of two ways: {size,string} or {string,'\0'}. Accellera goes with the latter, but really either is fine so long as you're consistent.

    The validity of an object handle is called out by the LRM. The LRM dictates that is_null returns 1 if the next item in the stream is a null object, 0 otherwise. Unfortunately, this is the one place wherein the LRM truly requires _some_ form of metadata being present. You are absolutely free to create a packer which doesn't support this method, but then your packer won't work for 100% of the objects out there.

    As to your other concerns: the initial values of the member variables are fine because they're 2-state ints, not 4-state integers; they automatically initialize to a value of 0. The library will also automatically flush any packer passed to uvm_object::pack_*, so long as that packer is not actively packing another object (refer to 16.1.3.4 in the 2017 LRM here). Having the pack_* methods use the default packer was another case wherein simplicity/performance was chosen over strict thread safety (again, pre-UVM). I would argue that instead of changing the behavior of "Default/Null packer" to clone, it would be cleaner to simply remove the option altogether. Make it an error to pass null to pack_*, and now the user has to be explicit. No more thread safety concerns (for the library), no more potential for unexpected behavior. Downside? It breaks a _ton_ of existing code, some of which dates back to before UVM existed.

    The packer is actually one of the more heavily documented features of UVM, even going so far as separating those methods which packer developers need to worry about (16.5.3) from those that users generally interact with (16.5.4). The fact that the LRM doesn't dictate the format of the bitstream isn't a bug or an omission, it's an intentional feature: it's left at the discretion of the developer. The "do one thing, well" philosophy is alive and well here; the one thing is that the packer allows you to convert a sequence of pack_*/unpack_* calls to/from a bitstream.

    A quick side story: during the discussions of the packer during the development of the 1800.2 standard, an example was shown wherein all of the methods in 16.5.4 didn't actually modify a bitstream at all; instead they simply pushed/popped their values in separate local queues of bits, bytes, ints, longints, uvm_integral_t, uvm_bitstream_t, and strings. A hook was present which allowed a user to control how that data was eventually packed/unpacked inside of the set/get_packed_state methods. In theory this implementation could be significantly faster, because the packer could choose the optimal layout for each type. This was just an example though; the full source code was not provided.

    A final note on the fact that the Accellera implementation doesn't exactly match the LRM: you're 100% correct, which is why the release notes include the following:
        The inconsistency between sections 5.3 and 16.5.3 are being addressed by the IEEE in the next revision.
    -Justin

    PS - I get that it's just an example, and therefore I can't tell if protocol_c is meant to extend from uvm_object or not, but you should never call packer.flush in a uvm_object::do_pack call unless you explicitly created the packer! If the packer has any data in it (including but not limited to metadata), then you just cleared all of that data!
  9. 1 point
    sas73

    UVM Library Test Suite and Git Repository

    The GitHub repositories contain the actively developed code for Accellera's reference implementation (SourceForge was made read-only when GitHub was spun up). That being said, GitHub hosts the active development of the reference implementation, not the standard itself. The class reference (i.e. the "Standard"), as well as the Accellera Reference Implementation, are officially published on accellera.org:
      - The UVM 1.2 Standard: http://accellera.org/images/downloads/standards/uvm/UVM_Class_Reference_Manual_1.2.pdf
      - The UVM 1.2 Reference Implementation: http://accellera.org/images/downloads/standards/uvm/uvm-1.2.tar.gz