
Leaderboard


Popular Content

Showing content with the highest reputation since 03/04/2020 in all areas

  1. 2 points
    Eyck

    sc_clock Doubt

    sc_clock triggers itself based on the period and the (in your case default) constructor settings. The period is the default_time_unit.
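    For illustration, a minimal sketch (not taken from the thread itself) of a default-constructed clock; the instance name and the run length are arbitrary choices:

    #include <systemc>

    int sc_main(int argc, char* argv[]) {
        // Default constructor: 1 ns period on a stock installation, 50% duty cycle.
        sc_core::sc_clock clk;

        sc_core::sc_start(5, sc_core::SC_NS);
        return 0;
    }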
  2. 1 point
    I suggest moving to SystemC 2.3.3, if possible. (The error message indicates that you seem to be using SystemC 2.3.1.) Secondly, can you show the derived class of the fifo as well (including its constructor)?
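    For reference, the usual shape of such a derived fifo (this is a hypothetical sketch, not the poster's code; the name my_fifo and its depth parameter are invented):

    #include <systemc>

    // Hypothetical class derived from sc_fifo: forward the name and depth
    // to the base constructor, which is what the compiler error usually
    // revolves around in this kind of thread.
    template <class T>
    class my_fifo : public sc_core::sc_fifo<T> {
    public:
        explicit my_fifo(const char* name, int depth = 16)
            : sc_core::sc_fifo<T>(name, depth) {}
    };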
  3. 1 point
    Eyck

    Handling SystemC engine from a Qt5 thread

    Especially as an educational project, you should implement it with two threads that communicate with each other. Since they run in the same process space, you can access data safely once the simulation is in the paused state. But you cannot restart the simulation without restarting the process itself: SystemC uses static global variables to store the simulation state, and those are only initialized at program start.
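    A minimal sketch of that two-thread split, assuming std::thread for the GUI side (a real application would run the Qt event loop there); the flags g_pause_requested and g_quit_requested and the 10 us slice are made up for illustration:

    #include <systemc>
    #include <atomic>
    #include <thread>

    std::atomic<bool> g_pause_requested{false};  // toggled from the GUI thread
    std::atomic<bool> g_quit_requested{false};

    int sc_main(int argc, char* argv[]) {
        // ... elaborate the design here ...

        // Hypothetical GUI thread; a real application would run the Qt event
        // loop here and flip the two flags above from its slots.
        std::thread gui([] { /* QApplication::exec() ... */ });

        // Keep every sc_start() call in this (the sc_main) thread and advance
        // in slices, so the GUI thread can safely read model data while the
        // loop sits here with the simulation effectively paused.
        while (!g_quit_requested &&
               sc_core::sc_time_stamp() < sc_core::sc_time(1, sc_core::SC_MS)) {
            if (!g_pause_requested)
                sc_core::sc_start(10, sc_core::SC_US);   // advance one slice
            else
                std::this_thread::yield();
        }

        gui.join();
        return 0;  // a fresh simulation run requires a fresh process
    }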
  4. 1 point
    sas73

    UVM Library Test Suite and Git Repository

    Hi, I just downloaded the UVM library but I couldn't find any tests verifying its functionality. Are such tests available? Also, is the git repository from which the library was released open? Thanks
  5. 1 point
    David Black

    sc_clock Doubt

    There is no default_time_unit in SystemC; however, the sc_clock default constructor does supply a default value of 1 ns. Be careful you don't set the time resolution larger than 1 ns if you are going to use the default time. You could of course be more explicit: sc_clock clock{ "clock", 1, SC_NS }; //< assumes C++11 or better and using namespace sc_core
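    A minimal sketch combining both hints (the 1 ps resolution call and the instance name clk are arbitrary choices, not requirements):

    #include <systemc>

    int sc_main(int argc, char* argv[]) {
        // If you change the resolution at all, do it before any sc_time is
        // created, and keep it no coarser than the 1 ns default period.
        sc_core::sc_set_time_resolution(1, sc_core::SC_PS);

        // Explicit period, independent of any default:
        sc_core::sc_clock clk{"clk", 1, sc_core::SC_NS};

        sc_core::sc_start(10, sc_core::SC_NS);
        return 0;
    }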
  6. 1 point
    Eyck

    SC_METHOD/SC_THREAD Synchronization

    There is no guarantee which method or thread is activated first. There is no means to give priority. Why would you like to do this? In my experience, you have a thought problem if you believe you need to do this. Thread activation is more expensive (in terms of computing power) than a method activation, as the thread context needs to be saved and restored. But threads keep an internal state, so they are good for describing state machines.
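    A short sketch of the two registration styles; the module and process names are illustrative only:

    #include <systemc>

    SC_MODULE(worker) {
        sc_core::sc_in<bool> clk;

        SC_CTOR(worker) {
            // Re-entered from the top on every trigger; cheap, but keeps no
            // local state between activations.
            SC_METHOD(comb_proc);
            sensitive << clk.pos();
            dont_initialize();

            // Keeps its stack between wait() calls; costlier per activation,
            // but convenient for state machines. When both processes are
            // sensitive to the same edge, there is no guaranteed order
            // between them.
            SC_THREAD(fsm_proc);
            sensitive << clk.pos();
        }

        void comb_proc() { /* ... */ }

        void fsm_proc() {
            while (true) {
                wait();  // internal state survives across activations
                // ... compute the next state here ...
            }
        }
    };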
  7. 1 point
    Ah, I see what you mean by the 64 as opposed to 0, but it's still initialized correctly:

    // Constructor implementation
    function uvm_packer::new(string name="");
      super.new(name);
      flush();
    endfunction

    The constructor is just relying on flush() for initialization, so that we don't implement the same code twice. I also agree with the base/implementation comment... that's more legacy than strict intent. It's a good enhancement request for the library though (and the standard in general)! We did it with the uvm_report_server back in UVM 1.2; it makes sense to do it for the policy classes. Finally, you may wish to make the following events errors in your contribution:

    [un]pack_object(null)/is_null() - If an object does any of these, chances are they're going to get unexpected behavior at some point.
    Calling pack_* after calling set_packed_bytes/ints/longints - The m_packed pointer is going to be in a bad state here.
    Calling unpack_* without calling set_packed_* - Any subsequent get_packed_* calls are going to get unexpected behavior. Technically the problem only occurs if you call get_packed_* after calling unpack_*.

    Thanks again!
    -Justin
  8. 1 point
    @tymonx - Thanks again for the responses, I didn't get a notification about them, otherwise I would have responded a bit faster 🙂

    The UVM Packer is not specified as packing bits in any particular format... if a developer or end user requires a specific format, they are free to implement their own within the standard. If you've come up with an alternative and you think it'd be useful, please post it! That said, while the format/structure of the bits isn't specified, the LRM is very clear about how packers behave... it seems that the behavior just isn't exactly what you were expecting. I can understand the frustration (FWIW: there are plenty of places wherein the library acts in a way I would consider unexpected, you're not alone there!).

    The largest disconnect here seems to be with how UVM handles "metadata", i.e. data that describes the data contained within the stream. There are three basic forms of metadata that Accellera's implementation is concerned with:

    The current position of pointers in the packer stream (your magic 8 bytes)
    The size of a string
    The validity of an object handle

    The use of a fixed-size array is an Accellera implementation artifact. It was chosen back in the pre-UVM days, but I believe the basic reasoning for it is that accessing data within a fixed-size array tends to be faster than constantly (re)allocating data inside of a dynamic array. The reference implementation does allow the user to easily change the size of the fixed-size array, but it will still be a fixed-size array. To be clear though, this is an Accellera decision, and you are free to implement a packer which uses a dynamic array instead. Should you choose to make that a queue instead of a dynamic array, then you no longer need pointers for pack/unpack: your unpack pointer is always 0, your pack pointer is always [$].

    Strings can generally be dealt with in one of two ways: {size, string} or {string, '\0'}. Accellera goes with the latter, but really either is fine so long as you're consistent.

    The validity of an object handle is called out by the LRM. The LRM dictates that is_null returns 1 if the next item in the stream is a null object, 0 otherwise. Unfortunately, this is the one place wherein the LRM truly requires _some_ form of metadata being present. You are absolutely free to create a packer which doesn't support this method, but then your packer won't work for 100% of the objects out there.

    As to your other concerns: the initial values of the member variables are fine because they're 2-state ints, not 4-state integers; they automatically initialize to a value of 0. The library will also automatically flush any packer passed to uvm_object::pack_*, so long as that packer is not actively packing another object (refer to 16.1.3.4 in the 2017 LRM here). Having the pack_* methods use the default packer was another case wherein simplicity/performance was chosen over strict thread safety (again, pre-UVM). I would argue that instead of changing the behavior of "Default/Null packer" to clone, it would be cleaner to simply remove the option altogether. Make it an error to pass null to pack_*, and now the user has to be explicit. No more thread safety concerns (for the library), no more potential for unexpected behavior. Downside? It breaks a _ton_ of existing code, some of which dates back to before UVM existed.

    The packer is actually one of the more heavily documented features of UVM, even going so far as to separate those methods which packer developers need to worry about (16.5.3) from those that users generally interact with (16.5.4). The fact that the LRM doesn't dictate the format of the bitstream isn't a bug or an omission, it's an intentional feature. It's left at the discretion of the developer. The "do one thing, well" philosophy is alive and well: the one thing is that the packer allows you to convert a sequence of pack_*/unpack_* calls to/from a bitstream.

    A quick side story: during the packer discussions in the development of the 1800.2 standard, an example was shown wherein all of the methods in 16.5.4 didn't actually modify a bitstream at all; instead they simply pushed/popped their values in separate local queues of bits, bytes, ints, longints, uvm_integral_t, uvm_bitstream_t, and strings. A hook was present which allowed a user to control how that data was eventually packed/unpacked inside of the set/get_packed_state methods. In theory this implementation could be significantly faster, because the packer could choose the optimal layout for each type. This was just an example though; the full source code was not provided.

    A final note on the fact that the Accellera implementation doesn't exactly match the LRM: you're 100% correct, which is why the release notes include the following: the inconsistency between sections 5.3 and 16.5.3 is being addressed by the IEEE in the next revision.

    -Justin

    PS - I get that it's just an example, and therefore I can't tell if protocol_c is meant to extend from uvm_object or not, but you should never call packer.flush in a uvm_object::do_pack call unless you explicitly created the packer! If the packer has any data in it (including but not limited to metadata), then you just cleared all of that data!
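    To make the queue-based alternative and the {size, string} encoding above concrete, here is a small conceptual sketch in C++ (deliberately not SystemVerilog and not the UVM API; the class name queue_packer and its methods are made up):

    #include <cstdint>
    #include <deque>
    #include <stdexcept>
    #include <string>

    // Conceptual queue-backed packer: no fixed-size array, no iterators.
    class queue_packer {
        std::deque<std::uint8_t> bytes_;  // pack position is always the back,
                                          // unpack position is always the front
    public:
        void pack_byte(std::uint8_t b) { bytes_.push_back(b); }

        std::uint8_t unpack_byte() {
            if (bytes_.empty()) throw std::runtime_error("unpack past end of stream");
            std::uint8_t b = bytes_.front();
            bytes_.pop_front();
            return b;
        }

        // Strings stored as {size, bytes}; the alternative is {bytes, '\0'}.
        void pack_string(const std::string& s) {
            pack_byte(static_cast<std::uint8_t>(s.size()));  // toy: sizes limited to 255
            for (char c : s) pack_byte(static_cast<std::uint8_t>(c));
        }

        std::string unpack_string() {
            std::size_t n = unpack_byte();
            std::string s;
            for (std::size_t i = 0; i < n; ++i)
                s.push_back(static_cast<char>(unpack_byte()));
            return s;
        }
    };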
  9. 1 point
    Thanks for the spirited comments @Tymonx! I'll try to respond to all of your messages in one comment; apologies in advance if it's overly long.

    To the concerns re: the packing of m_pack_iter and m_unpack_iter into the byte stream: that is a consequence of how the 1800.2 LRM defines the packer's functionality and compatibility. Specifically, sections 16.5.3.1 (State assignment) and 16.5.3.2 (State retrieval) effectively allow one to dump the packer's state and safely load it into another packer, regardless of what operations have been performed, and in what order, on the original packer. In this way, the state assignment/retrieval methods can be compared to SystemVerilog's process::set/get_randstate methods. They contain enough information to put you in _exactly_ the same state. As an additional aside, it's also important to acknowledge that while uvm_object does provide a pack/do_pack/do_unpack interface, there are zero restrictions on where a packer can actually be used. Users can create/use packers anywhere in their code, not just in the context of a UVM object.

    As to m_pack_iter and m_unpack_iter being magic numbers, you're absolutely right... but to that end, the entire bit stream dumped by the packer is a magic number. The 1800.2 standard intentionally leaves the formatting of the bitstream not just UVM-library dependent, but uvm_packer-implementation dependent. This allows for alternative implementations based on performance or other requirements. Additionally, as m_pack/unpack_iter are values of type int, they auto-initialize to 0.

    For some background on "Why did 1800.2 add the state methods? Why the changes?": pre-1800.2 it was impossible to use a packer _without_ pushing it through a uvm_object of some kind first, unless you opened up the source code to see how it worked. FWIW: this is precisely what UVM-SystemC did. The 1800.2 standard reversed the polarity, though, saying that it's not important that everyone pack/unpack byte streams in the same exact way; instead, it's important that the standard allows developers to define their own bit-packing mechanisms, so long as they all present the same interface to the user. This is also why the various control knobs which altered string and array packing were removed. IOW: if library X wants to pack/unpack with UVM, they don't need to know how Accellera packs/unpacks, they just need to make "class X_packer extends uvm_packer" and follow the rules. So long as they do that, they're safe to use whatever bit-packing mechanism is best for their needs.

    Your final concerns re: thread safety are valid, but they are not constrained to uvm_packer. Unfortunately, the UVM library in general isn't uber-thread-safe, primarily because SystemVerilog isn't particularly thread safe in "zero time" (i.e. functions). The same basic singleton flaw surrounds all of the singletons in the library... the default printer, copier, report server... you name it, it's not thread safe. Unfortunately, SystemVerilog doesn't provide any mechanism for "atomic" access to a variable without consuming time. The 1800.2 standard and the Accellera reference implementation do their best to "thread the needle" between general functionality, performance, and strict thread safety (and generally bias in that order). That said, the best place to attack this may be inside of uvm_coreservice itself... an alternative "thread-safer" version could be created that constructs clones of the policies when get_* is called. You'd then have the option of performance vs. (relative) thread safety.

    I hope that helps to shed some light on the questions,
    Justin R.
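    As a similarly conceptual sketch (C++, not the actual uvm_packer API; stateful_packer and its methods are invented for illustration), state retrieval has to capture the payload and both positions, which is why the iterators travel with the byte stream:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Conceptual packer whose full state can be captured and restored, in the
    // spirit of 1800.2 state assignment/retrieval (not the real UVM API).
    class stateful_packer {
        std::vector<std::uint8_t> buf_;
        std::size_t pack_pos_   = 0;   // next byte to write
        std::size_t unpack_pos_ = 0;   // next byte to read

    public:
        struct state {                        // payload plus both positions:
            std::vector<std::uint8_t> buf;    // enough to resume in exactly
            std::size_t pack_pos;             // the same place
            std::size_t unpack_pos;
        };

        state get_state() const { return {buf_, pack_pos_, unpack_pos_}; }

        void set_state(const state& s) {
            buf_ = s.buf;
            pack_pos_ = s.pack_pos;
            unpack_pos_ = s.unpack_pos;
        }

        void pack_byte(std::uint8_t b) {
            if (buf_.size() <= pack_pos_) buf_.resize(pack_pos_ + 1);
            buf_[pack_pos_++] = b;
        }

        std::uint8_t unpack_byte() { return buf_[unpack_pos_++]; }
    };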
  10. 1 point
    The Accellera UVM Working Group has released the UVM 2017 0.9 reference implementation. This implementation is available as a SystemVerilog class library and is fully compatible with the IEEE 1800.2-2017 standard as defined in the Language Reference Manual. The library can be downloaded for free here. The IEEE 1800.2-2017 standard is available free of charge from the IEEE Get program, courtesy of Accellera. We encourage you to use this forum to provide feedback, ask questions, and engage in discussions.
  11. 1 point
    2 days! That's a fast response. Exactly! If you're not open in the design/pre-release phase, you're likely to miss use cases, and if the members have committed themselves to solutions and switched their focus to other tasks, I imagine there will be an unwillingness to go back and redo things even if new important insights have been revealed. I think most users would like a code base they can build upon, not one that needs adaptations to make it work. Being fully transparent about the code in the making will reduce the risk of such adaptations. What I'm suggesting is free and efficient access to the collective intelligence of the entire community at the point in the development cycle where it makes the most difference. I'm not suggesting a shift in the rights to make the final decisions; that's exclusive to the paying members. What's preventing this from happening within Accellera?
  12. 1 point
    Unfortunately I'm not with a member company. I was hoping that I'd have read permissions regardless of my current affiliation. As a user, I'd like to see the connection between discussions in the official forum, the issues reported to the issue management system, and the code being developed in response to that, plus the ability to immediately test that code and possibly give feedback as code comments or a pull request, more like GitHub, GitLab, and other platforms. It seems to me that this would be a more efficient way to give and get user feedback.
  13. 1 point
    Are there any plans to continue using the GitHub repository for such releases, or has it been discontinued?
  14. 1 point
    sas73

    UVM Library Test Suite and Git Repository

    The GitHub repositories hold the actively developed code for Accellera’s reference implementation (SourceForge was made read-only when GitHub was spun up). That being said, GitHub stores the active development for the reference implementation, not for the standard itself. The class reference (i.e. the “Standard”), as well as the Accellera Reference Implementation, are officially published on accellera.org:
    The UVM 1.2 Standard: http://accellera.org/images/downloads/standards/uvm/UVM_Class_Reference_Manual_1.2.pdf
    The UVM 1.2 Reference Implementation: http://accellera.org/images/downloads/standards/uvm/uvm-1.2.tar.gz
  15. 1 point
    sas73

    UVM Library Test Suite and Git Repository

    @David Black I did stumble upon an accellera/uvm repository on GitHub. It seems to be what I was looking for, although it has been dead since UVM 1.2. Why have it there and not use that platform?
  16. 1 point
    sas73

    UVM Library Test Suite and Git Repository

    Thanks David. It sounds like no such tests are available. Open source projects in general are not always good at providing their test suites, but I find it a bit odd that an open source library for verification doesn't provide the test suites showing how the library itself is verified. It would be easier for people to suggest improvements if they could verify whether or not such a modification breaks something else.
  17. 0 points
    This is not an official Accellera repository. But if your company is an Accellera member, you can request access to the Accellera UVM git repositories.
  18. 0 points
    If your question on UVM is whether the repository is open to modifications, the answer is no. The UVM proof of concept library is carefully managed by Accellera as part of the standard's development. I cannot answer the question about test suites.