
jrefice

Members

  • Content Count: 24
  • Joined
  • Last visited
  • Days Won: 2

jrefice last won the day on March 11

jrefice had the most liked content!

About jrefice

  • Rank: Member

Profile Information

  • Gender: Not Telling


  1. Ah, I see what you mean by the 64 as opposed to 0, but it's still initialized correctly:

         // Constructor implementation
         function uvm_packer::new(string name="");
           super.new(name);
           flush();
         endfunction

     The constructor is just relying on flush() for initialization, so that we don't implement the same code twice.

     I also agree with the base/implementation comment... that's more legacy than strict intent. It's a good enhancement request for the library though (and the standard in general)! We did it with the uvm_report_server back in UVM 1.2; it makes sense to do it for the policy classes.

     Finally, you may wish to make the following events errors in your contribution:
       • [un]pack_object(null)/is_null() - If an object does any of these, chances are they're going to get unexpected behavior at some point.
       • Calling pack_* after calling set_packed_bytes/ints/longints - The m_packed pointer is going to be in a bad state here.
       • Calling unpack_* without calling set_packed_* - Any subsequent get_packed_* calls are going to get unexpected behavior. Technically the problem only occurs if you call get_packed_* after calling unpack_*.

     Thanks again! -Justin
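     A minimal sketch of what promoting those events to errors could look like. The class name strict_packer, the m_state_set flag, and the message IDs are all illustrative, not part of any UVM release; the override signatures assume the 1800.2 uvm_packer API:

     ```systemverilog
     // Hypothetical sketch only: strict_packer and m_state_set are
     // made-up names, not part of the Accellera library.
     class strict_packer extends uvm_packer;
       `uvm_object_utils(strict_packer)

       protected bit m_state_set; // records that set_packed_* was called

       function new(string name = "strict_packer");
         super.new(name);
       endfunction

       // Record that a packed state was loaded (similar overrides would
       // be needed for set_packed_bits/ints/longints).
       virtual function void set_packed_bytes(ref byte unsigned stream[]);
         m_state_set = 1;
         super.set_packed_bytes(stream);
       endfunction

       // Error on pack_object(null) instead of silently packing metadata
       virtual function void pack_object(uvm_object value);
         if (value == null)
           `uvm_error("STRICT_PACK", "pack_object(null) is unsupported")
         else
           super.pack_object(value);
       endfunction

       // Error on unpack_* without a preceding set_packed_*
       virtual function void unpack_object(uvm_object value);
         if (!m_state_set)
           `uvm_error("STRICT_UNPACK", "unpack_* called before set_packed_*")
         super.unpack_object(value);
       endfunction
     endclass
     ```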
  2. @tymonx - Thanks again for the responses. I didn't get a notification about them, otherwise I would have responded a bit faster 🙂

     The UVM Packer is not specified as packing bits in any particular format... if a developer or end user requires a specific format, then they are free to implement their own within the standard. If you've come up with an alternative and you think it'd be useful, please post it!

     That said, while the format/structure of the bits isn't specified, the LRM is very clear about how packers behave... it seems that the behavior just isn't exactly what you were expecting. I can understand the frustration (FWIW: there are plenty of places wherein the library acts in a way I would consider unexpected, you're not alone there!).

     The largest disconnect here seems to be with how UVM handles "metadata", i.e. data that describes the data contained within the stream. There are 3 basic forms of metadata that Accellera's implementation is concerned with:
       • The current position of pointers in the packer stream (your magic 8 bytes)
       • The size of a string
       • The validity of an object handle

     The use of a fixed-size array is an Accellera implementation artifact. It was chosen back in the pre-UVM days, but I believe the basic reasoning for it is that accessing data within a fixed-size array tends to be faster than constantly (re)allocating data inside of a dynamic array. The reference implementation does allow the user to easily change the size of the fixed-size array, but it will still be a fixed-size array. To be clear though, this is an Accellera decision, and you are free to implement a packer which uses a dynamic array instead. Should you choose to make that a queue instead of a dynamic array, then you no longer need pointers for pack/unpack: your unpack pointer is always 0, and your pack pointer is always [$].

     Strings can generally be dealt with in one of two ways: {size,string} or {string,'\0'}. Accellera goes with the latter, but really either is fine so long as you're consistent.

     The validity of an object handle is called out by the LRM. The LRM dictates that is_null returns 1 if the next item in the stream is a null object, 0 otherwise. Unfortunately, this is the one place wherein the LRM truly requires _some_ form of metadata being present. You are absolutely free to create a packer which doesn't support this method, but then your packer won't work for 100% of the objects out there.

     As to your other concerns: The initial values of the member variables are fine because they're 2-state ints, not 4-state integers. They automatically initialize to a value of 0. The library will also automatically flush any packer passed to uvm_object::pack_*, so long as that packer is not actively packing another object (refer to 16.1.3.4 in the 2017 LRM here).

     Having the pack_* methods use the default packer was another case wherein simplicity/performance was chosen over strict thread safety (again, pre-UVM). I would argue that instead of changing the behavior of "Default/Null packer" to clone, it would be cleaner to simply remove the option altogether. Make it an error to pass null to pack_*, and now the user has to be explicit. No more thread safety concerns (for the library), no more potential for unexpected behavior. Downside? It breaks a _ton_ of existing code, some of which dates back to before UVM existed.

     The packer is actually one of the more heavily documented features of UVM, even going so far as to separate those methods which packer developers need to worry about (16.5.3) from those that users generally interact with (16.5.4). The fact that the LRM doesn't dictate the format of the bitstream isn't a bug or an omission, it's an intentional feature. It's left at the discretion of the developer. The "do one thing, well" philosophy is alive and well: the one thing is that the packer allows you to convert a sequence of pack_*/unpack_* calls to/from a bitstream.

     A quick side story: During the discussions of the packer during the development of the 1800.2 standard, an example was shown wherein all of the methods in 16.5.4 didn't actually modify a bitstream at all; instead they simply pushed/popped their values in separate local queues of bits, bytes, ints, longints, uvm_integral_t, uvm_bitstream_t, and strings. A hook was present which allowed a user to control how that data was eventually packed/unpacked inside of the set/get_packed_state methods. In theory this implementation could be significantly faster, because the packer could choose the optimal layout for each type. This was just an example though; the full source code was not provided.

     A final note on the fact that the Accellera implementation doesn't exactly match the LRM: You're 100% correct, which is why the release notes include the following: The inconsistency between sections 5.3 and 16.5.3 is being addressed by the IEEE in the next revision.

     -Justin

     PS- I get that it's just an example, and therefore I can't tell if protocol_c is meant to extend from uvm_object or not, but you should never call packer.flush in a uvm_object::do_pack call unless you explicitly created the packer! If the packer has any data in it (including but not limited to metadata), then you just cleared all of that data!
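     To illustrate that last warning about packer.flush, here is a sketch of a do_pack that only appends to the stream. The protocol_c class and its addr field are hypothetical stand-ins for the example under discussion; the uvm_packer calls are standard:

     ```systemverilog
     // Hypothetical example class; addr is an illustrative field.
     class protocol_c extends uvm_object;
       `uvm_object_utils(protocol_c)

       rand bit [31:0] addr;

       function new(string name = "protocol_c");
         super.new(name);
       endfunction

       virtual function void do_pack(uvm_packer packer);
         super.do_pack(packer);
         // Correct: only append this object's state to the stream
         packer.pack_field_int(addr, 32);
         // Incorrect: calling packer.flush() here would discard
         // everything already in the stream (data and metadata),
         // including the state of any enclosing object being packed.
       endfunction
     endclass
     ```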
  3. Thanks for the spirited comments @Tymonx! I'll try and respond to all of your messages in one comment; apologies in advance if it's overly long.

     To the concerns re: the packing of the m_pack_iter and m_unpack_iter into the byte stream: that is a consequence of how the 1800.2 LRM defines the packer's functionality and compatibility. Specifically, sections 16.5.3.1 (State assignment) and 16.5.3.2 (State retrieval) effectively allow one to dump the packer's state and safely load it into another packer, regardless of what operations have been performed, and in what order, on the original packer. In this way, the state assignment/retrieval methods can be compared to SystemVerilog's process::set/get_randstate methods. They contain enough information to put you in _exactly_ the same state.

     As an additional aside, it's also important to acknowledge that while uvm_object does provide a pack/do_pack/do_unpack interface, there are zero restrictions on where a packer can actually be used. Users can create/use packers anywhere in their code, not just in the context of a UVM object.

     As to m_pack_iter and m_unpack_iter being magic numbers, you're absolutely right... but to that end, the entire bit stream dumped by the packer is a magic number. The 1800.2 standard intentionally leaves the formatting of the bitstream to be not just UVM-library dependent, but uvm_packer-implementation dependent. This allows for alternative implementations based on performance or other requirements. Additionally, as m_pack/unpack_iter are values of type int, they auto-initialize to 0.

     For some background on "Why did 1800.2 add the state methods? Why the changes?": Pre-1800.2 it was impossible to use a packer _without_ pushing it through a uvm_object of some kind first, unless you opened up the source code to see how it worked. FWIW: This is precisely what UVM-SystemC did. The 1800.2 standard reversed the polarity though, saying that it's not important that everyone pack/unpack byte streams in the same exact way; instead, it's important that the standard allows developers to define their own bit-packing mechanisms, so long as they all present the same interface to the user. This is also why the various control knobs which altered string and array packing were removed. IOW: If library X wants to pack/unpack with UVM, they don't need to know how Accellera packs/unpacks; they just need to make "class X_packer extends uvm_packer" and follow the rules. So long as they do that, they're safe to use whatever bit-packing mechanism is best for their needs.

     Your final concerns re: thread safety are valid, but are not constrained to uvm_packer. Unfortunately, the UVM library in general isn't particularly thread safe, primarily because SystemVerilog isn't particularly thread safe in "zero time" (i.e. functions). The same basic singleton flaw surrounds all of the singletons in the library... the default printer, copier, report server... you name it, it's not thread safe. Unfortunately, SystemVerilog doesn't provide any mechanism for "atomic" access to a variable without consuming time. The 1800.2 standard and the Accellera reference implementation do their best to "thread the needle" between general functionality, performance, and strict thread safety (and generally bias in that order). That said, the best place to attack this may be inside of the uvm_coreservice itself... an alternative "thread-safer" version could be created that constructed clones of the policies when get_* was called. You'd then have the option of performance vs. (relative) thread safety.

     I hope that helps to shed some light on the questions, Justin R.
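     The "class X_packer extends uvm_packer" idea might be skeletoned as follows. The name x_packer and the queue-based storage are illustrative choices; only the inheritance and conformance to the uvm_packer interface are what the LRM actually requires:

     ```systemverilog
     // Illustrative skeleton: the internal storage (here a queue of
     // bits) and the class name are implementation choices, not LRM
     // requirements.
     class x_packer extends uvm_packer;
       `uvm_object_utils(x_packer)

       // Queue storage: unpack always reads from [0], pack always
       // appends at [$], so no iterator metadata is needed.
       protected bit m_stream[$];

       function new(string name = "x_packer");
         super.new(name);
       endfunction

       // From here, override the 16.5.4 user-facing methods
       // (pack_field_int, unpack_field_int, pack_string, is_null, ...)
       // and the 16.5.3 state methods (set/get_packed_*) with whatever
       // bit layout suits your needs.
     endclass
     ```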
  4. Thien- Agreed, there's a lot of performance lost in the factory, although we usually see that on the name-based side (as opposed to the type-based side). Do you have a proposal/code edit that we could inspect and test? This sounds strikingly similar to an enhancement that we've been looking at on the uvm_callbacks side as well. -Justin PS- UVM 1.2 is technically deprecated at this point. Any changes would actually be ported forward to the next 1800.2 release.
  5. @thorsten69 : Would it be possible to generate a small testcase illustrating this problem? We're looking into it, but it'd be much easier to triage if we had a failing testcase to execute. Thanks! -Justin
  6. Thorsten- Thanks for the question! I've opened up a mantis to track the issue: https://accellera.mantishub.io/view.php?id=6966 We'll discuss it in today's working group meeting, and update the status as is appropriate. -Justin
  7. Access to the mantis database is restricted to Accellera members. I can tell you that the mantis in question was resolved in the UVM 2017 release, and the bug no longer exists.
  8. It's difficult to provide more details without knowing what your environment looks like. Would you be able to provide a loose description of your environment (What sequencers and drivers exist)? Generally speaking, I'm talking about cloning the sequence (or sequence items) after the randomization has occurred, so that you have 2 independent threads operating in parallel and doing the same things on different DUTs.
  9. Would it be sufficient to clone the original sequence, and execute the clone in parallel on the TLM model's sequencer/driver?
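     A rough sketch of that cloning approach. All names here (my_seq_c, rtl_sqr, tlm_sqr) are hypothetical handles for a sequence class and the two sequencers; the clone/$cast/start idiom is standard UVM:

     ```systemverilog
     // Illustrative names: my_seq_c, rtl_sqr, and tlm_sqr are not
     // defined anywhere in this thread.
     my_seq_c seq, seq_clone;

     seq = my_seq_c::type_id::create("seq");
     if (!seq.randomize())
       `uvm_fatal("RAND", "Randomization failed")

     // Clone AFTER randomization so both copies carry identical stimulus
     if (!$cast(seq_clone, seq.clone()))
       `uvm_fatal("CLONE", "Clone/cast failed")

     // Execute the original and the clone in parallel on the two DUTs
     fork
       seq.start(rtl_sqr);
       seq_clone.start(tlm_sqr);
     join
     ```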
  10. @sharvil111- This would appear to be a valid bug in the RTL, and it is now being tracked in Mantis (https://accellera.mantishub.io/view.php?id=6745). We'll post an update here when the bug is resolved, hopefully in time for the 2017 1.0 release. -Justin
  11. Wow, starting the questions off with a (not entirely unexpected) doozy!

     Unfortunately there's no single document which states "Here's a full list of everything that changed". This is because a large number of changes were performed by the Accellera UVM WG prior to the IEEE UVM WG spinning up, so there was never a single "central" location wherein everything was tracked. Many of the changes were also "non-functional", i.e. the removal of user's guide-esque material and of the accidental documentation of implementation-specific artifacts. The best list we've got of functional changes is the bug database where we tracked the changes required to convert the UVM 1.2 reference implementation to UVM 2017 0.9, and that's not really intended for mass consumption. If I were to go through that list and pick "Justin's Top-N":

       • Accessors - Almost all fields (e.g. printer knobs, etc.) have been replaced with set_/get_<field> accessors. Besides simply being a better coding convention, this allows for greater extensibility.
       • Opening up the core services - The user now has the ability to insert their own code within the core services of UVM. One common use case for this would be to create a factory debugger, such that all calls to the factory get reported, or a resource pool with a more performant implementation. It's even possible to implement one's own uvm_root; however, that has some additional restrictions called out by the LRM. In the past, all of these would have required the user to hack inside their own version of the library.
       • Library initialization - The library no longer mandates that initialization happen during static init. It must happen during time 0, but any time during time 0 is sufficient. This allows the user to leverage the opened-up core services, but it also allows for new use cases (e.g. "parameterized classes participating in the name-based factory"). There are also new hooks in the library which allow the user to "start and stop with run_test".
       • Removing the black magic - Anyone brave enough to expand the field macros would know that pre-2017, there was some scary and completely undocumented stuff going on there. This has all been refactored and, just as importantly, documented in 1800.2-2017, such that users can now implement their own macros and/or policies without having to worry about how they would interact with the Accellera library's implementation.
       • Policy changes - Lots of extensibility changes here. Printer, packer, et al. now derive from a single common source (uvm_policy). They all support arbitrary extensions, similar to TLM2's Generic Payload, allowing for VIP-specific information to be communicated during their processing (e.g. masking fields from a print or compare operation). Additionally, the printer now has a much more robust mechanism for tracking the structure being printed, making it easy to implement new printers (XML, YAML, JSON, etc.).
       • Registers - Surprisingly few changes here. The most obvious change is that you can now unlock a model after it's been locked, which allows you to remove/replace/etc. registers within the model during runtime. For SoCs which support hot-plugging, or are generally re-configurable, this was a huge gap in 1.2 functionality.

     At DVCon 2017 & 2018, there were tutorials which covered all of the above and more, with detailed examples. Aside from the accessors, most of those changes are for advanced use cases, or providers of infrastructure. Day-to-day users shouldn't necessarily see a drastic change.

     Looking at the new Accellera implementation specifically, I'd say that the most impactful change is actually in the handling of deprecation. Pre-IEEE, the library would keep code in deprecation for as long as humanly possible so as to limit exposure to backwards incompatibility. Post-IEEE, the library is still using deprecation, but we are limiting ourselves to the previous LRM version. In other words: If something was deprecated as of 1.2, it has been flat-out removed in the implementation of 2017. Additionally, the API for enabling deprecated code has been inverted... instead of defaulting to "enable deprecated APIs", the library defaults to "disable deprecated APIs".

     Hopefully that helps shed some light on your question, -Justin
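     As one concrete illustration of the accessor change: the printer "knobs" idiom was replaced by set_/get_ methods. Treat the exact accessor and method names below as representative of the pattern rather than an exhaustive or authoritative API listing:

     ```systemverilog
     // UVM 1.2 style: direct field access through the knobs object
     // (removed in the 1800.2 implementations):
     //   printer.knobs.default_radix = UVM_HEX;

     // 1800.2 style: set_/get_<field> accessors on the policy object.
     // (set_default_radix / get_default are representative names from
     // the uvm_printer API; check your library's class reference.)
     uvm_printer printer = uvm_printer::get_default();
     printer.set_default_radix(UVM_HEX);
     ```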
  12. A common question, and a topic of many discussions within the Accellera UVM Working Group (UVM-WG), was: What should the version of the new UVM release be?

     Since 1.0-ea, UVM releases have followed the conventions of Semantic Versioning, with MAJOR and MINOR version numbers, and an optional PATCH version letter and/or additional labels. For example, 1.1d-rc2 was MAJOR=1, MINOR=1, PATCH=d, LABEL=rc2. The UVM-WG wanted to maintain semantic versioning moving forward, but at the same time we wanted to indicate what year/revision of the 1800.2 standard we were implementing. The simplest answer was to set the implementation's MAJOR number to the 1800.2 revision year. The library may have MINOR and PATCH updates, but all versions with MAJOR number 2017 will be compatible with 1800.2-2017. If/when the 1800.2 standard is revised in the future, then the MAJOR number will change.

     That said, why start with 0.9 as opposed to 1.0? This was more of a subjective decision... The 2017 0.9 release is fully functional, but effectively undocumented beyond a basic README.md which describes how to install the kit. Releasing it as "1.0" was viewed as disingenuous, as we knew it was incomplete from a documentation perspective. Why not 0.0 then, or 1.0-ea? Again, subjective. The UVM-WG didn't want to release it under the heading of "0.0", "beta", "early adopter", "release candidate", etc., because all of those imply a certain lack of functionality. When all was said and done, the UVM-WG settled on "0.9", as it made it clear that something was missing, while not necessarily indicating that the library was in some way "unsafe to use".

     The UVM-WG is hard at work on "2017 1.0", which will have the complete documentation of the APIs that the implementation supports above and beyond the 1800.2 LRM. In the meantime, we will be actively monitoring this forum, hoping to answer any questions you may have regarding the library. Thanks for your participation!

     Sincerely, Justin Refice (UVM-WG Chair) & Mark Strickland (UVM-WG Vice-Chair)
  13. This is a catch-22 in the library. There's no guarantee that finish_item() actually indicates that the driver has completed processing the item, and there's no guarantee that the driver is going to trigger the appropriate events. Ideally, if the driver detects that seq_item_port.is_auto_item_recording_enabled() returns '0', then the driver should manually call accept/begin/end_tr. These calls will cause the appropriate events to trigger inside of the item.
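     A sketch of how a driver might guard its recording calls along those lines. The drive_item task is a hypothetical pin-level routine; the seq_item_port methods and the accept_tr/begin_tr/end_tr calls are standard UVM component/driver APIs:

     ```systemverilog
     // Inside a uvm_driver subclass; drive_item is an illustrative
     // placeholder for the actual pin-wiggling behavior.
     task run_phase(uvm_phase phase);
       forever begin
         seq_item_port.get_next_item(req);
         // If automatic item recording is disabled, trigger the item's
         // events (and recording) manually.
         if (!seq_item_port.is_auto_item_recording_enabled()) begin
           accept_tr(req);
           void'(begin_tr(req));
         end
         drive_item(req);
         if (!seq_item_port.is_auto_item_recording_enabled())
           end_tr(req);
         seq_item_port.item_done();
       end
     endtask
     ```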
  14. Martin- I've opened two mantis items for your requests. http://www.eda.org/mantis/view.php?id=4998 http://www.eda.org/mantis/view.php?id=4997 Thanks!
  15. I've opened up a Mantis for your request, thanks! http://www.eda.org/mantis/view.php?id=4996