About jrefice

  • Rank

Profile Information

  • Gender
    Not Telling

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. Thien- Agreed, there's a lot of performance lost in the factory, although we usually see that on the name-based side of town (as opposed to the type-based). Do you have a proposal/code edit that we could inspect and test? This sounds strikingly similar to an enhancement that we've been looking at on the uvm_callbacks side of town as well. -Justin PS- UVM 1.2 is technically deprecated at this point. Any changes would actually be ported forward to the next 1800.2 release.
  2. @thorsten69 : Would it be possible to generate a small testcase illustrating this problem? We're looking into it, but it'd be much easier to triage if we had a failing testcase to execute. Thanks! -Justin
  3. Thorsten- Thanks for the question! I've opened up a mantis to track the issue: https://accellera.mantishub.io/view.php?id=6966 We'll discuss it in today's working group meeting, and update the status as is appropriate. -Justin
  4. Access to the mantis database is restricted to Accellera members. I can tell you that the mantis in question was resolved in the UVM 2017 release, and the bug no longer exists.
  5. It's difficult to provide more details without knowing what your environment looks like. Would you be able to provide a loose description of your environment (What sequencers and drivers exist)? Generally speaking, I'm talking about cloning the sequence (or sequence items) after the randomization has occurred, so that you have 2 independent threads operating in parallel and doing the same things on different DUTs.
  6. Would it be sufficient to clone the original sequence, and execute the clone in parallel on the TLM model's sequencer/driver?
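The clone-and-run-in-parallel approach described above can be sketched roughly as follows. This is a hedged sketch, not a definitive recipe: the sequence type `my_seq` and the sequencer handles `rtl_sqr` / `tlm_sqr` are hypothetical placeholders for whatever exists in your environment, and `clone()` relies on the sequence having proper copy support (field macros or a user-defined `do_copy()`).

```systemverilog
// Sketch: replay identical stimulus on the RTL and TLM sides in parallel.
my_seq seq, seq_clone;

seq = my_seq::type_id::create("seq");
if (!seq.randomize())
  `uvm_fatal("RNDERR", "seq randomization failed")

// Clone *after* randomization so both copies carry identical values.
if (!$cast(seq_clone, seq.clone()))
  `uvm_fatal("CASTERR", "clone cast failed")

fork
  seq.start(rtl_sqr);        // drives the RTL DUT
  seq_clone.start(tlm_sqr);  // drives the TLM reference model
join
```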
  7. @sharvil111- This would appear to be a valid bug in the RTL, and it is now being tracked in Mantis (https://accellera.mantishub.io/view.php?id=6745). We'll post an update here when the bug is resolved, hopefully in time for the 2017 1.0 release. -Justin
  8. Wow, starting the questions off with a (not entirely unexpected) doozy! Unfortunately there's no single document which states "Here's a full list of everything that changed." This is because a large number of changes were performed by the Accellera UVM WG prior to the IEEE UVM WG spinning up, so there was never a single "central" location wherein everything was tracked. Many of the changes were also "non-functional", i.e. the removal of user's-guide-esque material and of the accidental documentation of implementation-specific artifacts. The best list we've got of functional changes is the bug database where we tracked the changes required to convert the UVM 1.2 reference implementation to UVM 2017 0.9, and that's not really intended for mass consumption. If I were to go through that list and pick "Justin's Top-N":
     1. Accessors- Almost all fields (e.g. printer knobs, etc.) have been replaced with set_/get_<field> accessors. Besides simply being a better coding convention, this allows for greater extensibility.
     2. Opening up the core services- The user now has the ability to insert their own code within the core services of UVM. One common use case would be to create a factory debugger, such that all calls to the factory get reported, or a resource pool with a more performant implementation. It's even possible to implement one's own uvm_root, although that has some additional restrictions called out by the LRM. In the past, all of these would have required the user to hack inside their own version of the library.
     3. Library initialization- The library no longer mandates that initialization happen during static init. It must happen during time 0, but any time during time 0 is sufficient. This allows the user to leverage #2, but it also allows for new use cases (e.g. parameterized classes participating in the name-based factory). There are also new hooks in the library which allow the user to "start and stop with run_test".
     4. Removing the black magic- Anyone brave enough to expand the field macros would know that pre-2017, there was some scary and completely undocumented stuff going on there. This has all been refactored and, just as importantly, documented in 1800.2-2017, such that users can now implement their own macros and/or policies without having to worry about how they would interact with the Accellera library's implementation.
     5. Policy changes- Lots of extensibility changes here. Printer, packer, et al. now derive from a single common source (uvm_policy). They all support arbitrary extensions, similar to TLM2's Generic Payload, allowing for VIP-specific information to be communicated during their processing (e.g. masking fields from a print or compare operation). Additionally, the printer now has a much more robust mechanism for tracking the structure being printed, making it easy to implement new printers (XML, YAML, JSON, etc.).
     6. Registers- Surprisingly few changes here. The most obvious change is that you can now unlock a model after it's been locked, which allows you to remove/replace registers within the model during runtime. For SoCs which support hotplugging, or are generally re-configurable, this was a huge gap in 1.2 functionality.
     At DVCon 2017 & 2018, there were tutorials which covered all of the above and more, with detailed examples. Aside from #1, most of those changes are for advanced use cases or providers of infrastructure; day-to-day users shouldn't necessarily see a drastic change. Looking at the new Accellera implementation specifically, I'd say that the most impactful change is actually in the handling of deprecation. Pre-IEEE, the library would keep code in deprecation for as long as humanly possible so as to limit exposure to backwards incompatibility. Post-IEEE, the library still uses deprecation, but we are limiting ourselves to the previous LRM version. In other words: if something was deprecated as of 1.2, it has been flat-out removed in the implementation of 2017. Additionally, the API for enabling deprecated code has been inverted: instead of defaulting to "enable deprecated APIs", the library now defaults to "disable deprecated APIs". Hopefully that helps shed some light on your question, -Justin
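As a small illustration of the accessor change (#1 above): a printer option that was poked directly through the public knobs field in UVM 1.2 becomes a set_/get_ call in 1800.2. The method name below follows the 1800.2-2017 uvm_printer accessors, and `my_obj` is a hypothetical object handle; check your implementation's documentation for the exact API.

```systemverilog
uvm_table_printer p = new();

// UVM 1.2 style: write the public knobs field directly (gone in 1800.2)
// p.knobs.type_name = 0;

// 1800.2 style: use the set_/get_<field> accessor instead
p.set_type_name_enabled(0);

my_obj.print(p);  // my_obj is any uvm_object in your environment
```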
  9. A common question, and a topic of many discussions within the Accellera UVM Working Group (UVM-WG), was: what should the version of the new UVM release be?
     Since 1.0-ea, UVM releases have followed the conventions of Semantic Versioning, with MAJOR and MINOR version numbers, and an optional PATCH version letter and/or additional labels. For example, 1.1d-rc2 was MAJOR=1, MINOR=1, PATCH=d, LABEL=rc2. The UVM-WG wanted to maintain semantic versioning moving forward, but at the same time we wanted to indicate what year/revision of the 1800.2 standard we were implementing. The simplest answer was to set the implementation's MAJOR number to the 1800.2 revision year. The library may have MINOR and PATCH updates, but all versions with MAJOR number 2017 will be compatible with 1800.2-2017. If/when the 1800.2 standard is revised in the future, the MAJOR number will change.
     That said, why start with 0.9 as opposed to 1.0? This was more of a subjective decision... The 2017 0.9 release is fully functional, but effectively undocumented beyond a basic README.md which describes how to install the kit. Releasing it as "1.0" was viewed as disingenuous, as we knew it was incomplete from a documentation perspective. Why not 0.0 then, or 1.0-ea? Again, subjective. The UVM-WG didn't want to release it under the heading of "0.0", "beta", "early adopter", "release candidate", etc., because all of those imply a certain lack of functionality. When all was said and done, the UVM-WG settled on "0.9", as it made clear that something was missing, while not necessarily indicating that the library was in some way "unsafe to use".
     The UVM-WG is hard at work on "2017 1.0", which will have complete documentation of the APIs that the implementation supports above and beyond the 1800.2 LRM. In the meantime, we will be actively monitoring this forum, hoping to answer any questions you may have regarding the library. Thanks for your participation!
     Sincerely, Justin Refice (UVM-WG Chair) & Mark Strickland (UVM-WG Vice-Chair)
  10. This is a catch-22 in the library. There's no guarantee that finish_item() actually indicates that the driver has completed processing the item, and there's no guarantee that the driver is going to trigger the appropriate events. Ideally, if the driver detects that seq_item_port.is_auto_item_recording_enabled() returns '0', then the driver should manually call accept/begin/end_tr. These calls will cause the appropriate events to trigger inside of the item.
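A rough sketch of what that looks like inside a driver's run loop. The item type `my_item` and the task `drive_transfer` are hypothetical placeholders; the accept_tr/begin_tr/end_tr calls are the standard uvm_component transaction-recording methods, which also trigger the corresponding events on the item.

```systemverilog
task run_phase(uvm_phase phase);
  my_item item;  // hypothetical sequence item type
  forever begin
    seq_item_port.get_next_item(item);
    if (!seq_item_port.is_auto_item_recording_enabled()) begin
      // Auto-recording is off: trigger the item's accept/begin events manually
      accept_tr(item, $time);
      void'(begin_tr(item));
    end
    drive_transfer(item);  // hypothetical: actual pin wiggling / TLM call
    if (!seq_item_port.is_auto_item_recording_enabled())
      end_tr(item);        // triggers the item's "end" event
    seq_item_port.item_done();
  end
endtask
```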
  11. Martin- I've opened two mantis items for your requests. http://www.eda.org/mantis/view.php?id=4998 http://www.eda.org/mantis/view.php?id=4997 Thanks!
  12. I've opened up a Mantis for your request, thanks! http://www.eda.org/mantis/view.php?id=4996
  13. Bart- That's definitely a bug in the library, and there's a mantis item open for it at: http://www.eda.org/mantis/view.php?id=4969 For the record, "get_starting_phase()" isn't 'optional', it's the official mechanism for retrieving the starting phase. The old "starting_phase" variable is considered deprecated in UVM 1.2... so if there are no references to it whatsoever, then everything works. The bug is in the deprecation code, which isn't properly setting the old variable. Thanks for the help, -Justin
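For reference, a typical use of the accessor in a sequence looks something like the sketch below (the common pre_body/post_body objection idiom; `my_seq` and `my_item` are hypothetical names).

```systemverilog
class my_seq extends uvm_sequence #(my_item);
  `uvm_object_utils(my_seq)

  function new(string name = "my_seq");
    super.new(name);
  endfunction

  task pre_body();
    // Use the accessor, not the deprecated starting_phase member
    uvm_phase ph = get_starting_phase();
    if (ph != null) ph.raise_objection(this);
  endtask

  task post_body();
    uvm_phase ph = get_starting_phase();
    if (ph != null) ph.drop_objection(this);
  endtask
endclass
```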
  14. Sorry, I should have been clearer: I was suggesting A, B ~AND~ C, not A or B or C. So there would be a mechanism to get raw, unfiltered access for those who want it. There would ~also~ be a way to extend the command line processor and provide filtered access. Since we're talking about extending the command line processor, that means we're potentially going to have multiple command line processors; and if we have more than one, then we should be providing a mechanism to set/get the default. You're absolutely right about the new() method: it isn't protected, which means you can create your own command line processor, but there's a very limited set of reasons to do that when all of the get_* methods are non-virtual :-p I can definitely open a Mantis on behalf of this topic, I'm just trying to nail down the requirements.
  15. The UVM resources facility is extremely powerful; however, it's hamstrung by the fact that resources are strictly tied to the resource pool. This prevents them from being a generalized solution, and forces them into a very specific usage model which is riddled with performance concerns. Looking at the resource classes:
      uvm_resource_base:
        • precedence
        • default_precedence
        • set_scope
        • get_scope
        • *match_scope
        • *set_priority
      uvm_resource#(T):
        • set
        • set_override
        • *get_by_name
        • *get_by_type
        • *set_priority
        • *get_highest_precedence
      All of those methods and fields are strictly bound to the resource pool, and the '*' methods are just helpers which redirect to resource pool methods. If resources had no knowledge of the pool, then they would become a simple storage structure which could be used as a basis for TLM GP Extensions, Report Message Elements, Configuration, etc., instead of the disparate solutions that the standard presents right now.
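To make the coupling concrete: even the simplest set/get usage below routes through the singleton resource pool's string-keyed lookup, which is the coupling and performance concern being described. The scope/name strings here are purely illustrative.

```systemverilog
// Writing: creates a uvm_resource#(int) and registers it in the
// singleton uvm_resource_pool under scope/name strings.
uvm_resource_db#(int)::set("env.agent", "num_items", 42, this);

// Reading: performs a lookup in that same global pool.
int n;
if (!uvm_resource_db#(int)::read_by_name("env.agent", "num_items", n, this))
  `uvm_warning("NORSRC", "num_items not found in resource pool")
```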