Justin Refice
Member · 33 posts · 5 days won
Justin Refice last won the day on October 22, 2021 and had the most liked content!
Justin Refice's Achievements: Advanced Member (2/2) · Reputation: 6
-
Thanks for posting @rtdk! Please open this as an issue in the GitHub Issue Tracker (https://github.com/accellera-official/uvm-core/issues). Instructions for what information to include when reporting an issue can be found at https://github.com/accellera-official/uvm-core/blob/main/CONTRIBUTING.md. Sincerely, -Justin
-
Why do you say "before UVM 2017 v1.1 Library Code for IEEE 1800.2 flush function is correct"? Both versions of the code declare 'r' to be of type 'bit', and therefore both could get into the same problem of an even positive integer value incorrectly reducing to 0. I've opened https://accellera.mantishub.io/view.php?id=7933 to track the fix.
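For anyone following along, here's a minimal sketch of that failure mode (the function name is illustrative, not the library code): a 1-bit return type silently keeps only bit [0] of whatever count it is handed, so any even, non-zero value comes back as 0.

module bit_trunc_demo;
  // Illustrative only: returning a multi-bit count through a 'bit' return
  // type truncates it to a single bit.
  function bit report_count(int unsigned count);
    return count; // keeps only count[0]: any even non-zero value becomes 1'b0
  endfunction

  initial begin
    $display("count=1  -> %0b", report_count(1));  // prints 1
    $display("count=64 -> %0b", report_count(64)); // prints 0, despite 64 items
  end
endmodule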
-
This is a known issue tracked in Mantis 7641, but we didn't have a fix available for 2020-2.0. This issue touches more than just the uvm_mem_mam; various APIs within UVM RAL are arbitrarily tied to 32b sizes. It's not as simple as converting from "int unsigned" to "longint" or "uvm_integral_t," because the LRM dictates the size of the APIs, and user code could already be extending them. Hopefully, we can get a safe update in the next release.
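To illustrate why widening the types isn't a drop-in change (the class and method below are hypothetical, not the actual RAL signatures): any user class that already overrides a virtual method with the 32-bit signature is tied to that signature.

// Hypothetical sketch of the compatibility problem; not actual UVM code.
class some_mam;
  // Today's style of API: the size argument is fixed at 32 bits.
  virtual function bit request_region(int unsigned n_bytes);
    return 1;
  endfunction
endclass

class user_mam extends some_mam;
  // Existing user code overriding the 32-bit signature.
  virtual function bit request_region(int unsigned n_bytes);
    // custom allocation policy...
    return 1;
  endfunction
endclass

// If the base signature were changed to 'longint unsigned n_bytes', the
// override above would no longer be a legal override, and existing user
// code would break -- which is why the fix needs LRM coordination.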
-
Somasekhar reacted to a post in a topic: Removal of runtime phases in future UVM versions
-
Hi there, thanks for the post! I'm not sure where that image is from, but there are no plans to remove the runtime phases from UVM.
- 1 reply
- Tagged with: phasing, phases in uvm (and 3 more)
-
This is a known issue, covered by the currently open mantis: https://accellera.mantishub.io/view.php?id=6913 In a nutshell: RAL doesn't currently provide any API for indicating that a register is "temporarily volatile"... basically something has happened which causes the model to lose track of the expected state of the register, but it's a potentially recoverable event. We're currently working on how such an API could/should work. Thanks for the information!
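In the meantime, a common way to recover after such an event (just a sketch of a possible workaround, not the API being discussed) is to resynchronize the mirror from the hardware with checking disabled:

// Sketch of a workaround: once the disruptive event has passed, re-read the
// register with UVM_NO_CHECK so the mirror is updated from the actual
// hardware value instead of being compared against a stale prediction.
import uvm_pkg::*;
`include "uvm_macros.svh"

task automatic resync_after_event(uvm_reg rg);
  uvm_status_e status;
  rg.mirror(status, UVM_NO_CHECK);
  if (status != UVM_IS_OK)
    `uvm_error("RESYNC", {"mirror() failed for ", rg.get_full_name()})
endtask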
-
Taylor reacted to a post in a topic: uvm_reg_hw_reset_seq exclusions appear incorrect
-
tymonx reacted to a post in a topic: Annoying and useless packer implementation in the UVM 2017 v1.0 Library Code release
-
Ah, I see what you mean by the 64 as opposed to 0, but it's still initialized correctly:

// Constructor implementation
function uvm_packer::new(string name="");
  super.new(name);
  flush();
endfunction

The constructor is just relying on flush() for initialization, so that we don't implement the same code twice. I also agree with the base/implementation comment... that's more legacy than strict intent. It's a good enhancement request for the library though (and the standard in general)! We did it with the uvm_report_server back in UVM 1.2, and it makes sense to do it for the policy classes as well.

Finally, you may wish to make the following events errors in your contribution:
- [un]pack_object(null)/is_null() - If an object does any of these, chances are it's going to get unexpected behavior at some point.
- Calling pack_* after calling set_packed_bytes/ints/longints - The m_packed pointer is going to be in a bad state here (see the sketch after this post).
- Calling unpack_* without calling set_packed_* - Any subsequent get_packed_* calls are going to get unexpected behavior. Technically the problem only occurs if you call get_packed_* after calling unpack_*.

Thanks again!
-Justin
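For readers following along, the second bullet is describing a call sequence like the one below (a sketch assuming the 1800.2-style set_packed_bytes/pack_field_int methods mentioned in the post):

// Sketch of the problematic sequence from the second bullet above.
import uvm_pkg::*;

module packer_misuse_demo;
  initial begin
    uvm_packer p = new("p");
    byte unsigned stream[] = '{8'hDE, 8'hAD, 8'hBE, 8'hEF};

    p.set_packed_bytes(stream);     // packer now holds an externally supplied stream
    p.pack_field_int(32'h1234, 32); // mixing pack_* in afterwards leaves the internal
                                    // pointers inconsistent -- a good candidate for an error
  end
endmodule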
-
@tymonx - Thanks again for the responses; I didn't get a notification about them, otherwise I would have responded a bit faster 🙂

The UVM packer is not specified as packing bits in any particular format... if a developer or end user requires a specific format, then they are free to implement their own within the standard. If you've come up with an alternative and you think it'd be useful, please post it! That said, while the format/structure of the bits isn't specified, the LRM is very clear about how packers behave... it seems that the behavior just isn't exactly what you were expecting. I can understand the frustration (FWIW: there are plenty of places where the library acts in a way I would consider unexpected, you're not alone there!).

The largest disconnect here seems to be with how UVM handles "metadata", i.e. data that describes the data contained within the stream. There are three basic forms of metadata that Accellera's implementation is concerned with:
- The current position of the pointers in the packer stream (your magic 8 bytes)
- The size of a string
- The validity of an object handle

The use of a fixed-size array is an Accellera implementation artifact. It was chosen back in the pre-UVM days, but I believe the basic reasoning is that accessing data within a fixed-size array tends to be faster than constantly (re)allocating data inside a dynamic array. The reference implementation does allow the user to easily change the size of the fixed-size array, but it will still be a fixed-size array. To be clear though, this is an Accellera decision, and you are free to implement a packer which uses a dynamic array instead. Should you choose to make that a queue instead of a dynamic array, then you no longer need pointers for pack/unpack: your unpack pointer is always 0, and your pack pointer is always [$].

Strings can generally be dealt with in one of two ways: {size,string} or {string,'\0'}. Accellera goes with the latter, but really either is fine so long as you're consistent (see the sketch at the end of this post).

The validity of an object handle is called out by the LRM, which dictates that is_null returns 1 if the next item in the stream is a null object, 0 otherwise. Unfortunately, this is the one place where the LRM truly requires _some_ form of metadata to be present. You are absolutely free to create a packer which doesn't support this method, but then your packer won't work for 100% of the objects out there.

As to your other concerns: the initial values of the member variables are fine because they're 2-state ints, not 4-state integers, so they automatically initialize to 0. The library will also automatically flush any packer passed to uvm_object::pack_*, so long as that packer is not actively packing another object (refer to 16.1.3.4 in the 2017 LRM). Having the pack_* methods use the default packer was another case wherein simplicity/performance was chosen over strict thread safety (again, pre-UVM). I would argue that instead of changing the behavior of "default/null packer" to clone, it would be cleaner to simply remove the option altogether: make it an error to pass null to pack_*, and now the user has to be explicit. No more thread safety concerns (for the library), no more potential for unexpected behavior. The downside? It breaks a _ton_ of existing code, some of which dates back to before UVM existed.

The packer is actually one of the more heavily documented features of UVM, even going so far as to separate the methods which packer developers need to worry about (16.5.3) from those that users generally interact with (16.5.4). The fact that the LRM doesn't dictate the format of the bitstream isn't a bug or an omission; it's an intentional feature, left to the discretion of the developer. The "do one thing, well" philosophy is alive and well: the one thing is that the packer allows you to convert a sequence of pack_*/unpack_* calls to/from a bitstream.

A quick side story: during the discussions of the packer in the development of the 1800.2 standard, an example was shown wherein all of the methods in 16.5.4 didn't actually modify a bitstream at all; instead they simply pushed/popped their values in separate local queues of bits, bytes, ints, longints, uvm_integral_t, uvm_bitstream_t, and strings. A hook was present which allowed a user to control how that data was eventually packed/unpacked inside of the set/get_packed_state methods. In theory this implementation could be significantly faster, because the packer could choose the optimal layout for each type. This was just an example though; the full source code was not provided.

A final note on the fact that the Accellera implementation doesn't exactly match the LRM: you're 100% correct, which is why the release notes include the following: the inconsistency between sections 5.3 and 16.5.3 is being addressed by the IEEE in the next revision.

-Justin

PS - I get that it's just an example, and therefore I can't tell if protocol_c is meant to extend from uvm_object or not, but you should never call packer.flush in a uvm_object::do_pack call unless you explicitly created the packer! If the packer has any data in it (including but not limited to metadata), you just cleared all of that data!
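As a small illustration of the two string-encoding options mentioned above ({size,string} vs {string,'\0'}), in plain SystemVerilog and not tied to any particular packer implementation:

// Two equally valid ways to carry string metadata in a byte stream;
// the only requirement is that pack and unpack agree on the choice.
function automatic void pack_string_size_prefixed(string s, ref byte unsigned stream[$]);
  stream.push_back(byte'(s.len()));   // {size, string}: length first (assumes len < 256 here)
  for (int i = 0; i < s.len(); i++)
    stream.push_back(byte'(s[i]));
endfunction

function automatic void pack_string_null_terminated(string s, ref byte unsigned stream[$]);
  for (int i = 0; i < s.len(); i++)
    stream.push_back(byte'(s[i]));
  stream.push_back(8'h00);            // {string, '\0'}: terminator last (the Accellera choice)
endfunction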
-
Thanks for the spirited comments @Tymonx! I'll try to respond to all of your messages in one comment; apologies in advance if it's overly long.

To the concerns re: the packing of the m_pack_iter and m_unpack_iter into the byte stream: that is a consequence of how the 1800.2 LRM defines the packer's functionality and compatibility, specifically sections 16.5.3.1 (State assignment) and 16.5.3.2 (State retrieval), which effectively allow one to dump the packer's state and safely load it into another packer, regardless of what operations have been performed, and in what order, on the original packer. In this way, the state assignment/retrieval methods can be compared to SystemVerilog's process::set/get_randstate methods: they contain enough information to put you in _exactly_ the same state (a short sketch follows this post). As an additional aside, it's also important to acknowledge that while uvm_object does provide a pack/do_pack/do_unpack interface, there are zero restrictions on where a packer can actually be used. Users can create/use packers anywhere in their code, not just in the context of a UVM object.

As to m_pack_iter and m_unpack_iter being magic numbers, you're absolutely right... but to that end, the entire bit stream dumped by the packer is a magic number. The 1800.2 standard intentionally leaves the formatting of the bitstream not just UVM-library dependent, but uvm_packer-implementation dependent. This allows for alternative implementations based on performance or other requirements. Additionally, as m_pack/unpack_iter are values of type int, they auto-initialize to 0.

For some background on "Why did 1800.2 add the state methods? Why the changes?": pre-1800.2, it was impossible to use a packer _without_ pushing it through a uvm_object of some kind first, unless you opened up the source code to see how it worked. FWIW: this is precisely what UVM-SystemC did. The 1800.2 standard reversed the polarity, saying that it's not important that everyone pack/unpack byte streams in exactly the same way; instead it's important that the standard allows developers to define their own bit-packing mechanisms, so long as they all present the same interface to the user. This is also why the various control knobs which altered string and array packing were removed. IOW: if library X wants to pack/unpack with UVM, they don't need to know how Accellera packs/unpacks, they just need to make "class X_packer extends uvm_packer" and follow the rules. So long as they do that, they're safe to use whatever bit-packing mechanism is best for their needs.

Your final concerns re: thread safety are valid, but are not constrained to uvm_packer. Unfortunately, the UVM library in general isn't uber-thread-safe, primarily because SystemVerilog isn't particularly thread safe in "zero time" (i.e. functions). The same basic singleton flaw surrounds all of the singletons in the library... the default printer, copier, report server... you name it, it's not thread safe. Unfortunately, SystemVerilog doesn't provide any mechanism for "atomic" access to a variable without consuming time. The 1800.2 standard and the Accellera reference implementation do their best to thread the needle between general functionality, performance, and strict thread safety (and generally bias in that order). That said, the best place to attack this may be inside the uvm_coreservice itself... an alternative "thread-safer" version could be created that constructed clones of the policies when get_* was called. You'd then have the option of performance vs. (relative) thread safety.

I hope that helps to shed some light on the questions,
Justin R.
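To make the set/get_randstate comparison concrete, here's a sketch of the retrieval/assignment round trip (assuming the 1800.2-style get_packed_bytes/set_packed_bytes and pack/unpack_field_int methods referenced above):

// Sketch: dump one packer's complete state and load it into another.
// Because the dumped stream carries the iterator metadata as well as the
// data, packer 'b' can continue exactly where packer 'a' left off.
import uvm_pkg::*;

module packer_state_demo;
  initial begin
    uvm_packer a = new("a");
    uvm_packer b = new("b");
    byte unsigned state[];

    a.pack_field_int(32'hCAFE_F00D, 32);
    a.get_packed_bytes(state); // retrieval: data plus metadata
    b.set_packed_bytes(state); // assignment: 'b' now mirrors 'a' exactly
    $display("unpacked = 0x%0h", b.unpack_field_int(32));
  end
endmodule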
-
It's difficult to provide more details without knowing what your environment looks like. Would you be able to provide a loose description of your environment (What sequencers and drivers exist)? Generally speaking, I'm talking about cloning the sequence (or sequence items) after the randomization has occurred, so that you have 2 independent threads operating in parallel and doing the same things on different DUTs.
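A rough sketch of what that looks like in code (my_seq, env.sqr_a, and env.sqr_b are placeholders for whatever sequence and sequencers your environment actually has; my_seq is assumed to register its fields so clone() produces a complete copy, and uvm_pkg plus the UVM macros are assumed to be visible):

// Sketch only: randomize once, then run identical stimulus on both DUTs in parallel.
task run_same_stimulus_on_both_duts();
  my_seq seq_a, seq_b;
  seq_a = my_seq::type_id::create("seq_a");
  if (!seq_a.randomize())
    `uvm_error("RAND", "randomize() failed")
  $cast(seq_b, seq_a.clone());  // independent handle, identical contents
  fork
    seq_a.start(env.sqr_a);     // drive DUT A
    seq_b.start(env.sqr_b);     // drive DUT B with the same stimulus
  join
endtask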