Everything posted by tudor.timi

  1. Seems like a tool limitation. Create a variable of type event and assign the result of the function call to it. Use this variable in the @(...) statement. Note: a better place for this type of question is https://stackoverflow.com/. Use the system-verilog tag when posting a question.
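A minimal sketch of the workaround (ev_of() is a made-up function that returns an event):

    event trig;

    initial begin
      trig = ev_of();  // store the function result in an event variable
      @(trig);         // wait on the variable, not on the function call
    end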
  2. Regarding point 4: If you want reusable abstractions, one of them is "register/memory accesses". Most hardware blocks use bus transactions to update/query special function registers or memory locations. This is also an abstraction that software/firmware engineers understand. You should look into that. There is so much variation in bus protocols that it's difficult to talk about a universal abstraction. It's also mostly pointless: when you're verifying bus-level aspects, you're interested in the details of that particular bus protocol.
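To illustrate, stimulus written against a UVM register model reads like firmware and stays independent of the underlying bus protocol (reg_model and the register names here are hypothetical):

    uvm_status_e   status;
    uvm_reg_data_t value;

    // the register layer turns these into bus transactions via an adapter
    reg_model.ctrl.write(status, 'h1);
    reg_model.status.read(status, value);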
  3. The blog post you quoted w.r.t. working with types is correct that "the correct thing to do from an OOP perspective is to create a virtual function", but not regarding the further points. In that case, where a protocol uses heterogeneous transaction types (i.e. different kinds have different properties), you're better off using the visitor pattern. The transactions would have a virtual accept(...) function.
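A minimal sketch of the pattern, with made-up transaction kinds:

    typedef class read_transaction;
    typedef class write_transaction;

    // one visit method per concrete transaction kind
    virtual class transaction_visitor;
      pure virtual function void visit_read(read_transaction tr);
      pure virtual function void visit_write(write_transaction tr);
    endclass

    virtual class transaction;
      pure virtual function void accept(transaction_visitor visitor);
    endclass

    class read_transaction extends transaction;
      virtual function void accept(transaction_visitor visitor);
        visitor.visit_read(this);  // double dispatch selects the read operation
      endfunction
    endclass

    class write_transaction extends transaction;
      virtual function void accept(transaction_visitor visitor);
        visitor.visit_write(this);
      endfunction
    endclass

A driver or monitor would then implement transaction_visitor and get the right operation called for each kind, without any casting.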
  4. Regarding point number 3, I don't see why the coupling between transaction, driver and monitor is a bad thing. If you treat transactions as mere data classes, the behavior based on this data will have to be implemented in a different class. Should a transaction know how to drive itself and how to monitor itself? Should it also know how to cover itself? What if you have to add another operation, like tracing itself in a waveform viewer: do you add that to the transaction class too? Doing so violates the single responsibility principle.
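A sketch of the split I mean, with made-up class names:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    typedef enum { READ, WRITE } direction_e;

    // the transaction is a plain data class: only fields, no behavior
    class bus_transaction extends uvm_sequence_item;
      rand direction_e dir;
      rand bit [31:0]  addr;
      rand bit [31:0]  data;

      `uvm_object_utils(bus_transaction)

      function new(string name = "bus_transaction");
        super.new(name);
      endfunction
    endclass

    // driving lives in the driver; covering would live in a coverage
    // collector, tracing in a recorder, each with a single responsibility
    class bus_driver extends uvm_driver #(bus_transaction);
      `uvm_component_utils(bus_driver)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      task run_phase(uvm_phase phase);
        forever begin
          seq_item_port.get_next_item(req);
          // drive the pins based on req.dir/req.addr/req.data here
          seq_item_port.item_done();
        end
      endtask
    endclass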
  5. Regarding point number 1: Transactions aren't supposed to model traditional classes (not sure what the correct term for such classes is), which contain behavior (i.e. methods) and make use of polymorphism. Transactions are data classes, where you bundle information together to pass around, similar to plain old structs. Contrast the following examples:

    // Bad design: using tag classes, where a "tag" field
    // controls the behavior of methods, is a code smell
    class service;
      direction_e dir;

      function void do_stuff();
        if (dir == READ)
          do_read();
        else
          do_write();
      endfunction
    endclass

    // Better design: have two classes
    interface class service;
      pure virtual function void do_stuff();
    endclass

    class read_service implements service;
      virtual function void do_stuff();
        // do read stuff
      endfunction
    endclass

    class write_service implements service;
      // ...
    endclass

In the case above, it makes sense to create different classes for handling reads and writes, because you have a common abstraction (doing stuff), which comes in two different flavors. How would you handle processing different kinds of transactions in a driver (for example) if you had different classes for read and for write? You'd need to cast, which is very frowned upon (at least in the software world).

My point about transactions being data classes isn't strictly true w.r.t. how they are currently used in the industry. Transactions are also used for randomization, which is a polymorphic operation. Even here, though, assuming you want to generate a list of transactions where some of them are reads and some of them are writes, it will be impossible to do this in a single step if you build up your class hierarchy in such a way that you have a 'read_transaction' class and a 'write_transaction' class. This is because you can't choose an object's type (from the point of view of the compiler) via randomization.

Finally, why is 'direction' the field you choose to specialize on? Assuming you also had another field in your transaction class called 'sec_mode', which could be either 'SECURE' or 'NONSECURE', would you be inclined to say that you need to create a 'secure_transaction' and a 'non_secure_transaction' because they are different things? Because you also chose to specialize based on direction, would you have 'secure_read_transaction', 'secure_write_transaction', 'nonsecure_read_transaction' and 'nonsecure_write_transaction'? What would happen if you added another field called 'privilege_mode', which could be 'PRIVILEGED' or 'UNPRIVILEGED'?
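To make the randomization point concrete, here's a minimal sketch (the fields are hypothetical): with direction as a rand field, a single randomize() call can produce a mixed stream of reads and writes, which a read_transaction/write_transaction hierarchy can't do in one step.

    typedef enum { READ, WRITE } direction_e;

    class transaction;
      rand direction_e dir;
      rand bit [31:0]  addr;
      rand bit [31:0]  data;
    endclass

    initial begin
      transaction trans_list[10];
      foreach (trans_list[i]) begin
        trans_list[i] = new();
        // 'dir' is just data, so the solver is free to pick READ or WRITE
        void'(trans_list[i].randomize());
      end
    end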
  6. Regarding point number 2: Having both a write_data and a read_data field in the transaction is bad design. A single field called data would be sufficient; it would contain the data being transmitted in that transaction, whether it is a read or a write (i.e. regardless of which direction that data flows). The direction field tells you whether you're dealing with read data or with write data. Having both fields makes for a pretty difficult-to-use API if you want to do things irrespective of the direction:

    if (trans.direction == READ)
      do_stuff(trans.read_data);
    else
      do_stuff(trans.write_data);

You'll find your code repeating these conditional statements all over. Contrast this to the case where you only have data:

    do_stuff(trans.data);
  7. You're using 'var' as a variable name, but this is an SV keyword. Try naming your variable something different:

    fork
      automatic int idx = i;  // 'var' is reserved; use 'idx' instead
      `uvm_do_on(eseq_inst[idx], p_sequencer.master_sequencer[idx])
    join_none
  8. The philosophy behind nested classes is that they have access to private members of the enclosing class. If you just want scoping, you're better off using packages (though it's not possible to define a nice hierarchical structure of packages).
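A sketch of the access that nesting buys you (names made up):

    class outer;
      local int secret;

      // the nested class can reference 'local' members of the enclosing class
      class peeker;
        function int peek(outer o);
          return o.secret;  // legal only because peeker is nested inside outer
        endfunction
      endclass
    endclass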
  9. Are you trying to compile it on Windows? I don't think QuestaSim supports DPI under Windows (or at least not easily). At the same time, you don't really need to compile UVM itself when running QuestaSim, because the simulator comes packaged with the library and can reference that.

    * You can change the Makefile to not compile UVM anymore, only the testbench code for the example.
    * You can also disable DPI by adding the UVM_NO_DPI define (here you might also need to remove the lines from the Makefile that try to compile the C code).
    * Finally, what's missing there is the path to 'vpi_user.h'. You can add '-I /path/to/vpi/user/h/inside/questa/installation/' to the GCC call to tell it explicitly where it can find the file.
  10. The second argument for new(...) is the number of memory locations. You should call:

    new(..., 2048, 64, ...);
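Assuming the memory in question is a uvm_mem (the class name below is made up), the constructor takes the number of locations first and the width in bits second:

    class my_mem extends uvm_mem;
      function new(string name = "my_mem");
        // uvm_mem::new(name, size, n_bits, access): 2048 locations, 64 bits wide
        super.new(name, 2048, 64, "RW");
      endfunction
    endclass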
  11. It's not clear in this case what 'valid' means. I don't understand why, in your case, when the model generates '0100', the DUT would respond with '0101' after a '0101'.
  12. The "env.spi_m[0].reg2spi_adapter.*" context won't work there, because the adapter doesn't know under which env it was instantiated. You can check this by calling get_full_name() from the adapter; that's what you need to pass as the context.
  13. When using RAL like this, items aren't created by your agents, but by the adapters interfacing with those agents. If you set up the contexts properly when creating items inside your adapters, it's going to be possible to do instance-based overrides:

    class spi_adapter extends uvm_reg_adapter;
      function uvm_sequence_item reg2bus(...);
        // notice the 'get_full_name()' here
        spi_item bus_item = spi_item::type_id::create("bus_item", null, get_full_name());
        // ...
      endfunction
    endclass

The extra argument to create(...) sets the context for items created under the adapter as being "<adapter_name>.*". In your environment code, you can set instance overrides like this:

    class some_tb_env;
      spi_adapter adapter0;
      spi_adapter adapter1;

      function void end_of_elaboration_phase(...);
        spi_mem_tr::type_id::set_inst_override(spi_mem_tr_0::get_type(), "adapter0.*");
        spi_mem_tr::type_id::set_inst_override(spi_mem_tr_1::get_type(), "adapter1.*");
      endfunction
    endclass

Your adapters need to have different names, otherwise you won't be able to differentiate between items created by one and items created by the other.

Note: Don't confuse the context passed into create(...) with instance paths of uvm_components. The two are closely related, but aren't the same thing.
  14. Your links are broken (there are some extra characters after the '4Tyw' part in the URL). The code is OK and Riviera-Pro does support it.
  15. It's not sufficient that all the blocks work properly, because it might be the case that they aren't properly connected to each other: outputs from some block are left hanging and the corresponding inputs are tied off to fixed values. Cascaded block-level checks can't really find this if your observation points for each block-level environment are its corresponding design block. Example: A can start read or write transactions, but the direction signal doesn't get passed to B, where it's tied to read. The A or B env checks won't fail, but the whole system is buggy.
  16. Correction to picture 1: it does verify block-level functionality, but you get poor visibility in case something doesn't work, it's available late, etc. (all the disadvantages of not splitting up design/verification into blocks). For the whole system to be working properly, it's necessary (but not sufficient) for the blocks to work properly.

If you have a system that is a simple cascade like this, the only thing you'd want to check is that everything is stitched together properly (e.g. outputs from A get routed properly to B's inputs and so on). You could do this via a formal app for connectivity. You could also do it in simulation by having your agents tap the other side of each connection. What I mean by this is that, for example, your B interface agent for the A env gets connected to the B block's signals and not to the ones in the A block. This way you implicitly check that stuff coming out of A reaches B.

If you want to have an end-to-end check like this, you'd need to build the chain yourself. This shouldn't be a big problem, since your "models" (or predictors or whatever you call them) should have some kind of predict(...) function that only returns the expected output transaction given some input transaction. This means you only need to do:

    class e2e_scoreboard;
      // ...
      virtual function void write_input(input_trans_type input_trans);
        expected_trans_fifo.try_put(
            c_model.predict(b_model.predict(a_model.predict(input_trans))));
      endfunction

      // push output transaction to actual FIFO
      // pop from both FIFOs and compare
    endclass
  17. There's a 'recording_detail' configuration property that all UVM components have. I'd use this to enable/disable recording, since you can configure it via the command line.
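A sketch of both ways to configure it (the wildcard scope is illustrative):

    // from the command line:
    //   +uvm_set_config_int=*,recording_detail,1
    //
    // or from a test:
    class my_test extends uvm_test;
      `uvm_component_utils(my_test)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        // components pick this setting up during their build_phase
        uvm_config_db #(int)::set(this, "*", "recording_detail", UVM_FULL);
      endfunction
    endclass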
  18. I'm being overly pedantic here, but p_INT: sc_in<bool> doesn't really seem all that TLM to me. TLM would be stuff like sockets and ports. You seem to have two RTL-ish views, one in VHDL and one in SystemC.
  19. The </ipxact:generatorChain> closing tag is missing at the end of the file.
  20. There is a comment in the sample design:

    <!-- Export Master interface -- will be used for TLM to RTL conversion -->

According to the XML specification, it's illegal to have '--' inside comments (except as part of the closing '-->'), which causes parsing to fail.
  21. There is a TODO comment in the example component:

    <!-- TODO: MISSING definition of resetType in document -->

I'm not sure what this refers to, since there is a definition for 'resetType' in the standard and there is also an example resetType defined.
  22. The Verilog file set of the component example lists the same file (component.v) twice:

    <ipxact:name>VerilogFiles</ipxact:name>
    <!-- LINK: file: see 6.15.2, file -->
    <ipxact:file>
      <ipxact:name>../src/component.v</ipxact:name>
      <ipxact:fileType>verilogSource</ipxact:fileType>
      <ipxact:isStructural>true</ipxact:isStructural>
    </ipxact:file>
    <ipxact:file>
      <ipxact:name>../src/component.v</ipxact:name>
      <ipxact:fileType>verilogSource</ipxact:fileType>
    </ipxact:file>
    </ipxact:fileSet>
  23. B12 lists http://www.accellera.org/tech/refs/toolnames as the source for tool names compatible with the envIdentifier field. The URL doesn't exist.
  24. The second remap state is called 'Nornmal', but I think the intention was to have it be 'Normal'.