Search the Community

Showing results for tags 'transaction'.

Found 4 results

  1. Hello all,

     Today I ended up in a situation where the transaction boxes in Cadence SimVision were not at the exact time where they were supposed to be. This resulted in a drift of the boxes represented in the waves. On looking closer, the box begin times occurred only at 1 ns resolution, whereas my agent package's precision and even the default simulation timescale were 1ns/1ps. So I expected the boxes' begin times to be possible at 1 ps resolution (not 1 ns). Here are the prints from the uvm_info debug statements in my agent monitor:

       UVM_INFO @2832.523ns ch0_TX_QEC_OUT: after end_tr of micro_tr :DEBUG_TR
       UVM_INFO @2832.523ns ch0_TX_QEC_OUT: before begin_tr of micro_tr :DEBUG_TR
       UVM_INFO @2832.523ns ch0_TX_QEC_OUT: after begin_tr of micro_tr :DEBUG_TR
       UVM_INFO @2833.540ns ch0_TX_QEC_OUT: after wait for slave_cb :DEBUG_TR
       UVM_INFO @2833.540ns ch0_TX_QEC_OUT: before write of micro_tr :DEBUG_TR
       UVM_INFO @2833.540ns ch0_TX_QEC_OUT: before end_tr of micro_tr :DEBUG_TR
       UVM_INFO @2833.540ns ch0_TX_QEC_OUT: after end_tr of micro_tr :DEBUG_TR
       UVM_INFO @2833.540ns ch0_TX_QEC_OUT: before begin_tr of micro_tr :DEBUG_TR

     The attached figure shows how it looks in the waves. When I then called tr_micro.print(), I saw that the begin_time and end_time variables in there had lost the picosecond precision! So I started grepping the UVM source code to see how begin_time was assigned, and I found this:

       function integer uvm_transaction::m_begin_tr (time begin_time=0,
                                                     integer parent_handle=0);
         time tmp_time = (begin_time == 0) ? $realtime : begin_time;
         uvm_recorder parent_recorder;

     I believe the bug is wherever in UVM a $realtime value is assigned to a variable of type time. If the time type is refactored to realtime, this issue will be fixed. With a clock period of 1.018 ns, I expected each transaction box to be 1.018 ns wide.

     But with this loss of precision in begin_time and end_time, the transaction recording happens incorrectly and shows a grossly wrong transaction representation in the waves. This is highlighted by the fact that the clock period (1.018 ns) is close to the timescale step of 1 ns. Here is a minimal SystemVerilog example that shows the information loss when a $realtime value is assigned to a time variable:

       `timescale 1ns/1ps
       module top;
         // In uvm_transaction.svh (UVM 1.2), we have:
         //
         //   function integer uvm_transaction::m_begin_tr (time begin_time=0,
         //                                                 integer parent_handle=0);
         //     time tmp_time = (begin_time == 0) ? $realtime : begin_time;
         //     ...
         //
         // This example shows how, when $realtime is assigned to tmp_time of
         // type time, all the sub-unit precision is lost. So the above is as
         // good as taking the time from $time instead of $realtime.

         function void print_now();
           realtime nowr;
           time now;
           begin
             nowr = $realtime;
             now  = $realtime; // "now" loses the 1ps resolution compared to "nowr"
             $display("time: %t, realtime: %t", now, nowr);
           end
         endfunction : print_now

         initial begin
           $timeformat(-9, 3, " ns", 11); // units, precision, suffix_string, minimum_field_width
           print_now(); // -> time: 0.000 ns, realtime: 0.000 ns
           #1.1;
           print_now(); // -> time: 1.000 ns, realtime: 1.100 ns
           #1.23;
           print_now(); // -> time: 2.000 ns, realtime: 2.330 ns
           #1.456;
           print_now(); // -> time: 4.000 ns, realtime: 3.786 ns
           $finish;
         end
       endmodule : top
  2. Hi everyone,

     For a processor model, I need to be able to reset or kill a transaction sent across an interface and stored in a payload event queue. How can I do that? I initiate a transaction like this:

       tlm::tlm_generic_payload* trans       = new tlm::tlm_generic_payload;
       tlm::tlm_phase*           trans_phase = new tlm::tlm_phase;
       sc_time*                  delay       = new sc_time;
       tlm::tlm_sync_enum*       transStatus = new tlm::tlm_sync_enum;

       *trans_phase = tlm::BEGIN_REQ;
       *delay       = SC_ZERO_TIME; // or any delay

       trans->set_command(tlm::TLM_WRITE_COMMAND);
       trans->set_dmi_allowed(false);
       trans->set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

       *transStatus = InitSocket.nb_transport_fw(*trans, *trans_phase, *delay);

     How could I, afterwards and before the specified delay has elapsed, kill this transaction or remove it from the target's payload event queue? Can I keep a copy of the 'trans' pointer to reset the transaction content if needed?

     Thank you for your help!

     Regards,
     J-B
  3. Hi All,

     I am wondering whether there is an add-on tool or post-processing script to count the occurrences of a specific transaction from an existing waveform file, something like all reads from a given address range or all the bursts from a specific peripheral.

     Thanks in advance for your help.

     Cheers,
     Alfonso
  4. We are using TLM to pass transactions from SystemVerilog to SystemC. I have two cases where I am stuck. Actually, it is the same case, but I have two angles to my question.

     1) Is it possible to still use a TLM setup, but without a transaction type? (I realize that this is contradictory to the acronym.) A C model has a debug function which takes no input arguments. So, when the SV testbench runs into a problem, it can call this function in the SystemC/C model. As all of our connections are now sc_port/sc_export with TLM, I'd like to stick with that flow if possible, rather than adding DPIs/VPIs/(PLIs) or any other mechanism to communicate between languages. However, since the function has no input arguments, I don't need a transaction type. So, is there a way to do a TLM call without a transaction type? (I suppose I could just use another transaction type and ignore the data.)

     2) Imagine the above C-model input function now takes a single integer as its input. So I do have a transaction type, but a very simple one. It seems like overkill, but do I still need to define matching .h and .svh (extending uvm_sequence_item) transaction types and the related do_pack, do_unpack, etc. routines? I suspect that I must, if I want to use TLM. (Given that the answer to this question must be yes, does anyone out there just use a generic grab-bag transaction type for cases like this?)

     My thought of passing a transaction which is just an int:

       // in the SV testbench:
       uvm_blocking_put_port #(int) sb_debug_call1_to_cmodel;

       // in the SC c-model:
       public tlm::tlm_blocking_put_if< sc_int<32> > // or something like that

     Any thoughts? I know I just need to refresh myself on DPIs, but answers to the above question are welcome.