
Leaderboard


Popular Content

Showing content with the highest reputation since 12/20/2018 in all areas

  1. 2 points
    Thanks! I can reproduce the behavior and have verified that removing the dynamic sensitivity in sc_thread_process::kill_process fixes the issue:

        void sc_thread_process::kill_process( sc_descendant_inclusion_info descendants )
        {
            // ...
            if ( sc_is_running() && m_has_stack )
            {
                m_throw_status = THROW_KILL;
                m_wait_cycle_n = 0;
                remove_dynamic_events(); // <-- add this line to avoid the exception
                simcontext()->preempt_with(this);
            }
            // ...
        }

    I'm not sure if it is necessary to do the same for the static sensitivity. At least I haven't come up with a similar scenario where the reported error is actually incorrect.
  2. 1 point
    Simple answer: there really is no straightforward way to do this. Of course, if you wanted to get into the internals of the SystemC implementation, it is possible; however, that completely violates the standard. As to why this is not allowed: performance. The desire for efficiency outweighs support for corner cases such as yours. How can you work around this? There are several possible approaches, but here is the one I would use: create a new event class that adds tracking and an API to access it. This is easy to do if you know your C++ and SystemC well enough. Trivial outline of the concept:

        static const sc_core::sc_process_handle NONEXISTENT;

        struct Tracked_Event
        {
            void notify( void );                   // sets m_process_handle and notifies m_event
            void notify( sc_core::sc_time delay ); // same, except notifies after delay
            sc_core::sc_process_handle get_trigger( void ) { return m_process_handle; }
            void clear( void ) { m_process_handle = NONEXISTENT; }
        private:
            sc_core::sc_event          m_event;
            sc_core::sc_process_handle m_process_handle;
        };

        struct Event_array
        {
            Event_array( int depth );
            void wait( void ); // clears all m_process_handles on entry, then waits for any m_event
            Tracked_Event& operator[]( int );
            std::vector<sc_core::sc_process_handle> get_trigger_list( void );
        private:
            std::vector<Tracked_Event> m_event_array;
            sc_core::uint64            m_delta_count;
        };
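    As a rough illustration of the outline above, the notify methods might be implemented like this (a minimal sketch only; recording the caller via sc_get_current_process_handle() is my assumption about the intended tracking semantics):

        void Tracked_Event::notify( void )
        {
            // record which process triggered the event, then notify immediately
            m_process_handle = sc_core::sc_get_current_process_handle();
            m_event.notify();
        }

        void Tracked_Event::notify( sc_core::sc_time delay )
        {
            // record the triggering process now; the event fires after the delay
            m_process_handle = sc_core::sc_get_current_process_handle();
            m_event.notify( delay );
        }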
  3. 1 point
    Eyck

    Checking ports for power estimations

    I'm not sure if I get your first question right. Essentially this is a C++ question. What you could do is type-erase your (POD) data, treat it as a byte array, and count the changed bits using XOR (be careful to use only plain data, no classes). Something like:

        struct my_data { int x; long y; };

        my_data old_val, new_val;
        uint8_t* old_data = reinterpret_cast<uint8_t*>(&old_val);
        uint8_t* new_data = reinterpret_cast<uint8_t*>(&new_val);
        unsigned toggles = 0;
        for ( size_t i = 0; i < sizeof(my_data); ++i ) {
            uint8_t diff = *(old_data + i) ^ *(new_data + i);
            uint8_t mask = 1;
            for ( size_t j = 0; j < 8; ++j, mask <<= 1 )
                if ( mask & diff ) ++toggles;
        }

    Regarding your second question: you transport the data via a signal which implements sc_signal_in_if. This interface has a value_changed_event() getter which returns an event that fires when the value of the signal changes. Just wait for this event.
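    For the second part, a minimal sketch of a monitor thread waiting on that event (module and port names are made up for illustration):

        SC_MODULE(toggle_monitor) {
            sc_in<int> data_in; // hypothetical monitored port

            void monitor_thread() {
                int prev = data_in.read();
                while (true) {
                    wait(data_in->value_changed_event()); // provided by sc_signal_in_if
                    int curr = data_in.read();
                    // XOR prev against curr and count set bits here, as shown above
                    prev = curr;
                }
            }

            SC_CTOR(toggle_monitor) {
                SC_THREAD(monitor_thread);
            }
        };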
  4. 1 point
    I think there are a few more options, but first you need to be acutely aware of the issue of OS thread safety. The SystemC kernel is not generally thread-safe unless you use async_request_update() and a queue. I've done this several times. That said, you have several options:

    - Place the software inside a SystemC SC_THREAD and provide convenience methods to initiate TLM transport calls. This is the simplest SystemC approach, but it does not allow for modeling the actual CPU planned to be used, and hence timing is not very accurate. This has the fastest performance from the SystemC point of view.
    - Place the software on a development board that has the target CPU and some type of communications link to the machine where you will run SystemC. I generally use TCP/IP sockets. Replace the driver with a special socket call passing packets to a remote machine and receiving back the response. On the SystemC side, create an OS thread to receive the socket data and inject the message into SystemC via the async_request_update call and an unbounded STL queue or FIFO of some type (a TLM 1.0 FIFO might do; see the sketch after this list). A receiving thread can then pass the data into the SystemC simulation and return the result. This has somewhat more overhead, but allows for a more accurate target CPU.
    - Obtain an ISS (Instruction Set Simulator) for the target processor and interface it to SystemC. This varies in complexity. You might look at existing models or talk with vendors such as Cadence or Mentor, or perhaps your CPU vendor (e.g. Arm has some very nice models). As @Eyck suggested: QEMU, DBT-RISE-RISCV. Also Gem5 (Google it).
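    A minimal sketch of the async_request_update() pattern mentioned above (the class and member names are my own; it assumes a primitive channel bridging an external OS thread into the kernel):

        #include <systemc>
        #include <deque>
        #include <mutex>

        // Bridges messages from a non-SystemC OS thread into the simulation.
        struct async_inbox : sc_core::sc_prim_channel {
            explicit async_inbox(const char* nm) : sc_core::sc_prim_channel(nm) {}

            // Called from the external OS thread (e.g. the socket receiver).
            void push(int msg) {
                { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push_back(msg); }
                async_request_update(); // the only thread-safe kernel entry point
            }

            sc_core::sc_event& data_arrived() { return m_event; }

            // Called from a SystemC process after waiting on data_arrived().
            bool pop(int& msg) {
                std::lock_guard<std::mutex> lock(m_mutex);
                if (m_queue.empty()) return false;
                msg = m_queue.front(); m_queue.pop_front();
                return true;
            }

        private:
            void update() override { m_event.notify(sc_core::SC_ZERO_TIME); } // kernel context
            std::mutex        m_mutex;
            std::deque<int>   m_queue;
            sc_core::sc_event m_event;
        };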
  5. 1 point
    What you can do is build your driver software as a shared library with a main function. In an SC_THREAD you just load the library and execute the main function. Along with this you have to implement a few utility functions (read, write, wait) which interact with the SystemC kernel or your DUT and are used by the main function (and the functions called from there). This is called host-based or host-compiled simulation (you may check the search engine of your choice for it). With some infrastructure it is even possible to mimic interrupts. Another option is to use some instruction set simulator (e.g. QEMU, DBT-RISE-RISCV, or some commercial alternatives) and do your driver development using a virtual prototype... Best regards
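    A minimal sketch of the load-and-run step on a POSIX host (the library name and entry-point signature are assumptions for illustration):

        #include <systemc>
        #include <dlfcn.h> // POSIX dynamic loading; link with -ldl

        SC_MODULE(sw_host) {
            void sw_thread() {
                // load the driver built as a shared library and run its entry point
                void* handle = dlopen("./libdriver.so", RTLD_NOW);
                sc_assert(handle != nullptr);
                auto driver_main = reinterpret_cast<int(*)()>(dlsym(handle, "driver_main"));
                sc_assert(driver_main != nullptr);
                driver_main(); // calls back into read/write/wait shims that use sc_core::wait()
                dlclose(handle);
            }
            SC_CTOR(sw_host) { SC_THREAD(sw_thread); }
        };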
  6. 1 point
    In reference to http://forums.accellera.org/topic/6218-wait-is-not-allowed-inside-run_phase/ I created a small demo which exhibits the same behaviour:

    - create a process which waits on an event list which resides on its stack (and waits there forever)
    - create a second process which, shortly after starting the simulation, will kill() the other process.

    This creates the following fatal message:

    This begs the question whether this is intended behaviour or some kind of bug: I guess in principle it should be possible to kill() a thread which is waiting on an event list. But if the event list object resides on its stack, the above message is printed. If the object is e.g. a member of the module, the simulation runs without error. Demo source:

        #include <systemc.h>

        SC_MODULE(top) {
        public:
            sc_event e;

            void thread_1() {
                wait(1, SC_NS); // be sure that thread_2 is already waiting
                t2_hdl.kill();
            }

            void thread_2() {
                sc_event_or_list terminated_events;
                terminated_events |= e;
                sc_core::wait(terminated_events); // will never finish
            }

            SC_CTOR(top) {
                SC_THREAD(thread_1);
                SC_THREAD(thread_2);
                t2_hdl = sc_get_current_process_handle();
            }

            sc_process_handle t2_hdl;
        };

        int sc_main(int argc, char **argv) {
            top i_top("top");
            sc_start();
            return 0;
        }
  7. 1 point
    tjd

    tlm_fifo nb_peek

    Just generated one. Attached is a tarball that contains a producer, a consumer, and a main that connects the two. Also included is a simple bash script with my compilation commands and the log generated from running the script, to which I've also appended the G++ version. tlm_ex.tar
  8. 1 point
    A minimal example of the actual underlying issue is posted here: http://forums.accellera.org/topic/6232-killing-a-process-with-an-included-sc_event_andor_list/
  9. 1 point
    kirloy369

    Verifying Verification Components

    Maybe uvm_report_catcher is what you need: http://www.vlsiencyclopedia.com/2016/10/build-smart-tests-using-uvm-report-catcher.html
  10. 1 point
    Hi Kevin, you are missing the raising and dropping of an objection in the run_phase method (which would be required in SystemVerilog, too) to prevent the test sequence from finishing early:

        void run_phase(uvm::uvm_phase& phase)
        {
            phase.raise_objection(this);
            [...]
            phase.drop_objection(this);
        }

    By adding this I get the desired result:

        UVM_INFO @ 0 s: reporter [RNTST] Running test ...
        run: before fork/join
        Current time is 0 s
        =========== fun1========
        Current time is 0 s
        =========== fun2========
        Current time is 20 ns
        run: after fork/join
        Current time is 60 ns
        UVM_INFO ../../../../uvm-systemc-1.0-beta2/src/uvmsc/report/uvm_default_report_server.cpp(666) @ 60 ns: reporter [UVM/REPORT/SERVER]
        --- UVM Report Summary ---
        ** Report counts by severity
        UVM_INFO    : 1
        UVM_WARNING : 0
        UVM_ERROR   : 0
        UVM_FATAL   : 0
        ** Report counts by id
        [RNTST] 1
        UVM_INFO @ 60 ns: reporter [FINISH] UVM-SystemC phasing completed; simulation finished
  11. 1 point
    Hi Bhargav, in the 1685-2014 version, component wire ports can be multidimensional, i.e., a port can have multiple arrays and multiple vectors. For this reason, a portReference must support multiple dimensions as well. This is done using the partSelect element. The partSelect element can contain a single range with left and right to support a single vector, as before. Or it can contain indices plus a range. The indices indicate which dimension the range applies to. If the indices contain more than one index, then the first indices apply to the arrays and the last indices apply to the vectors. Best regards, Erwin
  12. 1 point
    BEGIN_REQ/END_REQ and BEGIN_RESP/END_RESP mark time points in the protocol. So in the standard implementation you have 2 phases: request and response. Depending on the type of access, various data is transferred: for a read, REQ usually carries the address while RESP carries the data and status, while during a write, REQ carries address and data while RESP just carries the status. It is up to the initiator and target to care for consistency of the data in the payload; in most implementations I've seen, the data is sampled/set at the BEGIN_* time points. Best regards
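    As a rough sketch of where those time points show up in code (assuming the standard tlm_base_protocol_types phases; the surrounding class is made up):

        #include <tlm>

        // Minimal target-side sketch: sample a write at the BEGIN_REQ time point.
        struct target_model {
            tlm::tlm_sync_enum nb_transport_fw(tlm::tlm_generic_payload& gp,
                                               tlm::tlm_phase& phase,
                                               sc_core::sc_time& delay)
            {
                (void)delay;
                if (phase == tlm::BEGIN_REQ) {
                    if (gp.get_command() == tlm::TLM_WRITE_COMMAND) {
                        // address and data are valid now; sample them here
                        uint64_t       addr = gp.get_address();
                        unsigned char* data = gp.get_data_ptr();
                        (void)addr; (void)data;
                    }
                    phase = tlm::END_REQ;    // acknowledge the request phase
                    return tlm::TLM_UPDATED; // the phase argument was advanced
                }
                return tlm::TLM_ACCEPTED;
            }
        };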
  13. 1 point
    SystemC has no special concept similar to SystemVerilog interfaces. Instead you can use regular SystemC modules to do the work of SystemVerilog interfaces. SystemC has its own notion of interface that is similar to the interface notion in object-oriented programming: an interface is basically a set of pure virtual functions. So, for example, you can define a simple memory interface as a set of mem_read and mem_write functions. SystemC requires that interfaces inherit virtually from sc_interface:

        struct memory_if : virtual sc_interface {
            virtual uint32_t mem_read( uint32_t address ) = 0;               // reads data from memory at given address
            virtual void     mem_write( uint32_t address, uint32_t data ) = 0; // writes data to memory at given address
        };

    Next you can define a module that implements and exports this interface. For example:

        struct memory_module : sc_module, memory_if {
            sc_export<memory_if> mem_export{"mem_export"};

            SC_CTOR(memory_module) {
                mem_export.bind(*this);
            }

        private:
            std::array<uint32_t, 1024> mem;

            uint32_t mem_read(uint32_t address) override {
                return mem.at(address >> 2);
            }

            void mem_write(uint32_t address, uint32_t data) override {
                mem.at(address >> 2) = data;
            }
        };

    You can have other modules implementing memory_if. For example, you can write a module that translates mem_read and mem_write function calls into a cycle-accurate, signal-level memory protocol. And this is what SystemVerilog interfaces often do. Now, the closest analog to a SystemVerilog virtual interface is the SystemC sc_port. For example, you can define a test_module that has an sc_port<memory_if> and sends memory reads and writes using this port:

        struct test_module : sc_module {
            sc_port<memory_if> mem_port{"mem_port"};

            SC_CTOR(test_module) {
                SC_THREAD(test_thread);
            }

        private:
            void test_thread() {
                cout << " test_thread \n";
                const int TEST_SIZE = 10;
                for (int i = 0; i < TEST_SIZE; ++i) {
                    cout << hex << "Write mem[" << i * 4 << "] = " << i << "\n";
                    mem_port->mem_write(i * 4, i);
                }
                for (int i = 0; i < TEST_SIZE; ++i) {
                    cout << hex << "Read mem[" << i * 4 << "] == " << mem_port->mem_read(i * 4) << "\n";
                }
            }
        };

    To create a simulation you instantiate both memory_module and test_module and bind the port to the export:

        int sc_main(int argc, char **argv) {
            test_module   tmod{"tmod"};
            memory_module mmod{"mmod"};
            tmod.mem_port.bind(mmod.mem_export);
            sc_start();
            return 0;
        }
  14. 1 point
    A dynamic_cast to sc_module* only gives you access to the methods declared in sc_module (including any virtual methods overridden in A). fun() and function() are not part of sc_module, so they cannot be called through an sc_module*.
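    To illustrate the point (the module and function names follow the thread; treat this as a hypothetical reconstruction):

        SC_MODULE(A) {
            void fun() { /* ... */ } // not declared in sc_module
            SC_CTOR(A) {}
        };

        // elsewhere:
        sc_module* m = new A("a");
        // m->fun();                     // error: sc_module has no member 'fun'
        if (A* a = dynamic_cast<A*>(m))  // downcast to the concrete type instead
            a->fun();                    // OK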
  15. 1 point
    Read up on sc_spawn and sc_process_handle. Basically, you can do something like:

        // Example of fork-join any
        std::vector<sc_process_handle> process_handles;
        process_handles.push_back( sc_spawn( [&](){ /*...*/ } ) ); //< repeat as needed
        ...
        sc_event_or_list terminated_events;
        for( auto& ph : process_handles ) {
            terminated_events |= ph.terminated_event();
        }
        wait( terminated_events ); //< wait for any process to exit
        for( auto& ph : process_handles ) {
            ph.kill();
        }

        // Example of fork-join none
        (void)sc_spawn( [&](){ /*...*/ } ); //< repeat as needed
        ...
  16. 1 point
    To answer the question of a fifo with multiple writers and one reader, I would suggest either of two approaches:

    - Create a module with a vector of inputs (one per writer) and the sc_fifo output. A simple method process can then arbitrate and facilitate placing incoming data onto the output. This has the advantage of allowing a custom arbitration scheme.
    - Create an internal std::queue and limit access with a mutex (sc_mutex might work); see the sketch after this list. Since the queue doesn't have a size limitation, it will be up to you to manage maximum depth issues. The downside is that arbitration is managed by the mutex and may not be ideal.

    For a fifo with multiple readers, you probably need a manager to arbitrate requests, which is similar to the first approach above.
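    A minimal sketch of the second approach (the channel name and blocking behaviour are my own assumptions; both write() and read() must be called from SC_THREAD processes):

        #include <systemc>
        #include <queue>

        // Many writers, one reader; sc_mutex serializes writer access.
        template <typename T>
        struct mw_fifo : sc_core::sc_module {
            explicit mw_fifo(sc_core::sc_module_name nm) : sc_core::sc_module(nm) {}

            void write(const T& v) {          // called by any writer thread
                m_mutex.lock();               // arbitration is simply mutex order
                m_queue.push(v);
                m_mutex.unlock();
                m_data_written.notify(sc_core::SC_ZERO_TIME);
            }

            T read() {                        // single reader only
                while (m_queue.empty())
                    sc_core::wait(m_data_written);
                T v = m_queue.front();
                m_queue.pop();
                return v;
            }

        private:
            std::queue<T>     m_queue;        // unbounded: depth management is up to you
            sc_core::sc_mutex m_mutex;
            sc_core::sc_event m_data_written;
        };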
  17. 1 point
    In general, your design would not work because reading a real hardware FIFO takes time and the readers would get entangled. How would you manage that? Second, you are not using the correct methodology to connect up your FIFO, which is why you don't see the error. SystemC expects you to connect modules to channels using ports. Skipping the hierarchy and connecting consumers using pointers is incorrect. Instead, your consumer should replace your my_fifo pointer with a fifo port. You have two choices:

        sc_port< sc_fifo_in_if<int> > my_fifo_port;

    or

        sc_fifo_in<int> my_fifo_port;

    The syntax to access the fifo will be my_fifo_port->read(). Next, you will need to attempt to connect each consumer's port to the fifo during construction in your producer with:

        for( int i = 0; i < 4; ++i )
            consumer[i].my_fifo_port.bind( my_fifo );

    When you run, you will encounter an error during elaboration (construction) that you already have a connection, because fifo ports don't allow more than one binding.
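    For completeness, a minimal consumer using the port-based style described above (the names are illustrative):

        #include <systemc.h>

        SC_MODULE(consumer) {
            sc_fifo_in<int> my_fifo_port; // fifo read port instead of a raw pointer

            void run() {
                while (true) {
                    int v = my_fifo_port->read(); // blocks until data is available
                    // ... process v ...
                }
            }

            SC_CTOR(consumer) { SC_THREAD(run); }
        };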
  18. 1 point
    David Black

    simple socket

    I would suggest that those for-loops need a better bound and should be coded:

        sc_assert( objA->initiator_socket.size() >= objB->target_socket.size()
                   && objB->target_socket.size() > 0 );
        for( unsigned i = 0; i < objB->target_socket.size(); ++i ) {
            objA->initiator_socket[i].bind( objB->target_socket[i] );
        }

    Coding rule: NEVER use a literal constant when a reasonable alternative is possible. Even when "in a hurry", you will not be disappointed if you use this rule.
  19. 1 point
    Eyck

    simple socket

    Just looked further: there is a typo in case 1. It should read

        //bind
        for(int i = 0; i < 3; i++){
            objA->initiator_socket[i]->bind(*(objB->target_socket[i]));
        }

    as you use an array of pointers. Again, sc_vector eases your life:

        //Model A
        sc_core::sc_vector<tlm_utils::simple_initiator_socket_tagged<ModelA>> initiator_socket;
        ...
        // Model B
        sc_core::sc_vector<tlm_utils::simple_target_socket_tagged<ModelB>> target_socket;
        ...
        //bind
        for(int i = 0; i < 3; i++){
            objA->initiator_socket[i].bind(objB->target_socket[i]);
        }

    The same applies to case 2:

        //bind
        objA->initiator_socket1->bind(*(objB->target_socket2));
        objA->initiator_socket2->bind(*(objB->target_socket3));
        objA->initiator_socket3->bind(*(objB->target_socket1));

    Best regards
  20. 1 point
    Eyck

    TLM transaction tracing

    Maybe a little bit late, but there are socket implementations available which trace TLM transactions into an SCV database. They can be found at https://github.com/Minres/SystemC-Components/tree/master/incl/scv4tlm and are used e.g. at https://github.com/Minres/SystemC-Components/blob/5f7387ab7e3dfc2ff6a7cac6fbe834ed7ec8ae36/incl/sysc/tlmtarget.h which in turn serves as a building block in https://github.com/Minres/SystemC-Components-Test/tree/master/examples/simple_system The setup given by Kai is put into sysc::tracer where all the tracing setup (VCD & SCV) is implemented. Best regards -Eyck
  21. 1 point
    David Black

    SystemC temporal decoupling

    Read IEEE 1666-2011 sections 9 & 10, available for free download from http://www.accellera.org/downloads/ieee. Or watch a free video from https://www.doulos.com/knowhow/systemc/. Or take a class on SystemC & TLM from https://www.doulos.com/content/training/systemc_training.php. TLM 2.0 is a marketable skill and I get many requests for job references. You will need to become proficient at C++, SystemC fundamentals, and then TLM 2.0, in addition to having a good basic knowledge of hardware and software design.
  22. 1 point
    David Black

    approximately timed

    IEEE 1666-2011 section 10.2 states:

    IEEE 1666-2011 section 10.3.4 states:
  23. 1 point
    You can use a custom "creator" to initialize the elements of a vector with custom constructor parameters - here the inner vector. Something like this (assuming you have lambda support available):

        auto element_creator = [](const char* nm, size_t) // optional, depending on the "real" value type
        {
            return new sca_module(nm);
        };

        size_t inner_size = 42; // adjust for your needs, could also be a vector of sizes

        element.init( outer_size,
            [&](const char* nm, size_t)
            {
                return new sc_vector<sca_module>( nm, inner_size, element_creator );
            } );

    If you don't have lambdas in your environment, you need to put the functionality in a custom function, e.g.

        static sc_vector<sca_module>*
        element_vector_creator(size_t size, const char* name, size_t)
        {
            return new sc_vector<sca_module>(name, size);
        }

        // using sc_bind to pass in the size - placeholders needed for actual call
        element.init( outer_size,
            sc_bind(element_vector_creator, inner_size, sc_unnamed::_1, sc_unnamed::_2) );

    Hope that helps,
    Philipp
  24. 1 point
    Hi. This is because an unbound port cannot be read. A port forwards all read and write calls to the actual interface (signal) it is bound to. In your module constructor, you are still in the model set-up and elaboration phase. The port is not yet bound to any signal. Hence, you cannot read from it. Accessing ports should not be done before end-of-elaboration. Greetings, Ralph
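    A minimal sketch of moving the first port access out of the constructor and into the end-of-elaboration callback (the module and port names are made up):

        SC_MODULE(reader) {
            sc_in<bool> in_port;

            SC_CTOR(reader) {
                // in_port.read() here would fail: the port is not bound yet
            }

            void end_of_elaboration() override {
                // binding is complete at this point, so reading is safe
                bool v = in_port.read();
                (void)v;
            }
        };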
  25. 1 point
    David Long

    static and dynamic sensitivity

    Hi Amit,

    A thread process uses its static sensitivity whenever it calls wait() with no arguments. The static sensitivity is set before the simulation starts running, usually by using "sensitive" in its parent module's constructor. Dynamic sensitivity is where a process contains one or more calls to wait(a_time_value) or wait(a_named_event). It is dynamic because the conditions that cause a thread process to wake up change as each wait statement is executed while the simulation runs. Here is a very brief example (not tested; note that proc1 must be a thread, since wait() is not allowed in an SC_METHOD):

        SC_MODULE(mod) {
            sc_in<bool> clk;
            sc_event e;

            void proc1() {
                while(1) {
                    wait();         // static sensitivity: resumes on clk.pos()
                    e.notify();
                }
            }

            void proc2() {
                while(1) {
                    wait(e);        // wait for event (dynamic)
                    //do something
                    wait(1, SC_NS); // wait for time (dynamic)
                    //do something
                }
            }

            SC_CTOR(mod) {
                SC_THREAD(proc1);
                sensitive << clk.pos(); // static sensitivity
                SC_THREAD(proc2);       // no static sensitivity, runs during initialization until 1st wait reached
            }
        };

    You can find further details in section 4.2 of the SystemC LRM (IEEE 1666-2011).

    Regards,
    Dave