
Leaderboard


Popular Content

Showing content with the highest reputation since 08/23/2018 in all areas

  1. 2 points
    The problem is that when you integrate RTL IP into a Loosely-Timed VP that way, the whole simulator will have the performance of a cycle-accurate model: the clock generator is always on, so the Verilated model is triggered even when it is idle. So don't try to boot Linux with such a simulator. If your RTL IP supports power gating or clock gating, it is a good idea to disable clock generation while the RTL IP is turned off. That way you don't pay for what you don't use: you can boot your OS quickly and then enable the clock generator when you start to debug IP-specific driver code.
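    A minimal sketch of that idea (hedged: the module name, the enable/disable hooks, and the 5 ns half-period are invented for illustration, not taken from the post):

        SC_MODULE(gated_clock_gen) {
          sc_out<bool> clk;

          void gen() {
            while (true) {
              if (!enabled) wait(enable_ev);    // simulator stays idle while the IP is clock-gated
              clk.write(true);  wait(5, sc_core::SC_NS);
              clk.write(false); wait(5, sc_core::SC_NS);
            }
          }

          void enable()  { enabled = true; enable_ev.notify(); }  // call when IP-specific debugging starts
          void disable() { enabled = false; }                     // call while the IP is powered or clock gated

          SC_CTOR(gated_clock_gen) { SC_THREAD(gen); }

        private:
          bool enabled{ false };
          sc_core::sc_event enable_ev;
        };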
  2. 2 points
    Hello @kallooran, What version of the SystemC library are you using? This issue was fixed in the SystemC 2.3.2 release. You can find the latest release, SystemC 2.3.3, here: http://accellera.org/downloads/standards/systemc Hope it helps. Regards, Ameya Vikram Singh
  3. 2 points
    David Black

    Seeking Feedback on Datatypes

    Actually, it adds a lot of value. A std::array can be passed by reference in a function call, and the function can then determine the proper size of the array. This is much better than passing pointers, the C standard approach. You can also copy an array, which should be synthesizable, which reduces coding and greatly improves readability. It should also be possible to apply some of the <algorithm> header's functions to std::array. Also, you can have bounds checking for additional safety; although, that aspect is probably not synthesizable. Additionally, constexpr should be quite helpful for the synthesis aspect.
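    A small sketch of the size-deduction and copy points (plain C++; the names sum, src, and dst are just for illustration):

        #include <array>
        #include <numeric>
        #include <iostream>

        // The array size N is part of the type, so no separate length parameter is needed.
        template <typename T, std::size_t N>
        T sum(const std::array<T, N>& a) {
          return std::accumulate(a.begin(), a.end(), T{});  // standard algorithms work directly
        }

        int main() {
          std::array<int, 4> src{1, 2, 3, 4};
          std::array<int, 4> dst = src;          // arrays copy by value, unlike C arrays
          std::cout << sum(dst) << '\n';         // prints 10; N is deduced as 4
        }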
  4. 2 points
    Hi, I'm not an implementer of the reference simulator, but as far as I can judge, the re-throw is used to find a more specific type of exception (since sc_elab_and_sim() just uses a catch-all), and sc_handle_exception() converts it into an sc_report so it can be handled by the SystemC reporting system. Actually, I agree it would be better to handle it directly in sc_elab_and_sim(), but this would duplicate code. A side note regarding debugging: if you use gdb, there is a command 'catch throw' which stops execution right at the point where the (original) exception is thrown. This comes in pretty handy in such cases. Best regards
  5. 1 point
    You would need to write a wrapper class with debug aids built in. A sketch might be:

        struct debug_mutex : sc_core::sc_mutex {
          void lock() override {
            auto requester = sc_core::sc_get_current_process_handle();
            INFO( DEBUG, "Attempting lock of mutex " << reinterpret_cast<std::uintptr_t>(this)
                         << " from " << requester.name() << " at " << sc_core::sc_time_stamp() );
            this->sc_mutex::lock();
            locked = true;
            locker = requester.name();
            time = sc_core::sc_time_stamp();
            changed.notify();
          }
          void unlock() override {
            this->sc_mutex::unlock();
            time = sc_core::sc_time_stamp();
            locked = false;
            locker = "";
            changed.notify();
          }
          // Attributes
          bool locked{ false };
          sc_core::sc_event changed;
          sc_core::sc_time time;
          std::string locker;
        };

    I have not tested the above. NOTE: I have a macro INFO that issues an appropriate SC_REPORT_INFO_VERB with the syntax indicated above. Replace it with your own. Never use std::cout or std::cerr when coding SystemC.
  6. 1 point
    AmeyaVS

    What is SystemC library special in?

    Hello @Elvis Shera, It seems your SystemC library has been built with a different C++ standard flag. Can you post the output of the following commands?

        # Compiler version you are using
        g++ -v
        # Library properties:
        nm -C $SYSTEMC_HOME/lib-linux64/libsystemc.so | grep api_version

    Regards, Ameya Vikram Singh
  7. 1 point
    Eyck

    Read in customized structs

    Where did you check the values of p1 and p2? write() only schedules the value to be written; you will see the actual value in the next delta cycle. Best regards
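    A tiny illustration of that behavior (hedged sketch; the signal name s and the thread body are invented):

        sc_core::sc_signal<int> s;   // assume s currently holds 0

        void some_thread() {         // body of an SC_THREAD inside a module
          s.write(42);
          std::cout << s.read() << std::endl;   // still prints 0: write() only schedules the update
          wait(sc_core::SC_ZERO_TIME);          // advance one delta cycle
          std::cout << s.read() << std::endl;   // now prints 42
        }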
  8. 1 point
    Philipp A Hartmann

    tlm_socket_base_if

    Hi Guillaume, I agree that the new pure-virtual functions in tlm_base_(initiator/target)_socket are not compliant with IEEE 1666-2011. However, I'm curious what use case you have for using these classes directly instead of inheriting from tlm_(initiator/target)_socket, where the functions are implemented? Regarding the implementation at the base socket level, I suggest adding a typedef to the fw/bw interface classes and then using these typedefs in the socket base class. Something like:

        template <typename TYPES = tlm_base_protocol_types>
        class tlm_fw_transport_if // ...
        {
        public:
          typedef TYPES protocol_types;
        };

        // tlm_base_target_socket
        virtual sc_core::sc_type_index get_protocol_types() const
        {
          return typeid(typename FW_IF::protocol_types);
        }

    Theoretically, these types could mismatch between FW_IF and BW_IF in manual socket implementations. Therefore, I'd suggest referring to the FW_IF in the target socket and the BW_IF in the initiator socket. Greetings from Duisburg, Philipp
  9. 1 point
    SystemC is single-threaded, so you don't need std::mutex. However, SystemC does not guarantee any order for processes executed in the same delta cycle. SystemC is a discrete-event simulator, and I suggest you learn the concepts of simulation semantics from Clause 4, "Elaboration and simulation semantics", of the IEEE 1666 SystemC standard. The purpose of primitive channels such as sc_signal is to remove non-determinism by separating the channel update request from the actual update of the value into two different simulation phases. But this only works if there is a single writing process during a delta cycle. In your case, if the initiator and responder threads are executed in the same delta cycle, they both read the same "current value" of the signal. The initiator will request to set the signal value to (valid = 1, ready = 0) and the responder will request to set it to (valid = 0, ready = 1). Since there is no guarantee on the order between processes, you will get either (1,0) or (0,1) in the next delta cycle.
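    A sketch of the non-determinism described above (the signal and process names are invented):

        sc_core::sc_signal<int> sig;   // a single signal written by two processes

        void initiator_thread() { sig.write(1); }   // requests an update to 1
        void responder_thread() { sig.write(2); }   // requests an update to 2

        // If both processes run in the same delta cycle, both read the same current value
        // and both request an update; only the request of the process that happens to run
        // last survives the update phase, so the next value is 1 or 2 in an unspecified way.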
  10. 1 point
    Those questions are covered in detail in clauses 14.1 and 14.2 of the SystemC standard; I can't answer in a better way. TLM-2.0 simulations are not cycle-accurate, so you don't have clock edge events. In the AT modeling style you should call wait(delay) after each transport call. In the LT modeling style all initiators are temporally decoupled and can run ahead of simulation time, usually for a globally specified time quantum. For debugging you can use the same techniques as in cycle-accurate modeling:
    - Source-level debugging with breakpoints and stepping
    - Transaction and signal tracing
    - Logging
    Compared with RTL, debugging using waveforms won't be as effective, because in AT/LT modeling the state of the model can change significantly in a single simulator/waveform step. Usually the preferred way is a combination of logging and source-level debugging. Debugging TLM models is harder compared to RTL. Also, C++ is much more complex and error-prone compared to VHDL/Verilog.
  11. 1 point
    Roman Popov

    using gtkwave

    In the working directory (active directory when you launch the executable). Search for "and_gate.vcd"
  12. 1 point
    Roman Popov

    using gtkwave

    Your code is not correct. Why did you put next_trigger(5, SC_NS) inside the method? Remove it, and you will get the correct waveform for the AND gate.
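    For reference, a minimal sketch of an AND-gate method that relies only on its static sensitivity (the module and port names are assumed, not taken from the original code):

        SC_MODULE(and_gate) {
          sc_in<bool>  a, b;
          sc_out<bool> y;

          void eval() { y.write(a.read() && b.read()); }  // re-evaluated whenever a or b changes

          SC_CTOR(and_gate) {
            SC_METHOD(eval);
            sensitive << a << b;   // no next_trigger() needed
          }
        };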
  13. 1 point
    Well, the answer is a bit more complex. The main difference is that the standard requires that no wait() is allowed during an nb_transport call, while in b_transport it is allowed; any implementation adhering to the standard guarantees this. Let's first look at the non-blocking implementation. tlm_phase values do not denote a phase directly, but rather time points of the protocol. Actually you have two phases, request and response, which are denoted by two time points each. So the initiator can indicate the start or end of a phase of a transaction and be sure that the call is not blocked by a call to wait(). You can therefore model the behavior and timing of a bus transaction in a fairly granular way and do something else while the transaction is ongoing. You can even have two transactions in parallel, one being in the request phase while the other is in the response phase (or even more if you have more phases defined); the transactions are pipelined. Looking at b_transport, the situation is different. The target can delay the transaction by calling wait() until it is ready to respond. During that time no other transaction can be ongoing; the initiator is blocked and cannot react to anything. Blocking accesses can be used if the timing of the communication is not of interest/not modeled (the other scenario is loosely-timed models, but that's a different story). They are easy to implement and easy to use. Non-blocking is used if the timing of the communication needs to be modeled in more detail. E.g. it allows you to model, simulate, and analyze bus contention situations, since you can attach timing to all phases of a bus transaction like grant, address phase, and data phase. I hope this sheds some light. -Eyck
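    A rough sketch of that difference on the target side (hedged: the event name and the phase handling are simplified illustrations, not a complete AT protocol implementation):

        // Blocking: the target may consume simulation time with wait().
        void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
          wait(device_ready_event);                        // allowed here; blocks the initiator
          trans.set_response_status(tlm::TLM_OK_RESPONSE);
        }

        // Non-blocking: must return immediately, so phases mark time points instead.
        tlm::tlm_sync_enum nb_transport_fw(tlm::tlm_generic_payload& trans,
                                           tlm::tlm_phase& phase,
                                           sc_core::sc_time& delay) {
          if (phase == tlm::BEGIN_REQ) {
            phase = tlm::END_REQ;       // end of the request phase, no wait() involved
            return tlm::TLM_UPDATED;    // the response phase is signalled later via the backward path
          }
          return tlm::TLM_ACCEPTED;
        }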
  14. 1 point
    AmeyaVS

    Behavioral XOR Gate with Delay

    Hello @re1418ma, You can look at this example: http://forums.accellera.org/topic/5678-clock-to-q-propagation-delay/?do=findComment&comment=13657 Or this one, which shows how to add a delay to a full adder: http://forums.accellera.org/topic/5715-delaying-simulated-execution/?do=findComment&comment=13844 Hope it helps. Regards, Ameya Vikram Singh
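    One common pattern for this (a hedged sketch; the 2 ns delay and the module layout are assumptions, the linked posts show the same idea in more detail):

        SC_MODULE(xor_gate) {
          sc_in<bool>  a, b;
          sc_out<bool> y;
          sc_core::sc_event delayed;

          void inputs_changed() { delayed.notify(2, sc_core::SC_NS); }  // schedule the output update after the delay
          void update()         { y.write(a.read() ^ b.read()); }       // sample inputs when the delay expires

          SC_CTOR(xor_gate) {
            SC_METHOD(inputs_changed); sensitive << a << b;
            SC_METHOD(update);         sensitive << delayed; dont_initialize();
          }
        };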
  15. 1 point
    AmeyaVS

    Behavioral XOR Gate with Delay

    Hello @re1418ma, This has already been discussed before here: http://forums.accellera.org/topic/5715-delaying-simulated-execution/ Hope it helps, and if you have further questions please feel free to ask. Regards, Ameya Vikram Singh
  16. 1 point
    In fact the fix is relatively simple, and will hopefully be in the next release (in the meantime, please find attached a small patch). The real question is why this inheritance structure was like this in the first place. Here's the PR rationale: class tlm_put_get_imp<class1, class2> inherits privately and virtually from tlm_put_if<class1> and tlm_get_peek_if<class2>. tlm_put_if and tlm_get_peek_if are compound classes that are interfaces, which means that they (and their parents) have only pure virtual methods; because these methods are pure, they must be implemented, and so they are in tlm_put_get_imp, as public methods. So there is no point in the private inheritance. (Special thanks to Luc Michel from GreenSocs who helped sort this out) tlm1.patch
  17. 1 point
    Martin Barnasconi

    Error: System not scheduable

    It looks like you have a multi-rate system, i.e. somewhere you defined a <port>.set_rate(..) in a set_attributes callback. Now you try to access the n-th sample at this port, like <port>.read(<sample>), but the n-th sample index is higher than the rate specified. This means you either have the wrong rate, or you are reading a sample outside the range defined by the rate.
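    For illustration, a sketch of where the rate and the sample index have to agree (hedged: the downsampler module is invented; with a rate of 2 only read(0) and read(1) are valid):

        SCA_TDF_MODULE(downsampler) {
          sca_tdf::sca_in<double>  in;
          sca_tdf::sca_out<double> out;

          void set_attributes() { in.set_rate(2); }   // two input samples per module activation

          void processing() {
            out.write(in.read(1));   // valid indices: 0 and 1; in.read(2) would be outside the rate
          }

          SCA_CTOR(downsampler) {}
        };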
  18. 1 point
    David Black

    sytemc temperal decoupling

    Read IEEE 1666-2011 sections 9 & 10. Available for free download from http://www.accellera.org/downloads/ieee. Or watch a free video from https://www.doulos.com/knowhow/systemc/. Or take a class on SystemC & TLM from https://www.doulos.com/content/training/systemc_training.php. TLM 2.0 is a marketable skill and I get many requests for job references. You will need to become proficient at C++, SystemC fundamentals, and then TLM 2.0 in addition to having a good basic knowledge of hardware and software design.
  19. 1 point
    David Black

    approximately timed

    IEEE 1666-2011 section 10.2 states: IEEE 1666-2011 section 10.3.4 states:
  20. 1 point
    You can use a custom "creator" to initialize elements of a vector with custom constructor parameters - here the inner vector. Something like this (assuming you have lambda support available):

        auto element_creator = [](const char* nm, size_t) // optional, depending on the "real" value type
          { return new sca_module(nm); };

        size_t inner_size = 42; // adjust for your needs, could also be a vector of sizes

        element.init( outer_size,
          [&](const char* nm, size_t)
            { return new sc_vector<sca_module>( nm, inner_size, element_creator ); } );

    If you don't have lambdas in your environment, you need to put the functionality into a custom function, e.g.

        static sc_vector<sca_module>* element_vector_creator(size_t size, const char* name, size_t)
          { return new sc_vector<sca_module>(name, size); }

        // using sc_bind to pass in the size - placeholders needed for the actual call
        element.init( outer_size,
          sc_bind(element_vector_creator, inner_size, sc_unnamed::_1, sc_unnamed::_2) );

    Hope that helps, Philipp
  21. 1 point
    Hi, Can anyone provide an example of how to get a virtual interface in a sequence? I need to use the clk in the sequence. Thanks, praneeth
  22. 1 point
    Actually, you can start a sequence in any phase. It is more important to understand the domain/scheduling relationships between the task-based (i.e. runtime) phases. UVM undergoes a number of pre-simulation phases (build, connect, end_of_elaboration, start_of_simulation) that are all implemented with functions. Once those are completed, the task-based phases begin. The standard includes two schedules. One is simply the run_phase, which starts executing at time zero and continues until all components have dropped their objections within the run_phase. The other schedule contains twelve phases that execute in parallel to the run phase. They are: pre_reset, reset, post_reset, pre_configure, configure, post_configure, pre_main, main, post_main, pre_shutdown, shutdown, and post_shutdown. They execute in sequence. Every component has the opportunity to define or not define tasks to execute in these phases. A phase starts only when all components in the previous phase have dropped their objections. A phase continues to execute until all components have dropped their objections in the current phase. Many companies use the run_phase for everything because there are some interesting issues to consider when crossing phase boundaries. In some respects it may be easier to use uvm_barriers for synchronization. Drivers and monitors (things that touch the hardware) are usually run exclusively in the run_phase, but there is nothing to prevent them from also having reset_phase, main_phase, etc.
  23. 1 point
    bhunter1972

    parsing using system verilog

    SV isn't the best language to parse with, but Python is! You should consider having your Python script output real SystemVerilog code that can then be loaded into the simulator instead. Consider:

        for (addr, data) in write_commands:
            print >>sv_file, "  block.write_data('h%s, 'h%s);" % (to_hex(addr), to_hex(data))

    etc.
  24. 1 point
    David Long

    static and dynamic sensitivity

    Hi Amit, A thread process uses its static sensitivity when it contains one or more calls to wait() with no arguments. The static sensitivity is set before the simulation starts running, usually by using "sensitive" in its parent module's constructor. Dynamic sensitivity is where a process contains one or more calls to wait(a_time_value) or wait(a_named_event). It is dynamic because the conditions that cause a thread process to wake up change as each wait statement is executed while the simulation runs. Here is a very brief example (not tested):

        SC_MODULE(mod) {
          sc_in<bool> clk;
          sc_event e;

          void proc1() {
            while(1) {
              wait();          // wait on static sensitivity (clk.pos())
              e.notify();
            }
          }

          void proc2() {
            while(1) {
              wait(e);         // wait for event (dynamic)
              // do something
              wait(1, SC_NS);  // wait for time (dynamic)
              // do something
            }
          }

          SC_CTOR(mod) {
            SC_THREAD(proc1);
            sensitive << clk.pos(); // static sensitivity
            SC_THREAD(proc2);       // no static sensitivity, runs during initialization until the 1st wait is reached
          }
        };

    You can find further details in section 4.2 of the SystemC LRM (IEEE 1666-2011). Regards, Dave
  25. 1 point
    Assuming you're using plain signal ports, you can use the event member function to check whether a specific port has been triggered in the current delta cycle:

        sc_vector< sc_in<int> > in_vec;

        // ...
        SC_METHOD(proc);
        for( unsigned i = 0; i < in_vec.size(); ++i )
          sensitive << in_vec[i];
        // ...

        void proc() {
          for( unsigned i = 0; i < in_vec.size(); ++i )
            if( in_vec[i]->event() )
              std::cout << "in_vec[" << i << "] triggered." << std::endl;
        }

    Greetings from Oldenburg, Philipp