Leaderboard


Popular Content

Showing content with the highest reputation since 04/06/2019 in all areas

  1. 3 points
    The Accellera SystemC AMS Working Group released the 2020 edition of the SystemC AMS User's Guide. You will find the user's guide on this page: https://www.accellera.org/downloads/standards/systemc This version of the user's guide is fully compatible with the SystemC AMS standard released as IEEE Std. 1666.1-2016. It describes all the features introduced into the SystemC AMS language standard during the last decade. For example, the user's guide now explains the use of the dynamic timed data flow capabilities, which make AMS system simulations even more efficient and run even faster. The SystemC AMS Working Group is currently preparing the release of the user's guide application examples as a separate download. Availability of these application examples will be communicated at a later stage. Please use this forum to post your questions or remarks on the user's guide.
  2. 2 points
    Eyck

    sc_clock Doubt

    sc_clock triggers itself based on the period and the (in your case default) constructor settings; with the default constructor, the period is one default time unit.
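    For reference, a minimal sketch (module and instance names are illustrative) of giving the clock an explicit period instead of relying on the default constructor:

        #include <systemc>
        using namespace sc_core;

        SC_MODULE(top) {
          // name, period, duty cycle, start time, posedge first
          sc_clock clk{"clk", sc_time(10, SC_NS), 0.5, SC_ZERO_TIME, true};
          SC_CTOR(top) {}
        };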
  3. 2 points
    Eyck

    TLM CPU modeling

    There is no such thing as CPU TLM modeling. Usually you write a C/C++ processor model with the needed accuracy (instruction accurate, cycle approximate, cycle accurate) and wrap it in a way that translates memory accesses into TLM socket accesses. Along with that you need to manage the synchronization of the time of your model and the SystemC time (to run e.g. in loosely timed mode). Another task is to take the returned execution time of the bus accesses into account for the execution of the CPU model. This also involves the selection and implementation of the accesses (DMI & blocking or non-blocking). You can find a complete example of an instruction accurate VP at https://git.minres.com/DVCon2018/RISCV-VP (or https://git.minres.com/VP/RISCV-VP which is a newer version). The wrapper for the C++ model in SystemC can be found at https://git.minres.com/DVCon2018/RISCV-VP/src/branch/develop/riscv.sc/incl/sysc/core_complex.h To put it straight: doing this correctly is a non-trivial task, as it is the implementation of a micro-architecture model of a CPU. One option is to build an instruction accurate ISS and add a microarchitecture model, as is done in the ESESC project (https://github.com/MIPS/esesc) BR
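    To illustrate the wrapping step, here is a minimal, hypothetical sketch (the socket, quantum and function names are assumptions, not taken from the linked repository) of translating an ISS memory read into a blocking TLM-2.0 access with loosely-timed synchronization:

        #include <systemc>
        #include <tlm>
        #include <cstdint>

        // Perform one ISS read through an initiator socket; sync with the kernel
        // when the accumulated local time exceeds the quantum.
        template <typename SOCKET>
        uint32_t iss_read(SOCKET& socket, sc_core::sc_time& local_time,
                          const sc_core::sc_time& quantum, uint64_t addr) {
          tlm::tlm_generic_payload gp;
          uint32_t data = 0;
          gp.set_command(tlm::TLM_READ_COMMAND);
          gp.set_address(addr);
          gp.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
          gp.set_data_length(4);
          gp.set_streaming_width(4);
          socket->b_transport(gp, local_time); // target annotates its delay
          if (local_time >= quantum) {         // loosely-timed sync point
            sc_core::wait(local_time);
            local_time = sc_core::SC_ZERO_TIME;
          }
          return data;
        }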
  4. 2 points
    You are initializing fl_ptr during construction, not during execution. In generator.hpp you have:

        float* fl_ptr = reinterpret_cast<float*>(dmi_mem); // I wrote this

    This never updates fl_ptr to the actual DMI pointer. Your access should actually look like:

        if (dmi_valid) {
          dmi_mem = dmi.get_dmi_ptr(); // dmi_mem is a pointer to the ram[] array in memory.h
          float* fl_ptr = reinterpret_cast<float*>(dmi_mem);
          for (int i = 0; i != 20; ++i)
            fl_ptr[i] = 12.7;
        }
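    For completeness, a hedged sketch of where dmi_mem and dmi_valid typically come from (the socket name and address are assumptions, not the original generator.hpp):

        tlm::tlm_dmi dmi;
        tlm::tlm_generic_payload gp;
        gp.set_command(tlm::TLM_READ_COMMAND);
        gp.set_address(0x0);
        bool dmi_valid = socket->get_direct_mem_ptr(gp, dmi); // target grants a DMI region
        unsigned char* dmi_mem = dmi_valid ? dmi.get_dmi_ptr() : nullptr;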
  5. 2 points
    Your sc_trace function is a member function of the TraceList class and cannot be called like the sc_trace functions coming with the SystemC reference implementation, which are free functions in the sc_core namespace. Moreover, your sc_trace implementation is non-static, so it cannot be used without a TraceList object. You need to move the function out of the class scope. Basically this is a valid approach to set up complex types, but for performance reasons I would suggest a different container; best choices are std::vector or std::deque. And if you are using C++11 I would replace the while loop with a range-based loop, something like:

        for (auto& val : var.lst) {
          // qualify with the namespace, otherwise the compiler chooses the wrong function
          sc_core::sc_trace(tf, val, nm + std::to_string(pos++));
        }
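    A minimal sketch of the suggested free-function form, assuming a user type TraceList whose values live in a std::vector member named lst:

        #include <systemc>
        #include <string>
        #include <vector>

        struct TraceList {
          std::vector<int> lst;
        };

        namespace sc_core {
        inline void sc_trace(sc_trace_file* tf, const TraceList& var, const std::string& nm) {
          int pos = 0;
          for (auto& val : var.lst)
            sc_core::sc_trace(tf, val, nm + std::to_string(pos++)); // resolves to the int overload
        }
        }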
  6. 2 points
    2 days! That's a fast response. Exactly! If you're not open in the design/pre-release phase you're likely to miss use cases, and if the members have committed themselves to solutions and switched their focus to other tasks, I imagine there will be an unwillingness to go back and redo things even if important new insights have been revealed. I think most users would like a code base they can build upon, not one that needs adaptations to make it work. Being fully transparent about the code in the making will reduce the risk of such adaptations. What I'm suggesting is free and efficient access to the collective intelligence of the entire community at the point in the development cycle where it makes the most difference. I'm not suggesting a shift in the rights to make the final decisions; that's exclusive to the paying members. What's preventing this from happening within Accellera?
  7. 2 points
    Please be aware that an sc_event_and_list does not imply that the events in the list are triggered at the same time. I would suggest keeping only the clock sensitivity and acting on the triggers in the body of the method instead:

        SC_METHOD(func2);
        sensitive << clk.pos();
        dont_initialize();

        // ...

        void func2() {
          if( nreset.posedge() ) {
            // nreset went high in this clock cycle
            // ...
          }
        }

    Alternatively, you can be sensitive to nreset.pos() and check for clk.posedge() (as a consistency check), if you don't have anything else to do in the body of the method. With this approach, you might be able to avoid unnecessary triggers of the method. Side note to Eyck: There's a small typo in the example above, which should use "&=" to append to an sc_event_and_list:

        ev_list &= nreset;
  8. 2 points
    Unfortunately I'm not with a member company. I was hoping that I'd have read permissions regardless of my current affiliation. As a user I'd like to see the connection between discussions in the official forum, the issues reported to the issue management system, and the code being developed in response to that, plus the ability to immediately test that code and possibly give feedback as code comments or a pull request. More like GitHub, GitLab and other platforms. Seems to me that this would be a more efficient way to give and get user feedback.
  9. 2 points
    David Black

    Systemc performance

    Perhaps you would like to share your code for measurements via GitHub? Measuring performance can be tricky to say the least. How you compile (compiler, version, SystemC version) and what you measure can really change results. It probably helps to specify your computer's specifications too:

    - Processor (vendor, version)
    - L1 cache size
    - L2 cache size
    - L3 cache size
    - RAM
    - OS (name, version)
    - Compiler (name, version)
    - Compiler switches (--std, -O)
    - SystemC version
    - SystemC installation switches
    - How time is measured and from what point (e.g. start_of_simulation to end_of_simulation)
    - Memory consumption information if possible

    This will help to make meaningful statements about the measurements and allow others to reproduce/verify your results. It is also important to understand how these results should be interpreted (taken advantage of) and compared. With respect to TLM, it will get a lot more challenging. For example, what style of coding: loosely timed, approximately timed? Are sc_clocks involved?
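    As an illustration of one of the data points above, a sketch (not a complete benchmark) of measuring wall-clock time around sc_start() and reporting it next to the simulated time:

        #include <systemc>
        #include <chrono>
        #include <iostream>

        int sc_main(int argc, char* argv[]) {
          // ... instantiate the design under test here ...
          auto t0 = std::chrono::steady_clock::now();
          sc_core::sc_start();                 // run the complete simulation
          auto t1 = std::chrono::steady_clock::now();
          std::cout << "wall clock: " << std::chrono::duration<double>(t1 - t0).count()
                    << " s, simulated: " << sc_core::sc_time_stamp() << std::endl;
          return 0;
        }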
  10. 2 points
    David Black

    sensitivity list

    You can only specify sensitivity on objects that have events or event finders directly accessible at the time of construction. Normally this means using either a suitable channel, port or explicit event. If you wrap your ints with a channel such as sc_signal<T>, you can do it. Example - https://www.edaplayground.com/x/5vLP
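    A small sketch of that suggestion (module and signal names are illustrative), making a method sensitive to a wrapped int:

        #include <systemc>
        #include <iostream>
        using namespace sc_core;

        SC_MODULE(consumer) {
          sc_in<int> in; // bind to an sc_signal<int> at elaboration time
          SC_CTOR(consumer) {
            SC_METHOD(on_change);
            sensitive << in;   // static sensitivity to the bound signal's value-changed event
            dont_initialize();
          }
          void on_change() { std::cout << "new value: " << in.read() << std::endl; }
        };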
  11. 2 points
    David Black

    Heartbeat, clock and negedge

    You can use it however you like. We didn't use it everywhere and I'm sure there are more areas where it might be applicable. The main point is that "Performance is a function of simulator CPU activity and how well it is used." In some cases, such as clocks, there is a lot of activity that goes unused. Many designs really only use the positive edge of the clock. In some designs, the activity can even be reduced significantly. Another instance is timers, which are often only touched when they are set up and then time out after N clocks. The RTL approach to modeling a timer decrements the timer value on every clock. A behavioral approach would be:

        void set_timer( int N ) {
          assert( N > 0 );
          delay = N * clock.period();
          setup_time = sc_time_stamp();
          projected_time = setup_time + delay;
          timeout_event.notify( delay );
        }

    The current value of the timer (in remaining clocks) can always be had with:

        int get_timer_value( void ) {
          return ( projected_time - sc_time_stamp() ) / clock.period();
        }

    So you really don't even need the clock in many instances. Instead replace clock.period() with a simple constant. Fast and smart SystemC models don't use sc_clock at all.
  12. 2 points
    Actually, you can start a sequence in any phase. It is more important to understand the domain/scheduling relationships between the task-based (i.e. runtime) phases. UVM undergoes a number of pre-simulation phases (build, connect, end_of_elaboration, start_of_simulation) that are all implemented with functions. Once those are completed, the task-based phases begin. The standard includes two schedules. One is simply the run_phase, which starts executing at time zero and continues until all components have dropped their objections within the run_phase. The other schedule contains twelve phases that execute in parallel to the run_phase. They are: pre_reset, reset, post_reset, pre_configure, configure, post_configure, pre_main, main, post_main, pre_shutdown, shutdown, and post_shutdown. They execute in sequence. Every component has the opportunity to define or not define tasks to execute in these phases. A phase starts only when all components in the previous phase have dropped their objections. A phase continues to execute until all components have dropped their objections in the current phase. Many companies use the run_phase for everything because there are some interesting issues to consider when crossing phase boundaries. In some respects it may be easier to use uvm_barriers for synchronization. Drivers and monitors (things that touch the hardware) are usually run exclusively in the run_phase, but there is nothing to prevent them from also having reset_phase, main_phase, etc...
  13. 1 point
    Eyck

    Handling SystemC engine from a Qt5 thread

    Especially as an educational project, you should implement it in two threads which communicate with each other. Since they run in the same process space you can access data safely once the simulation is in a paused state. But you cannot restart the simulation without restarting the process itself: SystemC uses static global variables to store the simulation state, and those only get initialized at program start.
  14. 1 point
    Eyck

    SC_METHOD/SC_THREAD Synchronization

    There is no guarantee which method or thread is activated first, and there is no means to give priority. Why would you like to do this? In my experience, you have a conceptual problem if you believe you need to do this. Thread activation is more expensive (in terms of computing power) than a method, as the thread context needs to be saved and restored. But threads keep an internal state, so they are good for describing state machines.
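    A short sketch of that last point (the module is illustrative): the loop variable below survives across wait() calls, which is what makes threads convenient for state machines, at the price of the context-switch overhead:

        #include <systemc>
        #include <iostream>
        using namespace sc_core;

        SC_MODULE(seq) {
          sc_in<bool> clk;
          SC_CTOR(seq) { SC_THREAD(run); sensitive << clk.pos(); }
          void run() {
            for (int state = 0;; state = (state + 1) % 3) { // state kept on the thread's stack
              wait();                                       // next clk.pos()
              std::cout << sc_time_stamp() << " state " << state << std::endl;
            }
          }
        };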
  15. 1 point
    @tymonx - Thanks again for the responses, I didn't get a notification about them, otherwise I would have responded a bit faster 🙂 The UVM Packer is not specified as packing bits in any particular format... if a developer or end user requires a specific format, then they are free to implement their own within the standard. If you've come up with an alternative and you think it'd be useful, please post it! That said, while the format/structure of the bits isn't specified, the LRM is very clear about how packers behave... it seems that the behavior just isn't exactly what you were expecting. I can understand the frustration (FWIW: there are plenty of places wherein the library acts in a way I would consider unexpected, you're not alone there!).

    The largest disconnect here seems to be with how UVM handles "metadata", i.e. data that describes the data contained within the stream. There are three basic forms of metadata that Accellera's implementation is concerned with:

    - The current position of pointers in the packer stream (your magic 8 bytes)
    - The size of a string
    - The validity of an object handle

    The use of a fixed-size array is an Accellera implementation artifact. It was chosen back in the pre-UVM days, but I believe the basic reasoning for it is that accessing data within a fixed-size array tends to be faster than constantly (re)allocating data inside of a dynamic array. The reference implementation does allow the user to easily change the size of the fixed-size array, but it will still be a fixed-size array. To be clear though, this is an Accellera decision, and you are free to implement a packer which uses a dynamic array instead. Should you choose to make that a queue instead of a dynamic array, then you no longer need pointers for pack/unpack. Your unpack pointer is always 0, your pack pointer is always [$]. Strings can generally be dealt with in one of two ways: {size,string} or {string,'\0'}. Accellera goes with the latter, but really either is fine so long as you're consistent. The validity of an object handle is called out by the LRM. The LRM dictates that is_null returns 1 if the next item in the stream is a null object, 0 otherwise. Unfortunately, this is the one place wherein the LRM truly requires _some_ form of metadata being present. You are absolutely free to create a packer which doesn't support this method, but then your packer won't work for 100% of the objects out there.

    As to your other concerns: The initial values of the member variables are fine because they're 2-state ints, not 4-state integers. They automatically initialize to a value of 0. The library will also automatically flush any packer passed to uvm_object::pack_*, so long as that packer is not actively packing another object (refer to 16.1.3.4 in the 2017 LRM here). Having the pack_* methods use the default packer was another case wherein simplicity/performance was chosen over strict thread safety (again, pre-UVM). I would argue that instead of changing the behavior of "Default/Null packer" to clone, it would be cleaner to simply remove the option altogether. Make it an error to pass null to pack_*, and now the user has to be explicit. No more thread safety concerns (for the library), no more potential for unexpected behavior. Downside? Breaks a _ton_ of existing code, some of which dates back to before UVM existed.

    The packer is actually one of the more heavily documented features of UVM, even going so far as separating those methods which packer developers need to worry about (16.5.3) from those that the users generally interact with (16.5.4). The fact that the LRM doesn't dictate the format of the bitstream isn't a bug or an omission, it's an intentional feature. It's left at the discretion of the developer. The "do one thing, well" philosophy is alive and well here: the one thing is that the packer allows you to convert a sequence of pack_*/unpack_* calls to/from a bitstream. A quick side story: during the discussions of the packer during the development of the 1800.2 standard, an example was shown wherein all of the methods in 16.5.4 didn't actually modify a bitstream at all; instead they simply pushed/popped their values in separate local queues of bits, bytes, ints, longints, uvm_integral_t, uvm_bitstream_t, and strings. A hook was present which allowed a user to control how that data was eventually packed/unpacked inside of the set/get_packed_state methods. In theory this implementation could be significantly faster, because the packer could choose the optimal layout for each type. This was just an example though; the full source code was not provided.

    A final note on the fact that the Accellera implementation doesn't exactly match the LRM: You're 100% correct, which is why the release notes include the following: The inconsistency between sections 5.3 and 16.5.3 is being addressed by the IEEE in the next revision. -Justin PS- I get that it's just an example, and therefore I can't tell if protocol_c is meant to extend from uvm_object or not, but you should never call packer.flush in a uvm_object::do_pack call, unless you explicitly created the packer! If the packer has any data in it (including but not limited to metadata), then you just cleared all of that data!
  16. 1 point
    Thanks for the spirited comments @Tymonx! I'll try and respond to all of your messages in one comment, apologies in advance if it's overly long.

    To the concerns re: the packing of the m_pack_iter and m_unpack_iter into the byte stream, that is a consequence of how the 1800.2 LRM defines the packer's functionality and compatibility. Specifically sections 16.5.3.1 (State assignment) and 16.5.3.2 (State retrieval), which effectively allow one to dump the packer's state and safely load it into another packer, regardless of what operations have been performed, and in what order, on the original packer. In this way, the state assignment/retrieval methods can be compared to SystemVerilog's process::set/get_randstate methods. They contain enough information to put you in _exactly_ the same state. As an additional aside, it's also important to acknowledge that while uvm_object does provide a pack/do_pack/do_unpack interface, there are zero restrictions on where a packer can actually be used. Users can create/use packers anywhere in their code, not just in the context of a UVM object.

    As to m_pack_iter and m_unpack_iter being magic numbers, you're absolutely right... but to that end, the entire bit stream dumped by the packer is a magic number. The 1800.2 standard intentionally leaves the formatting of the bitstream to be not just UVM library dependent, but uvm_packer implementation dependent. This allows for alternative implementations based on performance or other requirements. Additionally, as m_pack/unpack_iter are values of type int, they auto-initialize to 0. For some background on "Why did 1800.2 add the state methods? Why the changes?": pre-1800.2 it was impossible to use a packer _without_ pushing it through a uvm_object of some kind first, unless you opened up the source code to see how it worked. FWIW: this is precisely what UVM-SystemC did. The 1800.2 standard reversed the polarity though, saying that it's not important that everyone pack/unpack byte streams in the same exact way, but instead it's important that the standard allows for developers to define their own bit packing mechanisms, so long as they all present the same interface to the user. This is also why the various control knobs which altered string and array packing were removed. IOW: if library X wants to pack/unpack with UVM, they don't need to know how Accellera packs/unpacks, they just need to make "class X_packer extends uvm_packer", and follow the rules. So long as they do that, they're safe to use whatever bitpacking mechanism is best for their needs.

    Your final concerns re: thread safety are valid, but are not constrained to uvm_packer. Unfortunately, the UVM library in general isn't uber-thread-safe, primarily because SystemVerilog isn't particularly thread safe in "zero time" (i.e. functions). The same basic singleton flaw surrounds all of the singletons in the library... the default printer, copier, report server... you name it, it's not thread safe. Unfortunately, SystemVerilog doesn't provide any mechanism for "atomic" access to a variable without consuming time. The 1800.2 standard and the Accellera reference implementation do their best to "thread the needle" between general functionality, performance, and strict thread safety (and generally bias in that order). That said, the best place to attack this may be inside of the uvm_coreservice itself... an alternative "thread safer" version could be created that constructed clones of the policies when get_* was called. You'd then have the option of performance vs (relative) thread safety. I hope that helps to shed some light on the questions, Justin R.
  17. 1 point
    The SystemC AMS standard defines in section 9.1.2.6 (sca_util::sca_trace) that it can trace objects of type sca_traceable_object. Since all ELN primitives are derived from this type, you can simply trace the ELN component itself, see the example below:

        SC_MODULE(eln_circuit)
        {
          // node declaration
          sca_eln::sca_node n1;        // ELN node
          sca_eln::sca_node_ref gnd;   // ELN ground

          // component declaration
          sca_eln::sca_vsource vin;
          sca_eln::sca_r r1;

          // constructor including ELN netlist
          eln_circuit( sc_core::sc_module_name nm )
          : vin("vin", 0.0, 1.23), r1("r1", 1e3)
          {
            // Only ELN primitives require explicit timestep assignment to one element
            vin.set_timestep(1.0, sc_core::SC_MS);

            // netlist
            vin.p(n1);
            vin.n(gnd);
            r1.p(n1);
            r1.n(gnd);
          }
        };

        int sc_main(int argc, char* argv[])
        {
          eln_circuit cir("eln_circuit");

          sca_util::sca_trace_file* tf = sca_util::sca_create_tabular_trace_file("trace.dat");
          sca_util::sca_trace(tf, cir.n1, "v_n1");
          sca_util::sca_trace(tf, cir.vin, "i_through_vin");
          sca_util::sca_trace(tf, cir.r1, "i_through_r1");

          sc_core::sc_start(1.0, sc_core::SC_MS);

          sca_util::sca_close_tabular_trace_file(tf);
          return 0;
        }
  18. 1 point
    David Black

    How to generate sc_signal at runtime?

    sc_signals are not data types. sc_signals are channels, which represent hardware being modeled and are used as mechanisms to transport data between processes. As such, channels are only allowed to be created during the "elaboration phase" that occurs prior to simulation starting up. Also, strictly speaking, sc_ports are not normal pointers, although an underlying mechanism uses pointers for efficiency's sake; operator->() is overloaded on sc_port<IF_TYPE>. You could of course create an sc_vector< sc_signal<T> > of size N, where N is the maximum number of signals required, and then allocate a specific index to each spawned process.
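    A minimal sketch of that sc_vector approach (names and the value of N are illustrative); the signals are created during elaboration and indices can later be handed to spawned processes:

        #include <systemc>
        using namespace sc_core;

        SC_MODULE(signal_pool) {
          static const int N = 16;          // maximum number of signals needed
          sc_vector<sc_signal<int>> sigs;   // size fixed before simulation starts
          SC_CTOR(signal_pool) : sigs("sigs", N) {}
        };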
  19. 1 point
    Thanks for reporting this issue! I think your use case is justified. However, the decision not to install the header seems to have been made on purpose, as utils/sc_stop_here.h is explicitly added to NO_H_FILES in src/sysc/utils/files.am. The behavioural difference between the Automake and CMake-based build flows is probably unintentional. I have reported the issue to the LWG for investigation.
  20. 1 point
    This compiler warning is a false positive. There is a loop in sc_fifo<T>::read(T&) ensuring that the fifo is not empty (and the success of the nb_read(T&) is even guarded by an sc_assert):

        while( num_available() == 0 ) {
          sc_core::wait( m_data_written_event );
        }
        bool read_success = sc_fifo<T>::nb_read(val_);
        sc_assert( read_success );

    The check for num_available() is even stricter than the check in buf_read, but I can imagine that some compilers might not be able to prove this invariant. Therefore, unconditionally initializing the local variable to silence the warning might be an acceptable trade-off.
  21. 1 point
    Since you are talking about timing, I would stick to a more AT-like modeling style using the non-blocking transport functions. In this case you should use a memory manager (see section 14.5 of the IEEE standard). For this you need to implement the tlm::tlm_mm_interface (there are a few implementations out there, you may google them). The mechanism works similar to a C++ shared pointer: the initiator always pulls a new transaction from the memory manager and sends it via its socket. Each component dealing with the transaction calls acquire() on the payload and release() once it is finished with it. Upon the last release() call the transaction is automatically returned to the memory manager and can be reused. HTH
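    A minimal sketch of such a memory manager (a simple free list, illustrative rather than production code) implementing tlm::tlm_mm_interface as described in section 14.5:

        #include <tlm>
        #include <vector>

        class payload_pool : public tlm::tlm_mm_interface {
        public:
          tlm::tlm_generic_payload* allocate() {
            tlm::tlm_generic_payload* gp;
            if (free_list.empty()) {
              gp = new tlm::tlm_generic_payload;
              gp->set_mm(this);              // payload is owned by this manager
            } else {
              gp = free_list.back();
              free_list.pop_back();
            }
            gp->acquire();                   // the initiator holds the first reference
            return gp;
          }
          void free(tlm::tlm_generic_payload* gp) override { // called on the last release()
            gp->reset();                     // drop extensions
            free_list.push_back(gp);         // recycle instead of delete
          }
        private:
          std::vector<tlm::tlm_generic_payload*> free_list;
        };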
  22. 1 point
    Philipp A Hartmann

    SystemC TLM CMake install bug

    Fix published in 5a94360d in the public repository. Hope that helps, Philipp
  23. 1 point
    You specified a signed number which was converted to unsigned under the rules of two's complement. Change the "0x" prefix to "0xus" and you will obtain the desired results. See section 7.3 String literals on page 199 of IEEE-1666-2011 for more information.
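    A small sketch of the difference between the two prefixes (the word length of 8 bits is illustrative):

        #include <systemc>

        int sc_main(int, char*[]) {
          sc_dt::sc_bigint<8>  s("0xff");   // "0x"   -> signed two's complement interpretation
          sc_dt::sc_biguint<8> u("0xusff"); // "0xus" -> unsigned interpretation
          std::cout << s << " " << u << std::endl;
          return 0;
        }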
  24. 1 point
    vrsm

    sc_fifo.read() does not work

    It's because the printer's input.num_available() is a non-blocking call which is executed even before fork.output2 is written. The printer's input.read() is working fine, as it is a blocking call which waits for the fifo to get filled; once fork.output2 is written and the fifo has an item, read() is unblocked and things work fine. You cannot predict the thread execution order unless you explicitly make them happen in order. I suggest you add multiple print statements to see how the threads are executed in your simulation. num_available() is not mandatory for any implementation. But the correct way to use num_available in your experiment is:

        while(fifo.num_available() == 0) {
          wait(SC_ZERO_TIME); // by calling wait you give other threads in the simulation a chance to run
        }
        cout << " items in fifo " << fifo.num_available() << endl;

    Ram
  25. 1 point
    This is correct, you cannot trace a dynamic data structure. This is not even a SystemC limitation, but a limitation of VCD waveform dumps in general: VCD does not allow signals to be added to or removed from the waveform dynamically. So usually you have two options:

    1) If the maximum capacity is known in advance and is small, you can create your own "list" that utilizes a statically sized array as storage:

        template<typename T>
        struct my_list_item {
          bool has_value = false;
          T value;
        };

        template<typename T, size_t MAX_SIZE>
        class my_list {
          std::array<my_list_item<T>, MAX_SIZE> storage;
          // ...
        };

    2) If the maximum size is large or unknown, but it is sufficient for you to trace only the head and tail of the list, you can keep a copy of the head and tail that is updated on every push and pop:

        template<typename T>
        class my_list {
          std::list<T> storage;
          my_list_item<T> head_copy;
          my_list_item<T> tail_copy;
          // ... custom push/pop
        };
  26. 1 point
    Roman Popov

    sc_fifo data_read_event

    Did you perhaps forget to add a wait() in the consumer thread to suspend it after the first read?
  27. 1 point
    Roman Popov

    sequence processing

    The SystemC standard does not guarantee any order of process evaluation within a single delta cycle. So in the first example both 2,4 and 4,2 are correct.
  28. 1 point
    Roman Popov

    Compiling a Simple C code on Mac

    They are in the namespace sc_core. If you don't want to use namespaces you can change #include <systemc> to #include <systemc.h>, which will pull everything into the global namespace.
  29. 1 point
    Because the dynamic linker does not know where to find the library. You have two options: use the -Wl,-rpath,<path/to/lib> command line option at compile time, or set the LD_LIBRARY_PATH environment variable. Google for more details.
  30. 1 point
    Setting the library path != linking the library. Also add -lsystemc to the g++ options.
  31. 1 point
    According to the log, you are using the C compiler driver (/usr/bin/gcc). SystemC is C++; you should write your application in C++ and use the C++ compiler driver (g++). Maybe your IDE automatically uses the C compiler because you named your source file main.c.
  32. 1 point
    Eyck

    Generic pointer to sc_out<T>

    boost::variant would be an option, or you use a union. In the latter case you lose the type safety...
  33. 1 point
    Eyck

    Generic pointer to sc_out<T>

    Well, this topic is not easy to solve. In C++ each template instantiation (like sc_in<bool>) is a separate class in the class hierarchy. The common base class is sc_port_base, and in this context it is more or less useless. Actually I see two options:

    1) You store an sc_port_base pointer in your map and upon each write you check for the typeid of the specific template instance and down-cast it using dynamic_cast. This is inflexible and needs additional coding if you want to use a new type.

    2) You store a writer function in your map which knows how to translate a generic value (like int or double) to the particular value. This might be a lambda which captures the port reference and therefore 'knows' how to write to this port. But in this case you would lose type safety, as you have to store a generic function, unless you can use std::variant. So your map would have to be declared as

        typedef std::map<std::string, std::function<void(unsigned)>> Map0;

    or (C++17):

        typedef std::map<std::string, std::function<void(std::variant<bool, sc_dt::sc_uint<2>>)>> Map0;

    I personally would go for option 2 as it is simpler and more flexible; a small sketch follows below. HTH
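    A small sketch of option 2 (class and port types are illustrative), with one lambda per registered port capturing the port reference:

        #include <systemc>
        #include <functional>
        #include <map>
        #include <string>
        using namespace sc_core;

        struct port_writer_map {
          std::map<std::string, std::function<void(unsigned)>> writers;

          void add(const std::string& name, sc_out<bool>& p) {
            writers[name] = [&p](unsigned v) { p.write(v != 0); };
          }
          void add(const std::string& name, sc_out<sc_dt::sc_uint<2>>& p) {
            writers[name] = [&p](unsigned v) { p.write(v); };
          }
          void write(const std::string& name, unsigned v) { writers.at(name)(v); }
        };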
  34. 1 point
    You should have a top level. If you don't, create one and instantiate everything there. Set the global quantum in your top-level module at end_of_elaboration and reset local quantums at start_of_simulation. Set a default and allow for override. You can obtain the override value to use at run time from any of:

    - a command-line argument using sc_argv()
    - a file or database, if it exists
    - an environment variable set prior to invocation
    - a user input prompt
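    A hedged sketch of that scheme (the environment variable name and default value are illustrative): a top-level module that sets the global quantum at end_of_elaboration, with an override read from the environment:

        #include <systemc>
        #include <tlm>
        #include <cstdlib>

        SC_MODULE(top) {
          SC_CTOR(top) { /* instantiate everything here */ }

          void end_of_elaboration() override {
            double quantum_ns = 1000.0;                        // default
            if (const char* env = std::getenv("GLOBAL_QUANTUM_NS"))
              quantum_ns = std::atof(env);                     // environment override
            tlm::tlm_global_quantum::instance().set(
                sc_core::sc_time(quantum_ns, sc_core::SC_NS));
          }
        };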
  35. 1 point
    There are several ways to do this: you might use a different way to carry signals which is more TLM-like; one option would be to use tlm_signal. The other common option is: if your CPU writes into the register of your interrupt controller via TLM, the access carries the delay, which is essentially the offset of the CPU local time to the SystemC kernel time. If you just break the quantum here and call wait() so that the SystemC kernel can sync up and the signal change propagates, you should be fine.
  36. 1 point
    David Black

    reading part of input port

    Sorry, but this is simply not possible in the convenient manner of Verilog. Reason: SystemC is not about RTL. If you need a few bits, then read all of them and mask off the ones you want.
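    For example, a small sketch (the field position is illustrative) of extracting bits [11:4] from a 32-bit value after reading the whole port:

        #include <systemc>

        sc_dt::sc_uint<8> get_field(const sc_dt::sc_uint<32>& word) {
          return (word >> 4) & 0xff;   // or equivalently: word.range(11, 4)
        }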
  37. 1 point
    Eyck

    tlm model with clocks & code sample

    My fault, SC_THREAD has a sensitivity list and you can call wait() for the default event. In my experience - and this guided my answer - it is better to have no sensitivity list and wait explicitly for events. When specifying a port in the sensitivity list you need to use an event finder (with .pos()), as the port is not bound yet and hence does not have an event associated with it. In the thread itself you can use the posedge event. So your code shown above is correct. BR
  38. 1 point
    Eyck

    using clocks in tlm

    Adding to David's answer: you can supply clocks to the modules having initiator and/or target sockets. This can be done using the usual signal/sc_in/sc_out means, as David mentioned. There is also the option to have a 'tick-less' clock, as David presented many years ago (http://www.nascug.org/events/12th/nascug12_david_black.pdf). This can also be achieved by distributing a signal just carrying the clock period as sc_time and calculating the time points in the modules. This way it is even possible to model timers or PWM (e.g. https://git.minres.com/DBT-RISE/DBT-RISE-RISCV/src/branch/develop/platform/incl/sysc/SiFive/pwm.h). This modeling style avoids the many context switches implied by toggling clocks. All this holds true for LT and PV modeling; if you think about AT modeling the story becomes different. But this is as usual: you trade speed for accuracy. BR
  39. 1 point
    David Black

    block_interface

    TLM does have a response status, but that is intended to catch modeling failures rather than modeled failures. So if you consider your case a modeling error, which seems unreasonable to me on the surface, then you should either set the error response to something like a generic error or issue SC_REPORT_ERROR. In any event, do not set both. If this is a modeled error, then you are left to your own devices. See section 14.17 of IEEE-1666-2011 for details.
  40. 1 point
    Eyck

    why is,what is, and how is tlm used ?

    Well, in TLM1.0 there is even a tlm::tlm_fifo channel which provides something similar to what you use. The sockets defined in TLM2.0 are more geared towards memory-mapped busses and provide facilities to model for speed (DMI, loosely-timed blocking interfaces) or accuracy (approximately-timed non-blocking interfaces). To achieve this with the classes provided by pure SystemC takes some effort, and it ends up being proprietary... BR -Eyck
  41. 1 point
    From the SystemC standard: SystemC requires that you add sc_module_name as a parameter to module constructors, like this:

        MULT(sc_module_name, int var_a, int var_b);
        MULT(sc_module_name);

    This is a hack that allows the SystemC kernel to track the lifetime of the constructor.
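    A minimal sketch of what such a constructor looks like in a module definition (the extra parameters are illustrative):

        #include <systemc>

        SC_MODULE(MULT) {
          int a, b;
          MULT(sc_core::sc_module_name nm, int var_a = 1, int var_b = 1)
          : sc_core::sc_module(nm), a(var_a), b(var_b) {}
        };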
  42. 1 point
    Eyck

    why is,what is, and how is tlm used ?

    Hi, TLM2.0 is used to model bus-like transactions as you have in APB, AHB, AXI or the like. It abstracts those transactions into e.g. read and write with some attributes (like protection and the like). This saves you from dealing with bit wiggling and, as you have fewer events, improves simulation performance. If you use a fifo between your producer and consumer in your scenario, you are already using TLM, as you abstract the transmission of more or less complex data as a write into the fifo. If you keep this style in your design, you are doing transaction-level modeling (TLM). Whether it makes sense to use the TLM2.0 standard is a different question and depends on what aspects of your design you need to represent in your model. If you have some kind of bus structure (e.g. some microprocessor or controller) then it is highly advisable to use TLM2.0. In that case your producer calls the transfer functions of the initiator socket and your consumer needs to react on the callbacks of the target socket. Actually this does not relate to your file structure, but as a starting point I would suggest studying the lt example coming with the SystemC distribution. You will find it under examples/tlm/lt; it contains two initiator_top modules, a bus and two lt_target modules. If you reduce it to just one initiator_top and one lt_target being directly connected, you have your producer/consumer scenario. HTH
  43. 1 point
    SDeepti

    d flipflop output

    No, you need to have a loop in the function registered as an SC_THREAD. For your code, it will be:

        void d_operation() {
          while(1) {
            if (reset.read() == 1) {
              d_out = 0;
              // std::cout << "reset= " << reset << "...din= " << d_in << ".....d_out" << d_out << std::endl;
            } else {
              // d_out.write(d_in.read());
              d_out = d_in;
              // wait(1, SC_NS);
              // std::cout << "reset= " << reset << "...din= " << d_in << ".....d_out" << d_out << std::endl;
            }
            wait();
          }
        }
  44. 1 point
    sas73

    UVM Library Test Suite and Git Repository

    @David Black I did stumble upon an accellera/uvm repository on GitHub. Seems to be what I was looking for although it has been dead since UVM 1.2. Why have it there and not use that platform?
  45. 1 point
    Most likely you have C++98 in the CMake cache. You don't need to modify the SystemC CMakeLists; just set the C++ standard from the command line. Check here for detailed instructions: https://stackoverflow.com/questions/46875731/setting-up-a-systemc-project-with-cmake-undefined-reference-to-sc-core
  46. 1 point
    Philipp A Hartmann

    (E112) Port is not bound

    You've not shown the constructor of your RF module. Make sure not to access the "fir_instr" port from there. If you need to perform some initialization after the port has been bound, have a look at the "end_of_elaboration" or "start_of_simulation" callbacks. Greetings from Oldenburg, Philipp
  47. 1 point
    SystemC TLM is a part of the SystemC standard (both parts, TLM1 and TLM2). True, it is a newer addition, but it is nevertheless part of the standard. TLM1 was the first attempt to standardize an API, which worked, but it didn't address the needs of the SystemC community as well as had been hoped. TLM2 standardizes a methodology to model address-mapped bus communications and the associated API. It allows for easier exchange of IP blocks for simulation. TLM emphasizes that "ports" are not just wiring connection points, but rather a nexus for higher levels of communication. TLM2 has "sockets", which are really just glorified SystemC port combinations (sc_port & sc_export). TLM2 also standardizes some concepts (even if not stringently) of different styles of transaction level modeling (sometimes called coding styles). For instance, "loosely-timed" (LT) represents "execute as fast as possible while maintaining register functional accuracy"; whereas "approximately-timed" (AT) means "provide sufficient timing detail to allow bus-level timing analysis[1]". AT does not simulate as quickly as LT because it has to provide extra details and timing behavior. Note 1: Not necessarily the same as clock cycle accuracy.

    On the other hand, the SystemC core provides the fundamental mechanisms that allow for design encapsulation (sc_module), event-driven simulation (sc_event and wait()), processes (SC_THREAD, SC_METHOD), a notion of simulated time (sc_time) and channels (sc_interface, sc_prim_channel, sc_channel). Channels are one of the most important features and enable abstraction of safe interprocess communications. SystemC also provides the minor addition of hardware datatypes (sc_logic, sc_int<>, sc_fixed<>, etc.). It also provides primitive communication channels such as sc_signal<> and sc_fifo<>. Thus, the SystemC core provides the foundation needed to implement TLM.

    Sadly in some sense, SystemC provides a number of simplifications for writing RTL even though that is fundamentally not the strong point of the simulator. To some degree these simplifications are a holdover from SystemC 1.0 for backwards compatibility. I said sad because the simplifications have encouraged many to think of SystemC as an appropriate vehicle for writing RTL code, who then get frustrated at the lack of performance (for RTL). SystemVerilog and VHDL are much better suited for that task. The RTL aspect of SystemC is good in making it easier to interface SystemC to the other languages for co-simulation.
  48. 1 point
    apfitch

    user defined data type signal assignment

    How do you know the assignment is not working? When are you printing out the assigned value? Remember that you must wait at least a delta cycle for a primitive channel to update. There's more about user-defined types and sc_signal here: http://www.doulos.com/knowhow/systemc/faq/#q1 regards Alan
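    A brief sketch of what an sc_signal-friendly user-defined type typically needs (the type itself is illustrative): operator==, a stream output operator and, for tracing, an sc_trace overload:

        #include <systemc>
        #include <iostream>
        #include <string>

        struct rgb_t {
          int r = 0, g = 0, b = 0;
          bool operator==(const rgb_t& o) const { return r == o.r && g == o.g && b == o.b; }
        };

        inline std::ostream& operator<<(std::ostream& os, const rgb_t& v) {
          return os << "(" << v.r << "," << v.g << "," << v.b << ")";
        }

        inline void sc_trace(sc_core::sc_trace_file* tf, const rgb_t& v, const std::string& nm) {
          sc_core::sc_trace(tf, v.r, nm + ".r");
          sc_core::sc_trace(tf, v.g, nm + ".g");
          sc_core::sc_trace(tf, v.b, nm + ".b");
        }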
  49. 1 point
    First of all, you should refer to the standardized API in IEEE Std. 1666-2011 instead of looking at the source code of the proof-of-concept implementation. This way you make sure that your solution works in other standards-compliant implementations as well. I assume there are specific reasons why you can't use a proper sc_port (e.g. sc_fifo_in/out), connected to the particular sc_fifo instance of interest? Looping through the design hierarchy to manipulate the model structure is a rather special case in SystemC. No. As you have seen in your tests, the add_static function is private and non-standard. It is an implementation artefact. Instead, you should spawn a process, which is then sensitive to the events. Processes are the natural way to specify event-triggered functionality in SystemC. If you need to keep a handle to an event, you can use a C++ reference (or pointer) instead of creating a new event instance. Events have "identity" and cannot be copied or assigned. This is the case for many structural elements in SystemC.

        sc_event const & ev_ref = sc_fifo_obj->data_written_event();

    In the easiest case, you can just create a plain SC_METHOD process, being sensitive to your event. Of course, this requires wrapping the "callback" in a module instance. To make sure your FIFO has been created already, you should create this process in the end_of_elaboration callback of the module:

        SC_MODULE( fifo_callback_handler ) {
          // ...
          void callback_process();
          void end_of_elaboration() {
            // ... find fifo object
            SC_METHOD(callback_process);
            sensitive << sc_fifo_obj->data_written_event();
            dont_initialize(); // do not run at start of simulation
          }
        };

    You can also use dynamic processes (sc_spawn, sc_bind) to create processes dynamically during the simulation. See 1666-2011, Section 5.5. Greetings from Oldenburg, Philipp
  50. 1 point
    Admin

    Welcome!

    Welcome to the Accellera Systems Initiative forums. Please register to start new topics or reply to existing topics. We have recently migrated our UVM forums from UVMWorld to this site. If you were registered on the previous UVM forum site, you should be able to log into the forums using your username and password from the UVMWorld forums. If you had an account on both the UVMWorld forums and the Accellera forums and these accounts used the same email address, then log in with the username and password of the forums.accellera.org account, not your UVMWorld account. If you do not remember your password, you may reset it. If you have any questions about using the forums, click the Help button at the bottom of any forum page. If you need any help with your account and you are logged into the site, click the Messenger icon (a letter) in the upper right of your screen, click Compose New, enter "admin" in the Recipient's Name field, compose your message, and then click Send. You may also send an email to admin@lists.accellera.org. Thank you, Accellera Systems Initiative