
David Black

Everything posted by David Black

  1. Simply being sensitive to the clock port should work. Replace:

         SC_CTHREAD(fetch, clock.pos()); // or: SC_CTHREAD(fetch, clock.neg());

     with:

         SC_CTHREAD(fetch, clock);

     When you are sensitive to a port, SystemC infers sensitivity to the port's default_event() method. For the sc_signal class, default_event() maps to value_changed_event(), and sc_clock provides an sc_signal<bool>.
  2. How did you install your SystemC?
       • autotools (i.e., mkdir build && cd build && ../configure --prefix=INSTALLDIR && make && make install)
       • cmake (i.e., cmake -B build; cmake --build build; cmake --install build)
     Did you *carefully* read the install instructions (every point), or did you just rush through without comprehending what each step does? What platform (i.e., machine), OS (e.g., Ubuntu, CentOS, macOS, mingw, ...), OS version, and compiler (e.g., GCC, LLVM) are you using? What new features of SystemC 2.3.4 are you attempting to exploit? Have you considered adding the SystemC lib path to your LD_LIBRARY_PATH environment variable (to capture the .so files)?
  3. This is a question for Arm and is not appropriate for UVM discussions. Please go to developer.arm.com to find your answers. We want this forum to remain focused.
  4. If you are using TLM-2, you should be buffering only the references and not the data itself. Backpressure can be modeled using the TLM_ACCEPTED status of the non-blocking transport calls. If your bus allows more than one outstanding request, you will need some form of storage for those references, and std::vector should be fine.
  5. A key to successful C++ is to turn on -pedantic -Wall -Wextra and accept NO WARNINGS. If you think a particular warning is acceptable (check the language manual), then you can use an attribute or pragma with a carefully worded comment to make singular exceptions around the smallest fragment of code you can. Using a proper toolchain and running all the static checks helps as well. I have found JetBrains' CLion (a commercial cross-platform IDE) to be immensely helpful and easy to use in this regard. A lot of large corporations have adopted CLion for C++ development. JetBrains works hard to improve CLion and has a very responsive support team. Of course they can only comment on C++ issues, but most SystemC issues are actually C++ issues. Visual Studio Code with the right plugins can do similarly. The SystemC LWG is working on releasing version 2.3.4, but they require consensus of the membership and there are some other issues. I am not allowed to say more. If your company is an Accellera member, you can join the discussion. Anybody can use https://github.com/accellera-official/systemc.git , which defaults to an early version of 2.3.4 (I believe it is technically beta status). I have had no issues with it.
  6. I use SystemC compiled with C++17 using GCC and Clang on Intel and Arm architectures without any issues (Ubuntu, macOS, and WSL2 Ubuntu). Better features and better performance. I also regularly use cmake (less hassle). The key is that SystemC requires you to use the same compiler version for your SystemC code as was used to build the library. There are some switches to fall back to older standards, but I don't see why you would do that. C++17 has so many nice features.
  7. Actually, there is an easier way to model that delay. Simply use a spawned SC_THREAD:

         // The following is just a small code snippet in the context of the original posting
         void do_dff8() {
           if (RST->read() == 0) {
             Q0->write(0);
           } else {
             sc_spawn( [&]() {
               wait( 10, SC_NS );
               Q0->write(D0->read());
             });
           }
         }
  8. It seems weird to me that you can only use a DLL. The way I see most folks use SystemC under VS is to create a SystemC project and then just link it into the result (i.e., statically). I tend to use SystemC exclusively under Linux or macOS (a BSD-derived Unix). WSL2 Ubuntu works very nicely under Windows 10/11. Perhaps you can use cmake to accomplish your goal. SystemC does have a cmake pathway. I use it all the time now because it is much easier to use than the antiquated autotools approach. Although the cmake approach is marked as experimental in the SystemC README, I can assure you that it works quite nicely and I've found it easier to use across platforms. VS does have cmake support.
  9. First, why the focus on a DLL? You should be focused on obtaining an executable simulation model. If you need a library, create a static one. That type of library has practically no application for SystemC, since it is not usually placed in an environment that needs so much sharing at runtime. A serious simulation usually wants an entire processor to itself. Also, SystemC is currently single-threaded due to some fundamental aspects of event-driven simulators. There has been work toward multi-core, but that is not a completely solved problem. Due to heavy usage of templates, SystemC is heavily include-file dominated, so libraries don't buy you much anyhow. The next question I would ask: where did you define sc_main? This is the required top-level function of any SystemC application. SystemC itself provides main, because models are often linked into cosimulations under EDA tools that have already defined main. Cosimulations provide interaction with hardware models written in RTL SystemVerilog or VHDL. There may also be interactions with SystemVerilog UVM.
  10. Simplistic answer: because that is how the standards committee defined it originally, and a lot of code now depends on this behavior. Rationale answer: because most issues revolve around the component hierarchy (what component is broken or causing this issue?). You can obtain the effect you may be seeking by creating a bogus transaction component at the top level and then using the `uvm_info_context family of macros to specify the controlling component. You may need more than one of these for some situations.
  11. Yes, your reasoning on why they did a macro is correct. They had to work around the existing syntax of SystemVerilog. In uvm-systemc, they may still provide these as a way of providing familiarity for those coming from a SystemVerilog approach. Nevertheless, I would avoid macros in my implementations. A refactored variation of the solution can move those class definitions inside the channel class. See solution 2 here: https://www.edaplayground.com/x/gzXP. Although less code and fewer files, this approach suffers in case you wish to refine or provide alternatives later.
  12. See https://www.edaplayground.com/x/S5J3 for an example of my suggestion. W.r.t. the question, no, we don't have macros such as UVM's uvm_analysis_imp_decl. Most C++ programmers consider macros to be evil and dirty. The C++ standards are evolving C++ to remove the need for macros altogether. Macros create bugs that are messy to debug and avoid proper type checking, etc. For example, consider:

          #include <iostream>
          #define MAX(x,y) (((x)>(y))?(x):(y))
          int main() {
            int i = 1;
            double j = 1.1;
            std::cout << MAX(++i, j) << '\n'; //< what gets printed?
            return 0;
          }

      Those macros were required in UVM because SystemVerilog has a number of syntactic issues due to the time at which it was invented; the lack of proper multiple inheritance was the primary issue. To be sure, there are still some good uses for macros, but we should be thinking of eliminating them when possible.
  13. First, there are no "register" methods for basic SystemC nor is this needed. This is really just a basic C++ question; however, you can use sc_export as part of the solution. You could redirect the call to a submodule that implements the method and bind the appropriate sc_export there. That is what the convenience sockets of TLM-2 do to implement their registration mechanism.
  14. Use a TLM tagged socket from the utilities. There are several to choose from. They allow you to register the method of choice or you can use the tag to differentiate.
  15. There are no sticky methods as you suggest; however, it would not be too hard to construct one. Here is an outline:

          struct Some_module : sc_module {
            sc_buffer<Data_t> my_buff;
            bool changed{false};
            SC_CTOR( Some_module ) {
              SC_METHOD( method_process );
              sensitive << my_buff;
              SC_THREAD( main_thread );
            }
            void method_process() { changed = true; }
            bool was_changed() {
              auto result = changed;
              changed = false;
              return result; // return the saved value, not the cleared flag
            }
            void main_thread() {
              ... do stuff ...
              if( was_changed() ) { ...stuff with changed info... }
              ...
            }
          };

      Of course, you could replace my_buff with a port (sc_in<Data_t>) that was tied to the sc_buffer<Data_t> externally.
  16. The definition of a channel in SystemC is a class that derives from a channel base class and implements one or more interfaces. In the case of sc_fifo, the base class is sc_prim_channel, and there are two interfaces, which you appear to have discovered. However, you are trying to derive from the interfaces, which will not work. Interfaces are merely the API and should/do not contain implementation. You should derive from the sc_fifo<T> class directly to override the read method. Then you can call sc_fifo<T>::read(). This is all basic C++ once you see the architecture.

          // The following is for conceptualization and does not represent the full details
          // of sc_fifo, which can be found in the IEEE-1666-2011 standard document.
          // The depth might be an int.
          template<typename T>
          struct sc_fifo_in_if : virtual sc_interface {
            virtual T read() = 0;
          };
          template<typename T>
          struct sc_fifo_out_if : virtual sc_interface {
            virtual void write(const T&) = 0;
          };
          template<typename T>
          struct sc_fifo : sc_prim_channel, sc_fifo_in_if<T>, sc_fifo_out_if<T> {
            sc_fifo(const char* nm, size_t depth = 16)
            : sc_prim_channel{ nm }, m_depth{ depth }
            {
              sc_assert( depth > 0 );
              ...
            }
            T read() override { ... }
            void write(const T& data) override { ... }
            // other methods...
          };

          // Outline of what you do
          template<typename T>
          struct my_fifo : sc_fifo<T> {
            // IMPORTANT: You must pass an instance name and depth back to the base class
            static constexpr size_t default_depth = 16; // or whatever value you want
            my_fifo(const char* nm, size_t depth = default_depth)
            : sc_fifo<T>{ nm, depth }
            {
              sc_assert( depth > 0 );
              ...
            }
            T read() override {
              auto data = sc_fifo<T>::read();
              // log data from here
              return data;
            }
          };
  17. Why use SC_METHOD instead of SC_THREAD? If your answer is performance, you are very likely committing the sin of premature optimization. The overhead of SC_THREAD is very minimal (a few percentage points), and the benefit of easier coding is very high. SystemC is as much about faster coding as it is about high-performance models. Anything you can do in SC_THREAD can be accomplished in SC_METHOD, but with a complexity cost (i.e., increased development time and the likelihood of bugs). You could use next_trigger( time); return, and add some state to your code in order to resume at the proper point.
  18. If you are using Modern C++ (C++14 or C++17 would be best), then you could accomplish this with constexpr functions. You should also consider using sc_vector. Vanilla C/C++ arrays are simply evil anyhow. For non-SystemC components (i.e., things that are not modules, ports, or channels), use std::vector<T> or std::array<T,N>. Also, you would save some code space if you used simple constructor arguments for your ShiftRegister and implemented it with std::vector.
  19. Read up on fork-join, fork-join_any, and fork-join_none. You can disable any labeled block or task. You can also kill processes if you know the process id. Your "After fork" won't execute until ALL three processes within the join complete. You likely want fork-join_any or fork-join_none. Also, you only have three threads/processes (the three labeled blocks). Your $displays inside those threads will execute sequentially.
  20. The message means "You have not connected the port core_pim.decoder_addr.IN." The error happens at the end of elaboration and so you never get to sc_start.
  21. I won't show you a library, but I have one comment: SystemC with TLM is generally used to model at the cycle-"approximate" level, which is the basis for the Approximately-Timed modeling style described in the standards document. Virtual platforms and behavioral models tend to favor the less detailed Loosely-Timed modeling style for reasons of performance. If you model at the cycle-"accurate" level, then I don't see why you don't just go for RTL modeling, since the overhead of cycle accuracy will tend to slow down the model to the same level. Cycle-approximate simply means that transactions take place with roughly equivalent timing, but there are no events for each clock edge. In fact, efficient and fast models do not have any sc_clocks at all. They can still be very close in timing results but eliminate the incessant context switches imposed by real clocks (not to mention that half of the context switches are not even useful).
  22. I concur with @Eyck on the use of explicit read/write. Using the operator= override can lead to unexpected interpretation, and the explicit presence of method calls alerts the reader to the presence of a channel behavior. If you care about simulation performance, I would avoid sc_biguint and sc_uint if possible. Native C++ types work great and are much faster. Many models are written without using the SystemC datatypes at all. Some groups even forbid them except at the interface points to SystemVerilog or VHDL for this reason. Bit twiddling is a simple art. For really large vectors, sc_bv is fine, but you might also consider std::vector<bool> and std::bitset<N>. [See https://stackoverflow.com/questions/4156538/how-can-stdbitset-be-faster-than-stdvectorbool for a comparative comment.]
  23. I suggest that you invoke your compiler with -DSC_INCLUDE_DYNAMIC_PROCESSES to avoid the issue of making sure all #include's of systemc are preceded by it. You might also consider using https://www.edaplayground.com to put your code when sharing. That way others can examine/execute the code with less effort. It's entirely free and quite popular.
  24. To use sc_spawn, yes, you need SC_INCLUDE_DYNAMIC_PROCESSES. It's a compile-time option that simply allows the code into compilation and has nothing to do with the definition of dynamic vs static in the strictest sense. Also, SC_THREAD/SC_METHOD do not allow using the same class member more than once if I recall correctly. So you will need to use sc_spawn.
  25. @acc_sysC, to help clarify a bit more: sc_signal<T> is a channel, not data. Conceptually, channels provide transport of data for communication. The write() method deposits a copy of the data (as a whole) into the sc_signal channel's write buffer (next_value). At the end of the delta cycle, the write buffer is copied into the current_value in its entirety. The read() method simply returns a copy of the current_value. sc_bv is a data type, not a channel, and supports bit-specific access (reading and writing) via operator[]. Next, if you use ports (e.g., sc_in<T> or sc_out<T>, which are really just specializations of sc_port<sc_signal_in_if<T>> and sc_port<sc_signal_inout_if<T>> respectively), they are effectively pointers to channels that support the channel's APIs. I mention this because newbies to SystemC frequently confuse the ideas of data, channels, and ports. Finally, before you ask: no, there is unlikely to be any attempt to "fix" SystemC with extensions that make modeling at this (low) level more comfortable. SystemC attempts to raise the abstraction level and tends to avoid bit/pin twiddling. If you need to code RTL, please use SystemVerilog or VHDL.