Everything posted by maehne

  1. The SystemC PoC implementation includes a subset of the Boost libraries. The Stack Overflow question you found clearly points out the source of the error (`visualc.hpp`). In the SystemC PoC implementation, this header is located in `src/sysc/packages/boost/config/compiler/`. At the end of that file, it is stated that the "last known and checked version is 19.16.27032.1 (VC++ 2017, Update 9)". Your version of Visual Studio is probably newer, and therefore you receive this warning. Until the Boost version packaged with SystemC is updated, you can either live with it or patch the responsible header so that it no longer prints the message. In fact, Boost itself has had the warning commented out in that header since March 2018, as they could no longer keep up with the pace of Visual Studio releases. I will report the issue to the Language Working Group so that this can be addressed in the next release.
  2. Your example code is not fully self-contained. Nevertheless, I wonder why you are waiting for 2 s at the end of each loop iteration in your consumer thread. The `wait()` call at the beginning of the loop should be sufficient.
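     For illustration, a minimal sketch of what I mean (the FIFO, data, and function names are hypothetical):

     ```cpp
     void consumer() {                      // SC_THREAD with static sensitivity
         while (true) {
             wait();                        // block until new data is signalled
             int data = fifo.read();        // consume and process the value
             process(data);
             // wait(2, sc_core::SC_SEC);   // redundant: not needed for correctness
         }
     }
     ```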
  3. Your question is not stupid at all, but it requires knowledge of some less widely known corners of IEEE Std 1666-2011. Typically, people constrain the word length of their data types using the template parameters, because this has the additional advantage of enforcing correct connectivity. As you noted, the port and signal types construct their value types using the default constructor. So, to configure the word lengths as required in your use case, you can take advantage of the fact that SystemC integers and vectors get their default length from the sc_dt::sc_length_context currently in scope. I recommend reading up on the topic in chapter 7 of IEEE Std 1666-2011, in particular clause 7.2.3 "Base class default word length". I think this should allow you to implement all aspects of your use case.
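     A minimal sketch of the mechanism (the word length and all names are hypothetical):

     ```cpp
     #include <systemc>
     #include <iostream>

     int sc_main(int, char*[])
     {
         sc_dt::sc_length_param len(12);     // desired default word length
         sc_dt::sc_length_context ctx(len);  // becomes the current length context

         // Default-constructed while ctx is active, so both the signal's value
         // and the local variable get a word length of 12 instead of 32.
         sc_core::sc_signal<sc_dt::sc_int_base> s("s");
         sc_dt::sc_int_base v;

         std::cout << "length of v: " << v.length() << std::endl;         // 12
         std::cout << "length of s: " << s.read().length() << std::endl;  // 12

         return 0;
     }
     ```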
  4. Skimming over your code snippets, I don't see a line that would cause the error from your thread title. Try to reduce your code to a self-contained example that exposes the problem. As the error message states, you cannot create new modules after elaboration has finished and the simulation is running.
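     To illustrate the rule (module names are hypothetical): sub-modules have to be created while elaboration is still running, e.g., in the parent module's constructor, not from a process after sc_start():

     ```cpp
     #include <systemc>

     SC_MODULE(leaf) { SC_CTOR(leaf) {} };

     SC_MODULE(top) {
         leaf ok_instance;                     // created during elaboration: fine

         SC_CTOR(top) : ok_instance("ok_instance") {
             SC_THREAD(run);
         }

         void run() {
             wait(10, sc_core::SC_NS);
             // new leaf("too_late");          // creating a module here, after
                                               // elaboration, is an error
         }
     };
     ```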
  5. Check IEEE Std 1666-2011, clause 7.9.8.2. `sc_lv<32>` has a constructor and an assignment operator for `uint64`, so your approach should be safe. If you want to be more explicit, you can create the temporary `sc_lv<32>` object yourself before passing it to the `write()` function.
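     A minimal sketch of both variants (the signal name and value are hypothetical):

     ```cpp
     #include <systemc>

     int sc_main(int, char*[])
     {
         sc_core::sc_signal<sc_dt::sc_lv<32> > bus("bus");
         sc_dt::uint64 value = 0xDEADBEEFULL;

         bus.write(value);                      // implicit conversion to sc_lv<32>
         bus.write(sc_dt::sc_lv<32>(value));    // the same conversion made explicit

         return 0;
     }
     ```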
  6. As @Eyck suggested, constructor parameters should best fit your needs. You can even give them default values where sensible. If the number of parameters grows, grouping them in a struct may become handy. Its members can be default-initialised, and you can override them with assignments before passing the whole struct to the module constructor. Personally, I like to first check these parameters for consistency and legal ranges in the constructor or member function to which I pass the struct, e.g., using assertions, before actually using them to describe any behaviour or internal structure.
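     A minimal sketch of this pattern (all names, default values, and ranges are hypothetical):

     ```cpp
     #include <systemc>
     #include <cassert>

     struct fifo_params {
         unsigned depth      = 16;   // default-initialised members
         unsigned data_width = 32;
     };

     SC_MODULE(fifo) {
         const fifo_params p;

         fifo(sc_core::sc_module_name, const fifo_params& params = fifo_params())
           : p(params)
         {
             // check consistency and legal ranges before using the parameters
             assert(p.depth > 0 && "FIFO depth must be positive");
             assert(p.data_width > 0 && p.data_width <= 64);
         }
     };

     // Usage: override only the members that differ from the defaults.
     // fifo_params params;
     // params.depth = 64;
     // fifo dut("dut", params);
     ```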
  7. Thanks for your suggestions, I reported them to the SystemC Language Working Group.
  8. The group of Daniel Große from the University of Bremen, now at Johannes Kepler University Linz, has released a RISC-V-based virtual prototype under the MIT license, which could be of interest to you.
  9. You should not edit the configure file directly, but rather the file configure.ac. After that, you'll have to run config/bootstrap to regenerate the build system. For this to work, you'll have to install GNU libtool, GNU automake, and GNU autoconf, e.g., through MacPorts.
  10. If you want to set the initial value of the signal to which the port int_o will be bound during elaboration, you can use the member function `initialize()` (cf. clause 6.10.4 of IEEE Std 1666-2011).
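      A minimal sketch (the module name and values are hypothetical; the port name int_o is taken from your post):

      ```cpp
      #include <systemc>

      SC_MODULE(driver) {
          sc_core::sc_out<int> int_o;

          SC_CTOR(driver) : int_o("int_o") {
              int_o.initialize(42);          // initial value of the bound signal
              SC_THREAD(run);
          }

          void run() {
              wait(10, sc_core::SC_NS);
              int_o.write(0);                // first regularly driven value
          }
      };
      ```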
  11. Thanks @William Lock, for sharing your experience of building SystemC on macOS for the Apple M1 architecture. I opened an issue on the LWG's internal tracker to update our build scripts so that this works out of the box in the future.
  12. If you add the wait() into the inner for loop, the algorithm will be implemented using an FSMD architecture. Each sample will then take 4 clock cycles to get processed, and a new sample is accepted only every 4 clock cycles as well. If you want your algorithm to be pipelined, your loop needs to be unrolled. Depending on your HLS tool and your coding style, the synthesizer might infer the pipeline automatically, require a hint in the form of a pragma, or you'll have to rewrite your model. Similar to classic HDLs, it helps to first imagine the structure and behaviour of the hardware you want to implement and then try to express it in code following the recommended coding styles of your tool. A for loop with a fixed number of iterations can behave like a for-generate statement in VHDL. Then, `i` can simply be of type `int`, because it is just an index and not a value that has to be kept in a register.
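      For illustration, a sketch of the FSMD variant (all names are hypothetical; 4 loop iterations as in your example):

      ```cpp
      void mac_thread() {                      // clocked thread, e.g. SC_CTHREAD
          while (true) {
              int acc = 0;
              for (int i = 0; i < 4; ++i) {    // i is only a loop index, plain int is fine
                  acc += coeff[i] * sample[i];
                  wait();                      // one multiply-accumulate per clock cycle
              }
              result.write(acc);               // a new result only every 4 clock cycles
          }
      }
      ```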
  13. Your code snippet of tx_top::process() confirms that it gets activated once per rising edge of the clock and then waits until the next rising edge. All code that gets executed in your while loop (including the function calls) executes in the same delta cycle. It's important to be aware that tracing of signals and variables happens not upon assignment to them, but as part of the simulation cycle, i.e., a new trace value only gets recorded once there are no new events to process for the current time (because all signals have stabilised). After that, the simulator advances time to the next moment at which an event occurs. This explains why you are observing in your VCD trace only the final values of your traced internal variables from the end of your function executions. So, @AmeyaVS's hypothesis in his first reply was right to the point! If you want to trace your algorithm execution within a delta cycle, you are on your own. One option is to set a breakpoint on the respective function and step through it while monitoring the evolution of the variable values. Another option is to output the values at strategic points in your code to some output stream.
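      A sketch of the second option (the variable name is hypothetical):

      ```cpp
      // Inside tx_top::process(), at a strategic point in the computation:
      std::cout << sc_core::sc_time_stamp()
                << " tx_top::process: intermediate value = " << value << std::endl;
      ```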
  14. A running simulation is no indication that your model follows established SystemC coding practices. The suggestions by @AmeyaVS are all valid. The code snippets you provided only use SystemC data types so far, but they don't define a module class with a port interface and processes. Without a minimal, self-contained, and executable example exposing your issue, you are making it difficult for others to give you good feedback. Instead of pasting all the code, you can also attach a ZIP archive and keep the code snippets to the parts you think are relevant for your problem. I recommend reading a good introductory book on SystemC to get familiar with it and the associated modelling methodologies.
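      For reference, a minimal sketch of what such a module looks like (all names and widths are hypothetical):

      ```cpp
      #include <systemc>

      SC_MODULE(adder) {
          sc_core::sc_in<sc_dt::sc_uint<8> >  a, b;   // port interface
          sc_core::sc_out<sc_dt::sc_uint<9> > sum;

          SC_CTOR(adder) {
              SC_METHOD(add);                          // process registered with the kernel
              sensitive << a << b;
          }

          void add() { sum.write(a.read() + b.read()); }
      };
      ```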
  15. You still don't provide enough context to enable us to give you a good response. Ideally, you should provide us with a self-contained, executable example exposing your problem. It is also important to know on which platform and with which compiler and library versions you built your application, and what the full error message was. Your code snippet seems to stem from some test program that is part of the ac_math distribution hosted on GitHub. So, if this gets triggered on your platform when following their build instructions, I suggest checking their issue tracker and raising a new issue if needed.
  16. No, sc_vector is part of IEEE Std 1666-2011 (cf. clause 8.5), which describes SystemC including TLM-2.0 and is available for free through the IEEE Get Program. Additional information on how to use it can be found, e.g., here, here, and here.
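      A minimal sketch of its use (names and sizes are hypothetical):

      ```cpp
      #include <systemc>

      SC_MODULE(top) {
          sc_core::sc_vector<sc_core::sc_signal<int> > regs;   // 8 named signals

          SC_CTOR(top) : regs("regs", 8) {
              // elements are accessed like an array, e.g.: regs[0].write(42);
          }
      };
      ```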
  17. Be aware that SystemC itself is compiled with g++/clang++ using the options `-Wall -Wextra -Wno-unused-parameter -Wno-unused-variable`. Any additional warning options might trigger additional warnings inside the SystemC headers. You may therefore point g++ to the headers using `-isystem` instead of `-I`.
  18. Hello Sumit, no, gcc 9.3.0 has not yet been officially tested, as you can see from the RELEASENOTES. However, you can do so yourself by building SystemC with that compiler and running the regression test suite matching your version of the SystemC library. I am routinely using SystemC with newer compiler versions. If you should observe issues, you can report them here or even better on GitHub. Regards, Torsten
  19. Thanks for sharing the results of debugging and fixing your issue! That's helpful for others.
  20. You are referring to section 8.2.2.2 "Modeling noise in the time domain" in the SystemC AMS User's Guide. That document explains in section 5.3.2 "Laplace transfer functions" how to instantiate a Laplace transfer function within a TDF primitive module to filter an input signal. If you want to generate transient coloured noise, all you have to do is specify the coefficients of that transfer function according to the characteristic of the kind of noise you want to model (see, e.g., Wikipedia as a starting point). Then, instead of feeding the input samples to the LTF function, you feed it the samples generated by calls to gauss_rand() and add the return value to your signal as your coloured-noise contribution. Note that std::rand() gives no guarantees on the quality of the generated random numbers. Since C++11, there are better ways to generate pseudo-random numbers satisfying various distributions.
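      A sketch of this approach (the filter coefficients, noise parameters, and all names are hypothetical); it uses the C++11 <random> facilities instead of the std::rand()-based gauss_rand():

      ```cpp
      #include <systemc-ams>
      #include <random>

      SCA_TDF_MODULE(noisy_path) {
          sca_tdf::sca_in<double>  in;
          sca_tdf::sca_out<double> out;

          sca_tdf::sca_ltf_nd ltf;                  // Laplace transfer function solver
          sca_util::sca_vector<double> num, den;    // numerator/denominator coefficients
          std::mt19937 gen{42};                     // fixed seed for reproducibility
          std::normal_distribution<double> gauss{0.0, 0.1};

          SCA_CTOR(noisy_path) : in("in"), out("out") {
              num(0) = 1.0;                         // shaping filter coefficients chosen
              den(0) = 1.0; den(1) = 1.0e-3;        // to match the desired noise spectrum
          }

          void processing() {
              double white  = gauss(gen);           // white Gaussian sample
              double shaped = ltf(num, den, white); // coloured noise contribution
              out.write(in.read() + shaped);
          }
      };
      ```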
  21. @coderoo: As a start, you can have a look at this blog post by Chethan. If you want to avoid having to deal with non-deterministic simulation results due to race conditions in the future, you may consider switching from Verilog to VHDL. 😉
  22. Thanks @Guillaume Audeon for this additional piece of information!
  23. The application you want to model will certainly profit from using the AMS extensions in the right places. In fact, there have been several attempts to further enhance the modeling capabilities of the SystemC AMS extensions to more directly support the modeling of multiphysical systems including non-linear behaviour. It was the topic of my PhD thesis, and work continued in the European H-INCEPTION project. Publications regarding SystemC-AMS can be found, e.g., here, here, and here. Also, COSEDA Technologies provides some solutions in their commercial offerings. If you should require further consulting in this area, feel free to contact me directly.
  24. You are welcome! In general, simulation performance improves as you raise the level of abstraction of your system model. The choice of the model(s) of computation, i.e., the solver(s) which execute(s) your model, matters as well. For DSP applications, the TDF MoC is particularly well suited: the static schedule used to call the processing() member functions of the connected TDF modules in the right order ensures high performance. Thanks to its multi-rate capability, it is also easy to, e.g., use coarser time steps for processing the samples in the base-band part than for the samples in the HF part of a transceiver model (see the sketch below). Laplace transfer functions and state-space equations can also be embedded inside TDF modules for continuous-time filtering of samples; each equation system is solved by its own solver instance, ensuring optimum performance.
      On the other hand, the LSF and ELN MoCs are best suited to model continuous-time non-conservative and conservative behaviour, respectively. Clusters of connected LSF and ELN primitives form one equation system, which is solved by a linear solver. Performance will depend on the size of the equation system and the resulting time constants, which constrain the maximum time step that still yields acceptable precision. So, prefer LSF over ELN for continuous-time signal processing unless you actually need to model a conservative system with across and through quantities.
      You can certainly model non-linear differential equations with TDF. Their dynamic properties will usually impose a maximum on the usable simulation time step. The worst case is that you need to simulate your whole system with a very small time step for the whole simulation duration, because some highly dynamic non-linear behaviour requires it for convergence in certain operating regions that are entered relatively seldom, while in other operating regions a considerably bigger time step would be sufficient. Depending on your system topology, you might be able to optimise overall performance by employing the Dynamic TDF features in the cluster of TDF modules modelling the non-linear part of your system, which allow you to modify the time steps and sample rates on the fly during simulation. However, I would always first try a simple modelling approach and integrate advanced features only when they are really needed because simulation performance is an actual problem.
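      A sketch of the multi-rate idea mentioned above (rates, time steps, and names are hypothetical):

      ```cpp
      #include <systemc-ams>

      SCA_TDF_MODULE(decimator) {
          sca_tdf::sca_in<double>  hf_in;    // high-frequency side
          sca_tdf::sca_out<double> bb_out;   // base-band side

          SCA_CTOR(decimator) : hf_in("hf_in"), bb_out("bb_out") {}

          void set_attributes() {
              hf_in.set_rate(8);                        // 8 samples consumed per activation
              hf_in.set_timestep(1.0, sc_core::SC_NS);  // HF sample period
              // bb_out keeps the default rate of 1, so its time step becomes 8 ns
          }

          void processing() {
              double acc = 0.0;
              for (unsigned i = 0; i < 8; ++i)
                  acc += hf_in.read(i);                 // indexed read of the i-th sample
              bb_out.write(acc / 8.0);                  // simple averaging decimator
          }
      };
      ```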