Everything posted by maehne

  1. Dear Joshua, Thank you for your bug report! I have forwarded the issue to the internal issue tracker of the SystemC Language Working Group so that it can be examined and fixed in a future version of the regression test suite. Regarding your port to AArch64, you can post about the necessary modifications in this forum. There is also an upload area accessible via a link on the SystemC Community page of the Accellera website. For possible integration, your modifications have to be licensed under the Apache license. Best regards, Torsten Maehne
  2. Hello Thomas, The voltage drop across an ideal current source is not defined unless some load is attached to it, e.g., a reasonably sized resistor. The voltage sink primitive you used is also ideal, i.e., its internal resistance is infinite, so it does not fix the voltage across the current source. By the way, it is similar for voltage sources: they only fix the voltage drop across them, while the current through them is determined by the attached load. Best regards, Torsten Maehne
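     A minimal sketch of such a setup (module names and parameter values are illustrative, not taken from your model; the isource constructor arguments follow the init_value/offset convention of the ELN primitives): the load resistor is what fixes the voltage that the voltage sink then observes.

         #include <systemc-ams>

         SC_MODULE(current_source_with_load)
         {
             sca_eln::sca_node n1;       // node between source and load
             sca_eln::sca_node_ref gnd;  // reference node (ground)

             sca_eln::sca_isource i_src; // ideal current source
             sca_eln::sca_r r_load;      // load resistor fixing the voltage drop
             sca_eln::sca_vsink v_probe; // ideal voltage sink (infinite input resistance)

             SC_CTOR(current_source_with_load)
             : n1("n1"), gnd("gnd"),
               i_src("i_src", 1.0e-3, 1.0e-3), // 1 mA DC (init_value and offset)
               r_load("r_load", 1.0e3),        // 1 kOhm load => 1 V across the source
               v_probe("v_probe")
             {
                 i_src.p(n1);   i_src.n(gnd);
                 r_load.p(n1);  r_load.n(gnd);
                 v_probe.p(n1); v_probe.n(gnd);
             }
         };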
  3. Have a look at clause 4.5.7 "Functions to detect pending activity" in the IEEE Std 1666-2011 SystemC LRM. The functions described in this clause should allow you to implement the functionality you are looking for.
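     A small sketch of how these functions can be combined (the loop structure is just one possible use, assuming you want to run until no activity is pending anymore):

         #include <systemc>

         int sc_main(int, char*[])
         {
             // ... instantiate and bind your design here ...

             // Run as long as processes or events are still scheduled.
             while (sc_core::sc_pending_activity())
             {
                 // Advance exactly to the next point in time with pending activity.
                 sc_core::sc_start(sc_core::sc_time_to_pending_activity());
             }
             return 0;
         }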
  4. Hello Sumit, my advice would be to not mark member functions as noexcept if they contain calls to SystemC functions or to a library that does throw. Otherwise, any uncaught exception leaving the context of the noexcept-declared function will yield a call to std::terminate(). Yes, annotating functions with noexcept may enable the compiler to do additional optimization. However, many modern compilers already generate code that doesn't have a performance penalty on the path where no exception occurs. In case an exception does occur, you won't care about the performance penalty caused by the stack unwinding. However, you usually do care about being able to deal with exceptions in places where you have enough knowledge to properly handle the problem, which is not necessarily the function you declared noexcept. So, in many cases you probably won't gain much by declaring your member functions noexcept. Still, there are certain places where declaring functions noexcept promises more advantages: in particular, move operations, swap, memory deallocation functions, and destructors. This is explained in much more detail in Item 14 of Scott Meyers' book "Effective Modern C++". The SystemC standard is still based on C++'03 and so is the proof-of-concept implementation provided by Accellera. It will require considerable work to move SystemC to actively use C++'11/14/17 features. The topic is on the agenda of the Language Working Group, as the discussion during the SystemC Evolution Day in Munich in May showed. Contributions through participation in the respective working groups are certainly welcome. Finally, to give you some more solid information regarding best practices for using noexcept, I would like to point you to: Scott Meyers: "Effective Modern C++", Item 14 "Declare functions noexcept if they won't emit exceptions.", O'Reilly, 2015. Bjarne Stroustrup: "C++11 FAQ: noexcept -- preventing exception propagation", 2015. Andrzej Krzemieński: "noexcept — what for?", Andrzej's C++ blog, 2014-04-24. Andrzej Krzemieński: "Using noexcept", Andrzej's C++ blog, 2011-06-10. Best regards, Torsten Maehne
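     To illustrate the distinction (class and module names below are hypothetical): keep noexcept off functions that call potentially throwing SystemC or library code, but use it where it pays off, e.g., on move operations.

         #include <systemc>
         #include <utility>
         #include <vector>

         struct packet
         {
             std::vector<int> payload;

             packet() = default;
             packet(packet&& other) noexcept            // move ctor: a good noexcept candidate
             : payload(std::move(other.payload)) {}
             packet& operator=(packet&& other) noexcept // move assignment likewise
             { payload = std::move(other.payload); return *this; }
         };

         SC_MODULE(consumer)
         {
             SC_CTOR(consumer) { SC_THREAD(run); }

             // Deliberately NOT noexcept: wait() and the report handler may throw
             // (e.g., sc_report or sc_unwind_exception), and an exception escaping
             // a noexcept function calls std::terminate().
             void run()
             {
                 wait(10, sc_core::SC_NS);
                 SC_REPORT_INFO("consumer", "done waiting");
             }
         };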
  5. Hello Thomas, If this snippet corresponds to your minimal example, then you are accessing the elements on the wrong object. Your variable "grid1" is of type sc_core::sc_module. It contains a private member variable "element", which is your two-dimensional vector of type sc_core::sc_vector<sc_core::sc_vector<sca_element_model> >. If you want to access the vector from outside the grid1 module, you have to declare the member variable as public. Then, you should be able to call the functions on the individual elements using: grid1.element[j][i].foo() I hope this fixes your error. As a side note: the SystemC AMS extensions LRM reserves all identifiers with the prefixes sca_ and SCA_ for itself. You should therefore avoid using the sca_ prefix for your own class, function, variable, and preprocessor macro definitions, even if you put them into your own namespace. Simply choose a different prefix instead, e.g.: sca_element_model -> my_element_model Regards, Torsten Maehne
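     A small sketch of the suggested structure (class names follow the renaming proposed above; the nested sc_vector initialisation uses a custom creator and assumes a C++11 compiler for the lambda):

         #include <systemc>
         #include <cstddef>

         SC_MODULE(my_element_model)
         {
             SC_CTOR(my_element_model) {}
             void foo() { SC_REPORT_INFO("element", "foo() called"); }
         };

         SC_MODULE(grid)
         {
             // public, so that grid1.element[j][i].foo() is accessible from outside
             sc_core::sc_vector<sc_core::sc_vector<my_element_model> > element;

             grid(sc_core::sc_module_name nm, std::size_t rows, std::size_t cols)
             : sc_core::sc_module(nm), element("element")
             {
                 element.init(rows,
                     [cols](const char* name, std::size_t) {
                         sc_core::sc_vector<my_element_model>* row =
                             new sc_core::sc_vector<my_element_model>(name);
                         row->init(cols);  // default creator builds the elements
                         return row;
                     });
             }
         };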
  6. Hello Jean-Claude, Maybe you could try to regenerate the configure script from inside your Cygwin shell. Cygwin tries its best to make Windows look like Unix, but sometimes it cannot hide all differences, so Unix programs require patches to work properly. This might be the case for GNU libtool, GNU automake, and GNU autoconf. Install all three packages through the Cygwin package manager and then go to your SystemC root directory. There, execute the bootstrap script, which calls libtool, automake, and autoconf in the right order to regenerate configure:

         .../systemc-2.3.1/> config/bootstrap

     I hope this helps. It's been a while since I last used Cygwin to compile SystemC... Nowadays, I prefer MSYS2 with the MinGW-w64 toolchains. Regards, Torsten Maehne
  7. Hello, the error message about multiple drivers stems from the use of sc_inout ports in your pipelined bus module. Each inout port bound to a signal acts as a potential driver. You can modify the writer policy governing the check for multiple drivers by setting the SC_DEFAULT_WRITER_POLICY preprocessor definition to SC_ONE_WRITER (default), SC_MANY_WRITERS, or SC_UNCHECKED_WRITERS consistently in all translation units of your application. Details on this preprocessor definition can be found in the INSTALL file of the Accellera SystemC 2.3.1 proof-of-concept implementation distribution archive. Regards, Torsten Maehne
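     As an alternative to the global preprocessor switch, the writer policy can also be chosen per signal via the second template argument of sc_signal (a sketch; the signal names are illustrative):

         #include <systemc>

         int sc_main(int, char*[])
         {
             // tolerates several ports writing to it (but not in the same evaluation phase)
             sc_core::sc_signal<int, sc_core::SC_MANY_WRITERS> bus_data("bus_data");

             // the default policy, equivalent to a plain sc_signal<int>
             sc_core::sc_signal<int, sc_core::SC_ONE_WRITER> single_driver("single_driver");

             // ... instantiate the pipelined bus and bind its inout ports here ...
             sc_core::sc_start();
             return 0;
         }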
  8. Hello, As far as I understand your design from your description, you don't need a multiport at all to achieve the desired effect of distributing your power-on signal to all modules of your system. A single sc_core::sc_signal<bool> sig_power_on can be bound to one output port of type sc_core::sc_out<bool> and to as many input ports of type sc_core::sc_in<bool> as you like. Once a new value is written to the out port, this triggers an event to which the processes reading from the in ports in your modules can be made sensitive. Best regards, Torsten Maehne
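     A compact sketch of such a one-writer/many-readers connection (module and signal names are made up for illustration):

         #include <systemc>

         SC_MODULE(power_controller)
         {
             sc_core::sc_out<bool> power_on;

             SC_CTOR(power_controller) : power_on("power_on") { SC_THREAD(control); }

             void control()
             {
                 power_on.write(false);
                 wait(10, sc_core::SC_NS);
                 power_on.write(true);   // one write, seen by all connected readers
             }
         };

         SC_MODULE(block)
         {
             sc_core::sc_in<bool> power_on;

             SC_CTOR(block) : power_on("power_on")
             {
                 SC_METHOD(on_power_change);
                 sensitive << power_on;   // triggered by the value-changed event
                 dont_initialize();
             }

             void on_power_change()
             {
                 SC_REPORT_INFO("block", power_on.read() ? "powered up" : "powered down");
             }
         };

         int sc_main(int, char*[])
         {
             sc_core::sc_signal<bool> sig_power_on("sig_power_on");
             power_controller ctrl("ctrl");
             block b1("b1"), b2("b2");
             ctrl.power_on(sig_power_on);   // one writer
             b1.power_on(sig_power_on);     // ...and as many readers as you like
             b2.power_on(sig_power_on);
             sc_core::sc_start();
             return 0;
         }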
  9. Hello, the error probably lies in the order in which you pass the libraries systemc and scv to the compiler/linker. Instead of specifying first "-lsystemc" and then "-lscv":

         g++ -L"C:/systemc-2.3.1/lib-cygwin" -o "proba.exe" ./apb_transaction.o -lsystemc -lscv

     you should specify first "-lscv" and then "-lsystemc":

         g++ -L"C:/systemc-2.3.1/lib-cygwin" -o "proba.exe" ./apb_transaction.o -lscv -lsystemc

     The reason is that the SCV library references functions and data types of the SystemC library. The linker processes libraries from left to right: it first resolves the symbols your program needs against the SCV library and then resolves the remaining unresolved symbols -- stemming from your program and from the SCV library itself -- against the SystemC library. Best regards, Torsten Mähne
  10. It would be helpful if you could post a minimal self-contained example demonstrating your issue.
  11. Dear Roman, contrary to the previous poster, I would like to encourage you to keep posting your proposals for improving the syntax of SystemC! They come very timely. The IEEE Std 1666-2011 will need to be revised in the coming years so as not to become obsolete. The current standard is still based on C++'03, and the next version will need to be aligned with C++'14 or the then current C++ standard version. This forum is the right place for regular SystemC users to expose their ideas about what they would like to see in a revised standard. Members of the SystemC Language Working Group read and contribute to the Accellera Forums on a regular basis and are thus aware of your proposals -- even if they don't provide feedback for the moment. Personally, I think that your proposals are a good starting point for thinking further about how SystemC users can reap the most benefit from the possibilities of the new C++ standards to simplify the code of their models. Improvements to the SystemC syntax that don't jeopardize backwards compatibility will be easier to integrate than proposals that are not compatible with the current standard. The latter will probably need to be very convincing in terms of usability and productivity improvement to be considered in a new revision of SystemC. Personally, I would also try to avoid as much as possible the introduction of any new preprocessor macros in SystemC. So, please keep posting! I hope that other Forum members will also provide feedback in the future. Best regards, Torsten Maehne
  12. Have a look at clause 8.3.2 "Class definition" (of sc_report_handler) of IEEE Std 1666-2011 to find the different standardized macros for reporting with different severities: SC_REPORT_INFO(msg_type, msg), SC_REPORT_WARNING(msg_type, msg), SC_REPORT_ERROR(msg_type, msg), SC_REPORT_FATAL(msg_type, msg), and the SC_REPORT_INFO_VERB(msg_type, msg, verbosity) macro mentioned by Alain. I admit that this information is rather well hidden in the SystemC LRM.
  13. Thanks for your new feedback! One performance bottleneck in SystemC AMS is the virtual function calls to the processing functions. Yours are extremely simple, so the virtual function call itself can have a noticeable impact on performance. One possibility to improve performance is to use the multirate features of the SystemC AMS TDF MoC to calculate/process many samples during one activation. You just have to make sure that the system stays schedulable. Also, multirate TDF imposes more constraints on the synchronization with the DE MoC. Please note that your current benchmark is not representative of the complexity of a typical SystemC AMS virtual prototype, which usually also interfaces with modules expressed using other Models of Computation (MoC). The synchronization with other MoCs does come with some performance penalty, which pure data flow simulators don't have. This may explain the observed difference in performance with respect to CPPSim. However, this is just a guess on my side, as I don't have any experience with CPPSim.
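     A sketch of the multirate idea (module name, gain, and rate are illustrative): each activation of processing() then handles a whole block of samples, which amortises the per-activation overhead.

         #include <systemc-ams>

         SCA_TDF_MODULE(block_gain)
         {
             sca_tdf::sca_in<double> in;
             sca_tdf::sca_out<double> out;

             static const unsigned long block_size = 64;

             SCA_CTOR(block_gain) : in("in"), out("out") {}

             void set_attributes()
             {
                 in.set_rate(block_size);   // consume 64 samples per activation
                 out.set_rate(block_size);  // produce 64 samples per activation
             }

             void processing()
             {
                 for (unsigned long i = 0; i < block_size; ++i)
                     out.write(2.0 * in.read(i), i);  // sample-wise access via index
             }
         };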
  14. Hello, the big performance differences between SystemC AMS and CPPSim may be explained by different compilation flags used for building the SystemC and CPPSim libraries. For optimal performance, you have to make sure that not only your model, but also the SystemC and SystemC-AMS libraries, are compiled in Release mode, i.e., without debug symbols (-g) and with full optimisation (-O3). Linking against libraries compiled in Debug mode (-g -O0) can lead to a slowdown of several orders of magnitude. Another factor for slower performance may be file I/O: your SystemC AMS simulation produces a text file with 1 million lines of 3 columns of double values. Does your CPPSim simulation yield a similar trace? Regards, Torsten
  15. You are welcome. If you are willing to drag in some additional dependencies, you may consider cppformat or the Boost.Format library, which provide functionality close to $sformatf of SystemVerilog.
  16. The SC_REPORT_* macros take a C string as argument. To format the message and include runtime values in it, you will have to prepare this C string before passing it to SC_REPORT_*. Relying only on Standard C++, the string stream is a convenient and flexible way to do so:

         #include <sstream>
         ...
         int myint = 10;
         ...
         {
             std::ostringstream ostr;
             ostr << "The value is " << myint;
             SC_REPORT_INFO(MSGID, ostr.str().c_str());
         }
  17. The SystemC AMS LRM states that each DE->TDF converter port will always sample its DE signal at delta cycle 0 of a given point in time. This means that if a DE process activates at time t (delta cycle 0) to immediately change a signal, this change won't be seen by the TDF cluster, which samples the DE signals in the very same delta cycle 0: the DE signal change won't become visible before delta cycle 1, as the evaluate and update phases have to finish first. For DE signals generated by TDF->DE converter ports, the new value will always become visible at delta cycle 1 (as the TDF cluster got activated in delta cycle 0). Only since SystemC AMS 2.0 is there a limited way to react instantly to events happening at delta cycles > 0. For this, you have to use the Dynamic TDF features: all TDF modules of the cluster have to be marked to accept attribute changes. The module that shall react instantly to an event has to be marked to also do attribute changes. That module can then use request_next_activation on a DE->TDF input converter port to react instantly to an event. However, be aware that request_next_activation() only takes effect at the very beginning of a cluster period. If your cluster is multi-rate, many activations of the TDF modules in the cluster will follow according to the pre-determined static schedule for the execution of the TDF cluster's processing functions. Only when the static schedule has been completed does a TDF module get the chance to issue a new request_next_activation().
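     A sketch of the Dynamic TDF mechanism described above (names are illustrative; requires SystemC AMS 2.0, all other modules of the cluster also have to call accept_attribute_changes(), and the exact way to obtain the converter port's event may need adapting to your implementation):

         #include <systemc-ams>

         SCA_TDF_MODULE(reactive_sink)
         {
             sca_tdf::sca_de::sca_in<bool> trigger; // DE -> TDF converter port
             sca_tdf::sca_in<double> in;

             SCA_CTOR(reactive_sink) : trigger("trigger"), in("in") {}

             void set_attributes()
             {
                 does_attribute_changes();    // this module may change its own attributes
                 accept_attribute_changes();  // ...and tolerates changes by others
             }

             void change_attributes()
             {
                 // re-activate the cluster as soon as an event occurs on the DE signal
                 request_next_activation(trigger.default_event());
             }

             void processing()
             {
                 if (trigger.read()) { /* react to the event */ }
                 (void)in.read();
             }
         };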
  18. Each model of computation (MoC) of SystemC AMS uses an independent solver, which is specialised to the needs of the respective MoC. An individual solver instance is created for each topological cluster of connected modules of the same MoC. The LSF and ELN MoCs use a linear DAE solver to advance their local solution. The TDF MoC uses a static scheduler to execute the processing member functions of the TDF modules situated in the TDF cluster. The schedule is calculated during elaboration based on Synchronous Data Flow (SDF) theory. These different solvers have to be able to interact with each other to exchange information at the cluster boundaries and synchronize their local advancement of the model execution. The SystemC AMS standard defines the necessary synchronisation semantics but leaves it open how a SystemC AMS implementation shall achieve it. However, for your understanding, you can assume that:
      - Each cluster is assigned to a solver instance. This solver instance represents the whole cluster with respect to the outside world (i.e., the communication through converter ports).
      - An LSF or ELN solver can be thought of as a TDF module, which is embedded into the connected TDF cluster.
      - All TDF modules of a cluster are handled by a TDF scheduler instance, which is implemented as an SC_THREAD that is sensitive to the sc_signals connected to its DE->TDF input converter ports. While executing the TDF schedule in the SC_THREAD, it will issue discrete-event wait() statements as necessary to sample the DE signals and generate events on the signals connected to the TDF->DE output converter ports.
      I hope this answers your first question in enough detail. Regarding your second question: SystemC has many extensions. However, not all are maintained as standards by the Accellera Systems Initiative. In the past, Accellera working groups have standardised as extensions to SystemC: TLM, the SystemC Verification library, and the SystemC AMS extensions. There are currently efforts to standardise CCI (configuration, control, and inspection) and UVM-SystemC.
  19. The solution of state-space equations is an implementation detail and not covered by the SystemC AMS 2.0 LRM. However, the SystemC AMS 2.0 LRM defines the semantics for the solution of the state-space equation system and how you can obtain solutions with a pace smaller than the TDF time step and redo a solution step (cf. the discussion of the tstep parameter in clause 4.1.4.5.8). Regarding your points, the answers depend on the used implementation of SystemC AMS 2.0. I will assume that you use Fraunhofer SystemC-AMS: An implementation may do smaller time steps than the TDF module time step. For its internal time stepping, the linear ODE/DAE solver is only constrained to not make a step bigger than the TDF module time step or the explicitly passed tstep value. I'm not aware that an instance of sca_tdf::sca_ss can output statistics about its internal time stepping. As far as I know, Fraunhofer SystemC-AMS uses the Euler and trapezoidal integration methods.
  20. I'm afraid that only someone with a local COSIDE license can answer your questions related to file_in_tdf. However, connecting TDF clusters with different time steps has become possible with the advent of the SystemC AMS 2.0 standard and the decoupling output ports defined therein. You will have to add a module that reads, through its input port of type sca_tdf::sca_in<T>, the values of type T from your file_in_tdf instance and then forwards them via an output port of type sca_tdf::sca_out<T, sca_tdf::SCA_CT_CUT>. To this output, you can then connect the TDF signal going to your dummy_inst module.
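     A sketch of such a forwarding module (the name is hypothetical, T is fixed to double for illustration; depending on your clusters, you may additionally have to specify a delay on the decoupling port):

         #include <systemc-ams>

         SCA_TDF_MODULE(ct_decoupler)
         {
             sca_tdf::sca_in<double> in;                        // from the file_in_tdf cluster
             sca_tdf::sca_out<double, sca_tdf::SCA_CT_CUT> out; // decouples the time steps of
                                                                // the two connected clusters

             SCA_CTOR(ct_decoupler) : in("in"), out("out") {}

             void processing()
             {
                 out.write(in.read());
             }
         };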
  21. Unfortunately, this information is not sufficient to diagnose a potential memory leak in the LSF MoC of Fraunhofer SystemC-AMS. We need a compilable minimal test case for that. Also, can you report the development platform you are using (OS version, processor architecture (i686, x86_64, ...), compiler version, configure parameters used for compiling SystemC and SystemC-AMS, etc.)? It may also be helpful to actually monitor the memory consumption of the process (e.g., using top on Linux). You should also check from where the bad_alloc exception is thrown and then examine the call stack (bt command in gdb). Assuming that your model is not doing any dynamic memory allocation after it has been instantiated and the elaboration phase has finished, you should be able to localise the part of SystemC-AMS which allocates the memory. With some luck, it will be a stable location across different runs. By trying to reproduce the same error on different platforms, you may be able to further localise the source of the problem.
  22. Hello Alessandro, I'm not able to comment on the memory consumption of LSF models during simulation. However, SystemC-AMS shouldn't consume all the physical memory (i.e., RAM + swap) available -- even when simulating for a long period. What is a common problem is the size of tabular trace files, which can consume a lot of disk space quickly when simulating over long times with high resolution. The reason is that the tabular trace file format does not use any kind of compression. To reproduce your problem, could you provide a minimal executable example which exhibits the problem in your LSF models? Regarding your application: you are not obliged to use LSF for the integration. You can also use a Laplace transfer function (LTF) embedded into a TDF module: H(s) = 1/s (cf. the SystemC AMS 1.0 User's Guide). You could also do the integration by hand using, e.g., the rectangular or trapezoidal rule inside a TDF module.
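     A sketch of the LTF-based integrator mentioned above (module and port names are illustrative): H(s) = 1/s realised with sca_tdf::sca_ltf_nd inside a TDF module.

         #include <systemc-ams>

         SCA_TDF_MODULE(integrator)
         {
             sca_tdf::sca_in<double> in;
             sca_tdf::sca_out<double> out;

             sca_tdf::sca_ltf_nd ltf;               // numerator/denominator Laplace transfer function
             sca_util::sca_vector<double> num, den;

             SCA_CTOR(integrator) : in("in"), out("out")
             {
                 num(0) = 1.0;                      // H(s) = 1 / s
                 den(0) = 0.0;
                 den(1) = 1.0;
             }

             void processing()
             {
                 out.write(ltf(num, den, in.read()));
             }
         };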
  23. Hello Marijn, it is hard to read your code example as it didn't get correctly indented. However, from what I see, it seems that you call the calculate() member functions of the SS equations via the operator() overloads of paper_pos_ss and paper_temp_ss at each activation of processing, as the calls are present in both the if and else branches of your code -- only the arguments change. You could try to not call the respective member functions when it is not needed. Still, SystemC-AMS might solve the equation system with a smaller time step when you call the SS equation solver again, as the current time step of the TDF module only constitutes the maximum integration time step. As long as you save the state of an SS equation system, you should be able to resume from there at any time during the simulation. You can also directly reset the state vectors to new initial values to proceed with the simulation. That said, SystemC-AMS support for continuous-time differential equation systems wasn't initially developed to support deactivation/reactivation. However, SystemC AMS 2.0 added support for Dynamic TDF, which allows TDF modules to wait for the occurrence of some external event using request_next_activation. You could try to represent each sheet as a TDF module, which receives the printer's position relative to it (think of the printer "moving around the paper"). Once the printer has left the sheet, the module could go into a sleep mode waiting on an external event to wake up, potentially reset its state vector, and resume the solution of the SS equation system. If that doesn't cover your needs either, you may consider encapsulating an external ODE solver into your TDF module (e.g., from Boost.ODEInt or the GNU Scientific Library (GSL)). Then, you will have more direct control over the solver execution. Regards, Torsten
  24. No, the TDF modules don't get activated every 50 ns, but every 1 us, as you specified with set_timestep. However, Fraunhofer SystemC-AMS by default interpolates between floating-point samples when tracing. You can control this behaviour by passing an object of type sca_util::sca_multirate to set_mode of the trace file object. The sca_multirate object can take any of the three values SCA_INTERPOLATE, SCA_DONT_INTERPOLATE, or SCA_HOLD_SAMPLE. For details, cf. clause 6.1.1.1.3 in the SystemC AMS 2.0 LRM.
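     A sketch of how to select the tracing mode (hedged: the file name and the traced object "my_signal" are placeholders, and the exact scoping of the enumerators should be checked against clause 6.1.1.1.3 of the LRM and your implementation's headers):

         sca_util::sca_trace_file* tf =
             sca_util::sca_create_tabular_trace_file("trace.dat");
         // record the raw samples instead of interpolating between them
         tf->set_mode(sca_util::sca_multirate(sca_util::SCA_DONT_INTERPOLATE));
         sca_util::sca_trace(tf, my_signal, "my_signal");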
  25. Indeed, a source module is a good place to assign the TDF time step. I agree with your preference not to mix behavioural and simulation-specific parameters. However, your approach of adding a setTimeStep member function to the module is unnecessarily complicated and prone to errors due to the use of raw pointers, which don't guarantee the lifetime of the pointed-to object. I suggest that you use the set_timestep() member function provided by all classes inheriting from sca_core::sca_module (this includes all TDF modules). This member function specifies the module time step, which is equal to the port time step when the port has a rate of one. In contrast to port time steps, which have to be assigned in the context of set_attributes (to guarantee that the port has already been bound to a TDF signal), the module time step can be assigned anytime between the construction of the module itself and the call of its set_attributes member function.
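     A sketch of a source module that assigns its own time step already at construction time (the module name, constructor parameter, and generated waveform are illustrative):

         #include <systemc-ams>
         #include <cmath>

         SCA_TDF_MODULE(sine_source)
         {
             sca_tdf::sca_out<double> out;

             sine_source(sc_core::sc_module_name nm, const sca_core::sca_time& tstep)
             : out("out")
             {
                 // module time step, assigned before set_attributes() is called;
                 // it equals the port time step as long as the port rate is one
                 set_timestep(tstep);
             }

             void processing()
             {
                 const double pi = 3.14159265358979323846;
                 out.write(std::sin(2.0 * pi * 1.0e3 * get_time().to_seconds()));
             }
         };

     It could then be instantiated as, e.g., sine_source src("src", sca_core::sca_time(1.0, sc_core::SC_US));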