karsten

Members
  • Posts: 25
  • Days Won: 4

karsten last won the day on January 16, 2017 and had the most liked content!

Recent Profile Visitors
  • 895 profile views

karsten's Achievements
  • Rank: Member (1/2)
  • Reputation: 14
  1. Hello Diego, get_typed_trace_value() is not a documented/standardized method, so you should not use it. These are only experimental methods to prepare the next standard update, and such non-standardized methods may be changed or removed from version to version, depending on the discussions in the Accellera AMSWG. Only the functionality standardized in IEEE P1666.1 is guaranteed. Best regards Karsten
  2. I just saw that it looks like you instantiate the modules statically. You may run into the OS stack size limitation. Allocating the modules/vectors dynamically (using new) may solve the problem (or at least increase the possible size). You can try something similar to this (not checked):

        sc_vector< sca_eln::sca_node >* c_vec;   // nodes between resistors
        sc_vector< sca_eln::sca_r >*    rs_vec;

        SC_CTOR(rmatnn) : p("p"), n("n")
        {
          c_vec  = new sc_vector< sca_eln::sca_node >("c_vec", N);
          rs_vec = new sc_vector< sca_eln::sca_r >("rs_vec", N);
          ...
        }

     What are the shell messages before the segfault?
  3. There is no real limitation except the available memory. Each resistor is a SystemC module which allocates some memory due to its sc_module members. I guess SystemC and SystemC AMS do not always check that the memory could be allocated - thus you get the segfault. You can try a computer with more memory, or try to set up the equation system manually - for a resistive network this should be only a few lines of C code (see the sketch below) - if you have inductors or capacitors you can use the state-space or LTF objects.
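     For illustration, a minimal sketch of what "setting up the equation manually" could mean for a purely resistive network (my own example, not from the original post): nodal analysis G*v = i solved with a plain Gaussian elimination, so no ELN module per resistor is needed.

        #include <vector>
        #include <cmath>
        #include <cstddef>
        #include <utility>

        // Solve G*v = i for the node voltages v; G is the n x n conductance
        // matrix (row-major), i the vector of injected node currents.
        // Assumes a non-singular (well-posed) network.
        std::vector<double> solve_nodal(std::vector<double> G, std::vector<double> i, std::size_t n)
        {
          for (std::size_t k = 0; k < n; ++k) {
            // partial pivoting
            std::size_t p = k;
            for (std::size_t r = k + 1; r < n; ++r)
              if (std::fabs(G[r*n + k]) > std::fabs(G[p*n + k])) p = r;
            for (std::size_t c = 0; c < n; ++c) std::swap(G[k*n + c], G[p*n + c]);
            std::swap(i[k], i[p]);

            // eliminate column k below the pivot
            for (std::size_t r = k + 1; r < n; ++r) {
              double f = G[r*n + k] / G[k*n + k];
              for (std::size_t c = k; c < n; ++c) G[r*n + c] -= f * G[k*n + c];
              i[r] -= f * i[k];
            }
          }
          // back substitution
          std::vector<double> v(n);
          for (std::size_t k = n; k-- > 0;) {
            double s = i[k];
            for (std::size_t c = k + 1; c < n; ++c) s -= G[k*n + c] * v[c];
            v[k] = s / G[k*n + k];
          }
          return v;
        }

     For a static network the node voltages need to be computed only once; if the sources change, the solve can be called from a TDF processing() callback.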
  4. Because SystemC AMS is only a C++ library on top of SystemC, all simulators which support SystemC in principle also support SystemC AMS. We tested this for different Cadence, Synopsys and Mentor versions.
  5. RNM uses the digital event-driven model of computation (MoC) to allow abstract analog signal-flow based behaviour. For such a purpose, event-driven simulation is not optimal - the dataflow MoC additionally available in SystemC AMS is therefore more efficient and introduces fewer artefacts such as unnecessary activations. In addition, modelling dynamic behaviour (transfer functions, filters, ...) with RNM is difficult. You can transform your equations into a digital filter (e.g. using the bilinear transformation), or you have to use Verilog-A(MS) and thus the analog simulator, which will slow down the simulation performance. SystemC AMS provides dedicated language constructs for describing linear differential equations - the ltf_nd, ltf_zp and ss objects (a small ltf_nd sketch follows below). In addition, there is the conservative ELN domain, which allows describing electrical linear networks (via equivalent relations, other physical domains can be described as well). The resulting equation system is solved by a dedicated lightweight solver - so the simulation performance is usually orders of magnitude faster than using a traditional analog simulator. For more information you can also read the user's guide http://www.accellera.org/images/downloads/standards/systemc/OSCI_SystemC_AMS_extensions_1v0_Standard.zip
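     As an illustration of the ltf_nd object mentioned above, a minimal sketch (untested; module, port and coefficient names are my own) of a first-order low-pass H(s) = 1/(1 + s*tau):

        #include <systemc-ams>

        SCA_TDF_MODULE(lp_filter)
        {
          sca_tdf::sca_in<double>  in;
          sca_tdf::sca_out<double> out;

          sca_tdf::sca_ltf_nd ltf;               // Laplace transfer function, numerator/denominator form
          sca_util::sca_vector<double> num, den;

          void set_attributes()
          {
            set_timestep(1.0, sc_core::SC_US);   // assumed module timestep
          }

          void initialize()
          {
            double tau = 1.0e-3;                 // assumed time constant: 1 ms
            num(0) = 1.0;                        // H(s) = 1 / (1 + s*tau)
            den(0) = 1.0;
            den(1) = tau;
          }

          void processing()
          {
            out.write( ltf(num, den, in.read()) );
          }

          SCA_CTOR(lp_filter) : in("in"), out("out"), ltf("ltf") {}
        };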
  6. I can't reproduce the issue - the example compiles and runs under 64-bit Linux with gcc 4.9.2. The example also works after I changed the port types and the signals to bool. Did you remove all object files before recompiling (call make clean)? Maybe the makefile of the example does not work properly and does not recompile the main function after changing the header. Best regards Karsten
  7. That's consistent - you have around 16,000 modules - and I assume they are connected in a kind of chain, which is the worst case for the algorithm. So the required maximum recursion depth should be at most 16,000, maybe less. Best regards Karsten
  8. The issue is not the SystemC stack size (the size which can be used by SystemC threads). The current SystemC AMS proof-of-concept uses a recursive algorithm for clustering, so in the worst case the recursion depth during elaboration equals the number of connected modules. As far as I know, the maximum stack size is a parameter of the operating system, which may be increased. I have no experience with this; maybe you can try the following links - I'd be interested in the result. http://stackoverflow.com/questions/7535994/how-do-i-find-the-maximum-stack-size http://stackoverflow.com/questions/2279052/increase-stack-size-in-linux-with-setrlimit (a setrlimit sketch follows below). Another solution, if possible: you can try to split the network into separate sub-networks communicating via TDF ports. I don't know how often those issues happen - whether it makes sense to refactor the recursive algorithm. Best regards Karsten
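     For completeness, a hedged sketch (my own, untested) of raising the stack limit programmatically on Linux with setrlimit, as discussed in the second link; calling it early in sc_main() before elaboration may help, although setting "ulimit -s" in the shell before starting the simulation is the more reliable way:

        #include <sys/resource.h>
        #include <cstdio>

        // Try to raise the soft stack limit to at least 'bytes'.
        bool raise_stack_limit(rlim_t bytes)
        {
          struct rlimit rl;
          if (getrlimit(RLIMIT_STACK, &rl) != 0) return false;
          if (rl.rlim_cur < bytes) {
            rl.rlim_cur = (rl.rlim_max == RLIM_INFINITY || bytes <= rl.rlim_max)
                          ? bytes : rl.rlim_max;          // cannot exceed the hard limit
            if (setrlimit(RLIMIT_STACK, &rl) != 0) return false;
          }
          std::printf("stack limit now: %lu bytes\n", (unsigned long)rl.rlim_cur);
          return true;
        }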
  9. Each TDF signal must be connected to exactly one TDF outport sca_tdf::sca_out (LRM 5.2 d), p. 83), as illustrated below. Best regards Karsten
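     A tiny illustration of the rule (module and instance names src, sink_a, sink_b are made up):

        sca_tdf::sca_signal<double> sig("sig");

        src.out(sig);     // exactly one sca_tdf::sca_out<double> drives sig
        sink_a.in(sig);   // any number of sca_tdf::sca_in<double> may read it
        sink_b.in(sig);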
  10. You cannot read a value from a TDF signal directly. Access to TDF signals is only possible via TDF ports, in the context of a TDF module, inside a processing method (see the LRM or user's guide, and the sketch below). This is because the schedule (underlying equation system) is set up during elaboration, and during simulation the TDF time is not in line with the SystemC time - a read access to a TDF signal out of context cannot know which value from the buffer has to be read - only the connected and thus scheduled TDF module has this information.
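     A minimal sketch (names are my own) of the only legal way to observe TDF samples - through a sca_tdf::sca_in port inside the processing() callback of a TDF module:

        #include <systemc-ams>
        #include <iostream>

        SCA_TDF_MODULE(tdf_monitor)
        {
          sca_tdf::sca_in<double> in;

          void processing()
          {
            // in.read() delivers the sample belonging to the current TDF time step;
            // get_time() returns the corresponding TDF module time
            std::cout << get_time() << " : " << in.read() << std::endl;
          }

          SCA_CTOR(tdf_monitor) : in("in") {}
        };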
  11. There is no hard limitation; the limitation comes from the available memory. The cluster algorithm is currently implemented recursively - this seems to limit the modules per cluster in the worst case to around 12,000, depending on the computer. In this case, splitting the cluster should help. Maybe you can try the current version. If you still see the limitation, I'm interested in the example and/or the number of modules. Best regards Karsten
  12. The result is independent of the timestep as long as the timestep is at least smaller than half of the time constant of the equations. I checked your model - the required timestep seems to be around 1 ms or smaller. After 1 sec of simulation time I get (SystemC results for Y and SS):

        timestep    Y        SS
        10 ms       338.3    457.7
        1 ms        339.83   459.77
        0.1 ms      339.983  459.977
        0.01 ms     339.998  459.998

     Hopefully this answers your question. Best regards Karsten
  13. Hello all, I had a quick look at the profiler results. For the large example, these lines:

        5.55  1.26  0.60  sca_util::sca_implementation::sca_matrix_base<double>::resize(unsigned long, unsigned long)
        4.99  1.80  0.54  sca_tdf::sca_implementation::sca_ct_ltf_nd_proxy::setup_equation_system()
        4.94  2.33  0.53  sca_tdf::sca_implementation::sca_ct_ltf_nd_proxy::register_nd_common(sca_util::sca_vector<double> const&, sca_util::sca_vector<double> const&, sc_core::sc_time const&)

     indicate that the model may be improved. It seems that the ltf object(s) detect a coefficient change, possibly in each timestep, which results in an equation system re-initialization. I recommend initializing the coefficients (e.g. the num/den vectors) in the initialize callback and not touching them in the processing callback. If you switch between a restricted number of coefficient sets, use a separate ltf object for each set and switch between the objects (you can use the same state vector to hold the states); this prevents the time-consuming equation system re-initialization (a sketch follows below). In the current version, the change detection is simply done by detecting a write access to the vector/matrix - this is improved in the next version, in which we check whether the value has really been changed.

     For the small example, it looks like most of the time is spent on context switching:

        30.00  0.03  0.03  sca_core::sca_implementation::sca_synchronization_layer_process::wait_for_next_start()

     In the current implementation a SystemC-AMS cluster is embedded in a SystemC thread. At least at the end of each SystemC-AMS cluster execution, the SystemC-AMS time is synchronized with the SystemC time, and therefore a context switch is required. So the performance depends on which SystemC version, with which thread implementation, on which operating system you are using. The fastest version is usually the QuickThreads (qt) version under Linux and the slowest the pthread versions.

        20.00  0.05  0.02  sca_core::sca_implementation::sca_solver_base::get_current_period()

     This may be an optimization problem (depending on your platform). In this function, calculations with sc_time objects (64-bit integer values) are done. I saw for some examples that those calculations on some platforms are surprisingly slow without the highest optimization level. Best regards Karsten
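     A hedged sketch (my own, untested) of the suggested pattern: one ltf object per fixed coefficient set, sharing one state vector, so that switching sets does not rewrite the coefficient vectors and therefore does not trigger an equation-system re-initialization:

        #include <systemc-ams>

        SCA_TDF_MODULE(switchable_filter)
        {
          sca_tdf::sca_in<double>  in;
          sca_tdf::sca_in<bool>    fast_mode;
          sca_tdf::sca_out<double> out;

          sca_tdf::sca_ltf_nd ltf_slow, ltf_fast;
          sca_util::sca_vector<double> num, den_slow, den_fast;
          sca_util::sca_vector<double> state;          // shared state vector

          void initialize()
          {
            // coefficients are set once and never written in processing()
            num(0) = 1.0;
            den_slow(0) = 1.0; den_slow(1) = 1.0e-3;   // assumed tau = 1 ms
            den_fast(0) = 1.0; den_fast(1) = 1.0e-6;   // assumed tau = 1 us
          }

          void processing()
          {
            if (fast_mode.read())
              out.write( ltf_fast(num, den_fast, state, in.read()) );
            else
              out.write( ltf_slow(num, den_slow, state, in.read()) );
          }

          SCA_CTOR(switchable_filter)
            : in("in"), fast_mode("fast_mode"), out("out"),
              ltf_slow("ltf_slow"), ltf_fast("ltf_fast") {}
        };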
  14. There are two current sources in series (I1 and I2 via M1 and M2) - this is not possible, because they can potentially define different currents.
  15. You have to use SystemC-AMS 2.0. The module should look like this (not tested nor compiled):

        SCA_TDF_MODULE(de2eln)
        {
          sca_tdf::sca_de::sca_in<double> inp;
          sca_tdf::sca_out<double>        outp;

          // define a maximum timestep, e.g. half of the time constant of the ELN filter
          sca_core::sca_time max_timestep;

          void set_attributes()
          {
            this->set_timestep(max_timestep);
            this->does_attribute_changes();
          }

          void processing()
          {
            outp = inp.read();
          }

          void change_attributes()
          {
            // requests a new calculation of the TDF cluster (with the connected ELN network)
            // if an event occurs at inp, or after max_timestep at the latest
            this->request_next_activation(max_timestep, inp->default_event());
          }

          SCA_CTOR(de2eln)
            : inp("inp"), outp("outp"),
              max_timestep(0.5, sc_core::SC_MS) {}  // example value - use half of your ELN filter time constant
        };

     This module gets your PWM DE signal as input, and the TDF output should be connected to the ELN source (see the wiring sketch below). Best regards Karsten
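     A sketch (instance and signal names are my own) of how the converter could be wired between the DE PWM signal and an ELN source, e.g. inside sc_main() or the constructor of a top-level module:

        sc_core::sc_signal<double>  pwm_de("pwm_de");     // driven by the digital PWM model
        sca_tdf::sca_signal<double> pwm_tdf("pwm_tdf");

        de2eln conv("conv");
        conv.inp(pwm_de);        // DE side
        conv.outp(pwm_tdf);      // TDF side

        sca_eln::sca_node     n1("n1");
        sca_eln::sca_node_ref gnd("gnd");

        sca_eln::sca_tdf::sca_vsource vin("vin");   // converts the TDF samples into an ELN voltage
        vin.inp(pwm_tdf);
        vin.p(n1);
        vin.n(gnd);
        // ... the ELN R/C filter is connected between n1 and gnd as before ...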