
Leaderboard


Popular Content

Showing content with the highest reputation since 08/23/2018 in all areas

  1. 2 points
    The problem is, when you integrate RTL IP into a Loosely-Timed VP that way, the whole simulator will have the performance of a cycle-accurate model, because the clock generator will always be on and the Verilated model will be triggered even when it is idle. So don't try to boot Linux with such a simulator. If your RTL IP supports power gating or clock gating, it is a good idea to disable clock generation while the RTL IP is turned off. In that case you don't pay for what you don't use: you can boot your OS quickly and then enable the clock generator when you start to debug IP-specific driver code.
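    A minimal sketch of that idea is shown below; the module, port, and method names are my own inventions, and in a real VP the enable would typically be driven by the IP's power/clock-control registers. While "enabled" is false, no clock edges are produced, so the Verilated model is never evaluated and the LT portion of the platform runs at full speed.

        #include <systemc>
        using namespace sc_core;

        SC_MODULE(GateableClockGen)
        {
            sc_out<bool> clk;
            sc_time      half_period{5.0, SC_NS};
            bool         enabled{false};
            sc_event     enable_event;

            SC_CTOR(GateableClockGen)
            {
                SC_THREAD(gen);
            }

            // Called by the platform, e.g. when the IP's clock-control register is written.
            void enable(bool on)
            {
                enabled = on;
                if (on)
                    enable_event.notify(SC_ZERO_TIME);
            }

            void gen()
            {
                for (;;) {
                    if (!enabled) {
                        clk->write(false);
                        wait(enable_event);   // no edges while gated: the RTL model stays idle
                    }
                    clk->write(true);
                    wait(half_period);
                    clk->write(false);
                    wait(half_period);
                }
            }
        };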
  2. 2 points
    Hello @kallooran, What version of the SystemC library are you using? This issue has been fixed in the SystemC 2.3.2 release. You can find the latest release, SystemC 2.3.3, here: http://accellera.org/downloads/standards/systemc Hope it helps. Regards, Ameya Vikram Singh
  3. 2 points
    David Black

    Seeking Feedback on Datatypes

    Actually, it adds a lot of value. std::array can be passed by reference in a function call, and the function can then determine the proper size of the array. This is much better than passing raw pointers, the C-style approach. You can also copy an array, which should be synthesizable and which reduces coding and greatly improves readability. It should also be possible to use some of the <algorithm> header's functions on std::array. Also, you can have bounds checking for additional safety, although that aspect is probably not synthesizable. Additionally, constexpr should be quite helpful for the synthesis aspect.
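    A brief illustration of those points (the function and variable names below are mine, not from the post): because the size is part of std::array's type, a function template can deduce it, the whole array can be copied with plain assignment, and <algorithm>/<numeric> work on it directly.

        #include <array>
        #include <algorithm>
        #include <numeric>

        // Size N is deduced from the argument type -- no separate pointer/length pair needed.
        template <typename T, std::size_t N>
        T sum(const std::array<T, N>& a)
        {
            return std::accumulate(a.begin(), a.end(), T{});
        }

        int main()
        {
            std::array<int, 4> coeffs{{ 1, 3, 2, 4 }};
            std::array<int, 4> copy = coeffs;        // whole-array copy, no manual loop
            std::sort(copy.begin(), copy.end());     // <algorithm> works directly on std::array
            return sum(copy) == 10 ? 0 : 1;          // returns 0 on success
        }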
  4. 2 points
    Hi, I'm not an implementer of the reference simulator, but as far as I can judge, the re-throw is used to find a more specific type of exception (since sc_elab_and_sim() just uses a catch-all) and sc_handle_exception() converts it into an sc_report so it can be handled by the SystemC reporting system. Actually, I agree it would be better to handle it directly in sc_elab_and_sim(), but this would duplicate code. A side note regarding debugging: if you use gdb, there is a command 'catch throw' which stops execution right at the point where the (original) exception is thrown. This comes in pretty handy in such cases. Best regards
  5. 1 point
    David Black

    simple socket

    I would suggest that those for-loops need a better bound and should be coded:

        sc_assert( objA->initiator_socket.size() >= objB->target_socket.size()
                   && objB->target_socket.size() > 0 );
        for( int i = 0; i < objA->initiator_socket.size(); ++i ) {
          objA->initiator_socket[i].bind( objB->target_socket[i] );
        }

    Coding rule: NEVER use a literal constant when a reasonable alternative is possible. Even when "in a hurry", you will not be disappointed if you use this rule.
  6. 1 point
    Eyck

    simple socket

    Just looked further: there is a typo in case 1. It should read

        //bind
        for(int i = 0; i < 3; i++){
          objA->initiator_socket[i]->bind(*(objB->target_socket[i]));
        }

    as you use an array of pointers. Again, sc_vector eases your life:

        //Model A
        sc_core::sc_vector<tlm_utils::simple_initiator_socket_tagged<ModelA>> initiator_socket;
        ...
        // Model B
        sc_core::sc_vector<tlm_utils::simple_target_socket_tagged<ModelB>> target_socket;
        ...
        //bind
        for(int i = 0; i < 3; i++){
          objA->initiator_socket[i].bind(objB->target_socket[i]);
        }

    The same applies to case 2:

        //bind
        objA->initiator_socket1->bind(*(objB->target_socket2));
        objA->initiator_socket2->bind(*(objB->target_socket3));
        objA->initiator_socket3->bind(*(objB->target_socket1));

    Best regards
  7. 1 point
    David Black

    Initial value port

    The 'initialize(T)' method is a leftover from SystemC 1.0 circa 1999, when SystemC had not yet properly abstracted the port/channel concept. At that point in time, there was a stronger emphasis on making SystemC look like Verilog or VHDL. The 'initialize(T)' method is only present on the 'sc_out<T>' and 'sc_inout<T>' port classes, as part of their partial template specialization. The 'initialize(T)' method is not generally available to 'sc_port<>'. I usually don't mention it because then the reader gets the wrong impression that 'initialize(T)' should be present everywhere. In fact, it is only useful for RTL aspects. Certainly, this is not part of TLM. Since SystemC is more about abstraction and modeling, I avoid it. It is straightforward to override start_of_simulation.

    @TRANG It is important for you to understand this distinction. I realize that the specification may say that "port is initialized to zero" or some such, but the concept of a port in the specification is quite different from the concept of a port in SystemC. If you don't understand this, you will hobble your understanding of SystemC.

    So there are three ways in SystemC of modeling what the specification says regarding an output pin on a hardware design:

    1. Depend on the underlying datatype's initial value to initialize the signal (not very flexible).
    2. If using the specialized ports (sc_out and sc_inout only), call the initialize(T) method.
    3. Write to the port during start_of_simulation, which is the most general and powerful approach.

    Challenge: How would you initialize an sc_fifo_out< float > port connected to an sc_fifo< float > channel with four values?

        #include <systemc>
        #include <iomanip>
        #include <iostream>
        using namespace sc_core;
        using namespace std;

        SC_MODULE( Source )
        {
          sc_fifo_out< float > send_port;

          SC_CTOR( Source )
          {
            SC_THREAD( source_thread );
          }

          void source_thread( void )
          {
            wait( 10, SC_NS );
            send_port->write( 99.7 );
            wait( 10, SC_NS );
            std::cout << "Finished" << std::endl;
            sc_stop();
          }

          // How to initialize the output to contain the following initial values?
          // { 4.2, -8.3e9, 0.0, 3.14 }
          // Do not add this to the thread. Instead, ensure that it happens before any thread executes.
        };

        SC_MODULE( Sink )
        {
          sc_fifo_in< float > receive_port;

          SC_CTOR( Sink )
          {
            SC_THREAD( sink_thread );
          }

          void sink_thread( void )
          {
            for(;;) {
              std::cout << "Received " << setw(12) << receive_port->read()
                        << " at " << sc_time_stamp() << std::endl;
            }
          }
        };

        SC_MODULE( Top )
        {
          // Constructor
          SC_CTOR( Top )
          {
            m_source.send_port .bind( m_fifo );
            m_sink.receive_port.bind( m_fifo );
          }
          // Local modules
          Source m_source { "source" };
          Sink   m_sink   { "sink" };
          // Local channels
          sc_fifo< float > m_fifo;
        };

        int sc_main( int argc, char* argv[] )
        {
          Top top { "top" };
          sc_start();
          return 0;
        }

    Key concepts:
    - SystemC is a modeling language mapped on top of C++.
    - SystemC ports are not signals or pins.
    - sc_in<T>, sc_out<T> and sc_inout<T> are partial template specializations of sc_port<T> on the respective sc_signal<T> interface classes.
    - For historic reasons (SystemC 1.0), there are extra methods added to these specializations, including initialize(T), read(), and write(T), that can later confuse novice SystemC programmers.
  8. 1 point
    Hi all, For debugging purposes it may be useful to add all signals in a design to the trace file. However, sc_signal::trace(), which would allow doing this automatically, is deprecated. Why is that? In general I think it would be useful to add a trace() method to sc_interface, so that all channels that support tracing can implement it. Then it would be possible to implement something like sc_trace_all(sc_trace_file *tf), which adds all objects that support tracing to the trace file.
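    In the absence of such a hook, a rough approximation can be written today by walking the object hierarchy and tracing the signal types it recognises. The sketch below is purely illustrative (only sc_signal<bool> and sc_signal<int> with the default writer policy are handled); a real implementation would need a case per traced type, which is exactly why a trace() method on sc_interface would be nicer.

        #include <systemc>

        template <typename T>
        static bool try_trace(sc_core::sc_trace_file* tf, sc_core::sc_object* obj)
        {
            if (auto* sig = dynamic_cast<sc_core::sc_signal<T>*>(obj)) {
                sc_core::sc_trace(tf, sig->read(), sig->name());
                return true;
            }
            return false;
        }

        static void sc_trace_all(sc_core::sc_trace_file* tf, sc_core::sc_object* obj)
        {
            // Trace this object if it is a signal of a known value type ...
            if (!try_trace<bool>(tf, obj))
                try_trace<int>(tf, obj);
            // ... and recurse into its children (modules, nested signals, ...).
            for (sc_core::sc_object* child : obj->get_child_objects())
                sc_trace_all(tf, child);
        }

        static void sc_trace_all(sc_core::sc_trace_file* tf)
        {
            for (sc_core::sc_object* top : sc_core::sc_get_top_level_objects())
                sc_trace_all(tf, top);
        }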
  9. 1 point
    Roman Popov

    Array when declare port for model

    This is a very old style. With modern SystemC you can have the same with sc_vector:

        //exxe.h
        class EXXE : public sc_module
        {
        public:
          sc_vector<sc_in<sc_dt::uint64>> SC_NAMED(clock, 3);
          sc_vector<sc_in<bool>>          SC_NAMED(input, 5);

          EXXE(sc_module_name);
          ~EXXE();
        };

    But as David mentioned, before starting to learn SystemC you should learn C++ first. Trying to understand SystemC without being proficient in C++ is a waste of time.
  10. 1 point
    You would need to write a wrapper class with debug aids built in. A sketch might be:

        struct debug_mutex : sc_core::sc_mutex
        {
          int lock() override
          {
            auto requester = sc_core::sc_get_current_process_handle();
            INFO( DEBUG, "Attempting lock of mutex " << uintptr_t(this)
                  << " from " << requester.name() << " at " << sc_time_stamp() );
            int result = this->sc_mutex::lock();
            locked = true;
            locker = requester.name();
            time = sc_time_stamp();
            changed.notify();
            return result;
          }
          int unlock() override
          {
            auto requester = sc_core::sc_get_current_process_handle();
            int result = this->sc_mutex::unlock();
            time = sc_time_stamp();
            locked = false;
            locker = "";
            changed.notify();
            return result;
          }
          // Attributes
          bool locked{ false };
          sc_event changed;
          sc_time time;
          string locker;
        };

    I have not tested the above. Note that sc_mutex's lock() and unlock() return int, so the overrides do as well. NOTE: I have a macro INFO that issues an appropriate SC_REPORT_INFO_VERB with the above indicated syntax. Replace it with your own. Never use std::cout or std::cerr if coding SystemC.
  11. 1 point
    Eyck

    Read in customized structs

    Where did you check the values of p1 and p2? write() only schedules the values to be written; you will see the actual value in the next delta cycle. Best regards
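    A minimal illustration of that behaviour (the module, signal name, and value below are invented for the example): the value written to an sc_signal only becomes visible after the update phase, i.e. one delta cycle later.

        #include <systemc>
        #include <iostream>
        using namespace sc_core;

        SC_MODULE(WriteDemo)
        {
            sc_signal<int> p1{"p1"};

            SC_CTOR(WriteDemo) { SC_THREAD(run); }

            void run()
            {
                p1.write(42);
                std::cout << "right after write(): " << p1.read() << std::endl; // still the old value
                wait(SC_ZERO_TIME);                                             // let the update phase run
                std::cout << "one delta later:     " << p1.read() << std::endl; // now 42
            }
        };

        int sc_main(int, char*[])
        {
            WriteDemo demo{"demo"};
            sc_start();
            return 0;
        }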
  12. 1 point
    maehne

    Direct Digital Synthesis

    As you can see from the DDS block diagram which you posted, the structure of the DDS block is quite simple. I therefore would model it as a single TDF module containing the accumulator register as a state variable. As inputs, you will have your frequency control and phase offset control signals. The frequency control signal is basically the value which you continuously add to the value in your accumulator register to generate a periodic sawtooth signal (due to the overflow of the accumulator). The phase offset control signal gets added to the accumulator register to enable shifting the phase of the sawtooth signal. This sawtooth signal is interpreted as the angle operand of a sine function. The period of the sawtooth signal represents a full revolution on the phase circle, i.e., the 2^L possible sawtooth signal values are evenly mapped to phase angles in the range of 0 rad to 2 pi rad (0° to 360°). To avoid repeated calculations of the sine function, the sine amplitudes for all possible angles are pre-calculated and stored in a look-up table (LUT). The current value of the sawtooth signal is then used as the index into that look-up table to find the corresponding output amplitude.

    To implement this in a generic DDS TDF module, I would choose M, L, and K (bit widths) as generic parameters, which can be passed upon DDS module instantiation to the DDS module's constructor. From M and L, you can calculate the value for the modulo operation, which models in software the overflow of an M-bit-wide and L-bit-wide addition operation. The difference of M and L gives you the shift distance needed to implement your accumulator output quantisation. The sine LUT needs to have 2^L entries. The sine function should be scaled by 2^(K-1)-1 to use the full range of a K-bit-wide signed output. All these preparatory calculations you can do in your DDS module's constructor. The processing function then only needs to implement the two additions and the amplitude look-up based on the current values of the frequency and phase control inputs and the current value stored in the accumulator register (a rough sketch follows below).

    If you plan to use this kind of module in a model of a communication system, you might consider having a look at the Vienna SystemC AMS Building Block Library, which is available for download from systems-ams.org. The latter web site also contains other useful SystemC AMS resources.
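    A rough sketch of that structure, under several assumptions of my own (plain integer ports, the port names, passing M/L/K to the constructor, and treating the phase offset as an M-bit word), might look as follows. It is meant to show where the pieces go, not to be a drop-in module.

        // Purely illustrative sketch; not from the original post.
        #include <systemc-ams>
        #include <cmath>
        #include <vector>

        SCA_TDF_MODULE(dds)
        {
            sca_tdf::sca_in<unsigned long>  freq_in;   // frequency control word (assumed M bits)
            sca_tdf::sca_in<unsigned long>  phase_in;  // phase offset word (assumed M bits)
            sca_tdf::sca_out<long>          out;       // quantised sine amplitude (K-bit signed range)

            dds(sc_core::sc_module_name, unsigned M_, unsigned L_, unsigned K_)
            : freq_in("freq_in"), phase_in("phase_in"), out("out"), M(M_), L(L_), K(K_)
            {
                acc_mod = 1ull << M;                         // modulo value of the M-bit accumulator
                shift   = M - L;                             // quantise the accumulator down to L bits
                const double amp    = std::pow(2.0, static_cast<int>(K) - 1) - 1.0;
                const double two_pi = 2.0 * std::acos(-1.0);
                lut.resize(std::size_t(1) << L);             // 2^L sine samples, computed once
                for (std::size_t i = 0; i < lut.size(); ++i)
                    lut[i] = static_cast<long>(amp * std::sin(two_pi * double(i) / double(lut.size())));
            }

            void processing()
            {
                acc = (acc + freq_in.read()) % acc_mod;                      // overflowing phase accumulator
                const unsigned long long phase = (acc + phase_in.read()) % acc_mod; // apply phase offset
                out.write(lut[std::size_t(phase >> shift)]);                 // L-bit index into the sine LUT
            }

        private:
            unsigned M, L, K;
            unsigned long long acc = 0, acc_mod = 0;
            unsigned shift = 0;
            std::vector<long> lut;
        };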
  13. 1 point
    Those questions are covered in detail in clauses 14.1 and 14.2 of the SystemC standard; I can't answer in a better way. TLM-2.0 simulations are not cycle-accurate, so you don't have clock edge events. In the AT modeling style you should call wait(delay) after each transport call. In the LT modeling style all initiators are temporally decoupled and can run ahead of simulation time, usually up to a globally specified time quantum. For debugging you can use the same techniques as in cycle-accurate modeling:

    - Source-level debugging with breakpoints and stepping
    - Transaction and signal tracing
    - Logging

    Compared with RTL, debugging using waveforms won't be as effective, because in AT/LT modeling the state of the model can change significantly in a single simulator/waveform step. Usually the preferred way is a combination of logging and source-level debugging (a tiny sketch of the logging idea follows below). Debugging TLM models is harder compared to RTL. Also, C++ is much more complex and error-prone compared to VHDL/Verilog.
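    To give one concrete flavour of the logging technique, here is a minimal, purely illustrative helper built on SC_REPORT_INFO; the function name, message-type string, and fields are all my own choices, not part of any standard API:

        #include <systemc>
        #include <sstream>

        // Report a transaction milestone through the SystemC report handler instead of
        // std::cout, so it can later be filtered by message type and verbosity.
        inline void log_begin_req(sc_dt::uint64 addr, unsigned len)
        {
            std::ostringstream os;
            os << "BEGIN_REQ addr=0x" << std::hex << addr << std::dec
               << " len=" << len << " @ " << sc_core::sc_time_stamp();
            SC_REPORT_INFO("tlm/initiator", os.str().c_str());
        }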
  14. 1 point
    Roman Popov

    using gtkwave

    I don't work on Windows. But as far as I remember, the executable should be somewhere in a project sub-directory called "Release" or "Debug". Sorry, I can't help you more here.
  15. 1 point
    Roman Popov

    using gtkwave

    In the working directory (active directory when you launch the executable). Search for "and_gate.vcd"
  16. 1 point
    Roman Popov

    using gtkwave

    Your code is not correct. Why did you put next_trigger(5, SC_NS) inside a method? Remove it, and you will get the correct waveform for the AND gate.
  17. 1 point
    Well, the answer is a bit more complex. The main difference is that the standard requires that no wait() is allowed during an nb_transport call, while in b_transport it is allowed. So any implementation adhering to the standard guarantees this. Let's first look at the non-blocking implementation. tlm_phase values do not denote a phase directly, but rather time points of the protocol. Actually you have two phases, request and response, which are denoted by two time points each. So the initiator can indicate the start or end of a phase of a transaction and be sure that the call is not blocked by a call to wait(). You can therefore model the behavior and timing of a bus transaction in a fairly granular way and do something while the transaction is ongoing. You can even have two transactions in parallel, one being in the request phase while the other one is in the response phase (or even more if you have more phases defined). The transactions are pipelined. Looking at b_transport, the situation is different. The target can delay the transaction by calling wait() until it is ready to respond. During that time no other transaction can be ongoing; the initiator is blocked and cannot react to it. Blocking accesses can be used if the timing of the communication is not of interest/not modeled (the other scenario is loosely timed models, but that's a different story). They are easy to implement and easy to use. Non-blocking is used if the timing of the communication needs to be modeled in more detail. E.g. this allows you to model, simulate, and analyse bus contention situations, as it allows attaching timing to all phases of a bus transaction like grant, address phase, and data phase. I hope this sheds some light -Eyck
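    To make the contrast concrete, here is a minimal sketch of a target (the class name, timing values, and the omitted backward path are my own simplifications): b_transport may suspend the calling initiator with wait(), while nb_transport_fw has to return immediately and signal later protocol phases separately.

        #include <systemc>
        #include <tlm>
        #include <tlm_utils/simple_target_socket.h>

        struct SimpleTarget : sc_core::sc_module
        {
            tlm_utils::simple_target_socket<SimpleTarget> socket;

            SC_CTOR(SimpleTarget) : socket("socket")
            {
                socket.register_b_transport(this, &SimpleTarget::b_transport);
                socket.register_nb_transport_fw(this, &SimpleTarget::nb_transport_fw);
            }

            // Blocking: the initiator's thread is stalled for the whole access time.
            void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay)
            {
                sc_core::wait(sc_core::sc_time(20, sc_core::SC_NS)); // wait() is allowed here
                trans.set_response_status(tlm::TLM_OK_RESPONSE);
            }

            // Non-blocking: must return without waiting; the response phase would be
            // signalled later on the backward path (omitted in this sketch).
            tlm::tlm_sync_enum nb_transport_fw(tlm::tlm_generic_payload& trans,
                                               tlm::tlm_phase& phase,
                                               sc_core::sc_time& delay)
            {
                if (phase == tlm::BEGIN_REQ)
                    return tlm::TLM_ACCEPTED;   // request accepted, no wait() permitted
                return tlm::TLM_COMPLETED;
            }
        };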
  18. 1 point
    Eyck

    using gtkwave

    Actually, SystemC provides sc_trace(...) functions where you register your signals and variables for tracing. Running the simulation yields a .vcd file which you can open in gtkwave. You may have a look into https://github.com/Minres/SystemC-Components-Test/blob/master/examples/transaction_recording/scv_tr_recording_example.cpp In sc_main() you will find

        sc_trace_file *tf = sc_create_vcd_trace_file("my_db");

    This opens the waveform database. At the end you have to call sc_close_vcd_trace_file(tf); to properly close the database. 'tf' is a handle to the database; if you follow the code you will see how to trace signals (or variables). Best
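    Putting the pieces together, a minimal self-contained example of the same pattern might look like this (the database name "my_db" is taken from the snippet above; the signal and the 5 ns timings are just illustrative):

        #include <systemc>
        using namespace sc_core;

        int sc_main(int, char*[])
        {
            sc_signal<bool> sig("sig");

            sc_trace_file* tf = sc_create_vcd_trace_file("my_db"); // creates my_db.vcd
            sc_trace(tf, sig, "sig");                              // register the signal for tracing

            sc_start(5, SC_NS);
            sig.write(true);                                       // value change shows up in the .vcd
            sc_start(5, SC_NS);

            sc_close_vcd_trace_file(tf);                           // flush and close the database
            return 0;
        }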
  19. 1 point
    AmeyaVS

    Behavioral XOR Gate with Delay

    Hello @re1418ma, You can look at this example: http://forums.accellera.org/topic/5678-clock-to-q-propagation-delay/?do=findComment&comment=13657 Or this one which shows how to add delay in full adder: http://forums.accellera.org/topic/5715-delaying-simulated-execution/?do=findComment&comment=13844 Hope it helps. Regards, Ameya Vikram Singh
  20. 1 point
    Hello everyone, first of all I apologize if the post is too big and I know sometimes people get discouraged from reading big posts. On the other hand I spent quite some time trying to make the post as clear as possible for the reader. So please do not get discouraged :).

    My SystemC version: SystemC 2.3.1
    My operating system: Ubuntu 16.04

    I am trying to understand how the SystemC simulator works, and for that I ran the following code:

        SC_MODULE(EventNotifications)
        {
          sc_event evt1;

          void p1() {
            evt1.notify(10, SC_NS);
            evt1.notify(5, SC_NS);
            evt1.notify();
            evt1.notify(0, SC_NS);
            wait(10, SC_NS);
            evt1.notify(5, SC_NS);
            wait(10, SC_NS);
          }

          void p2() {
            wait(10, SC_NS);
            evt1.cancel();
            evt1.notify();
            wait(10, SC_NS);
          }

          void p3() {
            cout << "evt1 is activated at " << sc_time_stamp() << endl;
          }

          SC_CTOR(EventNotifications) {
            SC_THREAD(p1);
            SC_THREAD(p2);
            SC_METHOD(p3);
            sensitive << evt1;
            dont_initialize();
          }
        };

    I referred to the SystemC Language Reference Manual 2.0.1: http://homes.di.unimi.it/~pedersini/AD/SystemC_v201_LRM.pdf and in my explanation down below I used the following abbreviations:

    R = {} - list of runnable processes,
    D = {} - list of processes that have been notified using delta notifications,
    T = {} - list of processes for which a timed notification has been used, e.g. event.notify( >0, SC_NS)

    Section 2.4.1 "Scheduler Steps" on page 6 of the SystemC Language Reference Manual 2.0.1 was used for the following reasoning about how this code works:

    Initialize phase: We initialize all the processes that are ready to run. Process p3 will not be runnable at the beginning due to the dont_initialize command. So only the p1 and p2 processes are runnable; as a result R = {p1, p2} at the end of the initialize phase. Now we go to the evaluation phase. We start with simulation time equal to 0 ns.

    Simulation time = 0 ns

    Evaluation phase (delta cycle = 0): At the beginning we have R = {p1, p2}, D = {} (empty list) and T = {}. Let's say the scheduler decides to execute p1 first, so p1 gets removed from R, effectively R = {p2}. Then we execute the first timed notification evt1.notify(10, SC_NS); after that we have evt1.notify(5, SC_NS), and since 5 ns is earlier than 10 ns, only 5 ns will be remembered, so we have T = {p3}. The next statement is evt1.notify(), which is an immediate notification and will overwrite the previous notification evt1.notify(5, SC_NS). Due to the immediate notification, p3 is put into the runnable list, R = {p3}, and T = {}. The next statement is evt1.notify(0, SC_NS), so p3 will be put in the list D. So now we have R = {p3}, D = {p3}, T = {}.

    Question 1: If I swapped the two statements evt1.notify(0, SC_NS) and evt1.notify() here, would the delta notification be removed? In my opinion only evt1.notify() will be remembered. From page 128 of the manual: "A given sc_event object can have at most one pending notification at any point. If multiple notifications are made to an event that would violate this rule, the 'earliest notification wins' rule is applied to determine which notification is discarded." As a result I would have R = {p3, p2}, D = {}, T = {}.

    Now we encounter the wait(10, SC_NS) and p1 is put to wait.

    Question 2: Since we have wait(10, SC_NS), does that mean that the process p1 will be put into a separate list/queue of sleeping processes? Let's call it list S, so we would effectively have S = {p1}?

    Next, let's say the scheduler decides to run p2, so we remove p2 from the R list and we have R = {p3}. There we encounter wait(10, SC_NS), and p2 gets into the S list, S = {p1, p2}.
    Now we have R = {p3} and p3 gets executed, so the immediate notification is handled at simulation time 0 ns, as the first console output indicates. Now method p3 exits, and list R is empty, R = {}, so we go to the update phase.

    Update phase (delta cycle = 0): Nothing to be updated. We just go to the next delta cycle phase.

    Next delta cycle phase (delta cycle = 0): We increment the delta cycle, check all the contents of list D and put them into the list R. In our case D = {p3}, thus R = {p3}. Now we go back to the evaluation phase.

    Evaluation phase (delta cycle = 1): We run only p3 here, so the delta notification happens at simulation time 0 ns + 1 delta cycle. R = {}, we go to the update phase.

    Update phase (delta cycle = 1): Nothing to be updated. Go to the next delta cycle phase.

    Next delta cycle phase (delta cycle = 1): We increment the delta cycle, but since D = {}, we go to the increase simulation time phase.

    Increase simulation time phase (delta cycle = 2): From page 6: "If there are no more timed event notifications, the simulation is finished. Else, advance the current simulation time to the time of the earliest (next) pending timed event notification." Now back to my Question 2: since T = {}, that would mean that we have no timed event notifications, and based on the reference manual the simulation should be finished, which is not the case here if you run the code. So from my understanding, the processes that called wait will not be put into a "list of sleeping processes" but instead into either list T or D. I think in the case of wait(>0, SC_NS) the process gets put into the T list, and if wait(0, SC_NS) is called the process should be put into the D list. So in our case T = {p1, p2}? We increase the simulation time to the earliest pending time, 10 ns, the contents of list T = {p1, p2} are put into the list R = {p1, p2}, and we go to the next evaluation phase.

    Simulation time = 10 ns

    Evaluation phase (delta cycle = 0): Here we can either run p1 or p2. Let's say p2 is run first, and we encounter evt1.cancel(); since there are no pending notifications, nothing will happen. Then we execute evt1.notify(), and p3 gets into the list R, so R = {p1, p3}. Then wait is encountered, so T = {p2}. Now let's say the scheduler decides to execute p3, and then the immediate notification happens at a simulation time of 10 ns. Now R = {p1}, so p1 gets executed and there we have evt1.notify(5, SC_NS), so p3 gets into the list T = {p2, p3}. Then we execute wait(10, SC_NS), and p1 sleeps again. So T = {p2, p3, p1}. Since R = {}, we go to the update phase.

    Update phase (delta cycle = 0): Nothing to be updated, so we go to the next phase.

    Next delta cycle phase (delta cycle = 0): We increment the delta cycle. Nothing in list D, so we go to the next phase.

    Increase simulation time phase (delta cycle = 1): We put the contents of T into R, thus R = {p2, p3, p1}, and we increment the time to the earliest pending notification; since we had evt1.notify(5, SC_NS) and the threads slept for 10 ns, we choose 5 ns. So the simulation time is increased to 15 ns. We again go to the evaluation phase.

    Simulation time = 15 ns

    Evaluation phase (delta cycle = 0): Here R = {p2, p3, p1}, so let's say we execute p3 first; as a result the timed notification evt1.notify(5, SC_NS) happens at simulation time 15 ns. Now R = {p2, p1}, and p1 executes; since there is nothing after the last wait statement the thread terminates. Same situation for p2, so p2 terminates. R = {}, go to the next phase.

    Update phase (delta cycle = 0): Go to the next phase.
    Next delta cycle phase (delta cycle = 0): The delta cycle is incremented; since D = {}, we go to the next phase.

    Increase simulation time phase (delta cycle = 1): Since T = {}, there is nothing left to be simulated and the simulation ends.

    So this would explain the following result I got on the console:

    evt1 is activated at 0 s
    evt1 is activated at 0 s
    evt1 is activated at 5 ns
    evt1 is activated at 10 ns

    I tried to check my assumption that when wait(0, SC_NS) gets called in some process, the process will be put in the D list. So I ran this code:

        SC_MODULE(DeltaTimeWait)
        {
          void p1(void) {
            while (true) {
              cout << "p1: " << sc_time_stamp() << endl;
              wait(0, SC_NS);
            }
          }

          void p2(void) {
            while (true) {
              cout << "p2: " << sc_time_stamp() << endl;
              wait(0, SC_NS);
            }
          }

          SC_CTOR(DeltaTimeWait) {
            SC_THREAD(p1);
            SC_THREAD(p2);
          }
        };

    There is also one thing I noticed. For example, if I change the order of the thread registration in the constructor of the first code, having SC_THREAD(p2) before SC_THREAD(p1), I get a different result:

        SC_CTOR(Task5_d) {
          SC_THREAD(p2);
          SC_THREAD(p1);
          SC_METHOD(p3);
          sensitive << evt1;
          dont_initialize();
        }

    I get the following result:

    evt1 is activated at 0 s
    evt1 is activated at 0 s
    evt1 is activated at 10 ns

    I am not sure if my reasoning for this result is correct. I think that we get this output because, at the point where the simulation time was 10 ns, we had two choices: we could either schedule p1 or p2 first.

    Simulation time = 10 ns

    Evaluation phase (delta cycle = 0): At this point, as I have mentioned earlier, we can either run p1 or p2 first. In the first case we assumed p2 was run first. But now assume p1 is run first instead of p2. So p1 gets executed and the statement evt1.notify(5, SC_NS) is encountered. As a result, p3 gets into the list T, and then p1 is put to sleep. Now the process p2 gets scheduled, and the first line we encounter is evt1.cancel(), which cancels the pending notification evt1.notify(5, SC_NS) from process p1. After that evt1.notify() is executed, which results in p3 getting into the R list. So with p3 being the only process in the list R, we execute process p3, and evt1 is notified at a simulation time of 10 ns.

    Question 3: How come the order of thread registration actually affects the order in which the processes are scheduled?

    I am not sure if my reasoning is correct, so I would appreciate your feedback, as I am only a beginner in SystemC. Looking forward to your feedback. Ivan.
  21. 1 point
    Uwghiello

    Install SystemC on Visual Studio 2017

    I am trying to install SystemC on Visual Studio 2017, but it seems that I cannot find a guide for it, so I had to follow the guide for installing SystemC on VS2010 :( In fact, I have got SystemC.lib, but when I configured the project for SystemC, i.e. Project -> Properties -> C/C++ -> Code Generation -> Runtime Library -> ... and so on, finished all of them and built the solution, there are some errors. So... what should I do? Totally confused because I know nothing about VS =_= PS: the guide is here
  22. 1 point
    Hi. This is because an unbound port cannot be read. A port forwards all read and write calls to the actual interface (signal) it is bound to. In your module constructor, you are still in the model setup and elaboration phase. The port is not yet bound to any signal. Hence, you cannot read from it. Accessing ports should not be done before end of elaboration. Greetings Ralph
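    A small sketch of that distinction (the module, signal, and value are invented for illustration): reading through the port in the constructor would fail, while reading in end_of_elaboration(), after binding has completed, is fine.

        #include <systemc>
        #include <iostream>
        using namespace sc_core;

        SC_MODULE(Consumer)
        {
            sc_in<int> in;

            SC_CTOR(Consumer) : in("in")
            {
                // in->read() here would fail: the port is not bound to any signal yet
            }

            void end_of_elaboration() override
            {
                // binding is complete now, so reading through the port is safe
                std::cout << name() << " sees initial value " << in->read() << std::endl;
            }
        };

        int sc_main(int, char*[])
        {
            sc_signal<int> sig("sig");
            Consumer consumer("consumer");
            consumer.in.bind(sig);
            sc_start(SC_ZERO_TIME);
            return 0;
        }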
  23. 1 point
    There have been some improvements to the performance of the field automation macros, but I still do not believe their benefit is worth the cost. You should be able to prove it to yourself by creating a simple testbench with and without the macros.
  24. 1 point
    Actually, you can start a sequence in any phase. It is more important to understand the domain/scheduling relationships between the task-based (i.e. runtime) phases. UVM undergoes a number of pre-simulation phases (build, connect, end_of_elaboration, start_of_simulation) that are all implemented with functions. Once those are completed, the task-based phases begin. The standard includes two schedules. One is simply the run_phase, which starts executing at time zero and continues until all components have dropped their objections within the run_phase. The other schedule contains twelve phases that execute parallel to the run phase. They are: pre_reset, reset, post_reset, pre_configure, configure, post_configure, pre_main, main, post_main, pre_shutdown, shutdown, and post_shutdown. They execute in sequence. Every component has the opportunity to define or not define tasks to execute these phases. A phase starts only when all components in the previous phase have dropped their objections. A phase continues to execute until all components have dropped their objections in the current phase. Many companies use the run_phase for everything because there are some interesting issues to consider when crossing phase boundaries. In some respects it may be easier to use uvm_barriers for synchronization. Drivers and monitors (things that touch the hardware) are usually run exclusively in the run_phase, but there is nothing to prevent them also having reset_phase, main_phase, etc.
  25. 1 point
    Assuming you're using plain signal ports, you can use the event() member function to check whether a specific port has been triggered in the current delta cycle:

        sc_vector< sc_in<int> > in_vec;
        // ...
        SC_METHOD(proc);
        for( unsigned i = 0; i < in_vec.size(); ++i )
          sensitive << in_vec[i];
        // ...
        void proc()
        {
          for( unsigned i = 0; i < in_vec.size(); ++i )
            if( in_vec[i]->event() )
              std::cout << "in_vec[" << i << "] triggered." << std::endl;
        }

    Greetings from Oldenburg, Philipp