plafratt

Members
  • Posts

    26
  • Joined

  • Last visited

  1. Your logic is sound in your conclusion that there is some time in the model. For the purpose of the example, we can assume that after the arbiter thread makes a decision, it waits a fixed amount of time, say sc_time(1, SC_NS). The intent is for the arbiter to wait until it is sure that all input threads that will insert into their input FIFOs at this timestamp have done so. I think I understand what you are saying about having another part of the design (call it the "notifying_thread") tell the arbiter that "this timestamp" has finished. But if the requirement is that this notifying_thread wait until all of the input threads that will be activated have been activated, I don't know that SystemC natively provides a way to do this, since the order in which the notifying_thread and the input threads are activated is non-deterministic. One approach that I think might work, given the guidance provided above, would be to design the input threads such that when one wakes up, they all wake up, even if some of them don't insert into their corresponding FIFOs. Each thread sets a "done" variable to indicate that it has woken up. The notifying_thread executes wait() statements in a loop until all of the "done" variables are set, and then clears all of the threads' "done" variables. I admit, though, that this seems inelegant, and I suspect there is a better way. Thanks to both of you for the helpful input.
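     The "done"-flag handshake I have in mind could be sketched roughly like this (all names are hypothetical, and it assumes just two input threads, all sensitive to the same wake-up event so that when one wakes they all wake):

     ```cpp
     #include <systemc.h>

     SC_MODULE(ArbiterBlock) {
         sc_event wake;   // fired when a request arrives at the current timestamp
         bool done[2];    // each input thread sets its flag once it has run

         void input_thread(int id) {
             while (true) {
                 wait(wake);
                 // ... possibly insert into input FIFO `id` here ...
                 done[id] = true;
             }
         }

         void notifying_thread() {
             while (true) {
                 wait(wake);
                 while (!(done[0] && done[1]))
                     wait(SC_ZERO_TIME);    // give remaining input threads a delta
                 done[0] = done[1] = false; // clear flags for the next cycle
                 // safe to notify the arbiter: all inserts for this timestamp are in
             }
         }

         SC_CTOR(ArbiterBlock) {
             done[0] = done[1] = false;
             SC_THREAD(notifying_thread);
             sc_spawn(sc_bind(&ArbiterBlock::input_thread, this, 0));
             sc_spawn(sc_bind(&ArbiterBlock::input_thread, this, 1));
         }
     };
     ```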
  2. Thank you for the helpful response. I believe I need to think this over a little more. I am still not entirely clear on how the arbiter should handle the case where one of the input threads may not check in during a given arbitration cycle, as the arbiter cannot make its arbitration decision until it is sure that all of the input threads that will check in have done so.
  3. I am facing a dilemma with the scheduling of SystemC processes. I have a feeling it is because I have erred in my design approach and wanted to see if someone could offer some guidance. Say that you have a SystemC thread that arbitrates transactions that arrive through multiple FIFOs. The threads that enqueue the transactions into the FIFOs are untimed. When a request arrives at an input FIFO of the arbiter, the arbiter gets notified. At this point, the arbiter does not know which of the FIFOs will eventually get a transaction in it at this time stamp. We'd like to wait until all of the other threads have executed before the arbiter proceeds to make its arbitration decision. As I understand, one way to do this would be just to make a bunch of wait(SC_ZERO_TIME) calls. But this seems a bit like a hack, for various reasons (e.g., what if later, there are more untimed processes added upstream from the input threads? Then the arbiter has to add more wait(SC_ZERO_TIME) calls to account for these). I am wondering if this would just be considered a bad design. Since real hardware takes time for activity to occur, perhaps this is not the intended use of the delta scheduling in SystemC. Or, maybe I am just approaching this in the wrong way? Any help is appreciated.
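     To make the wait(SC_ZERO_TIME) approach concrete, here is roughly what I mean (a sketch with hypothetical names, which also shows why it feels fragile):

     ```cpp
     // Sketch of the delta-settling pattern discussed above (hypothetical names).
     // After being notified, the arbiter burns one delta cycle per "layer" of
     // untimed processes upstream, hoping that every producer that will insert
     // at this timestamp has run before the decision is made.
     void arbiter_thread() {
         while (true) {
             wait(request_event);  // some input FIFO just received a transaction
             wait(SC_ZERO_TIME);   // one delta for the input threads
             wait(SC_ZERO_TIME);   // another for processes upstream of them --
                                   // adding producers later means adding waits here
             arbitrate();          // now inspect all input FIFOs and decide
         }
     }
     ```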
  4. Great, thank you. I was aware of ok_to_put(), but I think I had some confusion about how to use it in my situation. It looks like it should do what I need. Thanks!
  5. Well, I'm not sure something like deque will provide what is needed, because I still need process notifications when an entry is written or read. I think what I am not seeing is how to write into the fifo and then block the writer until the reader reads from it. I can write something custom based on deque or something else (or look for another class in sc_core or tlm) to provide this, but wanted to be sure that tlm_fifo didn't provide this first.
  6. I am using a tlm_fifo. When I put() into it and then immediately call nb_can_get(), it is returning false. Is this an invalid use of the tlm_fifo to call put() and nb_can_get() in the same delta? Or, have I made a mistake somewhere?
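     A minimal illustration of what I am seeing, assuming my understanding is right that tlm_fifo is delta-cycle accurate (an item put() in the current delta only becomes visible to getters in the next one):

     ```cpp
     #include <systemc.h>
     #include <tlm.h>

     tlm::tlm_fifo<int> fifo(4);  // hypothetical bound of 4

     void writer_thread() {
         fifo.put(42);
         sc_assert(!fifo.nb_can_get());  // same delta: item not yet visible
         wait(SC_ZERO_TIME);             // advance one delta cycle
         sc_assert(fifo.nb_can_get());   // now the item can be taken
     }
     ```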
  7. I see now that the TLM2.0 LRM explicitly disallows this. (In the version of the LRM I have, this is in section "4.1.2.6 The phase argument".) However, the text there refers specifically to the "base protocol," so perhaps the way to allow what I am trying to do would be to define a new protocol. Any input on this is appreciated, though, in case that is the wrong way to go about it.
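     For what it's worth, the TLM-2.0 kit provides the DECLARE_EXTENDED_PHASE macro for defining phases outside the base protocol; a minimal sketch of the direction I am considering (PREEMPT_REQ is a hypothetical phase name):

     ```cpp
     #include <tlm.h>

     // Defining a phase outside the base protocol (hypothetical name).
     DECLARE_EXTENDED_PHASE(PREEMPT_REQ);

     // Initiator side: pass the custom phase through nb_transport_fw.
     void send_preempt(tlm::tlm_generic_payload& r1,
                       tlm::tlm_initiator_socket<>& socket) {
         tlm::tlm_phase phase = PREEMPT_REQ;
         sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
         socket->nb_transport_fw(r1, phase, delay);
     }
     ```

     As I understand it, sockets carrying non-base-protocol phases should be declared with a custom protocol traits class rather than tlm::tlm_base_protocol_types, which is exactly the "new protocol" question above.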
  8. With TLM2.0 nb_transport, is there a way for an initiator to tell an interconnect that it wants to preempt the last request the initiator sent? For example, say that the interconnect can buffer only 1 request. The initiator sends a low priority request R0 to the interconnect. Some time passes, and the initiator begins another, higher priority request R1. The initiator knows that the interconnect hasn't serviced R0 yet, because the initiator hasn't received a response for R0 yet. The initiator wants to tell the interconnect, "Throw out R0 and use R1 instead." I can achieve this by just having the initiator send R1 to the interconnect, and then having it send R0 again at a later time (after R1 is done being serviced). However, it isn't clear to me that the TLM2.0 protocol allows the initiator to send R0 an arbitrary number of times in the BEGIN_REQ phase.
  9. Thank you for the thoughts. I am using acquire() and release() to ensure that the memory is safe while I am using the transaction in the target. The specific problem I am facing is that the initiator might call, for example, set_address() on the transaction after it receives the response, while the target is still using it. The target needs access to the original address.
  10. In systems where a target can send an early response, such as a posted write, and the initiator can change the transaction after it receives the response, how do most developers handle the transaction in the target while avoiding seeing the initiator's changes to that transaction? An obvious solution is for the target to do a deep copy of the transaction, but in a complex system with many modules, having each module potentially create its own copy of each transaction can make debug and performance analysis difficult. Another option is for the target to save the relevant information from the transaction in an extension when the target receives that transaction, but this has the problem that the developers of the target must always remember to use the accessors in the extension, rather than the standard ones for the transaction.
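      The snapshot-extension option mentioned above could be sketched as follows (snapshot_ext is a hypothetical class; the target copies the fields it needs when the transaction arrives, so later initiator calls such as set_address() are not visible to it):

      ```cpp
      #include <tlm.h>

      // Hypothetical extension that snapshots the fields the target needs.
      struct snapshot_ext : tlm::tlm_extension<snapshot_ext> {
          sc_dt::uint64    address;
          tlm::tlm_command command;

          tlm_extension_base* clone() const override {
              return new snapshot_ext(*this);
          }
          void copy_from(const tlm_extension_base& other) override {
              *this = static_cast<const snapshot_ext&>(other);
          }
      };

      // Target side, on receiving the transaction:
      void capture(tlm::tlm_generic_payload& gp) {
          snapshot_ext* ext = new snapshot_ext;
          ext->address = gp.get_address();
          ext->command = gp.get_command();
          gp.set_auto_extension(ext);  // released when the payload refcount drops
          // target code now reads ext->address rather than gp.get_address()
      }
      ```

      This still has the drawback noted above: everyone writing target code has to remember to read the extension's fields instead of the payload's accessors.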
  11. Thank you for the responses. Sorry for the delay in my response. I'd gotten busy, and I am just now getting around to signing back in. We don't seem to be having this issue anymore. I am not sure if it is because of code updates or because of a compiler upgrade. We recently switched from g++ 4.9.2 to 6.2.1, so that may have resolved the issue. The path in question is sc_fifo::read() -> sc_fifo::read(T& val_) -> sc_fifo<T>::buf_read(T& val_). On line 410, buf_read() returns false without setting val_. None of the functions in the call chain set val_, so when the original call to sc_fifo::read() returns, it returns its locally declared tmp variable, which was passed to sc_fifo::read(T& val_), which never set it. As I mentioned, though, this doesn't seem to be causing a problem for us at the moment. It may have been a compiler issue that got resolved when we upgraded. Thank you and regards, Patrick
  12. I get the warning when building my code that uses sc_fifo.h, not when building systemc. I assume this is because the function is declared inline.
  13. For sc_fifo::read(), some versions of g++ give me a warning that the variable tmp is unset. I see there is a path through read() that leaves tmp unset, but I don't think it is possible that this path ever gets executed. Is there a way around this warning that doesn't require disabling the warning in g++ or changing sc_fifo.h?
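      One workaround that neither disables the warning globally nor changes sc_fifo.h is to suppress it locally with GCC diagnostic pragmas around the #include (or the call site). A standalone illustration of the pattern, using a function shaped like sc_fifo::read():

      ```cpp
      #include <iostream>

      // Mimics the shape of sc_fifo::read(): tmp is assigned on only one
      // branch, and g++ cannot prove the other branch never executes.
      #pragma GCC diagnostic push
      #pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
      int read_like(bool ok) {
          int tmp;
          if (ok)
              tmp = 42;  // the path that is always taken in practice
          return tmp;    // without the pragma, g++ may warn here
      }
      #pragma GCC diagnostic pop

      int main() {
          std::cout << read_like(true) << std::endl;
      }
      ```

      In practice the push/ignored/pop lines would wrap the #include of the SystemC header so the rest of the translation unit keeps the warning enabled.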
  14. That is it. Thank you very much. Regards, Patrick
  15. I looked at the implementation of sc_fifo, as I'd guessed that in the read() function, it would just notify data_read_event, but the logic surrounding the notify looks a little more complex than that.