Showing results for tags 'fifo'.

Found 5 results

  1. I'm trying to understand how to model a simple pass-through module correctly. Let's say I've got three threads/modules A, B, and C, such that A sends messages to B, which forwards them to C (A --> B --> C). A naive approach for thread B:

         sc_fifo<Message> incoming_fifo("incoming", 3);
         sc_fifo<Message> outgoing_fifo("outgoing", 3);

         void thread_A() {
             while (true) {
                 Message msg;                 // construct a new message
                 incoming_fifo.write(msg);    // blocks when the FIFO is full
             }
         }

         void thread_B() {
             Message msg;
             while (true) {
                 incoming_fifo.read(msg);     // blocks when empty
                 outgoing_fifo.write(msg);    // blocks when full
             }
         }

         void thread_C() {}

     Further, let's say that module C is negligent and never reads from outgoing_fifo (sorry, the names are B-thread centric). One would naively assume that thread_A can send 6 messages (3 in each FIFO) before the FIFOs become full and its write blocks. The problem is, thread_A can actually send 7 messages, because conceptually there is an extra slot in thread_B: the message it holds between the success of its read and the block on its write.

     One solution might be to use tlm_fifos, first waiting on outgoing_fifo.ok_to_put() and then blocking on incoming_fifo.get(). But this becomes problematic if, for example, another module is also putting into outgoing_fifo: ok_to_put() might be true at one point, but by the time the read/get from incoming_fifo wakes thread_B, there might no longer be room in outgoing_fifo. (A similar situation arises if you try to wait on ok_to_get() while multiple threads pull from incoming_fifo.)

     Is there a way to solve this problem so that you can be sure you don't pull from incoming_fifo unless you have room in outgoing_fifo, i.e. simultaneous condition checking, without resorting to something like polling? (A sketch of the single-producer tlm_fifo variant appears below.)
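     For reference, a minimal sketch of the tlm_fifo variant described above, assuming TLM-1's tlm::tlm_fifo and a single producer into each FIFO (the very case the post notes breaks down with multiple producers):

         tlm::tlm_fifo<Message> incoming_fifo("incoming", 3);
         tlm::tlm_fifo<Message> outgoing_fifo("outgoing", 3);

         void thread_B() {
             while (true) {
                 // Wait for room downstream *before* taking a message,
                 // so no message is ever held "in flight" inside B.
                 if (!outgoing_fifo.nb_can_put())
                     wait(outgoing_fifo.ok_to_put());
                 Message msg = incoming_fifo.get();  // blocks when empty
                 outgoing_fifo.put(msg);             // room was ensured above
             }
         }

     With a single producer this keeps the total buffered count at 6; with multiple producers into outgoing_fifo, the room observed after ok_to_put() can disappear before get() returns, which is exactly the race described above.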
  2. Searchable FIFO

     Is there a searchable FIFO implementation around? I mean: my Producers send their products to the Consumer's FIFO whenever they like. The Consumer, however, might have a preference: it may want to consume the products of certain Producers first, independently of their position in the FIFO.
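     A minimal sketch of what such a structure could look like (a hypothetical class, not part of SystemC; plain C++ and non-blocking for brevity):

         #include <deque>
         #include <algorithm>

         template <typename T>
         class searchable_fifo {
             std::deque<T> buf;
         public:
             void put(const T& t) { buf.push_back(t); }

             // Plain FIFO read; returns false when empty.
             bool nb_get(T& t) {
                 if (buf.empty()) return false;
                 t = buf.front();
                 buf.pop_front();
                 return true;
             }

             // Out-of-order read: removes the first element satisfying
             // pred (e.g. "was made by my preferred Producer").
             template <typename Pred>
             bool nb_get_if(T& t, Pred pred) {
                 auto it = std::find_if(buf.begin(), buf.end(), pred);
                 if (it == buf.end()) return false;
                 t = *it;
                 buf.erase(it);
                 return true;
             }
         };

     A blocking SystemC version would additionally need an sc_event notified on every put so the Consumer can wait for new candidates.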
  3. I couldn't find a FIFO that I can attach to a socket. Are there any FIFOs available, or do we have to create our own? Thanks
  4. Hi all, I want to be able to read an element from a "buffer-like" data structure without removing it. Is there a SystemC structure for that? sc_fifo's nb_read() and read() methods remove the element from the buffer; I want to be able to decide when to remove the data and when to leave it in place. Thanks
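     One possibility, as a sketch: TLM-1's tlm::tlm_fifo provides peek()/nb_peek(), which return the oldest element without removing it, while get()/nb_get() remove it. A minimal consumer (the depth and the "consume now?" test are arbitrary illustrations):

         #include <systemc>
         #include <tlm>

         SC_MODULE(Consumer) {
             tlm::tlm_fifo<int> fifo;          // depth 4, arbitrary

             SC_CTOR(Consumer) : fifo("fifo", 4) {
                 SC_THREAD(run);
             }

             void run() {
                 while (true) {
                     int v = fifo.peek();      // blocks until non-empty;
                                               // element stays in the FIFO
                     if (v % 2 == 0)           // decide: consume now?
                         fifo.get();           // remove the peeked element
                     else
                         wait(10, sc_core::SC_NS);  // inspect again later
                 }
             }
         };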
  5. Hi all, I'm new to this forum. I've run into a problem with circular_buffer. Let me explain it with a simple example (code is from circular_buffer.h). I have a circular_buffer with a few elements. After a read, the element that was read is destroyed:

         template <typename T>
         T circular_buffer<T>::read()
         {
             T t = read_data();
             buf_clear(m_buf, m_ri);
             increment_read_pos();
             return t;
         }

         template <typename T>
         inline void circular_buffer<T>::buf_clear(void* buf, int n)
         {
             (static_cast<T*>(buf) + n)->~T();
         }

     ...but in the destructor all elements are destroyed again, including slots already cleared by read():

         template <typename T>
         circular_buffer<T>::~circular_buffer()
         {
             for (int i = 0; i < used(); i++) {
                 buf_clear(m_buf, i);   // !!!! may destroy a slot twice
             }
             buf_free(m_buf);
         }

     This causes random GPFs. I noticed that when I change the destructor loop to

         for (int i = m_ri; i < used() % m_size; i++) {

     the problem disappears. I'm using SystemC 2.3.0. Is this a bug in the TLM sources, or am I doing something wrong? Radek
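     For comparison, a sketch of a destructor that destroys only the live elements, starting at the read position and wrapping around the end of the storage (assuming, as in the circular_buffer.h code above, that m_ri is the read index and m_size the capacity):

         template <typename T>
         circular_buffer<T>::~circular_buffer()
         {
             // used() elements are alive; they start at m_ri and may
             // wrap around the end of the underlying storage.
             for (int i = 0; i < used(); i++) {
                 buf_clear(m_buf, (m_ri + i) % m_size);
             }
             buf_free(m_buf);
         }

     Unlike the loop starting at m_ri quoted above, this also handles the case where the live region wraps past the end of the buffer.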