Many-to-one: analysis_port to export



We want to connect a bunch of analysis ports to the same export. Is it guaranteed that each write() returns within the same delta cycle, and that there can be no case of the export being "busy" at a write() call? Thanks a lot for your help!

There are only functions in the write() call chain, so it should not consume simulation time.
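For reference, a minimal sketch of the setup being described (component and transaction names such as my_sub and my_tx are made up for illustration): several analysis ports all bound to one subscriber's analysis export.

```systemverilog
// Hypothetical sketch, not from the original post: N analysis ports
// funneled into one analysis export. write() is a function, so each
// call completes in zero simulation time.
class my_sub extends uvm_subscriber #(my_tx);
  `uvm_component_utils(my_sub)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  // Called by every connected port; must not block.
  function void write(my_tx t);
    `uvm_info("SUB", $sformatf("got: %s", t.convert2string()), UVM_LOW)
  endfunction
endclass

// In the enclosing env's connect_phase:
//   foreach (mon[i]) mon[i].ap.connect(sub.analysis_export);
```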

Erling


Hi,

There are two sides to this:

1. The analysis port exposes write() as a function. That in itself is a strong guarantee that a call to write() will return without ANY delay in simulation time.

2. There is NO guarantee (due to the preemptive nature of Verilog) that ONLY code in the call chain of that function is being executed. See http://www.uvmworld.org/forums/showthread.php?128-UVM-and-SystemVerilog-prosess-semantics&highlight=simulator for more details.

/uwe


2. There is NO guarantee (due to the preemptive nature of Verilog) that ONLY code in the call chain of that function is being executed. See http://www.uvmworld.org/forums/showthread.php?128-UVM-and-SystemVerilog-prosess-semantics&highlight=simulator for more details.

Is it possible to put together a UVM-based snippet that demonstrates clearly what this means?

Erling


This has nothing to do with UVM. It is what the SystemVerilog LRM says - simulators "may" do this:

"... At any time while evaluating a behavioral statement, the simulator may suspend execution and place the partially completed event as a pending active event on the event queue. The effect of this is to allow the interleaving of process execution. ..."

It basically means that there is no guarantee in a sequential/behavioral code section that line n+1 is executed right after line n; the simulator may at any time start executing some other code. Think of it like a multi-threaded program: while one CPU is working on one block, another may execute a completely different block in parallel (at the same time).


Take a look at this code

package x;
  class y;
    string m_name;
    function new(string name);
      m_name = name;
    endfunction
    task run();
      $display("%s.y.run", m_name);
      fork
        begin   
          $display("%s upper fork", m_name);
        end
        begin
          $display("%s lower fork", m_name);       
        end
      join_any
    endtask
  endclass 

endpackage

module top();
  import x::*;

  y y1 = new("y1");
  y y2 = new("y2");

  initial
    begin : initial1
      $display("%m first initial");
      y1.run();
    end

  initial
    begin : initial2
      $display("%m second initial");
      y2.run();
    end
endmodule
The ordering of the code within each initial thread is deterministic, but I think you will find differences amongst simulators in how the two threads are interwoven. Even though the fork statement doesn't block, some simulators use certain statements as an opportunity to do a context switch between threads. Continuous assignment statements are another typical place where optimizations make it look like the current thread is preempted to evaluate the function on the RHS of a continuous assignment.

There seems to be an even clearer statement about this in the LRM, about process::kill():

"...If the process to be terminated is not blocked waiting on some other condition, such as an event, wait expression, or a delay, then the process shall be terminated at some unspecified time in the current time step."

Since a process that is not WAITING is usually RUNNING, and the process calling kill() has to be RUNNING, there can be at least two RUNNING processes. However, it is trivial to show that this would break UVM, i.e. it would seem a simulator for UVM programs has to implement a cooperative scheduling environment.

This does not mean that there can't be code running in parallel, though. It is just that this can't change the meaning of the program. For example, in the context of the analysis port write(): if some other process is also calling write() on some port concurrently in execution time, then the effect has to be as if the write() calls were serialized.

Erling


There should be no issues running code in parallel as long as write/read access to shared variables is properly managed. That is handled with semaphores and mailboxes, which is what is used when you write to an analysis FIFO. Analysis FIFOs are unbounded, and a write will never block. This is one way you can serialize the transactions.
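As a sketch of that pattern (component and type names are assumed, not from the thread): funnel the ports into a uvm_tlm_analysis_fifo and drain it from a single process, so the checking code never runs concurrently.

```systemverilog
// Hypothetical sketch: an unbounded analysis FIFO serializing transactions.
class my_sb extends uvm_component;
  `uvm_component_utils(my_sb)
  uvm_tlm_analysis_fifo #(my_tx) fifo;  // write() into this never blocks

  function new(string name, uvm_component parent);
    super.new(name, parent);
    fifo = new("fifo", this);
  endfunction

  task run_phase(uvm_phase phase);
    my_tx t;
    forever begin
      fifo.get(t);   // blocks until a transaction is available
      // check/compare t here - serialized by construction
    end
  endtask
endclass

// connect_phase:  mon.ap.connect(sb.fifo.analysis_export);
```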

BTW, no practical simulator of Verilog/SystemVerilog/VHDL implements a process using multitasking, cooperative or preemptive, on a single core (multi-core is another topic for another day). There are just too many threads to manage them that way. Code generators take your source code and splice it together into common streams. So what you think is a context switch is just the result of the way the code has been spliced together. You won't see this much in testbench code, but certainly in the always/initial blocks throughout the design. Also, process-control statements like wait and fork that may not actually need to block sometimes create an opportunity for the scheduler to make a "switch" to another thread in the interest of fairness.


There should be no issues running code in parallel as long as write/read access to shared variables is properly managed. That is handled by using semaphores and mailboxes...

It seems to me this would be very difficult to handle without improved language support. Say you have a scoreboard with multiple analysis imps. If multiple write() calls could run concurrently in execution time, how would you protect the scoreboard state?

Erling


It depends. What states are you trying to protect?

The write() method calls are automatic - their local variables are independently placed on the stack. If they all write to a shared FIFO, then the transactions will be serialized.

If these write() method calls are all concurrent and need to compare results with each other, then you have races that need to be dealt with.


It depends. What states are you trying to protect?

I was thinking of the entire state of the scoreboard and how to make sure it is well defined. For example, one write() call could carry a reset message that would have the scoreboard reset its state, while another write() call could be doing data compare at the same execution time. It seems a mutex would be needed to deal with this effectively. That could perhaps be implemented by means of DPI calls, I don't know, but it is probably not worth it given that UVM wouldn't work with multi-threaded scheduling anyway.

Erling


If one write() is doing a reset, and another write() is doing a compare at the same time, you have a race that needs to be dealt with independent of the preemptive nature of Verilog. SystemVerilog already has a mutex feature, called a semaphore, so there is no need to use DPI. However, you are not allowed to use blocking statements in a function.
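A binary semaphore used as a mutex might look like the sketch below (illustrative only; the name update_shared_state is made up). Note that get() is blocking and therefore only legal in a task, which is exactly why it cannot be used inside an analysis write() function:

```systemverilog
// Hypothetical sketch: a semaphore with one key acting as a mutex.
semaphore lock = new(1);

task update_shared_state();
  lock.get(1);   // blocking acquire - allowed in tasks only, not functions
  // ... read/modify shared scoreboard state here ...
  lock.put(1);   // release
endtask
```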

Instead of having multiple analysis ports use the same write() implementation, people usually write into separate FIFOs and have another process waiting on data to appear in the FIFOs.


If one write() is doing a reset, and another write() is doing a compare at the same time, you have a race that needs to be dealt with independent of the preemptive nature of Verilog

Not if the write() calls are serialized in execution time by the runtime and the scoreboard is written for this threading model.

SystemVerilog already has a mutex feature, it's called a semaphore, so there is no need to use DPI. However, you are not allowed to use blocking statements in a function.

I was thinking of a mutex, not a semaphore. If UVM users are supposed to write thread-safe code, a (task-only) counting semaphore would not be sufficient. Consider this trivial class:

class C;

  local static int count;

  function new();
    ++count; // read-modify-write: a race if two processes construct C concurrently
  endfunction

endclass

How would you protect the count member in a reasonable way if multiple processes can construct a C instance concurrently in execution time?

Erling


BTW, you could write class C as...

It seems this would add a bug not present in the original version, and the class state would still be volatile. Also, if code like this actually did fix a problem, then UVM would fall over, so why bother with such a complicated solution?

Erling

