
All of these relate to the concept of loosely-timed simulation. The basic principle is that parts of the design can simulate larger time slices without returning control to the simulation kernel; they run ahead of the rest of the simulation. This is used e.g. for processors and the like, since for long stretches they do not interact with the rest of the design (a memory read does not really trigger any action other than returning some data). This way the simulation speed, and hence the simulation performance, can be drastically improved, but you trade accuracy for performance.

Such a domain (or part of the simulation) running ahead of the simulation kernel is called temporally decoupled. It needs some mechanism to control its local time (the amount by which it is ahead of the simulation kernel). This is the job of the quantum keeper; the quantum is the maximum amount of time the decoupled domain is allowed to run ahead. To allow interaction with the rest of the design, every interaction needs to carry information about the local time of the decoupled domain; this is called timing annotation. This information is needed either to schedule events in the simulation kernel at the correct time, or to decide to break the quantum, which means the decoupled domain is stopped and control is returned to the simulation kernel until it catches up with the local time of the decoupled domain; at that point they are in sync.

HTH


@Eyck I would point out that Timing Annotation is not limited to Loosely-Timed (LT) modeling, but can also be applied to Approximately-Timed (AT) models (see section 11.1 of IEEE-1666-2011); however, there is an important difference. LT timing annotation describes temporal decoupling as you explained. AT timing annotation is a way of indicating where a phase applies. This has some odd implications that are not immediately obvious.

For instance, I can start an nb_transport_fw transaction with a non-zero annotated delay:

tlm::tlm_generic_payload payload;
tlm::tlm_phase phase{ tlm::BEGIN_REQ };
sc_core::sc_time delay{ 50, sc_core::SC_NS };
auto status = nb_transport_fw( payload, phase, delay ); ///< begin transaction 50 ns in the future
// Note that nb_transport_fw may increase the delay (same as b_transport); however, it may not decrease it.

Section 11.1.3.1 describes this in detail.

Why would this be done? Perhaps the initiator knows it wants to dispatch a transaction and doesn't want to wait around for its initiation.

I cannot immediately think of why the returned value might change, but it is legal.

The main rule about time in SystemC requires that time never goes backward. No playing Dr. Who.


Absolutely agreed. But in my experience, folks try to avoid that as much as possible, as it complicates an often already complicated protocol implementation.

BR

