
Determining appropriate delay time


Recommended Posts

Good day,

 

I have a question regarding how to determine the appropriate delay value for the wait() call. In the target's b_transport callback, we can add delay to the simulation time by passing a delay amount to the wait() function.
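To make the question concrete, here is a rough sketch of the kind of LT target callback I mean (the module name and the 10 ns figure are just placeholders I picked):

// Rough sketch of an LT target callback (names and the 10 ns value are placeholders).
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct MyTarget : sc_core::sc_module
{
    tlm_utils::simple_target_socket<MyTarget> socket;

    SC_CTOR(MyTarget) : socket("socket")
    {
        socket.register_b_transport(this, &MyTarget::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay)
    {
        // ... decode the address and perform the read/write ...
        trans.set_response_status(tlm::TLM_OK_RESPONSE);

        // This is what my question is about: how do I pick this value?
        delay += sc_core::sc_time(10, sc_core::SC_NS);
        wait(delay);                       // consume the accumulated delay here
        delay = sc_core::SC_ZERO_TIME;
    }
};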

 

In a simulation that uses a quantum and temporal decoupling, targeting very fast instruction-accurate simulation, the timing does not have to be very detailed (loosely timed). Adding or omitting delay in the target callback will not cause any functional inaccuracy, and either way we can produce a platform that supports firmware/software development.

 

Still, if we do want to pass a delay to wait(), how can we determine the appropriate value for the function parameter?

 

Thank you.

Regards,

Arya.



Thanks Alan.

 

But my main concern is how to determine the value. I know that for b_transport we can pass the delay time in the function parameter. However, how do we find the "right" amount of delay time?

 

For example, I am developing a DMA controller module at a high level (TLM-LT). For each DMA transfer that happens, how big should the delay be? I could put 10 ns, 100 ns, or 1000 ns, but I don't understand which one is the "right" one.

 

Could you please help me figure out how to choose the "right" amount of delay time?


Essentially I have no idea! But with the magic of Google, I found this:

 

http://www.ti.com/lit/ug/spruft2a/spruft2a.pdf

 

Which says:

"The minimum (best-case) latency for a burst DMA transfer can be summarized as follows:

• For transfers initiating from internal memory: the first access for word read and write takes 8 cycles, while consecutive accesses take 2 more cycles. Thus the DMA takes 2N + 6 system clock cycles to complete a burst transfer, where N corresponds to the burst size in words.

• For transfers initiating from a peripheral source: the first access for word read and write takes 6 cycles, while consecutive accesses take 2 more cycles. Thus the DMA takes 2N + 4 system clock cycles to complete a burst transfer, where N corresponds to the burst size in words."
 
So perhaps between 2N and 3N clock cycles might be a reasonable guess? Obviously it's device specific, so if you know exactly what DMA controller you're modelling, have a look at the data sheet for that.
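As a very rough illustration of turning a cycle formula like that into a delay value (the 2N + 6 comes from the document above; the 10 ns clock period is simply an assumed number):

// Rough illustration: turning the quoted cycle formula into a delay annotation.
// 2N + 6 is from the TI document above; the 10 ns clock period is an assumed value.
#include <systemc>

sc_core::sc_time burst_transfer_delay(unsigned burst_size_words)
{
    const sc_core::sc_time clock_period(10, sc_core::SC_NS);  // assumed system clock
    const unsigned cycles = 2 * burst_size_words + 6;         // internal-memory case
    return cycles * clock_period;
}
// e.g. a 16-word burst -> 38 cycles -> 380 ns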
 
Alan


Hi Alan,

 

Thanks for your response.

 

Yes, it is absolutely device specific; DMA controllers can be fly-by too, so obviously the clock-cycle requirement differs from one controller to another.

 

Back to my original question: suppose we are developing a model from scratch, say a peripheral that performs a specific (unique) task, and we are using SystemC TLM to model it at a high level, at either the LT or AT abstraction level. How does all that clock-cycle information relate to the delay time that needs to be passed into the function call?

 

At this point I assume the engineer/designer has already been told how many clock cycles that particular operation takes in the RTL model. Is there a way to convert that clock-cycle information into a value in SC_NS or SC_US during high-level simulation (a platform with an ISS configured for 100 MIPS and a default global quantum of 1 SC_US)?
Also, when you do know how many clock cycles certain operations take, what good is that when in LT and AT we don't model the clock at all? Should we just pass a random value to the function call as the model delay? (That seems like nonsense.)
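In other words, is the conversion simply cycle count times clock period, as in this little sketch (all the numbers are hypothetical)?

// Is the conversion simply cycles * clock period? (all numbers hypothetical)
#include <iostream>
#include <systemc>

int sc_main(int, char*[])
{
    const sc_core::sc_time clock_period(10, sc_core::SC_NS);  // e.g. a 100 MHz reference clock
    const unsigned rtl_cycles = 42;                           // cycles reported for the RTL operation

    const sc_core::sc_time delay = rtl_cycles * clock_period; // -> 420 ns
    std::cout << "annotated delay = " << delay << std::endl;
    return 0;
}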

I am quite confused by this issue and would appreciate any feedback.

Thanks!
 


Hi,

   there are a number of issues to think about there.

 

Firstly, you can approximate the delay however you want. If you are doing LT modelling, then typically you would use a single value. So if your processing took 10 clocks with a 10 ns clock period, you could use a time of 100 ns.

 

The argument to the function call represents the time in the future at which the function should be considered to operate. In a simple system using b_transport, you *could* just wait for the time delay in the function, but that's not very efficient. Instead you would just increment the time value by 100 ns (using the example above) and return.
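In code, that "increment and return" style looks roughly like this (the 100 ns is just the example figure; the module itself is made up):

// Sketch of the "annotate and return" style (100 ns is just the example figure above).
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>

struct LtTarget : sc_core::sc_module
{
    tlm_utils::simple_target_socket<LtTarget> socket;

    SC_CTOR(LtTarget) : socket("socket")
    {
        socket.register_b_transport(this, &LtTarget::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay)
    {
        // Do the read/write immediately, in zero simulation time.
        trans.set_response_status(tlm::TLM_OK_RESPONSE);

        // Account for the modelled processing time without calling wait();
        // the initiator decides when it actually synchronises.
        delay += sc_core::sc_time(100, sc_core::SC_NS);
    }
};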

 

However, if you keep doing that, only one process will ever get to run, because no one is calling wait. The purpose of the time quantum is to decide when a process should wait so that other processes can run. A big time quantum is less accurate but faster.
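On the initiator side, the quantum is typically managed with the standard tlm_quantumkeeper utility, roughly like this (the 1 us quantum, the loop, and the transaction setup are only illustrative):

// Sketch of an LT initiator using the standard tlm_quantumkeeper utility.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/tlm_quantumkeeper.h>

struct LtInitiator : sc_core::sc_module
{
    tlm_utils::simple_initiator_socket<LtInitiator> socket;
    tlm_utils::tlm_quantumkeeper qk;

    SC_CTOR(LtInitiator) : socket("socket")
    {
        tlm_utils::tlm_quantumkeeper::set_global_quantum(sc_core::sc_time(1, sc_core::SC_US));
        qk.reset();
        SC_THREAD(run);
    }

    void run()
    {
        unsigned char data[4] = {0};
        tlm::tlm_generic_payload trans;
        trans.set_data_ptr(data);
        trans.set_data_length(4);
        trans.set_streaming_width(4);

        for (unsigned i = 0; i < 100; ++i) {
            trans.set_command(tlm::TLM_WRITE_COMMAND);
            trans.set_address(i * 4);
            trans.set_response_status(tlm::TLM_INCOMPLETE_RESPONSE);

            sc_core::sc_time delay = qk.get_local_time();
            socket->b_transport(trans, delay);   // target adds its processing time
            qk.set(delay);                       // keep running ahead of simulation time...
            if (qk.need_sync())
                qk.sync();                       // ...until the quantum is used up
        }
    }
};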

 

b_transport and the quantum are the LT style of modelling.

 

For AT modelling, you would again increment the time argument to model time passing, but you would typically wait more often. There are utilities in the TLM2 utils directory to assist with scheduling and executing updates in the future (payload event queues).
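For example, a payload event queue such as peq_with_get can be used roughly as follows (peq_with_get is one of the standard TLM-2.0 utilities; the surrounding module is only a sketch):

// Sketch of scheduling work in the future with a payload event queue.
#include <systemc>
#include <tlm>
#include <tlm_utils/peq_with_get.h>

struct AtTargetSketch : sc_core::sc_module
{
    tlm_utils::peq_with_get<tlm::tlm_generic_payload> peq;

    SC_CTOR(AtTargetSketch) : peq("peq")
    {
        SC_THREAD(process_queue);
    }

    // Called (e.g. from nb_transport_fw) to process a transaction 'delay' in the future.
    void schedule(tlm::tlm_generic_payload& trans, const sc_core::sc_time& delay)
    {
        peq.notify(trans, delay);
    }

    void process_queue()
    {
        for (;;) {
            wait(peq.get_event());
            while (tlm::tlm_generic_payload* trans = peq.get_next_transaction()) {
                // ... do the actual processing at the annotated time ...
                trans->set_response_status(tlm::TLM_OK_RESPONSE);
            }
        }
    }
};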

 

Regarding what time values to pass to the functions - the time values should represent the processing time of what you're modelling.

 

You could pass random time delays. The purpose of that would be to check that the software application running on your platform model behaves correctly independent of time delays (i.e. it has no ordering dependencies or non-determinism). That's the approach ST used to use in their original TLM approach (pre-TLM2 - well pre-TLM1!)
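A sketch of that idea, purely for illustration (the delay range is arbitrary):

// Annotate a randomly chosen delay so the software can be checked for timing independence.
#include <cstdlib>
#include <systemc>

sc_core::sc_time random_delay()
{
    return sc_core::sc_time(std::rand() % 1000 + 1, sc_core::SC_NS);  // 1..1000 ns, arbitrary range
}

// Inside b_transport:  delay += random_delay();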

 

regards

Alan

