
Asynchronous driving/monitoring in a SV-UVM framework



I have designed an SV-UVM framework, initially for system simulation (for architecture evaluation).

I have been using clocking blocks and generated synchronous inputs (and monitoring), and as far as I can see such a framework is essentially "based" on the idea of cycles (something happens every clock cycle, or at least the components check whether there is something to do).
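For reference, the synchronous style I mean looks roughly like this (a minimal sketch; the interface, signal and clocking-block names are just placeholders, not my actual code):

interface chip_if (input logic clk);
  logic        valid;
  logic [31:0] data;

  // driver clocking block: outputs change a fixed skew after the clock edge
  clocking drv_cb @(posedge clk);
    default output #1ns;
    output valid, data;
  endclocking

  // monitor clocking block: inputs are sampled synchronously at the edge
  clocking mon_cb @(posedge clk);
    input valid, data;
  endclocking
endinterface

// the driver and monitor then advance strictly in whole clock cycles, e.g.
//   vif.drv_cb.data  <= tr.data;
//   vif.drv_cb.valid <= 1'b1;
//   @(vif.drv_cb);          // wait exactly one clock
//   vif.drv_cb.valid <= 1'b0;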

I am now asked to consider the effect of random delays (with respect to the clock edge) on the signals coming into the front end of the chip, and also to consider clock jitter.

 

I am not an expert on these topics and I would like some advice on:

- first of all, is it reasonable to model these effects in a system verification framework, or should they be taken into account at a different level?

- then, what is the best/recommended way of doing it (I guess the first step is getting rid of clocking blocks; see the sketch after this list for what I have in mind)?

- what impact would this have on simulation performance?
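To make the question more concrete, this is roughly what I imagine for the driving side and for the clock once the clocking blocks are gone (all of it is a made-up sketch: chip_if, drive_skew_ps, HALF_PERIOD_PS and MAX_JITTER_PS are illustrative names, not anything from a library):

`timescale 1ns/1ps

// would live in the driver (or a BFM): drive the interface signals a
// random sub-cycle delay after the clock edge instead of through a
// clocking block
task automatic drive_with_skew(virtual chip_if vif,
                               bit [31:0] data,
                               int unsigned drive_skew_ps);
  @(posedge vif.clk);
  #(drive_skew_ps * 1ps);   // random delay w.r.t. the clock edge
  vif.data  = data;
  vif.valid = 1'b1;
  @(posedge vif.clk);
  #(drive_skew_ps * 1ps);
  vif.valid = 1'b0;
endtask

// clock generator with a random amount of cycle-to-cycle jitter
module jittery_clk_gen #(int HALF_PERIOD_PS = 500,
                         int MAX_JITTER_PS  = 20)
                        (output logic clk);
  initial begin
    clk = 1'b0;
    forever begin
      #((HALF_PERIOD_PS - MAX_JITTER_PS
         + $urandom_range(2 * MAX_JITTER_PS)) * 1ps);
      clk = ~clk;
    end
  end
endmodule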

 

Thanks in advance to anybody.

 


I don't think you're going to achieve anything by simulating random delays w.r.t. clock edges in RTL simulations. If you want to see what effect signal delays have, you need to run on a (timing-annotated) gate-level netlist, but that defeats the purpose of doing architecture exploration, because to get to the netlist you've already gone through all the steps of the design process.

 

Also, modeling clock jitter isn't going to bring you anything either. If you care about clock jitter on clock domain crossings, there are specialized tools for that.

 

Personal opinion: architecture exploration is something abstract. You want to see whether you can get your desired throughput on your internal bus, whether your modules can work together in the intended way, etc. You don't care about such low-level details as delays on input signals or clock jitter.


I totally agree with you as far as architecture exploration is concerned.

The point is that the goal would be to reuse the verification environment beyond architectural exploration, once the chip is actually being designed.

How do you see it?

 

Thanks a lot for sharing your knowledge/experience.


Even afterwards, catching timing issues is best done with other tools, not with simulation. Simulation is there to show you that the RTL works, IMO. STA and CDC verification tell you that you don't have timing problems. You already have a huge stimulus space to cover when just considering that your signals come right after the clock edge. Adding randomness to the signal timings is only going to blow up your stimulus space even more. You'd be trying to model way too much.


Tudor is dead on. Clock jitter and delays are better debugged with specialized timing tools.

 

Caveat: if your system has reclocked synchronous interfaces, make sure you design the stimulus to have sufficient jitter in cycles (i.e. +/- whole clocks) to see how that might affect the design. This should of course be randomized: probably a one- or two-bit m_delay field in the transaction, constrained so that the driver waits a cycle (or a few) before delivering data. That should be sufficient.
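In code that could look something like this (a rough sketch only; chip_item, the m_delay name, and the distribution weights are all just illustrative):

import uvm_pkg::*;
`include "uvm_macros.svh"

class chip_item extends uvm_sequence_item;
  rand bit [31:0] data;
  rand bit [1:0]  m_delay;   // whole clocks of cycle jitter before delivery

  // mostly back-to-back, occasionally 1-3 idle cycles
  constraint c_delay { m_delay dist { 0 := 6, [1:3] :/ 4 }; }

  `uvm_object_utils_begin(chip_item)
    `uvm_field_int(data,    UVM_ALL_ON)
    `uvm_field_int(m_delay, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "chip_item");
    super.new(name);
  endfunction
endclass

// and in the driver's run_phase:
//   seq_item_port.get_next_item(req);
//   repeat (req.m_delay) @(vif.drv_cb);   // wait 0..3 whole clocks
//   vif.drv_cb.data  <= req.data;
//   vif.drv_cb.valid <= 1'b1;
//   @(vif.drv_cb);
//   vif.drv_cb.valid <= 1'b0;
//   seq_item_port.item_done();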

