c4brian Posted September 14, 2016

Let's say I have the following DUT. The UVM environment contains a chain of models/predictors. Input data flows down this chain and generates the expected CHIP output, which is compared to the actual output.

Pros: verifies top-level functionality.
Cons: does not verify block-level functionality.

A good start, but I'd like to also verify the blocks in a system setting. So I create block-level environments, then reuse them at the top level.

Awesome, but wait a minute. I still need the top-level verification (input-to-output) like in the first example. However, all three of my block predictors are already being used in their corresponding environments' scoreboards, hooked up to the RTL via agents.

How does one do both? Surely I'm not supposed to instantiate duplicate copies of my block-level predictors to create the end-to-end model chain...
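For reference, each block-level predictor in this setup might look roughly like the sketch below. The class and transaction names (a_predictor, a_in_trans, a_out_trans) are made up for illustration; the key point is a predict() function that turns an input transaction into the expected output transaction without touching the RTL:

    // Hypothetical block-A predictor; all names are placeholders.
    class a_predictor extends uvm_component;
      `uvm_component_utils(a_predictor)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      // Pure transformation: given an input transaction, return the expected
      // output of block A. It never touches the RTL, so the same function can
      // serve both the A-level scoreboard and an end-to-end chain at top level.
      virtual function a_out_trans predict(a_in_trans t);
        a_out_trans exp = a_out_trans::type_id::create("exp");
        // ... compute exp's fields from t according to block A's spec ...
        return exp;
      endfunction
    endclass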
tudor.timi Posted September 20, 2016

Correction to picture 1: it does verify block-level functionality, but you get poor visibility when something doesn't work, it's available late, etc. (all the disadvantages of not splitting design/verification into blocks). For the whole system to work properly it's necessary (but not sufficient) for the blocks to work properly.

If you have a system that is a simple cascade like this, the only thing you'd want to check on top of the block-level checks is that everything is stitched together properly (e.g. outputs from A get routed properly to B's inputs and so on). You could do this via a formal connectivity app. You could also do it in simulation by having your agents tap the other side of the connection. What I mean by this is that, for example, the B interface agent inside the A env gets connected to block B's signals and not to the signals inside block A. This way you implicitly check that stuff coming out of A reaches B.

If you want to have an end-to-end check like this, you'd need to build the chain yourself. This shouldn't be a big problem, since your "models" (or predictors, or whatever you call them) should have some kind of predict(...) function that just returns the expected output transaction for a given input transaction. This means you only need to do:

    class e2e_scoreboard;
      // ...

      virtual function void write_input(input_trans_type input_trans);
        expected_trans_fifo.try_put(
          c_model.predict(b_model.predict(a_model.predict(input_trans))));
      endfunction

      // push output transaction to actual FIFO
      // pop from both FIFOs and compare
    endclass
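To complete the picture, the "pop from both FIFOs and compare" part might look roughly like this. This is a minimal sketch, assuming the scoreboard is a uvm_component, the FIFOs are uvm_tlm_analysis_fifo instances parameterised with the chip-level output transaction type, and the output monitor writes into actual_trans_fifo (all of these names are placeholders, not from the original post):

    // Sketch only: output_trans_type, expected_trans_fifo and actual_trans_fifo
    // are assumed members of the scoreboard; the output monitor's analysis port
    // is assumed connected to actual_trans_fifo.analysis_export.
    task run_phase(uvm_phase phase);
      output_trans_type exp, act;
      forever begin
        expected_trans_fifo.get(exp);  // blocking get of next predicted transaction
        actual_trans_fifo.get(act);    // blocking get of next observed transaction
        if (!act.compare(exp))
          `uvm_error("E2E_SCBD", $sformatf("Mismatch:\nexpected: %s\nactual: %s",
                                           exp.sprint(), act.sprint()))
      end
    endtask

If the predictors keep internal state, the end-to-end chain would need its own instances; if predict(...) is a pure function, the block-level instances can simply be reused, so nothing has to be duplicated.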
c4brian (Author) Posted September 20, 2016

Tudor, thanks for the reply.

"...it's necessary (but not sufficient) for the blocks to work properly." - What do you mean by "but not sufficient"?

"If you have a system that is a simple cascade like this, the only thing you'd want to check is that everything is stitched together properly (e.g. outputs from A get routed properly to B's inputs and so on)."

I understand and agree. Revisiting this simple example, I realized something so trivial it's almost embarrassing to admit: I am still getting the top-level functional checks using the block-level environments. My concern was this: using block-level environments, each model gets its input from an agent, and not from the previous model's output. In my mind, that implied the input might be incorrect, possibly messed up by the RTL block. However, I can guarantee correctness at any stage because it was already checked in the previous stage. In short, I am an idiot.
tudor.timi Posted September 20, 2016

It's not sufficient that all the blocks work properly, because it might be the case that they aren't properly connected to each other: outputs from some block are left hanging and the corresponding inputs are tied to some values. Cascaded block-level checks can't really find this if your observation points for each block-level environment are its corresponding design block.

Example: A can start read or write transactions, but the direction signal doesn't get passed to B, where it's tied to read. The A or B env checks won't fail, but the whole system is buggy.
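As an illustration of the kind of stitch-level check that would catch this, here is a minimal sketch of a simulation connectivity assertion. Every signal and hierarchy name (dut.block_a.dir_out, dut.block_b.dir_in, etc.) is invented for this example; a formal connectivity app would prove the same property statically:

    // Hypothetical checker module; port names are placeholders.
    module a2b_connectivity_check(input logic clk,
                                  input logic a_dir_out,
                                  input logic b_dir_in);
      // If A's direction output never reaches B's direction input,
      // this fires as soon as the two signals disagree.
      chk_dir: assert property (@(posedge clk) a_dir_out == b_dir_in)
        else $error("A->B direction signal is not connected");
    endmodule

    // Instantiated in the testbench top, e.g.:
    // a2b_connectivity_check u_a2b_chk(.clk      (dut.clk),
    //                                  .a_dir_out(dut.block_a.dir_out),
    //                                  .b_dir_in (dut.block_b.dir_in));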
c4brian (Author) Posted September 20, 2016

Since that is cleared up, let's say I've got the following DUT. I guess this would be called a "data-flow" DUT. The Master is initialized via control interface B, then awaits commands on the same interface. I've got block-level environments for components A, B, C, D. I need top-level functional verification, of course, so I need to verify the behavior of the Master. The Master performs all kinds of low-level reads and writes on the shared peripheral bus.

Option 1 (traditional): Have a scoreboard verify every peripheral bus transaction. This sounds like a bad decision. There are countless transactions, and I'd end up with a 100% accurate Master model, which would change every time the Master software changed. I really don't care whether the Master performed a read-modify-write of a register; I just care about the final result.

Option 2: Use the RAL, or a similar technique, to verify the correctness of data moved around the peripheral bus at particular intervals or instants. Example use-case: the Master receives the command "Component Status?", talks to all components gathering status values, then writes a Status-Packet to component A. I'm not interested in HOW it gathered the status, I just want to verify the contents of the response packet (see the sketch below).

Option 3... etc.

Am I on the right track? Thoughts?
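For the Option 2 use-case, the check might look roughly like the sketch below. It assumes a RAL model whose mirror is kept up to date by a passive uvm_reg_predictor on the peripheral bus, plus a monitor that publishes the Status-Packet written to component A; every register, field, and transaction name here is invented for illustration:

    // Sketch only: status_packet, my_reg_block and the comp_*.status registers
    // are hypothetical. Called from a scoreboard when the monitor reports the
    // Status-Packet written to component A.
    function void check_status_packet(status_packet pkt, my_reg_block regmodel);
      // Compare what the Master reported against what the RAL mirror last saw
      // on each component's status register.
      if (pkt.b_status != regmodel.comp_b.status.get_mirrored_value())
        `uvm_error("STATUS_CHK", "B status in packet does not match mirrored register")
      if (pkt.c_status != regmodel.comp_c.status.get_mirrored_value())
        `uvm_error("STATUS_CHK", "C status in packet does not match mirrored register")
      if (pkt.d_status != regmodel.comp_d.status.get_mirrored_value())
        `uvm_error("STATUS_CHK", "D status in packet does not match mirrored register")
    endfunction

This way the scoreboard never models how the Master gathered the values (single reads, read-modify-writes, polling); it only checks that the final packet is consistent with what was actually observed on the bus.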
sm2345110 Posted April 19, 2017

c4brian, I agree with you, but please tell me what the Raspberry Pi tool for Verilog is, and include the tool names. Awaiting your positive response.

Regards,
Sharon Maxwell