michele.lora

Derivative using ELN constructs


Hi everybody,

 

We are trying to model a system using the Electrical Linear Network (ELN) model of computation that implements the following equation:

 

I(out1) = ddt(V(in1))

 

That is, we want the current at the output terminal to be the derivative of the voltage at the input terminal.

 

To do this, we declared:

- two terminals (in1 and out1),

- two VCCS (v1 and v2),

- a sca_node (n),

- an inductor (ind).

 

Here you can find the implementation code:

#include "../inc/template_1.hh"

// Intended behavior: v1 injects a current equal to V(in1) into node n.
// The inductor at n then produces V(n) = L * d/dt(I(ind)) = L * d/dt(V(in1)).
// v2 converts V(n) back into a current: I(out1) = V(n).
// With L = 1.0, this should yield I(out1) = ddt(V(in1)).
template_1::template_1( sc_core::sc_module_name name_ ) :
    sc_core::sc_module( name_ ),
    in1("in1"),
    out1("out1"),
    n("n"),
    scams_ground("scams_ground"),
    v1("v1", 1.0),   // VCCS gain 1.0: i = 1.0 * V(in1)
    v2("v2", 1.0),   // VCCS gain 1.0: i = 1.0 * V(n)
    ind( "pattern_1", 1.0 , sca_util::SCA_UNDEFINED )  // L = 1.0 H
{
    v1.ncp(in1);           // control voltage: in1 - ground
    v1.np(n);              // controlled current flows into node n
    v1.nn(scams_ground);
    v1.ncn(scams_ground);

    v2.ncp(n);             // control voltage: n - ground
    v2.np(out1);           // controlled current flows into out1
    v2.nn(scams_ground);
    v2.ncn(scams_ground);

    ind.p( n );
    ind.n( scams_ground );
}

template_1::~template_1()
{}
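The matching header was not posted; a sketch of how it might look, assuming SystemC AMS 2.0 class names (the actual template_1.hh may differ):

```cpp
// Sketch only: the actual template_1.hh was not posted.
#include <systemc-ams>

struct template_1 : sc_core::sc_module {
    sca_eln::sca_terminal in1, out1;      // external terminals
    sca_eln::sca_node     n;              // internal node
    sca_eln::sca_node_ref scams_ground;   // reference (ground) node
    sca_eln::sca_vccs     v1, v2;         // voltage-controlled current sources
    sca_eln::sca_l        ind;            // inductor

    template_1( sc_core::sc_module_name name_ );
    ~template_1();
};
```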

We were expecting this to model a derivative. However, when executing and reading the current at out1, we see unexpected behavior. As input (in1) we write a double signal through a sca_de_vsource that starts at 1.0 and is multiplied by 1.01 every microsecond; we therefore expected a constant output of 1.01. This does not happen: very strange values, ranging from 0 to -1.59561833143e+47, are traced as output (to a tabular file). We read the output value with a sca_de_isink.

 

We are probably forgetting or misinterpreting something. Does anyone have any idea?

Thank you very much,

 

Michele.

 


I am curious why you are not using the LSF framework. I think that would provide a quicker and more intuitive solution.


Ouch! The circuit is not converging! You need a resistor ....

 

Where do we need to add a resistor? I know this may seem obvious, but we do not have much expertise.

 

 

Regarding the LSF suggestion: I completely agree that using LSF would be much easier. However, we have to use ELN due to project constraints.


Implementing purely derivative behavior is numerically problematic, as the time derivative of a signal at a certain point in time also depends on the future evolution of the signal. So the best you can get is an approximation based on past solution points. This alone may already explain your observation of "very strange values, ranging from 0 to -1.59561833143e+47". In addition, you have neglected that your stimulus signal is not continuous but changes discretely: you provide it via a DE signal, which gets sampled with a time step equal to the one you assigned to the ELN circuit or the connected TDF cluster.
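The sampling effect can be illustrated without any SystemC at all. A minimal plain C++ sketch, assuming (purely for illustration) a 0.2 us solver time step against a 1 us DE update period:

```cpp
#include <cmath>
#include <vector>

// Plain C++ illustration (no SystemC): a DE signal updated every 1 us
// but sampled by the continuous-time solver with a 0.2 us time step
// behaves like a zero-order hold (a staircase). The difference quotient
// of such a signal is 0 between updates and spikes at each update.
std::vector<double> held_signal_derivative(int n_samples) {
    const double t_step = 0.2e-6;      // assumed solver time step
    const int samples_per_update = 5;  // 1 us / 0.2 us
    std::vector<double> deriv;
    double last = 1.0;                 // v(0) = 1.0
    for (int i = 1; i <= n_samples; ++i) {
        // value held constant between updates: 1.01^(number of updates so far)
        double held = std::pow(1.01, i / samples_per_update);
        deriv.push_back((held - last) / t_step);
        last = held;
    }
    return deriv;
}
```

Between updates the estimate is exactly 0; at each update instant it jumps to (1.01 - 1.0) / 0.2e-6 = 5e4, nothing like the smooth derivative one might expect.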

 

I agree, using a sca_lsf::sca_dot primitive may be the more straightforward way to implement derivative behavior. However, you will probably encounter the same problems. For good convergence, both ELN and LSF models require additional constraints in the form of other primitives (like the resistor suggested by Sumit) to "slow down" the reaction of the system.

 

Therefore, implementing the derivative as an LTF equation G(s) = s in the context of a TDF module also usually does not work well. In that case, calculating the difference quotient m = (current_val - last_val) / t_step may give you a first approximation of the derivative. However, be aware that this works best for smooth, continuous signals: any noise present on the signal will be considerably amplified.
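The difference-quotient idea can be sketched in plain C++ (outside any TDF module; the function name is illustrative, not part of any SystemC API):

```cpp
#include <cmath>
#include <vector>

// Difference-quotient approximation m = (current - last) / t_step of the
// derivative of a sampled signal, as one would compute it sample by
// sample in the processing() callback of a TDF module.
std::vector<double> diff_quotient(const std::vector<double>& samples,
                                  double t_step) {
    std::vector<double> deriv;
    double last = samples.empty() ? 0.0 : samples.front();
    for (double cur : samples) {
        deriv.push_back((cur - last) / t_step);  // first sample yields 0
        last = cur;
    }
    return deriv;
}
```

For a smooth input such as sin(t) sampled with a small t_step, the result closely tracks cos(t) (lagging by half a step); for a noisy input, each noise step of amplitude e produces an error of e / t_step, which is the amplification mentioned above.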


It seems your starting point is Verilog-A or Verilog-AMS, because the ddt keyword is defined in these languages. Note that Verilog-A/MS does not offer electrical primitives as part of the language definition, so you have to create your own. The ddt keyword was invented to create equations that model inductors or capacitors. With the SystemC AMS extensions, you do not need to do all this; you can simply instantiate an inductor or capacitor.
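For instance, I(out1) = ddt(V(in1)) is exactly the constitutive equation i = C * dv/dt of a capacitor with C = 1 F, so a single predefined primitive suffices. A sketch, assuming the standard sca_eln::sca_c primitive (not compiled here):

```cpp
// Sketch (SystemC AMS 2.0 ELN): the relation i = C * ddt(v) is the
// constitutive equation of a capacitor, so a single sca_eln::sca_c
// with C = 1.0 models I = ddt(V) directly.
#include <systemc-ams>

struct ddt_via_cap : sc_core::sc_module {
    sca_eln::sca_terminal a, b;  // the voltage across a-b is differentiated
    sca_eln::sca_c c1;           // capacitor: i = C * d/dt(v)

    ddt_via_cap( sc_core::sc_module_name )
    : a("a"), b("b"), c1("c1", 1.0)  // C = 1.0 F
    {
        c1.p(a);
        c1.n(b);
    }
};
```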

 

So the lesson is: do not blindly map Verilog-A/MS concepts onto SystemC AMS.

 

Instead, explore if and how to abstract the analog functionality, preferably to a signal flow (LSF) or data flow (TDF) model of computation. Remember: SystemC AMS is a system-level language. If this is not possible, you are probably dealing with electrical signals or conservative behaviour, and you should be able to directly use the predefined electrical primitives to model your analog subsystem.

