Monday, July 14, 2014

DVCON goes GLOBAL!

Design and Verification Conference, popularly known as DVCON, is beginning to flex its muscles and move out of Silicon Valley to reach your friendly neighborhood this year. Yes, on top of the already concluded conference at San Jose, California, DVCON will be hosted in Bangalore, India and Munich, Germany in 2014. This blog post is a quick refresher about the event for fellow engineers who have been busy solving the most intricate verification challenges in the trenches, unaware that a platform exists to discuss those puzzles, hear how the community is handling them and share their own solutions. It’s an avenue promoting collaboration across the design and verification community without a bias towards any particular tool, flow or methodology. It’s a forum where everyone, novice or guru, student or professional, beginner or expert, comes together to discuss takeaways that can be implemented as soon as they hit their workstations. It’s a place where you learn, discuss, network, collaborate and connect.....and this time it’s HERE (Bangalore & Munich)!

HISTORY of DVCON

It’s been 25 years in spirit and 10 years to the name for this conference. The origin can be traced back to the late ’80s when VHDL started picking up; the user community began meeting twice a year under VUG (VHDL Users Group), leading to a conference by the name VIUF (VHDL International Users Forum). Around the same time Verilog too gained user traction, leading to IVC (International Verilog Conference) in the early ’90s. While the two events continued to serve their respective communities, they joined hands in 1997 to form the IVC/VIUF conference, later termed HDLCon. Finally, in 2003, HDLCon became DVCON, giving it a legacy of 25 years and a brand that has continued to evolve for a decade now. In 2014, it extends its reach to India and Europe.

DVCON India

ISCUG (Indian SystemC User’s Group) has been hosting an event focused on accelerating the adoption of SystemC as an open source standard for ESL design for the past 2 years. This platform now morphs into DVCON India, a 2-day conference in Bangalore on Sept 25-26, 2014, running Design Verification and ESL tracks in parallel. In the words of the committee, DVCon India is an attempt to bring a vendor-neutral international Design and Verification conference closer to home for the benefit of the broader engineering community. It is an excellent platform to share knowledge, experience and best practices covering ESL, Design & Verification for IP and SoC, VIP development, and Virtual Prototyping for embedded software development and debug. The conference provides multiple opportunities to interact with industry experts delivering keynote speeches, invited talks, tutorials, panel discussions, technical paper presentations, poster sessions and exhibits from ecosystem partners.

Call for abstracts – submission deadline JULY 21st, 2014.
Registrations will open soon.

DVCON Europe

A new chapter for the Design and Verification community in Europe starts in Munich, Germany this year on Oct 14-15, 2014. The focus of the conference will be on Electronic System Level (ESL) design, Verification & Validation, Analog/Mixed-Signal, IP reuse, Design Automation, and Low Power design and verification. To know more about what you’ll see at the conference, hear it in the words of Martin Barnasconi, General Chair, DVCON Europe, here.

Call for abstracts – submission deadline over.
Registrations – Open.

This new move from the Accellera Systems Initiative opens up opportunities to get involved, contribute and learn for the thousands of engineers who have long benefited from the proceedings of DVCON. As Benjamin Franklin rightly said,

Tell me and I forget.
Teach me and I remember.
Involve me and I learn.

An excellent opportunity for all of us to get involved & learn!


Sunday, June 29, 2014

Shift left and Reuse in verification

SNUG India 2014 was a jam-packed event! All sessions, including the keynotes, papers, tutorials and the design expo, were full, with engineers turning up in huge numbers. Follow-up questions during the presentations and the curiosity of engineers at the expo to understand the partners' solutions were clear indicators of the value these conferences bring in, apart from the freebies :)

Papers on the first day of the Verification track focused on the 'Shift Left' approach, viz. confirming that the design is an exact representation of the spec by uncovering issues early in the design cycle. The first 3 papers discussed using acceleration/prototyping to enable early SW development and validation of the design in parallel with verification. The 4th paper talked about how the X's seen in GLS can be simulated at RTL to save time and avoid ECOs. While shifting left helps in achieving the goals faster, it is equally important to deploy tools and flows that improve 'Reuse'. This was the gist of the papers presented on the second day, where users shared their experience with enhanced scalability and reusability across different aspects of the verification paradigm, be it formal, low power, or IP and SoC verification.

‘Shift left’ & ‘Reuse’ are interesting concepts, sure to work wonders when applied in conjunction! Really? Let’s see.


Reuse in verification is achieved predominantly through VIPs and test scenarios. 

Expectations from a VIP are no longer limited to reuse in the context of the IP at block or SoC level simulations. There is a need to architect VIPs such that they can be reused when the target design is ported to a prototyping or emulation platform. If it is the system bus, the VIP needs to be partitioned such that the timed portion of the VIP can move into the box while the untimed portion continues to run on the workstation. The SCE-MI protocol enables the required communication between the host and the target box. If the VIP is a model for a peripheral sitting outside the chip, it can either fully reside inside the box, if everything is to be programmed through the corresponding host IP, or else have the same partitioning as the system bus VIP. While the benefits of moving to faster platforms are obvious, the challenges in enabling this transition are multi-fold. Porting the design to a target box demands a struggle with partitioning amidst the complexity arising from multiple clock and power domains. Though the industry has evolved to a large extent on these aspects, architecting and developing verification code that is portable between simulation and emulation is still in its infancy.
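
To make the partitioning concrete, here is a minimal sketch of a dual-top style arrangement (all names such as ahb_bfm_if, do_write and ahb_item are illustrative, not from any specific VIP): the untimed proxy driver stays on the workstation and delegates cycle-accurate signaling to a task inside a synthesizable BFM compiled into the box.

class ahb_proxy_driver extends uvm_driver #(ahb_item);
  `uvm_component_utils(ahb_proxy_driver)
  virtual ahb_bfm_if bfm;  // handle to the timed transactor inside the box

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);
      bfm.do_write(req.addr, req.data);  // remote task call crosses the host-box link
      seq_item_port.item_done();
    end
  endtask
endclass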

VIPs are available off the shelf, but a lot of test development today still happens with home-grown test plans and test suites, with reuse of what is provided as part of the VIP. Given that the ultimate verification goal is a functional SoC with minimal probability of a design bug, C based tests play an important role in achieving it. These tests need to be defined and developed in a planned fashion so as to enable reuse in simulation, in emulation and even for Si bring-up. A well-thought-out strategy can enable the first level of test development right at the IP stage, where a model of the processor can be plugged into the IP test bench to enable early test development. Once this suite is ready, complex cases can be covered either with a detailed use case test plan and/or by deploying tools that enable test definition and development in an automated manner. With HW and SW teams working in silos, and probably in different geographies, a comprehensive solution that converges the two is quite difficult to achieve.

To realize the above, there is a need to conceive the end picture, enable development of the pieces and finally connect the dots to bring up multiple platforms early enough in the ASIC design cycle.

Sunday, June 15, 2014

Sequences in UVM1.2

This post concludes our date with sequences in UVM with a quick overview of the modifications related to sequences in UVM1.2. UVM, which came out with an EA version in 2010, received mass adoption and has matured by the day since then. The working group is all set with another major release (UVM1.2) that should be out anytime; it was opened up for public review during DVCON 2014. You may want to dabble with this release and share your feedback here. Given the wide adoption and maturity level, the plan is to eventually take it to the IEEE. Victor from EDA Playground shared interesting insights on what’s new in the UVM1.2 release. Given below are 4 important changes related to sequences in UVM1.2 –

#1 Setting default sequence from command line [Mantis – 3741]

A new plusarg has been added to the command line to allow setting of the default_sequence that the sequencer executes at the start of a given run-time phase. With this change the user can have one test with a default sequence that can be modified from the command line, thereby avoiding redundant test development just for changing the default sequence. You can also run regressions by replacing the default sequence with a sequence library (remember, a sequence library is derived from uvm_sequence).

How was this done in UVM1.1?

uvm_config_db#(uvm_object_wrapper)::set(this, "<seqr_path>.<phase>", "default_sequence", <sequence>::type_id::get());

Additional control in UVM1.2?

+uvm_set_default_sequence=<seqr>,<phase>,<sequence>

How does it affect current code?

Existing code works without change when you move to UVM1.2. Users are free to choose either of the above. If both are present in the code, the result may be a function of the phase in question.
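
For illustration, assuming a sequencer at uvm_test_top.env.agent.seqr and a sequence type my_seq (both hypothetical names), the two equivalent forms look like this:

// From the test, UVM1.1 and UVM1.2:
uvm_config_db#(uvm_object_wrapper)::set(this, "env.agent.seqr.main_phase",
    "default_sequence", my_seq::type_id::get());

// From the command line, UVM1.2 only:
+uvm_set_default_sequence=uvm_test_top.env.agent.seqr,main_phase,my_seq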

#2 Guarded starting_phase [Mantis – 4431]

The uvm_sequence_base::starting_phase variable is now protected and accessible only via the set_starting_phase() and get_starting_phase() methods. Once the value has been retrieved using the get method, it cannot be set again. The primary reason for this update was that modifications to the starting_phase variable in user code could lead to errors that were hard to debug.

How was this done in UVM1.1?

The starting_phase variable was accessed directly to set the phase, compare phases, and raise & drop objections

How does it work in UVM1.2?

Use pre-defined methods set_starting_phase() & get_starting_phase()

How does it affect current code?

While moving to UVM1.2, if the code uses the starting_phase variable, one can expect compile errors. The Accellera UVM team has been generous enough to provide a script as part of the release that searches for the starting_phase variable and replaces it with the pre-defined methods based on how the variable is used in the code.
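
A small before/after sketch for a sequence that raises an objection on its starting phase:

// UVM1.1 style, direct variable access inside the sequence:
if (starting_phase != null)
  starting_phase.raise_objection(this);

// UVM1.2 style, access only through the new methods:
begin
  uvm_phase p = get_starting_phase();
  if (p != null)
    p.raise_objection(this);
end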

#3 Automatic raise & drop objection [Mantis – 4432]

The set_automatic_phase_objection() method has been added to raise and drop objections automatically based on the starting_phase of the sequence, thereby avoiding manual code. This change is mainly for ease of use.

How was this done in UVM1.1?

starting_phase.raise_objection(this);
starting_phase.drop_objection(this);

How does it work in UVM1.2?

set_automatic_phase_objection(1); 

How does it affect current code?

The existing code would need modification. As mentioned above, use the script provided by the Accellera UVM team as part of the release.
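
As a minimal sketch (sequence and item names are illustrative), the call typically sits in the sequence constructor so that every run of the sequence gets the behavior:

class my_seq extends uvm_sequence #(my_item);
  `uvm_object_utils(my_seq)
  function new(string name = "my_seq");
    super.new(name);
    set_automatic_phase_objection(1);  // objection raised in pre_start, dropped in post_start
  endfunction
endclass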

#4 enum changes [Mantis – 4269]

A lot of verification code today involves a mix of different languages, methodologies and code developed by different teams. In UVM1.1 the enum values related to sequences didn’t have a unique signature, and this led to compilation errors when uvm_pkg was wildcard-imported into a scope that already had enum declarations with the same names. To avoid this, those enum values are now prefixed with the UVM_ string.

How was this done in UVM1.1?

UVM_ prefix missing in enum values for uvm_sequence_state_enum and uvm_sequencer_arb_mode

What changed in UVM1.2?

uvm_sequence_state_enum and uvm_sequencer_arb_mode enum values now have UVM_ prefix

How does it affect current code?

The existing code would give compile errors and you would need to update the values manually.
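
For example, code selecting an arbitration mode needs a one-for-one rename (the sequencer path here is illustrative):

env.agent.seqr.set_arbitration(SEQ_ARB_RANDOM);      // UVM1.1
env.agent.seqr.set_arbitration(UVM_SEQ_ARB_RANDOM);  // UVM1.2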

So when should you move to UVM1.2?

If you are not an early adopter and don't enjoy playing with a new release, you can hold on for some time; otherwise go ahead, download the release and give it a shot with your code.


Sunday, April 20, 2014

Sequence Library in UVM

Looking into the history of verification, we learn that test benches and test cases came into the picture when RTL began to be represented in Verilog or VHDL. As complexity grew, there was a need for another pair of eyes to verify the code and release the designer from a task that continued to transform into a humongous problem. The slow and steady pace of directed verification couldn’t cope with the rising demand, and constrained random verification (CRV) pitched in to fill the gap. Over the years the industry played around with different HVLs and methodologies before settling on SV based UVM. The biggest advantage of CRV was auto generation of test cases to create scenarios that the human mind couldn’t comprehend. Coverage driven verification (CDV) further complemented CRV in terms of converging an unbounded problem. In the struggle to hit the user defined coverage goals, verification teams sometimes forget the core strength of CRV, i.e. finding hidden bugs by running random regressions. UVM provides an effective way to accomplish this aim through the use of the sequence library.

What is a Sequence Library?

A sequence library is a conglomeration of registered sequence types, and is itself derived from uvm_sequence.

A sequence, once registered, shows up in the sequence queue of that sequence library. Reusability demands that a given IP be configurable so as to plug seamlessly into different SoCs catering to varied applications. The verification team can develop multiple sequence libraries to enable regressions for various configurations of the IP. Each of these libraries can be configured to execute its sequences any number of times, in different order, as governed by the selection MODE. The available modes in UVM include –

UVM_SEQ_LIB_RAND  : Randomly select any sequence from the queue
UVM_SEQ_LIB_RANDC : Randomly select from the queue without repetition until all sequences are exhausted
UVM_SEQ_LIB_ITEM  : Execute a single sequence item
UVM_SEQ_LIB_USER  : Call the select_sequence() method, whose definition can be overridden by the user

Steps to set up a Sequence Library

STEP 1 : Declare a sequence library. You can declare multiple libraries for a given verification environment.
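
A minimal declaration, with illustrative names (my_seq_lib, my_item):

class my_seq_lib extends uvm_sequence_library #(my_item);
  `uvm_object_utils(my_seq_lib)
  `uvm_sequence_library_utils(my_seq_lib)
  function new(string name = "my_seq_lib");
    super.new(name);
    init_sequence_library();  // populates the queue with registered sequence types
  endfunction
endclass
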
STEP 2 : Add the relevant user-defined sequences to the library. One sequence can be added to multiple sequence libraries.
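
Continuing the sketch, a sequence registers itself with the library through the `uvm_add_to_seq_lib macro:

class my_write_seq extends uvm_sequence #(my_item);
  `uvm_object_utils(my_write_seq)
  `uvm_add_to_seq_lib(my_write_seq, my_seq_lib)  // adds this type to the library's queue
  function new(string name = "my_write_seq");
    super.new(name);
  endfunction
  virtual task body();
    `uvm_do(req)
  endtask
endclass
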
STEP 3 : Select one of the 4 modes given above based on the context, i.e. run fully random, run a basic sequence item for sanity testing, or use a user defined mode where applicable, e.g. a set of sequences that exercise a part of the code where a bug was recently fixed.
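
For example, a mini regression that runs every registered sequence once in random order (the counts are illustrative):

// In the test's build_phase, for example:
my_seq_lib lib = my_seq_lib::type_id::create("lib");
lib.selection_mode = UVM_SEQ_LIB_RANDC;
lib.min_random_count = 5;   // lower bound on the number of sequences executed
lib.max_random_count = 10;  // upper bound on the number of sequences executed
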
STEP 4 : Select the sequence library as the default sequence for a given sequencer from the test. This depends on STEP 3, i.e. the context.
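
One way to do this from the test, reusing the library instance configured in STEP 3 (the sequencer path is illustrative):

uvm_config_db#(uvm_sequence_base)::set(this, "env.agent.seqr.main_phase",
    "default_sequence", lib);
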
Advantage

The sequence library is one of the simplest and most effective, yet sparingly used, mechanisms in UVM. A user can plan to use a sequence library, switching modes to achieve sanity testing, mini regressions and constrained random regressions from the same test. Further, the library helps in achieving the goal of CRV in terms of generating complex scenarios by calling sequences randomly, thereby finding those hidden bugs that would otherwise show up during SoC verification or in the field.

As Albert Einstein rightly said, "No amount of experimentation can ever prove me right; a single experiment can prove me wrong". It is important to make those experiments random enough to improve the probability of hitting that single experiment which proves the RTL is not an exact representation of the specification. The sequence library in UVM does this effectively!!!



Sunday, April 13, 2014

Hierarchical Sequences in UVM

Rising design complexity is leading to a near exponential increase in verification effort. The industry has embraced verification reuse by adopting UVM, deploying VIPs and plugging block level env components into sub system or SoC level environments. According to a verification study conducted by Wilson Research in 2012 (commissioned by Mentor), engineers spend ~60% of their time developing/running tests and debugging failures. While the report doesn’t zoom in further, the verification fraternity would agree that a considerable chunk of this bandwidth is spent because the test developer didn’t have flexible APIs (sequences) to create tests, and an ill-planned verification environment led to extra debug. UVM provides a recommended framework that, if incorporated effectively, can overcome such challenges. Hierarchical sequences in UVM are one such concept, suggesting sequence development in a modular fashion to enable easier debug, maintenance and reuse of the code.

As in LEGO, hierarchical sequences postulate developing base structures and assembling them in an orderly fashion to build the desired structures. This process demands considerable planning right from the beginning, with the end goal in mind. The definition of the base sequences determines the flexibility offered for effective reuse. The aim is to develop atomic sequences such that any complex sequence can be broken down into a series of base sequences. A few aspects worth considering include –

TIP 1: Define a sequence that generates a basic transaction depending upon the constraints passed to the sequence. To implement this, local variables corresponding to the sequence item members are defined in the sequence. These fields can be assigned values directly while calling the sequence from higher level sequences or the test. The values received by the sequence are passed as inline constraints while generating the transaction. This gives the user full control to generate the desired transaction.
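
A minimal sketch of such a base sequence (the protocol, names and fields are illustrative):

class l0_ahb_write extends uvm_sequence #(ahb_item);
  `uvm_object_utils(l0_ahb_write)
  rand bit [31:0] addr;  // knobs mirroring the sequence item members
  rand bit [31:0] data;
  function new(string name = "l0_ahb_write");
    super.new(name);
  endfunction
  virtual task body();
    // knob values become inline constraints on the generated item
    `uvm_do_with(req, { addr == local::addr; data == local::data; })
  endtask
endclass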

TIP 2: Bundle the fields to be controlled into a configuration object and use the values set on this object to derive the inline constraints of the transaction to be generated. This is a more elaborate version of TIP 1, particularly useful when either there is a large number of variables in the transaction class or multiple base sequences are created having similar variables to be configured. This class can have fields directly linked to the sequence item plus additional variables to control the sequence behavior. It is best to define a method to set the config object. Note that this config object has nothing to do with the config DB in UVM.
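
A condensed sketch of this flavor (all names hypothetical):

class ahb_seq_cfg extends uvm_object;
  `uvm_object_utils(ahb_seq_cfg)
  rand bit [31:0] min_addr, max_addr;  // fields mapped to the sequence item
  rand int unsigned burst_len;         // extra knob controlling sequence behavior
  function new(string name = "ahb_seq_cfg"); super.new(name); endfunction
endclass

class l0_ahb_read extends uvm_sequence #(ahb_item);
  `uvm_object_utils(l0_ahb_read)
  ahb_seq_cfg cfg;
  function new(string name = "l0_ahb_read"); super.new(name); endfunction
  function void set_cfg(ahb_seq_cfg c);  // setter method, as recommended above
    cfg = c;
  endfunction
  virtual task body();
    repeat (cfg.burst_len)
      `uvm_do_with(req, { addr inside {[cfg.min_addr : cfg.max_addr]}; })
  endtask
endclass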

TIP 3: UVM provides the configuration DB to program the components of the environment for a given scenario. As complexity increases, it is desirable to read the configuration of a given component, or understand its current state, and use this information while generating the transaction. Having a handle to the UVM component hierarchy in the base sequence facilitates development of sequences in such cases.
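
One common way to get such a handle is the p_sequencer macro; a sketch assuming the sequencer publishes an env_cfg object (a hypothetical name) holding component state:

class l0_status_seq extends uvm_sequence #(ahb_item);
  `uvm_object_utils(l0_status_seq)
  `uvm_declare_p_sequencer(ahb_sequencer)  // typed handle to the running sequencer
  function new(string name = "l0_status_seq"); super.new(name); endfunction
  virtual task body();
    // consult component state before shaping the transaction
    if (p_sequencer.env_cfg.dma_enabled)
      `uvm_do_with(req, { addr == p_sequencer.env_cfg.dma_status_addr; })
  endtask
endclass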

Once the base sequences are defined properly, a hierarchy can be developed, enabling reuse and reducing the margin of error. Let’s apply this to a bus protocol, e.g. AMBA AHB. Irrespective of the scenario, an AHB master finally drives a READ or a WRITE transaction on the bus. So our base sequences can be L0_AHB_READ and L0_AHB_WRITE. These sequences generate an AHB transaction based on the constraints provided to them, as in TIP 1. The next level (L1) would be to call these sequences in a loop wherein the number of iterations is user defined. Further, we can develop a READ_AFTER_WRITE sequence wherein the sequences L1_LOOP_WRITE and L1_LOOP_READ are called within another loop such that it can generate a single write followed by a read OR bulk writes followed by reads. Using the above set of sequences any scenario can be generated, such as configuration of an IP, reading data from an array/file and converting it into AHB transactions, or interrupt/DMA sequences etc.
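
A sketch of one such level, with L1_LOOP_WRITE simply looping over the L0 sequence (iteration bounds are illustrative):

class l1_loop_write extends uvm_sequence #(ahb_item);
  `uvm_object_utils(l1_loop_write)
  rand int unsigned num_iter;
  constraint c_iter { num_iter inside {[1:16]}; }
  function new(string name = "l1_loop_write");
    super.new(name);
  endfunction
  virtual task body();
    l0_ahb_write wr;
    repeat (num_iter)
      `uvm_do(wr)  // each call generates one constrained AHB write
  endtask
endclass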

Deploying UVM is a first step towards reuse. To control the effort spent on developing tests and debugging verification code, UVM needs to be applied effectively, and that goes beyond the syntax or the proposed base class library. Hierarchical sequences demand proper planning and a disciplined approach. The observed returns are certainly multi-fold. This reminds me of a famous quote from Einstein – Everything should be made as simple as possible, but not simpler!



Sunday, April 6, 2014

UVM : Just do it OR Do it right!

No, the title is not a comparison of captions but a gist of the discussions at UVM 1.2 day – a conference hosted by CVC Bangalore celebrating their decade long journey. The day started with a keynote from Vinay Shenoy, MD, Infineon Technologies India, where he discussed the evolution of the Indian industry over the last 30+ years and why India is a role model for services but lags in manufacturing. He also shared some rich insights into the Indian government initiatives to bridge this gap. Vikas Gautam, Senior Director, Verification Group, Synopsys delivered the next keynote, raising a question on the current state of verification and what comes next after UVM. Later in the day, Anil Gupta, MD, Applied Micro India, in his keynote discussed the growth of the semiconductor industry, emphasizing the need for value creation and the expectations from verification teams adopting UVM. Pradeep captured the summary of these keynotes here – Vinay, Vikas and Anil. Further to this, Dennis Brophy, Director of Strategic Business Development, Mentor & Vice Chairman, Accellera unveiled UVM1.2, inviting engineers to participate in the open review before the final release.

Begin with an end in mind

While the phenomenal adoption rate of UVM has alleviated obvious worries, applying it effectively to address verification at various levels is still challenging. Deploying UVM in the absence of proper planning and a systematic approach is a perfect recipe for schedule abuse. This point was beautifully conveyed by Anil Gupta with a reference to carpentry as a profession. UVM is a toolset similar to the hammer, chisel, lathe etc. found with every carpenter. Successful accomplishment of the work depends upon a focussed effort towards the end goal. If a carpenter is building a leg of a chair, he needs to concentrate on the leg in the context of that specific chair. This ensures that when the chair is finally assembled it comes into shape instantly. Ignoring this would otherwise lead to rework, i.e. delay in schedule, while affecting the stability and longevity of the chair. Similarly, while developing a UVM based environment, it is critical for the verification engineer to define and build it at block level such that it integrates at the sub system or SoC level seamlessly.

My 2 cents

Verification today demands a disciplined approach of marching towards the end goal. To start with, the verification architecture document at block level needs to address the reuse aspect of UVM components. Next comes the definition of sequence items and config DBs, avoiding glue logic while knitting components together at block level, and extending clear APIs to enable effective reuse. Performance of the SoC test bench depends on the slowest component integrated; memory and CPU cycles consumed by entities need to be profiled, analyzed and corrected at block level to weed out such bottlenecks. It is crucial to involve block level owners to understand, discuss and debate the potential pitfalls when aiming to verify an SoC that is nothing less than a beast. Breaking the end goal into milestones that can be addressed at block level would ensure verification cycles at SoC level are utilized efficiently. An early start on test bench integration at subsystem/SoC level, to flush the flow and provide feedback to block owners, would be an added advantage.

Following is a 5 point rating scale for measuring the effectiveness of the overall verification approach, particularly at IP level –


Remember to revisit this scale frequently while developing the leg of the chair.... I mean any UVM based entity that is scheduled for reuse further.... :)



Sunday, March 2, 2014

Back to the basics : Power Aware Verification - 1

The famous quote from the movie 'Spiderman' – "With great power comes great responsibility" – gets convoluted when applied to the semiconductor industry, where "With LOW power comes great responsibility". The design cycle that once focused on miniaturization shifted gears to achieve higher performance in the PC era, and now, in the world of connected devices, it is POWER that commands the most attention. Energy consumption today drives the operational expense for servers and data centers, so much so that companies like Facebook are setting up data centers close to the Arctic Circle, where the cold weather reduces cooling expense. On the other hand, for consumer devices 'mean time between charging' is considered one of the key factors defining user experience. This means that environment-friendly green products, resulting from attention to low power in the design cycle, can bring down the overall cooling/packaging cost of a system, reduce the probability of system failure and conserve energy.

DESIGN FOR LOW POWER

Existing hardware description languages like Verilog & VHDL fall short of semantics to describe the low power intent of a design; these languages were primarily defined to represent the functional intent of a circuit. Early adopters of low power design methodology had to manually insert cells during the development phase to achieve the desired results. This process was error prone, with limited automation and almost no EDA support. ArchPro, an EDA startup (acquired by Synopsys in 2007), provided one of the early solutions in this space. Unified Power Format (UPF – IEEE 1801) and Common Power Format (CPF, from Si2) are the two Tcl based design representation approaches available today to define low power design intent. All EDA vendors support either or both formats to aid development of power aware silicon.
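
As a flavor of what such power intent looks like, here is a minimal UPF fragment (domain, instance and net names are illustrative): a switchable domain whose outputs are clamped to 0 and whose registers retain state while the domain is off.

create_power_domain PD_CORE -elements {u_core}
set_isolation iso_core -domain PD_CORE \
    -isolation_power_net VDD_AON -clamp_value 0 -applies_to outputs
set_retention ret_core -domain PD_CORE -retention_power_net VDD_AON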

VERIFYING LOW POWER DESIGN

Traditional simulators are tuned to HDLs, i.e. logic 1/0, and have no notion of voltage or of power turning on/off. For the first generation of low power designs, where cells were introduced manually, verification used to be mostly script based, forcing X (unknown) on internal nodes of the design and verifying the outcome. Later, when power formats were adopted for design representation, verification of this intent demanded additional support from the simulators, such as –

- Emulating the cells like isolation, state retention etc. during the RTL simulations to verify the power sequencing features of the design. These cells are otherwise inserted into the netlist by the synthesis tool taking the power format representation as the base
- Simulating power ON/OFF scenarios such that the block that is turned off has all outputs going X (unknown)
- Simulating the voltage ramp up cycle which means once the power is turned ON, it takes some time for the voltage to ramp up to the desired level and during this period the functionality is not guaranteed
- Simulating multi voltage scenarios in the design and in absence of level shifter cells the relevant signals are corrupted
- Simulating all of the above together, resulting in a real use case scenario

Tools from ArchPro (MVRC & MVSIM) worked with industry standard simulators through PLI to simulate power aware designs. Today all industry standard simulators have features to verify such designs, with limited or complete support for the UPF & CPF feature lists. Formal/static tools are available to perform quick structural checks on the design, targeting correct placement of cells, correct choice of cells, correct connections to the cells, and ensuring power integrity of the design based on the power format definition at RTL and netlist level. Dynamic simulations further ensure that the power intent is implemented as per the architecture by simulating the power states of the design as functional scenarios.

CONCLUSION

In the last decade, the industry has collaborated at different levels to realize power aware designs for different applications. Low power was initially adopted for products targeting the consumer and handheld markets, but today it is pervasive across all segments that the semiconductor industry serves. The key is to define the low power intent early and incorporate tools to validate that the intent is maintained all throughout. As a result, low power leads to greater responsibility for all stakeholders in the design cycle in general, and for verification engineers in particular!

The DVCON 2014 issue of Verification Horizons carries an article, "Taming power aware bugs with Questa", co-authored by me on this subject.


Drop in your questions and experiences with low power designs in the comment section or write to me at siddhakarana@gmail.com

Sunday, December 29, 2013

Sequence Layering in UVM

Another year has passed by, and the verification world saw further advancements in how to better verify rising complexity. At the core of verification there exist two pillars that have been active in simplifying this complexity all throughout. First is REUSABILITY at various levels, i.e. within a project from block to top level, across projects within an organization, and deploying VIPs or a methodology like UVM across the industry. Second is raising ABSTRACTION levels, i.e. from signals to transactions, and from transactions to further abstraction layers, so as to deal with complexity in a better way. Sequence layering is one such concept that incorporates the better of the two.
 
What is layering?
 
In simple terms, layering is where one encapsulates a detailed/complex function or task and moves it to a higher level. In day to day life layering is used everywhere, e.g. instead of describing a human being as an entity with 2 eyes, 1 nose, 2 ears, 2 hands, 2 legs etc., we simply say human being. When we go to a restaurant and ask for an item from the menu, say barbeque chicken, we don’t explain the ingredients and the process to prepare it. The latest example is apps on handhelds, where everything relevant is available at the click of an icon. In short, we avoid the complexity of the implementation and associated details while making use of generic building blocks.
 
What is sequence layering?
 
Applying the concept of layering to sequences in UVM (or any methodology) improves code reusability by enabling development at a higher abstraction level. This is achieved by adding a layering agent derived from uvm_agent. The layering agent doesn’t have a driver though; all it has is a sequencer & monitor. The way it works is that a high level sequence item is now associated with this layering sequencer, which connects to the sequencer of the lower level protocol using the same mechanism as used between the sequencer & driver in a uvm_agent. The lower level sequencer runs only one sequence (a translation sequence that takes the higher level sequence item & translates it into lower level sequence items) as a forever thread. Inside this sequence we have a get_next_item call, similar to what we do in a uvm_driver; the item is received from the higher level sequencer, translated by this lower level sequence & given to its driver. Once done, the same item_done style response is passed back from this lower level sequence to the layering sequencer, indicating that it is ready for the next item.
 
Figure 1 : Concept diagram of the layering agent
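
A condensed sketch of the translation sequence described above (the upper/lower protocol names are placeholders):

// Runs forever on the lower level sequencer; pulls items from the layering
// sequencer exactly the way a driver pulls from its own sequencer.
class translation_seq extends uvm_sequence #(lower_item);
  `uvm_object_utils(translation_seq)
  uvm_sequencer #(upper_item) up_seqr;  // set by the env at connect time
  function new(string name = "translation_seq");
    super.new(name);
  endfunction
  virtual task body();
    upper_item u_item;
    forever begin
      up_seqr.get_next_item(u_item);  // blocking pull from the layer above
      req = lower_item::type_id::create("req");
      start_item(req);
      // ... translate u_item fields into the lower level item here ...
      finish_item(req);
      up_seqr.item_done();            // ready for the next high level item
    end
  endtask
endclass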
 
On the analysis front, the same layering agent can also have a monitor at the higher level. This monitor is connected to the monitor of the lower layer protocol. Once a lower layer packet is received, it is passed on to this higher level monitor, wherein it is translated into a higher level packet based on the configuration information. Once done, the higher level packet is given to the scoreboard for comparison. So we need only one scoreboard for all potential configurations of an IP, and the layering agent's monitor does the job of translation.
 
Example Implementation
 
I recently co-authored a paper on this subject at CDNLive 2013, Bangalore, and we received the Best Paper Award! In this paper we describe an application of sequence layering where our team was involved in verifying a highly configurable memory controller supporting multiple protocols on the processor side and a number of protocols on the DRAM memory controller front. A related blog post is here.
 
I would like to hear from you in case you have implemented the above or similar concepts in any of your projects. If you would like to see any other topic covered through this blog, do drop an email to siddhakarana@gmail.com (Anonymity guaranteed if requested).
 
Wish you all a happy & healthy new year!!!