Tuesday, 24 October 2017

When IP Addresses Run Out

With the advent of IoT, the number of devices that need to connect to the internet is increasing every day. When a device connects to the internet, it is assigned a specific IP address, unique to that device. The system widely in use today, Internet Protocol version 4 (IPv4), uses a 32-bit number for the address. Those 32 bits mean the number of available addresses is 2^32 (4,294,967,296). But the number of connected devices is increasing today at an average rate of 5.44 billion per year. This means that the pool of available addresses will soon be exhausted! Does that mean the end of the internet and of a promising technology like the IoT?

Definitely not! Internet Protocol version 6 (IPv6) is your knight in shining armour. IPv6 will prevent the annihilation of the internet! IPv6 redesigns the Internet Protocol itself. The first major change is the number of addressing bits: it uses 128 bits instead of 32, giving us around 3.403×10^38 addresses.
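Those two counts are easy to verify with a couple of lines of Python (a throwaway sketch, just restating the arithmetic above):

```python
# Number of unique addresses under each version of the Internet Protocol.
ipv4_addresses = 2 ** 32    # 32-bit address field
ipv6_addresses = 2 ** 128   # 128-bit address field

print(f"IPv4: {ipv4_addresses:,}")   # IPv4: 4,294,967,296
print(f"IPv6: {ipv6_addresses:.4e}") # IPv6: 3.4028e+38
```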

But what exactly is IP? The expansion, Internet Protocol, is well known, but why is it necessary for using the internet? The internet protocol stack consists of four layers. At the top is the application layer; this has the applications you use every day, like Facebook and WhatsApp. Next is the transport layer; TCP is a popular transport layer protocol and it's implemented along with the IP layer. Third is the network layer, which is where the Internet Protocol sits. The last one, the link layer, can be Ethernet or Wi-Fi. The network layer encapsulates the data into packets called datagrams. A datagram has a header and data: the header has a fixed structure and specific duties to perform, while the rest of the datagram is the payload, or the data. So, when I said IPv6 redesigns the Internet Protocol, I meant that the header is changed.

IPv4 has a fixed 16-bit length field in the header, which gives us the ability to specify 65,535 octets, or bytes. But in IPv6, we can use a jumbogram, which allows us to specify payload sizes of up to one byte less than 4 GB. Further, this new version of IP has network-layer security. Normally, the network layer just forwards packets along the network on a hop-by-hop basis. But the Internet Protocol Security (IPsec) suite is a mandatory specification for IPv6, whereas for IPv4 it was optional. The packet header and the process of forwarding have also been simplified, even though the headers in v6 are longer.
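A quick sanity check on those payload limits, again in throwaway Python:

```python
# IPv4's 16-bit length field versus an IPv6 jumbogram's
# 32-bit jumbo payload length option.
ipv4_max = 2 ** 16 - 1   # largest value a 16-bit field can hold
jumbo_max = 2 ** 32 - 1  # largest value a 32-bit field can hold

print(ipv4_max)                        # 65535
print(jumbo_max == 4 * 1024 ** 3 - 1)  # True: one byte less than 4 GB
```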

These are the differences. So how do we go about changing over from IPv4 to IPv6? The Internet Engineering Task Force (IETF) has recommended certain deployment models and migration tools for the transition. The intermediate goal of the transition is to enable parts of the internet to employ IPv6 alongside v4; the end goal is a network-wide IPv6 deployment, which will result in IPv4 becoming obsolete. The simplest model is to use a dual stack and allow both versions to run simultaneously. In such a network, it is up to the peers connected to each other to decide which version to use. But the peers should be reachable at an IPv6 address and should advertise it using a naming service like DNS. This is the recommended approach, wherein IPv4 can be phased out once all peers on the network have an IPv6 address. Other approaches include tunnelling through IPv4 networks and creating IPv6-only networks.

The government of India, too, has created its own deployment model in accordance with the e-governance plan. Thus, IPv6 is about to kill IPv4 and open up a whole new world to us.

Sources: www.statistica.com; Wikipedia; Stanford Lagunita, Introduction to Computer Networks course; docs.oracle.com, IPv6 Administration Guide; tools.ietf.org, Guidelines for Using IPv6 Transition Mechanisms

Sunday, 30 July 2017

How to dual boot your PC

So, you want Linux on your PC, which is currently running Windows. Windows 10, maybe? But you don't want to let Windows go; it's too familiar and comfortable. In this case, the best option is to dual boot.
Here are the steps to dual boot a Windows PC with Ubuntu Linux. The first two are independent steps, so you can follow them in either order. But once they're done, the rest should be followed in sequence.

Step 1
There's a high chance that your PC will be in UEFI boot mode. You will need to disable Secure Boot to boot from a flash drive, and you might also need to enable legacy support. Pressing one of F2, F10, F12 or Esc during start-up will do the trick. Most likely you'll see an instruction saying something like 'Press Esc for startup options' right when you press the power button. Some manufacturers also include a dedicated button for start-up options. Using the arrow keys, navigate to 'Boot Options' under 'System Configuration' and press Enter. Now go to 'Secure Boot' and disable it. Similarly, go to 'Legacy Support' and enable it. Select Save and Exit (F10 for HP; it might be the same for others too). Your PC will now start normally into Windows.

Step 2
Make a bootable USB drive. For this part, I'm assuming you already have an ISO file of Ubuntu. If not, you can download it from their official website.
Here, you'll need an application called Rufus. Download and run the .exe file (no installation required).
In the Rufus window, select 'ISO image' and then choose your .iso file by clicking on the icon beside it. 'Device' should show the name of the USB drive currently connected. Keep the rest as it is.
Note: clicking on 'Start' will erase all the data presently on your drive, so be sure to take a backup.
Once this is done, you're set to start the actual installation!

Step 3
Restart your PC and open the boot manager. It's the same screen you went to for enabling legacy support.
But now, instead of the BIOS options, go to 'Boot device options' or something similar. You'll be asked which device to boot from, and the flash drive with Ubuntu ought to be visible here. Select it.
Note: boot the flash drive in the same mode as your Windows installation, which will normally be UEFI, or you risk losing the ability to boot into Windows.
This point does make me feel the installation might work without even enabling legacy support. But everyone suggested doing it, and it doesn't really hurt.

Step 4
You'll now see a screen with several options like 'Try Ubuntu without installing' and 'Install Ubuntu'. Choose 'Try Ubuntu without installing'. The Ubuntu desktop will open. Familiarise yourself with the OS and set up a Wi-Fi connection. There will be an option to install on the desktop. Select that when you're ready.

Step 5
So you've chosen to install. First, you'll be asked your choice of language. After that, you'll see a window with checkboxes for 'Download updates while installing' and 'Install third-party software'. I suggest you check both. Next, the installer will check for installed operating systems. You will get a message saying something like 'Windows was detected', and it will ask whether you want to erase Windows and install Ubuntu, or install Ubuntu alongside Windows. Choose the second option.

Step 6
You'll be asked to partition your hard disk. A visual representation of the partitions will be shown, with some arbitrary amount of space assigned to Ubuntu. You just have to drag the separator between the two partitions on your screen to change the size. Once you're done, click on 'Install Now'. This step takes some patience, as partitioning the hard disk takes quite a lot of time. After that, it's a cakewalk. Go watch a movie; Ubuntu will be ready for you by the time you're back!

Friday, 16 June 2017

Electronics Under Radiation!

Radiation! Everyone knows the effects of radiation on human health. But is living tissue the only thing it harms?
No. Ionising radiation can weaken materials, embrittle them, or cause electrical breakdown. Semiconductors are particularly affected by such radiation. There are two scenarios to consider: what happens if you use a semiconductor device in a radiation environment for a long period of time, and what happens if the device experiences a short burst of radiation energy?
In the first case, the device characteristics deteriorate: there is an increase in leakage current, a shift in the threshold, and many such long-term effects. In the second, there may be bit flips in memory or transient pulses in logic circuitry. The first effect is termed the Total Ionising Dose (TID) effect; the second, a Single Event Effect (SEE). As these names suggest, TID is due to prolonged exposure, and an SEE is the result of a sudden, single energetic particle.
[Image: Change in device characteristics]
So, what really happens?
At the core, both effects occur because of the generation of free charges. The high-energy particles hit the device and liberate additional free charges. In TID, a fraction of the generated holes gets trapped in the oxide regions of the device. This happens because the electrons, having a high mobility, are quickly swept away, leaving the holes behind. The situation is somewhat like a place with a skewed sex ratio, resulting in not enough partners for marriage. These holes that are left behind affect the electrical characteristics of the device.
A major factor affecting the degradation is, of course, the total dose of radiation received. The popularly used unit for measuring it is the rad, where 100 rad = 1 J/kg. Another important factor is the dose rate, measured in rad/s; the higher the dose rate, the greater the degradation. Then there are factors like the geometry of the device, the method of fabrication, its bias conditions and the temperature, among others. TID, however, acts at the device level: it causes degradation in the individual device parameters.
[Image: Trapped charges, red indicating more charge]
SEE, on the other hand, acts at the circuit level. In memory cells, it may cause a bit to flip; in digital circuits, it may cause a pulse to propagate through the circuit. These, however, are not permanently damaging. Strong bursts of energetic particles can cause severe effects like shorts in the circuit (called latch-ups) or damage to the gate oxide.
[Image: Single Event Effects]
But where will devices experience radiation?
It’s not as if we expose our phones to high-energy radiation on a daily basis. The circuits that are actually exposed to such high doses are those meant for special-purpose applications. Beyond the atmosphere, there’s always an incoming barrage of high-energy particles of all sorts, so all devices meant for space applications are at risk. Further, circuits used in high-energy physics experiments, such as particle accelerators, are also under threat from radiation.
Is there any solution?
Yes. The process is called radiation hardening. Literally, it means making the devices ‘hard’, or resistant, to radiation. One of the methods is to use Silicon-on-Insulator technology, but that brings with it its own set of problems. Shielding is a good option. There are also various fabrication methods and chip layouts which give better performance when attacked by radiation.
What if the device is damaged?
TID can be mitigated by annealing at a specific temperature: the trapped charges escape as they gain energy from the high temperature. As for SEE, an entire reboot of the system might be helpful. But imagine the losses if an entire system on a space station needs to be rebooted!

So, the best we can do is use proper radiation-hardening techniques and avoid radiation-related side effects. But of course, we can never be sure!

Sunday, 23 April 2017

FIR Filter Design: Frequency Sampling Method

This method is another way of designing linear-phase FIR filters. The process of obtaining the desired frequency response is the same; the difference starts from there on. The desired frequency response is sampled in the frequency domain, and its inverse DFT is calculated, which gives the filter's impulse response. For practical design, we again used Scilab. The formulae for the DFT and IDFT were incorporated into the code, and the filter parameters were taken as user input. The plot function helped us verify the accuracy of the designed filter.
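The actual work was done in Scilab; as an illustration, here is a minimal Python sketch of the same idea. It samples an ideal lowpass magnitude at the DFT frequencies with a linear-phase term attached, then takes the inverse DFT to get the taps. The function name, the normalised `cutoff` parameter, and the restriction to an odd tap count are my assumptions for the sketch:

```python
import cmath

def freq_sampling_lowpass(N, cutoff):
    """Length-N (N odd) linear-phase lowpass FIR taps by frequency sampling.

    cutoff is the normalised cutoff frequency (0 to 0.5, as a
    fraction of the sampling rate).
    """
    # Sample an ideal lowpass magnitude at the N DFT frequencies k/N,
    # attaching the linear-phase term e^{-j*pi*k*(N-1)/N}.
    H = []
    for k in range(N):
        f = k / N
        mag = 1.0 if (f <= cutoff or f >= 1 - cutoff) else 0.0
        H.append(mag * cmath.exp(-1j * cmath.pi * k * (N - 1) / N))
    # The inverse DFT of the samples gives the impulse response (the taps).
    h = []
    for n in range(N):
        s = sum(H[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N))
        h.append((s / N).real)
    return h

taps = freq_sampling_lowpass(15, 0.2)
```

The taps come out symmetric (that's the linear phase), and their sum equals the DC frequency sample H(0) = 1, which is a handy correctness check.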

Basic Operations on DSP Processor

The theoretical aspects of DSP technology are not too difficult: we just have different algorithms for different operations. But the real world doesn't run on mathematics and algorithms alone; we need physical hardware that will implement these operations. This is where the DSP processor comes in.
We used a custom board built around the popular C2000 processor, with Code Composer Studio as the coding platform. Using the implementations of DSP algorithms in C developed previously, the code was tweaked to work on-chip in embedded C. Basic operations such as addition and subtraction were performed on the board, and we also implemented FFT algorithms. The difference when implementing on hardware is that we have to reference the registers while writing the code, which a plain C implementation does not require.

Patent Review: Blind dialing US 8126512 B2

Who wouldn't like to be able to call or send a text without tapping away at the screen? Voice assistants try to make this task easier by listening to voice commands. These commands, however, also require extensive setup, as your phone first needs to learn your voice pattern and pronunciation.
The invention 'Blind Dialing' tries to do away with this need for setup by using Morse code. It listens to acoustic signals and uses DSP algorithms to identify the Morse code pattern in them. It then compares the decoded data to a set of existing phone numbers, or identifiers for phone numbers, and dials. The claims of this patent include a wireless communication device that analyses the destination and source addresses, decodes Morse signals, and filters out noise using DSP algorithms.
The only drawback: the user will have to learn Morse code!

https://www.google.co.in/patents/US8126512?dq=Blind+dialing+US+8126512+B2&hl=en&sa=X&ved=0ahUKEwjot6nJ-8jTAhVEp48KHZ7NCsUQ6AEIJjAA

Saturday, 22 April 2017

FIR Filter Design: Windowing Method

If you have read previous posts, then you will be familiar with IIR filters. But how about FIR filters?
These are filters that have a Finite Impulse Response. For these, the order corresponds to the length of the impulse response. Broadly, the method for designing the filter is the same as for IIR filters: you input the formulae in a Scilab code and run the program. But the formulae are different, meaning the pen-and-paper method is different.
We used a Hanning window as the window function and wrote the code accordingly. During execution, the filter parameters, like the attenuations and frequencies, were given as user input. The plot function was used to verify the response of the designed filter. A notable difference between IIR and FIR design is that much of the calculation is done in the time domain rather than the transform domain.
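Again, the real work was in Scilab; the sketch below shows the windowing idea in Python instead: an ideal (sinc) lowpass impulse response multiplied by a Hanning window. The function name and the normalised `cutoff` parameter are illustrative assumptions:

```python
import math

def hanning_lowpass(N, cutoff):
    """Length-N (N odd) lowpass FIR taps: ideal sinc response times a Hanning window.

    cutoff is the normalised cutoff frequency (0 to 0.5).
    """
    M = N - 1
    h = []
    for n in range(N):
        m = n - M / 2  # distance from the centre tap
        # Ideal (infinite) lowpass impulse response, delayed to be causal.
        ideal = 2 * cutoff if m == 0 else math.sin(2 * math.pi * cutoff * m) / (math.pi * m)
        # Hanning window, tapering the truncated response to reduce ripple.
        w = 0.5 - 0.5 * math.cos(2 * math.pi * n / M)
        h.append(ideal * w)
    return h

taps = hanning_lowpass(21, 0.2)
```

The window is what turns the abrupt truncation of the infinite sinc into a gentle taper, trading transition-band width for reduced ripple.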

Paper Review: Implementation of Morse Decoder on the TMS320C6748 DSP Development Kit

Morse code has long been used for communication; during the Second World War, radio operators used to decode the received Morse signal by hand. But today, we have technology to make our lives easier. What if we used processors to decode the incoming Morse signal?

This paper did exactly that. The authors used a digital signal processor to implement a real-time Morse decoding system. They used a noisy audio signal as input, filtered the signal digitally, and extracted the dots and dashes that make up the Morse code. Thanks to the DSP algorithm, all of this could be done in real time. An added advantage was that it used several basic DSP algorithms, like the Cooley-Tukey FFT algorithm, making the setup useful for teaching purposes.

Paper Title: Implementation of Morse Decoder on the TMS320C6748 DSP Development Kit
Authors: Pavel Zahradnik and Boris Simak
Published At: 6th European Embedded Design in Education and Research, 2014
Publisher: IEEE

You can find the paper at http://ieeexplore.ieee.org/document/6924373/


Friday, 21 April 2017

Chebyshev Filter Design

When ripples are visible in a filter's response, either in the stop band or the pass band, it's a Chebyshev filter. Of course, to account for the ripple, we need to consider different equations.
The method for filter design, however, stays the same. Open Scilab, write some code and voila, your filter is ready! The only difference in the code for Chebyshev is the formula for calculating the parameters of the analog filter. The digitisation process stays the same.
We designed a Chebyshev Type 1 filter, where the ripple exists only in the pass band and the stop band is free of it. The frequency response of the digital filter turned out to be close enough to the desired response.
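The pen-and-paper part that changes is the order formula. As an illustration (this is not our Scilab code), the standard minimum-order estimate for a Chebyshev Type 1 lowpass can be computed like this in Python:

```python
import math

def chebyshev1_order(ap_db, as_db, wp, ws):
    # Minimum order for a Chebyshev Type 1 lowpass meeting a passband
    # ripple of ap_db (dB) at edge wp and a stopband attenuation of
    # as_db (dB) at edge ws:  n >= acosh(sqrt(d)) / acosh(ws / wp),
    # where d = (10^(as/10) - 1) / (10^(ap/10) - 1).
    d = (10 ** (as_db / 10) - 1) / (10 ** (ap_db / 10) - 1)
    return math.ceil(math.acosh(math.sqrt(d)) / math.acosh(ws / wp))

print(chebyshev1_order(1, 40, 1000, 2000))  # 5
```

For a 1 dB passband ripple, 40 dB stopband attenuation and a 2:1 transition, this gives order 5, noticeably lower than a Butterworth design for the same specs; that is the payoff for tolerating ripple.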

Monday, 3 April 2017

Butterworth Filter Design

Filters are essential in any system. In the classroom, it's easy to design filters of small order using pen and paper. But practical systems require very high orders to get the desired response; the order may run into the tens. If you sit down to design such a filter by hand, you'll spend the entire day and, in the end, go crazy!

Scilab comes to the rescue in such situations. It's an open-source tool for simulation. Of course, if you're willing to spend some money, then MATLAB will be better, and easier too. But one should always be familiar with open-source tools. We wrote code in Scilab to design a Butterworth filter and also digitise it. The transfer function of the filter was calculated in the Laplace (s) domain and then converted to the z-domain using the bilinear transform method.
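To give a flavour of the formulae involved (an illustrative Python sketch, not our Scilab code), the standard estimate for the minimum Butterworth order from passband and stopband specs is:

```python
import math

def butterworth_order(ap_db, as_db, wp, ws):
    # Minimum order for a Butterworth lowpass meeting a passband
    # attenuation of ap_db (dB) at edge wp and a stopband attenuation
    # of as_db (dB) at edge ws:
    # n >= log10(d) / (2 * log10(ws / wp)),
    # where d = (10^(as/10) - 1) / (10^(ap/10) - 1).
    d = (10 ** (as_db / 10) - 1) / (10 ** (ap_db / 10) - 1)
    return math.ceil(math.log10(d) / (2 * math.log10(ws / wp)))

print(butterworth_order(1, 40, 1000, 2000))  # 8
```

Tightening the transition band (ws closer to wp) shrinks the denominator, which is exactly why practical specs push the order into the tens.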

Both low-pass and high-pass filters were designed, and their magnitude and frequency responses were simulated. We could see that the response was very close to the desired one, and the order was higher than 10 for each of the designs.

Monday, 13 March 2017

Filtering of Long Data Sequences

Practical signals are very long, so a simple FFT and multiplication are not enough; we have to develop additional methods to filter or analyse such signals. Two of the standard methods used are the Overlap Add Method (OAM) and the Overlap Save Method (OSM). Both of these are block-processing techniques, meaning that the input sequence is divided into blocks and the operations are carried out on these blocks.
Using C programming, we implemented both methods. An FIR filter was assumed, to which the input was given. OAM uses linear convolution; in the program, the FFT was used to calculate the linear convolution via circular convolution. In OSM, circular convolution was calculated, again with the help of the FFT. The blocks can then be processed in real time.
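As a sketch of the block-processing idea (the original was C with FFT-based convolution; plain Python with direct convolution keeps it short here), overlap-add splits the input, convolves each block, and adds the overlapping tails into a shared output:

```python
def convolve(x, h):
    # Direct linear convolution; output length is len(x) + len(h) - 1.
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def overlap_add(x, h, block_len):
    # Filter a long input x in blocks: convolve each block with h,
    # then add the overlapping tails into the shared output buffer.
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        for i, v in enumerate(convolve(block, h)):
            y[start + i] += v
    return y
```

Because convolution is linear, the block-wise result matches convolving the whole sequence at once, but each block can be processed as soon as it arrives.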

Sunday, 12 March 2017

Fast Fourier Transform

Fast Fourier Transform, or FFT, is at the heart of any DSP system. Converting to the frequency domain and then sampling was a challenge overcome using the DFT. But in today's world of real-time processing, the DFT is too slow. If we want to, say, analyse the vibrations of a railway track caused by a train, and the DFT algorithm is started when the train approaches the track, two other trains would pass by before the analysis is complete.
This definitely won't do. So the FFT, which as the name suggests is a fast algorithm, is used instead. We studied the implementation of Cooley and Tukey's radix-2 DIT FFT algorithm. DIT stands for Decimation In Time: the signal is decimated in the time domain, which helps reduce the number of calculations. We observed that the number of complex multiplications reduces greatly, and this is what contributes to the speed. The input or output sequence ends up in bit-reversed order. The model is easy to extend to higher values of N, improving the computational efficiency of DSP systems.
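A minimal recursive version of the radix-2 DIT algorithm, sketched in Python rather than the C we used, looks like this (the input length must be a power of two):

```python
import cmath

def fft(x):
    # Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two.
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])  # DFT of the even-indexed samples
    odd = fft(x[1::2])   # DFT of the odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        # One butterfly: combine the half-size DFTs with a twiddle factor.
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out
```

For N = 1024 this costs about (N/2)·log2(N) = 5120 complex multiplications, against roughly a million for the direct DFT.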

Discrete Fourier Transform

Everyone has heard of the Fourier Transform (FT); it's used to convert a signal from the time domain to the frequency domain. But what is the Discrete Fourier Transform?
I don't mean the Discrete Time Fourier Transform (DTFT), where the integral of the continuous-time FT is replaced with a summation for discrete-time signals. The Discrete Fourier Transform (DFT) is the sampled form of the DTFT. Since almost all systems nowadays are digital, we have to sample the continuous spectrum given by the DTFT to obtain a discretised version of it. The spectrum is sampled at integer multiples of 2π/N. This, we can then process in a digital system.
We implemented this transform using C programming for an N-point discrete-time signal, with the value of N taken as input from the user. Of course, this discretisation comes at the cost of accuracy. But the error can be decreased by appending zeros to the end of the signal, which gives a larger value of N and hence a finer spectrum. The underlying reason is that the DFT always gives periodic results: even if your signal is not periodic, it has to be assumed periodic! And if we expand the original time-domain signal, the DFT output gets compressed, and vice versa.
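Our implementation was in C; an equivalent direct computation in Python, just to pin down the definition, is:

```python
import cmath

def dft(x):
    # Direct N-point DFT: N complex multiplications per output bin,
    # so O(N^2) work overall.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]
```

A unit impulse, for instance, transforms to a flat spectrum: every bin of dft([1, 0, 0, 0]) is 1.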
The only drawback is that it's too slow for real-time processing. A computer or any processor will take a lot of time to perform the N² complex multiplications and N(N-1) complex additions. Obviously, we don't use it in practice. The solution: the Fast Fourier Transform!

Wednesday, 8 March 2017

Convolution and Correlation

Convolution and correlation are integral parts of any digital signal processing system. With pen and paper, they're pretty easy to calculate. But how do we actually implement them on a digital signal processor or a computer?
That's what we studied: the implementation of linear and circular convolution, linear convolution using circular convolution, and auto- and cross-correlation, using C programming.
We saw how complicated it gets to perform even simple operations like zero padding in a programming language. We took the lengths of the signals as the initial inputs for convolution and then entered the signals; the result was displayed on the screen. For circular convolution, the greater of the two lengths was used, the shorter signal was zero-padded, and the result was displayed. We also realised that circular convolution gives an aliased output.
For correlation, we tried autocorrelation on versions of a signal delayed by different amounts and found that it always gave the same output. If we cross-correlate a signal with its delayed self, we get a result that is an advanced version of the autocorrelation output. Thus we understood the importance of using correlation to find the degree of similarity between two signals. An added point: the value of the autocorrelation output at n = 0 is always the energy of the signal.
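Our programs were in C; as a Python sketch of the correlation part (the lag convention in the comment is one common choice), including the lag-0 energy property mentioned above:

```python
def cross_correlation(x, y):
    # r[l] = sum over n of x[n] * y[n - l], for lags
    # l = -(len(y) - 1) .. len(x) - 1.
    lags = range(-(len(y) - 1), len(x))
    return [sum(x[n] * y[n - l] for n in range(len(x)) if 0 <= n - l < len(y))
            for l in lags]

x = [1.0, 2.0, 3.0]
r = cross_correlation(x, x)  # autocorrelation of x
print(r)                     # [3.0, 8.0, 14.0, 8.0, 3.0]
print(r[len(x) - 1])         # 14.0: the lag-0 value is the signal's energy
```

Note the symmetry of the autocorrelation about lag 0, and that the lag-0 value, 1² + 2² + 3² = 14, is exactly the energy of the signal.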