Beginning of the Indian Space Odyssey
Author: Akash Kumar Singh
After the Second World War came to an end, the USSR and the USA entered a technological rivalry known as the Cold War. Achieving pioneering success in space exploration and aerospace technology was one of its most emphasized fronts. Nazi Germany had lost the war, and its V-2 missile technology was acquired by the USA for security and aerospace applications. The USSR had also got hold of a few of the rockets and the Nazi scientists. Meanwhile, India was struggling to establish itself as an independent nation.
The Nazi V-2 Rocket.
Scientific research had gained some momentum in India with the establishment of the Tata Institute of Fundamental Research (1945), and pioneering research by Homi J. Bhabha, C.V. Raman, Vikram Sarabhai and S.N. Bose had sharpened the acumen of the fellows at IISc, CSIR, PRL and other laboratories throughout the country.
Homi Bhabha and Vikram Sarabhai were interested in an Indian space program, but in the late 1940s space exploration and research were thought of as a privilege reserved for rich nations, and Indians could hardly manage to feed themselves. Technological revolution had so far been used only for destruction during the World Wars; it was yet to be used to build a nation and empower its people. Both Bhabha and Sarabhai shared a similar vision. Their expectation from the Indian space program was not to compete with economically advanced countries in lunar or planetary exploration or human spaceflight, but to apply advanced technology to problems of the common man, such as agriculture, remote sensing and education.
Vikram Sarabhai and Homi Bhabha's first visit to Thumba.
Meanwhile, the USA and USSR were in a head-to-head competition to reach space first and leave their mark on those who followed. The USSR developed a new series of rockets, the R series. The R-1 was an exact copy of the Nazi V-2, but in time Soviet scientists developed the R-7, the world's first intercontinental ballistic missile, which also served as the launch vehicle for Sputnik-1, the first human-made satellite, in 1957. The USSR's rapid advancement was alarming for the rest of the world; still, to Sarabhai and Bhabha it was a relief and a source of inspiration.
India approached NASA and the French CNES for initial support and help, to which they agreed on some terms and conditions of their own. The geomagnetic equator passes through India, which gave us a distinct advantage, as it would considerably reduce launch cost and weight constraints. Soon INCOSPAR was formed under the Department of Atomic Energy to find a suitable launch site and to get Indian scientists trained by NASA.
After some exploration, Thumba, a fishing village in Thiruvananthapuram, was chosen as the launch site. The committee included A.P.J. Abdul Kalam, E.V. Chitnis and others, who would become the founding members of the Indian space program. These people worked selflessly, without any stipend, as the Government had provided only minimal finances for the launch. The launch equipment was carried to the launching station on cycles and bullock carts.
Staff carrying the launch equipment on bullock carts and cycles (due to the non-availability of radiation-shielded vehicles).
Finally, on 21 November 1963, the first sounding rocket took off from the Thumba Equatorial Rocket Launching Station (TERLS) with two payloads: a sodium vapour payload (to study the upper atmosphere) and a magnetometer (to study the equatorial electrojet). Following this initial success, the Space Science and Technology Centre (SSTC) was set up in Thumba and a Satellite Telecommunication Earth Station was set up in Ahmedabad; Sarabhai could see his vision of the space program taking shape.
Later, in 1967, India built a sounding rocket of its own, the Rohini-75, which was launched from TERLS on 20 November the same year.
The space program soon made its mark in the international community when TERLS was dedicated to the United Nations, and the Indian Space Research Organisation (ISRO) was formed under the Department of Atomic Energy (DAE) on the 22nd anniversary of India's Independence.
Just when the space program was in full swing, everyone was numbed by the sad demise of Dr Vikram Sarabhai on 30 December 1971. However, the program continued under the leadership of E.V. Chitnis and A.P.J. Abdul Kalam.
After gaining its identity as an organization, ISRO started with airborne sensor experiments, which later paved the way for the development of thermal sensors, radars and photographic equipment, and an applications program evolved around these instruments. These experiments led to the launch of Bhaskara-I and Bhaskara-II under the "Satellite for Earth Observation (SEO)" program and marked India's entry into remote sensing.
Backtracking a little: on 1 June 1972, the Department of Space (DOS) and the Space Commission were set up, and ISRO was brought under the DOS. All these administrative steps in the space program fell into place just as Sarabhai had envisioned years earlier.
In 1975-76 a unique experiment called the Satellite Instructional Television Experiment (SITE) was conducted by SAC/ISRO; using the American ATS-6 satellite, educational television programs were broadcast for the first time. This initial success led to the Satellite Telecommunication Experiments Project (STEP), carried out with the French-German satellite Symphonie. On 1 April 1975 ISRO became an independent government organization.
Aryabhata, the first indigenously manufactured satellite of India.
On 19 April 1975 India launched its first indigenously manufactured satellite, Aryabhata, on a Soviet Kosmos-3M rocket from Kapustin Yar (also known as Volgograd Station), carrying experimental payloads in X-ray astronomy, aeronomy and solar physics. Finally, India had lived its dream of launching its own satellite.
Journey of India to Space
Author: Abhishek Avadhanam
While the Indian Space Program was deeply affected by the death of Vikram Sarabhai, the wheels he had set in motion kept turning to his design. He aspired to give India the ability to independently undertake its aerospace ventures, with satellite development and launch happening domestically.
Vikram Sarabhai had arranged the formation of the Thumba Equatorial Rocket Launching Station to help India leapfrog in the development of booster and aerospace technology by observing and learning from its international partners. Since TERLS was dedicated to the United Nations on 2 February 1968, over 2,200 sounding rocket launches have taken place from it. Notable rockets that have flown from Thumba include the Nike Apache, Arcas-1, Centaure-1, Dragon-1, Dual Hawk, Judy Dart, Menaka-1 and many more. All this activity was a great catalyst for the development of India's own launch vehicles.
ASLV leaving the Satish Dhawan Space Centre with a Rohini satellite on 24 March 1987.
The first domestically designed and built sounding rocket, Rohini-75, made its maiden flight on 20 November 1967. The Sriharikota Range, India's first satellite launch station, was set up in 1969, and it became fully operational with the launch of an RH-125 sounding rocket on 9 October 1971. After the death of Vikram Sarabhai on 30 December 1971, TERLS and the associated space establishments at Thiruvananthapuram were renamed the Vikram Sarabhai Space Centre in his honour.
Dr. Satish Dhawan, 3rd and longest serving Chairman of ISRO
After Vikram Sarabhai's demise, the mantle of ISRO's Chairman fell on Satish Dhawan, following a brief interim tenure of nine months by M.G.K. Menon. Under Satish Dhawan, India became a space-faring nation; we were only the sixth nation to do so. The Satellite Launch Vehicle (SLV-3), India's first orbital booster, successfully launched from the Sriharikota Range on 18 July 1980 and placed an Indian-built satellite, Rohini, into low Earth orbit. In total there were four SLV-3 launches between 1979 and 1983, of which only two were successful, but the knowledge gained led the way to the development of future advanced launch vehicles.
INSAT-1B, the first successful communication satellite of the INSAT program.
After several technology demonstrations via the Rohini, Bhaskara, APPLE and RS satellites, the Indian National Satellite System was commissioned with the launch of INSAT-1B on 30 August 1983. An earlier satellite, INSAT-1A, had failed prematurely in orbit. Today INSAT is the largest domestic communication system in the Indo-Pacific region. INSAT ushered in a revolution in television and radio broadcasting, telecommunications, and meteorology in India. Of the 24 satellites launched under the INSAT program, 11 are still operational.
Dr. U.R. Rao, 4th Chairman of ISRO
In 1984, U. R. Rao became the Chairman of ISRO. For the next ten years, he pushed for the development of domestic launch vehicles, further boosting India's status as a space-faring nation. But a very notable event also occurred during this time.
Wing Commander Rakesh Sharma, AC, became the first Indian to travel to space when he flew aboard Soyuz T-11 on 3 April 1984. As part of the Interkosmos programme, he, along with his crewmates Yury Malyshev and Gennadi Strekalov, launched from the Baikonur Cosmodrome in present-day Kazakhstan and flew in their Soyuz capsule to the Salyut 7 space station. Rakesh Sharma spent the next 7 days, 21 hours and 40 minutes aboard the station as a Research Cosmonaut, conducting several experiments in biomedicine and remote sensing. Famously, the crew of Salyut 7 took part in a joint television conference with Prime Minister Indira Gandhi. When she asked how his nation looked from outer space, Wing Commander Rakesh Sharma replied "Sare Jahan Se Accha". Thus, India became the 14th nation to send a citizen to space. Upon his return on 11 April 1984, he became the only Indian to be conferred the honour of Hero of the Soviet Union. He and his crew were also awarded the Ashoka Chakra.
Wing Commander Rakesh Sharma, First Indian Cosmonaut to travel to Space.
For ISRO, sending a man to space was the icing on the cake of all its achievements. The vision Vikram Sarabhai had dreamt of all those years ago was coming together. The development of the Augmented Satellite Launch Vehicle (ASLV) started in the early 1980s in an attempt to develop a small-lift launch vehicle capable of placing heavier payloads into low Earth orbit. The ASLV was launched four times between 1987 and 1994 from SHAR. After these initial developmental flights, ISRO terminated the program in favour of the PSLV due to insufficient funds.
Various programs were set up in the subsequent years that are still active and affect our day-to-day lives. These programs, which include various types of earth-sensing and telecommunication satellites, are too many to cover in this article, but chief amongst the developments of the coming years were the various launch vehicles.
The Polar Satellite Launch Vehicle getting launch-ready at SDSC
The Polar Satellite Launch Vehicle (PSLV) is a medium-lift launch vehicle. The PSLV can send payloads into sun-synchronous orbits, a service that before its introduction was available commercially only from Russia. ISRO had been looking into creating such a vehicle since 1978 to launch its IRS series of remote sensing satellites, and the booster it developed has a very elegant modular design, which gives the PSLV the ability to cater to a range of different mission requirements as well as to become a leading provider of rideshare services for small satellites. The first launch of the PSLV occurred on 20 September 1993. Today, after nearly 30 years of operational development, the PSLV is a popular launch vehicle that has flown over 50 times and has put satellites from over 33 countries into Earth orbit. Famous payloads include Chandrayaan-1, Mangalyaan and Astrosat. A notable event was the deployment of 104 satellites on the single PSLV-C37 flight on 15 February 2017.
The Indian Remote Sensing Programme started on 17 March 1988 with the launch of IRS-1A. Today, the IRS system is the largest constellation of remote sensing satellites for civilian use in the world. It supports the national economy in the areas of agriculture, forestry, ecology, geology etc.
GSLV Mk-II, the largest Indian satellite launch vehicle currently in operation.
The Geosynchronous Satellite Launch Vehicle (GSLV) project was initiated in 1990. The GSLV uses various proven components of the PSLV. The GSLV Mk I had a Russian-built KVD-1 cryogenic engine in its third stage; it first launched on 18 April 2001. The GSLV Mk II has a domestically built CE-7.5 cryogenic engine in its third stage, and it first launched on 15 April 2010. The GSLV mainly supported the launch of INSAT-class satellites.
The Sriharikota Range (SHAR) was renamed as the Satish Dhawan Space Centre (SDSC) in 2002 after ISRO's former Chairman, Satish Dhawan.
The GSLV Mk III is a medium-lift launch vehicle that is entirely different from the similarly named GSLV Mk I/Mk II. It was primarily designed to launch communication satellites to geostationary orbit, but it has also been identified as the launch vehicle for crewed missions. After a suborbital test on 18 December 2014, which carried the Crew Module Atmospheric Re-entry Experiment, the complete GSLV Mk III stack was successfully launched on 5 June 2017 from SDSC. Famous payloads include Chandrayaan-2 and, possibly in the near future, Gaganyaan.
GSLV Mk-III (left) and Gaganyaan (right)
The future of ISRO is very exciting. It includes everything from human spaceflight and planetary science to astronomy and extraterrestrial exploration. Under the Indian Human Spaceflight Programme, future Indian astronauts have already been sent for training, and plans for crewed missions before 2022 are underway. There are even plans for the construction of an Indian space station. Many new launch vehicles are also under development, some of them commercial boosters. Under the Reusable Launch Vehicle Technology Demonstration Programme, the RLV-TD was launched on 23 May 2016. The Unified Launch Vehicle (ULV) is a development project whose core objective is to design a modular architecture that could eventually replace the PSLV, GSLV Mk I/II and GSLV Mk III. New launch pads are being constructed at SDSC, and exciting new earth-sensing satellites and exploratory missions like Chandrayaan-3 are under development. This is truly a very exciting time for the Indian Space Programme!
The SSLV is another small-lift launch vehicle currently under development, which would be used to place payloads of up to 500 kg into LEO.
Introduction to ADCS
Author: Abhishek Avadhanam
Attitude Determination and Control refers to the process of determining the orientation of an object and bringing this object to the desired state. This task is broadly divided into two studies: Attitude Determination using Sensors and Estimation Algorithms, and Attitude Control using Actuators and Control Algorithms.
As you can probably imagine, the study of Attitude Determination and Control is not limited to spacecraft but extends to anything that can move in 3-D space. Thus, these concepts are used in various domains like robotics, aircraft, drones, submarines, rockets, satellites, self-driving cars etc. For a satellite, the Attitude Determination and Control Subsystem is responsible for the stabilization and maintenance of the desired orientation of the satellite.
To describe how an object is placed in space, three things are required: a frame of reference, a translation (the object's position in that frame) and a rotation (its orientation, or Attitude).
As with anything in life, proper Attitude is essential for success. In Engineering, this desired Attitude is one where the Body Fixed Frame of our object of study is aligned with a pre-defined Frame which acts as a reference. The responsibility of maintaining this Attitude falls on ADCS.
In Earth pointing satellites like ours, the Reference Frame is called the Orbital Reference Frame, which is dependent on the current position and the orbit of the Satellite. An Orbit Propagator is used to find our Translation as a function of time using initial inputs of position and velocity found using a GPS receiver.
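To give a flavour of what an orbit propagator does, here is a minimal two-body propagation sketch in Python. It assumes pure Keplerian motion (no J2, drag or other perturbations) and uses illustrative numbers; the propagator actually used on the satellite would be more elaborate:

import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def two_body_accel(r):
    # Acceleration of the satellite due to a point-mass Earth (km/s^2)
    return -MU * r / np.linalg.norm(r) ** 3

def propagate(r0, v0, dt, steps):
    # Propagate position and velocity with a fixed-step RK4 integrator
    r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
    for _ in range(steps):
        k1r, k1v = v, two_body_accel(r)
        k2r, k2v = v + 0.5 * dt * k1v, two_body_accel(r + 0.5 * dt * k1r)
        k3r, k3v = v + 0.5 * dt * k2v, two_body_accel(r + 0.5 * dt * k2r)
        k4r, k4v = v + dt * k3v, two_body_accel(r + dt * k3r)
        r = r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r)
        v = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return r, v

# Example: a roughly 700 km circular orbit (illustrative initial state),
# propagated for ten minutes with a one-second step
r_final, v_final = propagate([7078.0, 0.0, 0.0], [0.0, 7.5, 0.0], dt=1.0, steps=600)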
There are various ways you can represent the Attitude of an object. For example, popular representations of Attitude include Directional Cosine Matrices, Euler Angles, Rodrigues Parameters and Quaternions. We will explore the various methods of Attitude representations, the functioning of the Orbit Propagator and the construction of this Reference Frame in future articles.
The process of finding our orientation is known as Attitude Determination. We need at least two measured vector quantities to find our Attitude. These measurements are made using sensors, which are devices that measure a physical quantity and convert it into a signal that can be read. For a Spacecraft, we have a variety of such physical quantities and corresponding sensors we can use to find our Attitude. Our choices include Sun Sensors, Star Sensors, Horizon Sensors, Magnetometers, Gyroscopes etc. The output of these sensors is fed into an Estimation Algorithm. This algorithm deals with the errors and the noise in the output of the sensors and calculates our Attitude.
The choice of Sensors and Estimation Algorithms is heavily dependent on the mission profile and stability criteria. For example, the output of a horizon sensor can change between the sunlit and eclipse regions of the orbit; the sun sensor can’t be used in the eclipse region of the orbit; a star sensor may be difficult to use in the sunlit region etc. We will study the choice of our sensors and our estimation algorithms in future articles.
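As a concrete illustration of determination from two vectors, the sketch below implements the classic TRIAD method in Python: it builds a direction cosine matrix from two measurements (say, the Sun direction and the magnetic field) known both in the body frame and in the reference frame. This is only an illustrative sketch of one textbook method, not necessarily the estimation algorithm our subsystem will fly:

import numpy as np

def unit(x):
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def triad(b1, b2, r1, r2):
    # Direction cosine matrix taking reference-frame vectors into the body frame,
    # built from two vector observations (b1, b2 measured in the body frame;
    # r1, r2 their modelled directions in the reference frame). The pair b1/r1
    # is taken as the more accurate measurement and is preserved exactly.
    b1, b2, r1, r2 = unit(b1), unit(b2), unit(r1), unit(r2)
    tb1, tr1 = b1, r1
    tb2, tr2 = unit(np.cross(b1, b2)), unit(np.cross(r1, r2))
    tb3, tr3 = np.cross(tb1, tb2), np.cross(tr1, tr2)
    body = np.column_stack((tb1, tb2, tb3))
    ref = np.column_stack((tr1, tr2, tr3))
    return body @ ref.T

# Example with made-up measurements: a sun vector and a magnetic field vector
A_body_from_ref = triad(b1=[0, 0, 1], b2=[1, 0, 0], r1=[0, 1, 0], r2=[1, 0, 0])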
The next step after Attitude Determination is Attitude Control, which is the process of bringing a system from its current state to a reference state. Attitude Control makes use of a Control Algorithm, which gives a control signal as an input to an Actuator, which in turn changes the state of the system. An actuator is a device that turns a control signal into mechanical action. There are various choices of Actuators for spacecraft, which include Reaction Wheels, Control Moment Gyros, Magnetorquers, Thrusters etc. The process of choosing appropriate actuators is very interesting as it depends on the weight constraints, power constraints and the mission profile of the satellite.
Various control algorithms have been developed over the years. The two broad classifications are Open Loop and Closed Loop Control Algorithms. The Output of an Open Loop Control Algorithm is not dependent on the current state of the system whereas in Closed-Loop Control Algorithms continuous or periodic measurements of the state of the system are used to calculate the control signal. We will study the choice of actuators and the development and tuning of Control Algorithms in future articles.
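As a small, concrete example of a closed-loop law, the widely used B-dot detumbling controller commands a magnetic dipole proportional to the rate of change of the measured magnetic field, which drains the satellite's tumbling rate. The gain, sampling interval and field values below are purely illustrative, not our tuned flight values:

import numpy as np

K_BDOT = 1.0e-4   # controller gain (illustrative)
DT = 0.1          # magnetometer sampling interval in seconds (illustrative)

def bdot_dipole(b_now, b_prev):
    # Commanded magnetic dipole moment (A*m^2) from two successive body-frame
    # magnetometer readings (in tesla): m = -k * dB/dt
    b_dot = (np.asarray(b_now, dtype=float) - np.asarray(b_prev, dtype=float)) / DT
    return -K_BDOT * b_dot

# Example: the measured field rotates in the body frame because the satellite tumbles
m_cmd = bdot_dipole([2.0e-5, 1.0e-5, -3.0e-5], [2.1e-5, 0.8e-5, -2.9e-5])
# The magnetorquers then produce a torque tau = m x B that opposes the tumble.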
The development and use of these Sensors, Estimation Algorithms, Actuators and Control Algorithms requires various stages of hardware tests, and Software-In-Loop and Hardware-In-Loop Simulations.
As you will find very soon, ADCS is a fascinating subsystem that uses knowledge from various fields of engineering like Computer Science, Power Electronics, Control Systems, Estimation, Mechanical Engineering as well as core principles of Dynamics and Mathematics to help us maintain the proper Attitude of our Satellite. We are looking forward to exploring its various aspects with you.
Title: Frames of Reference
Authors: Smit Kamal and Carina
For a satellite to control its attitude, it needs to know its position and its orientation. Apart from this, the satellite may also require the positions and orientations of other objects in space like a star, the sun, the moon, other satellites, or other heavenly bodies in our galaxy. But how do we represent all this information?
In engineering and physics, we often deal with quantities known as vectors. Vectors are helpful as they can represent a quantity with a direction and a magnitude. To record a vector, we need a frame of reference. And in this situation, where everything, from the sun and the moon to the earth and the satellite, is in motion, we need to be extremely careful while selecting the frames.
Considering the non-inertial nature of satellites and the heavenly bodies, it would be ideal if we could make our observations from an inertial frame of reference, as it would ease our calculations by eliminating the pseudo forces that arise from using a non-inertial frame. But as no such frame is available to us, we must create our reference frames with respect to objects we can observe.
We need the following to describe a right-handed co-ordinate system: an origin, a fundamental (XY) plane, and the direction of the principal (X) axis within that plane; the Z-axis is taken normal to the fundamental plane, and the Y-axis completes the right-handed set.
Different frames of reference are described by the different ways these properties can be defined.
The Earth Centred Inertial (ECI) frame
The Geocentric-Equatorial Coordinate System, a.k.a. the Earth-Centred Inertial frame, has its origin at the centre of the earth. This frame is not fixed to the earth, as it does not rotate with it. The fundamental plane contains the earth's equator, the X-axis points towards the vernal equinox, the Z-axis points towards the geographical North Pole, and the Y-axis completes the right-handed set of co-ordinate axes.
Figure 1. The Earth Centred Inertial (ECI) frame
The vernal equinox is usually defined as “the place in the sky where the sun rises on the first day of Spring”. This definition is vague and confusing. To get a better picture of the vernal equinox, imagine the Sun’s orbit around the earth. Yes, the earth orbits the Sun, but from the point of view of the earth, the math is equally valid to say the opposite (remember the concept of relative motion?). The plane formed by the hypothetical orbit of the Sun around the earth is called the ecliptic which intersects the earth’s equatorial plane at two points, one where the Sun crosses the equator while ascending (going from the southern hemisphere to the northern), and the other one when the Sun is descending (going from the northern hemisphere to the southern). If you join these two points, you will get the nodal line for the Sun’s orbit. The Sun’s ascending node is called the Vernal Equinox.
The Perifocal Frame is popularly known as the "natural frame" for an orbit. Its origin is at the centre of the earth. The fundamental plane (XY plane) is the orbital plane, the X-axis is directed along the eccentricity vector (towards perigee), the Z-axis is in the direction of the satellite's angular momentum, perpendicular to the orbital plane, and the Y-axis completes the right-handed set of co-ordinate axes.
Figure 2. Perifocal Frame
For our convenience we treat the ECI frame and the Perifocal Frame as inertial frames; because of conservation of energy and angular momentum, the orbital plane and the angular momentum vectors of the earth with respect to the sun and of the satellite with respect to the earth remain constant in direction for an extended period of time. But we do require the use of some non-inertial frames as well, because certain things (for example, the earth's magnetic field) depend on the satellite's position with respect to the ground, and we require a body-fixed frame to analyse our satellite's dynamics.
Earth Centred Earth Fixed (ECEF) Frame
This frame keeps on rotating with the Earth. Its origin is at the centre of the Earth, its fundamental plane (XY plane) is the Earth’s equatorial plane, the X-axis points towards the point of intersection of the prime meridian and the equator, the Z-axis points towards the geographical north pole and the Y-axis completes the right-handed set of co-ordinate axes.
The green axes in the figure represent the ECEF frame.
Figure 3. Earth Centred Earth Fixed (ECEF) frame
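Because the ECEF frame rotates with the Earth while the ECI frame does not, converting a vector from ECI to ECEF amounts (ignoring precession, nutation and polar motion) to a single rotation about the common Z-axis by the angle the Earth has turned since an epoch at which the two frames coincided. A minimal Python sketch, with the epoch and rotation rate treated as illustrative assumptions:

import numpy as np

OMEGA_EARTH = 7.2921159e-5  # Earth's rotation rate, rad/s

def eci_to_ecef(r_eci, t):
    # Rotate an ECI position vector into the ECEF frame, t seconds after an
    # assumed epoch at which the X axes of the two frames were aligned
    theta = OMEGA_EARTH * t  # angle the Earth has rotated since the epoch
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[ c, s, 0.0],
                    [-s, c, 0.0],
                    [0.0, 0.0, 1.0]])
    return rot @ np.asarray(r_eci, dtype=float)

# Example: a point that lies on the ECI X-axis, one hour after the epoch
r_ecef = eci_to_ecef([7000.0, 0.0, 0.0], t=3600.0)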
Orbit Reference Frame
The orbit reference frame has its origin at the centre of mass of the satellite: the Z-axis points towards the centre of mass of the earth, the Y-axis points along the orbital angular momentum vector of the satellite, and the X-axis completes the right-handed set of co-ordinate axes. In this article, the axes of the orbit reference frame are denoted with a subscript R (XR, YR and ZR). This frame is independent of the satellite's orientation in space.
Figure 4. Inertial frame, Orbit Reference Frame and the Satellite Body frame
The inertial frame has been denoted by the axes set XI, YI, and ZI.
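Following the convention used in this article (ZR towards the Earth's centre, YR along the orbital angular momentum, XR completing the right-handed set), the orbit reference frame axes can be built directly from the satellite's position and velocity, for instance as obtained from the GPS receiver or the orbit propagator. A minimal sketch with made-up numbers:

import numpy as np

def orbit_reference_frame(r_eci, v_eci):
    # Returns the unit vectors of the orbit reference frame (as columns),
    # expressed in the inertial frame, from position r and velocity v
    r = np.asarray(r_eci, dtype=float)
    v = np.asarray(v_eci, dtype=float)
    z_r = -r / np.linalg.norm(r)          # ZR: towards the centre of the Earth (nadir)
    h = np.cross(r, v)                    # orbital angular momentum vector
    y_r = h / np.linalg.norm(h)           # YR: along the angular momentum
    x_r = np.cross(y_r, z_r)              # XR: completes the right-handed set
    return np.column_stack((x_r, y_r, z_r))

# Example: a roughly circular orbit at 7000 km radius (illustrative values)
axes = orbit_reference_frame([7000.0, 0.0, 0.0], [0.0, 7.5, 0.0])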
Satellite Body frame
The satellite body frame is fixed to the satellite's body, with its origin at the centre of mass of the satellite. This frame is used to represent the actual satellite in space. The XB, YB and ZB axes need to be perpendicular to each other and should point out of different faces of the satellite. An example of the satellite body frame is given in figure 5.
Figure 5. Satellite Body Frame
Title: Introduction to Communications and ground systems
Author: Himanshi Tanwer
The Communication and Ground Station Subsystem has the major goal of providing a robust link between the satellite and the ground station for receiving health-monitoring data and telemetry and for commanding the satellite using telecommands. It has the following objectives:
The communication on board the satellite is achieved by configuring the transmitter and receiver ICs through microcontroller code. This also includes the communication between the PCBs on board the satellite, which is achieved by various protocols that help transfer data across the satellite. All communication with the ground station takes place in the amateur UHF and VHF bands.
For communication, we would ideally use different frequency bands for the uplink, the downlink and payload image transmission. However, obtaining three different frequencies for the nanosat wasn't feasible, so we use two frequency bands: Very High Frequency (VHF) and Ultra High Frequency (UHF). Multiplexing techniques are used to perform two different functions in the same frequency band.
An antenna is the interface between radio waves propagating through space and electric currents moving in metal conductors, used with a transmitter or receiver. An antenna is an array of conductors electrically connected to the receiver or transmitter. Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional or high-gain antennas).
The basic function of an antenna is to convert an electric signal into an electromagnetic wave, in the case of transmission. And to convert electromagnetic waves into electric signals, in the case of reception.
Dipole Antenna works on the principle of electric dipole. When a positive charge and a negative charge oscillate linearly, they produce an electromagnetic wave propagating outward.
Similarly, in the antenna, the two ends of a rod act like the positive and negative charges. A varying voltage signal is given to the mid-point of the rod and electrons start to fluctuate between one end of the rod to the other. At a given time, the end where all the electrons accumulate becomes the negative end and the other end becomes positive.
This constant to-and-fro movement of electrons from one end to the other end of the rod produces an electromagnetic wave propagating outward.
The frequency of this wave is the same as the frequency of the varying voltage signal.
For perfect transmission, the length of the antenna rod should be half the wavelength of the transmitted wave.
The monopole is a type of antenna where a single rod is mounted above a ground plane. A varying voltage is applied to the lower end of the antenna with respect to the ground plane. When the voltage is applied, a virtual monopole identical to the physical monopole forms below the ground plane, so the electromagnetic wave generated is effectively half physical and half virtual.
For an ideal transmission or reception to take place, the total length of the physical and virtual monopole together must be half of the wavelength. Hence, the length of the physical monopole will be quarter-wavelength. This is an advantage of monopole antenna.
But because it only generates half of the physical electromagnetic wave, a monopole antenna has a lower radiation efficiency compared to a dipole antenna.
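To put numbers on the half-wave and quarter-wave rules above, the lengths follow directly from the wavelength λ = c/f. The two frequencies below are common amateur-band values used purely for illustration; they are not necessarily the frequencies coordinated for our mission:

C = 299_792_458.0  # speed of light in m/s

def half_wave_dipole_length(freq_hz):
    # Ideal length of a half-wavelength dipole, in metres
    return C / freq_hz / 2.0

def quarter_wave_monopole_length(freq_hz):
    # Ideal length of a quarter-wavelength monopole, in metres
    return C / freq_hz / 4.0

print(half_wave_dipole_length(145.8e6))       # VHF example: about 1.03 m
print(quarter_wave_monopole_length(437.5e6))  # UHF example: about 0.17 m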
Yagi-Uda is just a modified version of a simple dipole antenna. It consists of 3 parts:
The AC supply is fed to the driven element, which is the dipole antenna rod. Electromagnetic waves are generated from the antenna, propagating outward in both directions. The reflector component then comes into play by reflecting the backward radiation forward. Hence, backward radiation is reduced and forward radiation is enhanced.
The enhanced forward radiation is passed through a series of directors arranged in parallel. As a result, the forward radiation beam becomes more streamlined and its beamwidth decreases. The more director rods used, the more streamlined the beam gets. This is the reason why the Yagi-Uda is considered a directional antenna.
In our student project, after considering various factors we have chosen to mount a monopole antenna and a dipole antenna on our nano-satellite and use Yagi-Uda antennas for our Ground station.
The properties of the antennas, such as their length and material, are chosen by running simulations on appropriate software.
Title: Electronic components and ground station.
Authors: Himanshi Tanwer and Rakshit R Nayak
An amplifier is a device that increases the voltage, current or power of a signal. It takes an input signal waveform and produces a stronger waveform at the output, using an external power source.
Weak-signal amplifiers are used in wireless receivers for small input signals. Power amplifiers are used in wireless transmitters; they increase the power of the input signal.
There are various classes of Amplifiers:
Class A - A single transistor amplifies both the positive and negative halves of the waveform. The design is simple, but the active element conducts all the time (even when there is no input), so heat loss occurs and efficiency is low, about 25% in the common configuration. The conduction angle is 360° and signal distortion is low.
Class B - Two transistors are used: one amplifies the positive half and the other the negative half of the input signal. The conduction angle is 180° and efficiency is improved over Class A, about 75% theoretically, but the superposition of the two halves of the waveform leads to a small distortion in the crossover region.
Class AB - Efficiency is better than Class A and distortion levels are lower than Class B. A combination of resistors and diodes provides a bias voltage which reduces the distortion of the waveform near the crossover region; efficiency is around 60%.
Class C - Efficiency is the greatest and the conduction angle is low, around 90°, but quality is low and distortion is high. Class C amplifiers are used for HF oscillators or RF signals; they contain a tuned load which filters and amplifies input signals of a certain frequency while other waveforms are suppressed.
A filter is a circuit that removes, or “filters out,” a specified range of frequency components. In other words, it separates the signal’s spectrum into frequency components that will be passed and frequency components that will be blocked.
(i) Low Pass Filter (LPF): passes signals below the set cut-off frequency without attenuation and attenuates signals with frequencies above the cut-off.
(ii) Band Pass Filter (BPF): passes signals within a specific frequency range and attenuates signals with frequencies outside that range.
(iii) High Pass Filter (HPF): blocks low frequencies and passes high frequencies.
(iv) Band Stop Filter: blocks only a relatively narrow range of frequencies and passes everything else.
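As a small worked example of the low pass case: the simplest low pass filter is a single resistor-capacitor (RC) stage whose cut-off frequency is f_c = 1/(2πRC). The component values below are illustrative:

import math

def rc_lowpass_cutoff(r_ohms, c_farads):
    # -3 dB cut-off frequency of a first-order RC low pass filter, in hertz
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative values: 1 kilo-ohm and 100 nF give a cut-off of roughly 1.6 kHz
print(rc_lowpass_cutoff(1e3, 100e-9))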
Multiplexing is a way of sending multiple signals or streams of information over a communications link at the same time in the form of a single, complex signal. It is a technique used to facilitate transmission of multiple data streams over a single medium. The receiver recovers the separate signals, a process called demultiplexing.
Networks use multiplexing for two reasons:
(i) To make it possible for any network device to talk to any other network device without having to dedicate a connection for each pair. This requires shared media.
(ii) To make a scarce or expensive resource stretch further -- e.g., to send many signals down each cable or fiber strand running between major metropolitan areas, or across one satellite uplink.
In analog radio transmission, signals are commonly multiplexed using frequency-division multiplexing (FDM), in which the bandwidth on a communications link is divided into subchannels of different frequency widths, each carrying a signal at the same time in parallel. Analog cable TV works the same way, sending multiple channels of material down the same strands of coaxial cable.
In our satellite design, the uplink reception unit and the beacon transmission unit are multiplexed. Since both data streams are multiplexed, a switching mechanism is required; this is where the RF switch comes into play. An RF switch, or microwave switch, is a device that routes high-frequency signals through transmission paths.
A link budget is an accounting of all of the power gains and losses that a communication signal experiences in a telecommunication system; from a transmitter, through a medium (free space, cable, waveguide, fiber, etc.) to the receiver. It is an equation giving the received power from the transmitter power, after the attenuation of the transmitted signal due to propagation, as well as the antenna gains and feedline and other losses, and amplification of the signal in the receiver or any repeaters it passes through. A link budget is a design aid, calculated during the design of a communication system to determine the received power, to ensure that the information is received intelligibly with an adequate signal-to-noise ratio.
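In decibel form this bookkeeping reduces to a few additions and subtractions. The sketch below evaluates a Friis-style budget (received power = transmitted power + antenna gains - free-space path loss - other losses); all the numbers are placeholders for illustration, not our actual link parameters:

import math

C = 299_792_458.0  # speed of light, m/s

def free_space_path_loss_db(distance_m, freq_hz):
    # Free-space path loss in dB: 20*log10(4*pi*d/lambda)
    wavelength = C / freq_hz
    return 20.0 * math.log10(4.0 * math.pi * distance_m / wavelength)

def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, distance_m, freq_hz, misc_loss_db):
    # Received power in dBm after antenna gains, path loss and miscellaneous losses
    fspl = free_space_path_loss_db(distance_m, freq_hz)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl - misc_loss_db

# Illustrative downlink: 1 W (30 dBm) transmitter, 0 dBi satellite antenna,
# 12 dBi ground-station Yagi-Uda, 1000 km slant range at 437.5 MHz, 3 dB of misc losses
p_rx = received_power_dbm(30.0, 0.0, 12.0, 1.0e6, 437.5e6, 3.0)
# Comparing p_rx with the receiver sensitivity tells us whether the link closes.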
A Ground Station is a terrestrial station designed for communication with a satellite. It is the source of interaction with satellites. Hence, it is very important to establish a good communication link between the satellite and the ground station.
A ground station performs the following tasks:
(i) Tracking and determination of the position of the satellite.
(ii) Telemetry operations like acquiring and recording satellite data and status.
(iii) Commanding operations to control the satellite and its functions.
(iv) Storing the downlink payload data received from the satellite.
(v) Data processing operations to present the payload data collected by the satellite in the required format.
The basic flow of data in a ground station:
Title: The Payload Subsystem
Author: Stephen Eric
The Payload subsystem plays a very crucial role in the mission of a satellite. It is responsible for the proper functioning and post-launch operations of the satellite's payload, i.e. the components and scientific devices onboard the satellite which carry out the mission's primary and secondary objectives. In that sense, all other subsystems are designed to suit the requirements of the satellite's payload. There are two payloads onboard Parikshit: a thermal infrared imaging camera (the primary payload) and an electrodynamic tether de-orbiting mechanism (the secondary payload).
Satellites in orbit move at very high speeds, and once in orbit it is very difficult to bring a satellite back down. Since the beginning of space exploration, it has been standard practice to leave satellites in their orbits after their mission life has expired. These dysfunctional satellites continue to orbit the earth for decades and even centuries before atmospheric drag slows them down and they burn up during descent. In fact, without assistance, a CubeSat is estimated to take over 150 years to de-orbit from an 800 km altitude.
These satellites and the debris created by them orbit at very high speeds on criss-crossing paths and pose a deadly threat to future space travel. Around 2,600 defunct satellites and more than 530,000 pieces of satellite debris have been tracked in orbit, and it is estimated that there are at least 100,000,000 pieces so small that they cannot be tracked. This collection of debris is called space junk. Orbital speeds are so high that getting hit by a piece of debris the size of a pea is like being shot by a plasma gun: on impact, the debris vaporizes and produces enough energy to punch holes through solid metal.
On March 27, 2019, India announced that it had successfully completed an anti-satellite missile test, creating a new cloud of at least 400 pieces of debris, which increased the risk of impacts to the ISS by an estimated 44 percent over a 10-day period.
Currently, many space organisations are working on reducing space junk and preventing further space missions from contributing to it by installing de-orbiting mechanisms on satellites. The most common de-orbiting method is the use of thrusters, but thrusters require a large amount of fuel, and every kilogram of fuel must be carried up, which reduces the satellite's payload capacity and decreases efficiency. Therefore, it is necessary to come up with an efficient and cost-effective de-orbiting mechanism.
This is where Parikshit’s secondary payload comes into play.
Objective: To de-orbit the nanosat at the end of its mission life by using an electrodynamic tether (EDT)
An electrodynamic tether conducts current in order to act against the planetary magnetic field. Tethers can be active or passive. An active tether makes use of electron emitters to accelerate the flow of electrons between the tether and the space plasma, increasing the amount of current and power generated in the tether.
Even though an active tether offers higher current generation, and hence a faster rate of de-orbit, it requires integrating the tether and an electron emitter with the rest of the satellite system, which greatly increases complexity. Hence, to keep complexity to a minimum, and due to the unavailability of electron emitters, our payload team decided to go with a passive tether system, and so we will constrain our discussion to passive electrodynamic tethers only.
A passive EDT is essentially just a long piece of wire. At the end of the satellite's mission life, the tether will be deployed in the nadir direction (i.e., along the axis pointing from the satellite towards the earth). As the tether is dragged through the earth's magnetic field, an EMF is developed across its two ends, given by the relation:
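\[ \varepsilon \;=\; \int_{0}^{L} (\vec{v} \times \vec{B}) \cdot d\vec{l} \;\approx\; (\vec{v} \times \vec{B}) \cdot \vec{L} \]

where v is the orbital velocity of the tether, B is the local geomagnetic field and L is the tether's length vector; this is the standard motional-EMF form for a straight conductor, and the exact expression used by the team may differ in detail.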
A potential difference is created, with the higher end of the wire being positive, so current is driven in the zenith direction. Now that we have a current-carrying wire in a magnetic field, a force is exerted on it: the Laplace force, which is the macroscopic effect observed in the wire due to the Lorentz force acting on every point charge in it. The Laplace force is given by:
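\[ \vec{F} \;=\; \int_{0}^{L} I \, d\vec{l} \times \vec{B} \;\approx\; I \, \vec{L} \times \vec{B} \]

where I is the current flowing in the tether; again, this is the standard textbook form for a straight current-carrying conductor in a field B.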
This force acts in the direction opposing the velocity of the wire, in accordance with Lenz's law, and thus effectively retards the satellite over time, until it descends to a lower altitude and burns up during re-entry due to atmospheric drag and aerodynamic heating.
By using this simple de-orbiting model, the budget of the mission is greatly reduced, and the model is highly efficient because no extra fuel has to be carried for thrusters. It is important to note that this method of de-orbiting is still very experimental and is being researched by space agencies like the European Space Agency; similar space tether models have been tested by NASA, most notably in the SEDS missions of 1993 and 1994.
However, the use of a passive electrodynamic tether as the de-orbiting mechanism for a satellite is the first of its kind. If the mission is successful and the objective is accomplished, this experiment would prove to be a huge advancement in the space industry and in the prevention of space junk.
For the tether to properly de-orbit the satellite, it needs to be stabilised in a nadir orientation. This sounds difficult to achieve given the high orbital speed of our nanosat. Not only does the tether need to point in the nadir direction, there must also be tension in the wire to keep it taut and prevent it from coiling around the satellite's body. This is achieved by exploiting the gravity gradient.
As we know, gravity follows the inverse-square law: the force of gravity exerted by one mass on another is inversely proportional to the square of the distance between them. The tether being used for our mission is around 300 metres long, which means the distance between the tether spool (the mass attached to the end of the tether) and the satellite body will be about 300 metres. This distance is enough to create a noticeable difference in the gravitational pull of the Earth experienced by the two masses. Also, since we analyse the motion in a frame rotating with the orbit, a centrifugal force must be taken into account, which varies linearly with the distance r from the centre of mass of the satellite + spool + tether system; over a length of 300 m this centrifugal force also forms a gradient along the tether. By extending the structure's long axis along the local vertical, the "lower" part of the orbiting structure (the spool) is attracted more strongly towards the Earth. The effect is that the satellite tends to align its axis of minimum moment of inertia vertically.
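For a rough sense of this restoring effect, the standard result for a small end mass m hanging a distance l below the centre of mass of a vertically aligned structure in a circular orbit gives a gravity-gradient tension of approximately

\[ T \;\approx\; 3\, m\, n^{2}\, l \]

where n is the orbital angular rate. This is only the textbook dumbbell approximation, not our detailed tether model, but it shows that even a light spool at the end of a 300 m tether feels a small, steady pull that keeps the wire taut and aligned.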
However, it is not possible to align the tether in perfect nadir orientation even after stabilization. This is because of the Lorentz force that was discussed earlier.
Since the tether is not a rigid body, its shape after stabilisation is not straight. As discussed earlier, the Lorentz force opposes the direction of the satellite's velocity, causing electrodynamic drag. This causes the tether to deviate from the nadir orientation in a curved manner.
Title: Thermal Satellite Imagery
Author: Deeksha Sabhari
Satellite imagery or Earth observation imagery are images of the Earth (or other planets) collected by imaging satellites. Satellite images find application in meteorology, geology, cartography, intelligence and warfare, to name a few. Images may be in visible colours and in other spectra. Interpretation and analysis of such imagery is conducted using specialized remote sensing software.
The first images from space were taken on suborbital flights. The U.S.-launched V-2 flight on October 24, 1946 took one image every 1.5 seconds. With an apogee of 105 km, these photos were taken from five times higher than the previous record, the 22 km reached by the Explorer II balloon. The first satellite (orbital) photographs of Earth were taken on August 14, 1959 by the U.S. Explorer 6.
The first crude image taken by the satellite Explorer 6 shows a sunlit area of the Central Pacific Ocean and its cloud cover. The photo was taken when the satellite was about 17,000 mi (27,000 km) above the surface of the earth on August 14, 1959. At the time, the satellite was crossing Mexico.
Thermal imaging is an example of infrared imaging that concerns everything from the generation, collection, analysis, modification and visualisation of images.
All bodies emit energy in the electromagnetic spectrum as a function of their temperature. Thermal cameras, or thermographic cameras, usually detect radiation in the long-infrared range of the electromagnetic spectrum (8-15 µm). In this region, sensors can obtain a completely passive picture of the outside world, based on thermal emissions alone, and require no external light or thermal source.
At any temperature above absolute zero, an object radiates infrared energy through the motion of the atoms and molecules on its surface. The intensity of this radiation is a function of the temperature of the material (in simpler terms, the higher the temperature, the greater the intensity of IR energy emitted, and vice versa). Try to think of a relation between the temperature of an object and the irradiance incident on the surface of the infrared detector. To begin with, describe these terms: emissivity, irradiance and radiance.
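As a starting point for that exercise, the Stefan-Boltzmann law, in its grey-body form, relates the total power radiated per unit area (the radiant exitance M) to temperature as

\[ M \;=\; \varepsilon \, \sigma \, T^{4}, \qquad \sigma \approx 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \]

where ε is the emissivity of the surface. The irradiance at the detector then follows from this exitance and the viewing geometry.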
Satellite thermal infrared images of Iceland-Faroes front and associated eddies. Iceland and the Faroes Islands are outlined by the white dots. North is upward. The purple and blue colors represent cold arctic water. The red and yellow colors indicate warmer water of the Gulf Stream.
The primary payload of the Parikshit student satellite is a thermal imaging camera that will take thermal infrared images of the Indian subcontinent. These observations in the infrared spectrum give us information about the Earth and its atmosphere which normally cannot be obtained from the visible region. Careful processing of these images helps us understand and apply them practically, for example in studying urban heat islands (metropolitan areas far warmer than their surroundings) and in cloud monitoring, mapping the thermal distribution of clouds to help us understand their effect on climate.
A bolometer is a device for measuring the power of incident electromagnetic radiation via the heating of a material with a temperature-dependent electrical resistance. A microbolometer is a specific type of bolometer used as a detector in a thermal camera. The Long Wave Infrared Radiation (LWIR) is incident upon the detector material, and heats it, changing its electrical resistance. This resistance change is measured and processed into temperatures which can be used to create an image. Microbolometers, unlike other infrared detecting equipment, do not require cooling.
Image of an Uncooled Microbolometer
The long-wave infrared band covers a range of 8-15 µm. The radiation emitted by terrestrial objects lies in the LWIR, and that is what the sensor picks up; thus, the sensor responds to the LWIR region only. The spread of flux due to objects at widely varying temperatures is smaller in the LWIR band than in the MWIR band when observing a scene with the same range of temperatures. Put simply, an LWIR imaging system can image and measure ambient-temperature objects with high sensitivity and resolution and, at the same time, extremely hot objects as well. This is consistent with our payload applications, which require the measurement and observation of phenomena at average terrestrial temperatures. Objects at temperatures as low as -100°C emit very little IR radiation; an LWIR system can measure the radiation associated with even such low temperatures, with high sensitivity.
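Wien's displacement law makes the same point quantitatively. The wavelength of peak emission is

\[ \lambda_{\max} = \frac{b}{T}, \qquad b \approx 2898\ \mu\mathrm{m\,K} \]

so a body at a typical terrestrial temperature of about 300 K peaks near \( 2898/300 \approx 9.7\ \mu\mathrm{m} \), squarely inside the 8-15 µm LWIR band that the microbolometer responds to.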
Title: Microcontroller Basics
Author: Ritika T
A microcontroller is a small computer embedded on a single chip (a computer-on-a-chip). A microcontroller typically includes a microprocessor, memory, and some input/output peripherals on that single chip.
Microcontrollers are embedded in other devices to run a program; an MCU is dedicated to one task and runs one specific program. From vehicles and medical devices to vending machines, they are used everywhere as simple miniature computers.
The main program is stored in the ROM (read only memory) and generally does not change. A microcontroller has a dedicated input device, and the output is usually through an LED or LCD display.
One of the most significant requirements of an MCU in an embedded system is to reduce power consumption and space.
Elements of a Microcontroller:
Image showing different elements of a microcontroller
Timing and Clocks
A Clock is a device (a microchip) that is used to define the timing and speed of all the functions a computer performs.
A clock signal is a signal that oscillates between high and low, and the circuit follows it to coordinate its actions. A clock signal is produced by a crystal resonator (crystal); this crystal vibrates at a specific frequency when electricity is applied. The physical shape and size of the crystal are important as they determine the frequency of oscillation.
Internal clock -- the Real-Time Clock provides a precise time and date, which can be used for time- and date-based functions.
Cycle time -- the duration of one clock cycle.
Jitter -- random variations in the time at which an edge of the pulse makes its transition. Viewed in the frequency domain, jitter is called phase noise.
Propagation delay -- the output changes a finite time after the input is received; it is the delay from a change at the input to the corresponding change at the output.
Glitch -- a short-lived incorrect state in a digital system; glitches occur due to propagation delays.
Multiple clocks -- if more than one clock is used, they can be defined with different waveforms or frequencies; there is a base period over which all these waveforms repeat, and this base period is the LCM of all the individual periods.
Timers vs. Counters -- a timer measures time intervals: it counts down from a specified value and is used to generate a time delay. A counter stores the number of times a particular event has occurred with respect to a clock signal. A timer uses the internal clock to produce a delay, whereas a counter uses an external clock signal to count pulses.
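To make the relationship between the clock, the prescaler and a timer delay concrete, here is a small calculation of the number of timer ticks needed for a desired delay. The clock frequency, prescaler and delay are illustrative values, not tied to any particular MCU:

def timer_ticks(f_clock_hz, prescaler, delay_s):
    # Each timer tick lasts prescaler / f_clock seconds, so the number of
    # ticks needed for the requested delay is delay / tick
    tick = prescaler / f_clock_hz
    return round(delay_s / tick)

# Illustrative: an 8 MHz clock with a prescaler of 64 needs 1250 ticks for a 10 ms delay
print(timer_ticks(8_000_000, 64, 0.010))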
Phase Locked Loop
A phase-locked loop is an electronic circuit with an oscillator which constantly adjusts to match the frequency of an input signal. It consists of a voltage-controlled oscillator (VCO) and a phase comparator: the VCO seeks and locks onto the desired frequency, and when there is a difference between the VCO frequency and the reference frequency, the phase comparator produces an error signal which is used to correct the VCO.
Title: Operating Systems, Context Switching and Scheduling
Author: C Sai Kasyap
Interrupts and Interrupt Handling:
Interrupts are special signals sent to the CPU by hardware or software. They can be masked: sometimes the OS will prioritize the current process and the interrupts are not read. Non-maskable interrupts (NMI) must be dealt with immediately, regardless of other tasks.
Context switching is a feature of a multitasking OS; it allows a single CPU to be shared by multiple processes.
It involves storing the context/data of the process so that it can resume execution from the same point later. This context is stored in the Process Control Block.
Context switching may take place when it is required to shift between user mode and kernel mode, or when the CPU would otherwise have to wait on a slow device such as a disk: since disks are slow, the waiting process is stopped for the moment and the CPU is switched to another process.
Context switching between two threads of the same process is faster than between processes.
Time multiplexing - the resources are used by multiple users, and each user has access for an interval of time. Several processes wait for access to the CPU, and by using context switching the CPU switches between the processes.
Process scheduling is the removal of a running process from the CPU and the selection of another process; the OS allows more than one process to be loaded into executable memory, and the loaded processes share the CPU.
All the PCBs are stored in queues, processes in the same execution state are placed in the same queue.
Types of Schedulers
Swapping – A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to the secondary storage.
Throughput – It is the number of processes that complete execution in a unit of time.
Turnaround Time – Amount of time to execute a certain process.
Waiting time – Amount of time a process has been waiting in the ready queue for execution.
The OS uses different policies to maintain each queue. A scheduling system allows one process to use the CPU while another is waiting for I/O, which makes the overall system more efficient.
Pre-emptive - when the OS favours another process, pre-empting the currently executing one, it temporarily suspends the running process. Pre-emption can be caused by interrupts, traps (generated by the user or the system, used to handle errors) or supervisor calls (explicit requests to the kernel to perform some function).
Non-pre-emptive - when the currently executing process gives up the CPU voluntarily; the scheduler runs each process to completion.
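To make the throughput, turnaround-time and waiting-time definitions concrete, here is a small sketch that computes them for a non-pre-emptive, first-come-first-served schedule; the process arrival and burst times are made up for illustration:

def fcfs_metrics(processes):
    # processes: list of (arrival_time, burst_time) tuples, sorted by arrival time
    # Returns (average turnaround time, average waiting time)
    clock = 0
    turnaround, waiting = [], []
    for arrival, burst in processes:
        start = max(clock, arrival)   # the CPU may sit idle until the process arrives
        finish = start + burst
        turnaround.append(finish - arrival)
        waiting.append(start - arrival)
        clock = finish
    n = len(processes)
    return sum(turnaround) / n, sum(waiting) / n

# Three made-up processes: (arrival, burst)
avg_turnaround, avg_waiting = fcfs_metrics([(0, 5), (1, 3), (2, 8)])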
Title: Solar Cells
Author: Manoj T
With rising growth in population and technological advancements, the world’s energy demand is also growing fast. Therefore, it is important to go for a reliable, cost-effective and everlasting renewable energy source for energy demand arising in the future. Solar energy, among other renewable sources of energy, is a promising and freely available energy source for managing long term issues in the energy crisis. The solar industry is developing steadily all over the world because it is superior in terms of availability, cost-effectiveness, accessibility, capacity, and efficiency compared to other renewable energy sources.
Working of the solar cell:
A solar cell is a sandwich of n-type silicon (blue) and p-type silicon (red). It generates electricity by using sunlight to make electrons hop across the junction between the different layers of silicon:
The absorption coefficient determines how far into a material light of a particular wavelength can penetrate before it is absorbed. In a material with a low absorption coefficient, light is only poorly absorbed, and if the material is thin enough, it will appear transparent to that wavelength. The absorption coefficient depends on the material and also on the wavelength of light which is being absorbed. Semiconductor materials have a sharp edge in their absorption coefficient since light which has energy below the bandgap does not have sufficient energy to excite an electron into the conduction band from the valence band. Consequently, this light is not absorbed. The absorption coefficient for several semiconductor materials is shown below.
The relationship between the absorption coefficient and wavelength makes it so that different wavelengths penetrate different distances into a semiconductor before most of the light is absorbed. The absorption depth is given by the inverse of the absorption coefficient, or α-1. The absorption depth is a useful parameter which gives the distance into the material at which the light drops to about 36% of its original intensity or alternately has dropped by a factor of 1/e. Since high energy light (short wavelength), such as blue light, has a large absorption coefficient, it is absorbed in a short distance (for silicon solar cells within a few microns) of the surface, while red light (lower energy, longer wavelength) is absorbed less strongly. Even after a few hundred microns, not all red light is absorbed in silicon.
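In equation form, this is the exponential attenuation law

\[ I(x) \;=\; I_{0}\, e^{-\alpha x} \]

so at a depth equal to the absorption depth, \( x = 1/\alpha \), the intensity has fallen to \( I_{0}/e \approx 0.37\, I_{0} \), which matches the roughly 36% figure quoted above.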
Different semiconductor materials have different absorption coefficients. Materials with higher absorption coefficients more readily absorb photons, which excite electrons into the conduction band. Knowing the absorption coefficients of materials aids engineers in determining which material to use in their solar cell designs.
In Parikshit, we use improved triple-junction solar cells (ITJ) which are provided by Spectrolab. We use this particular type of solar cell as the triple junction ensures that a larger majority of the light spectrum is utilized by the cells to produce the maximum possible power. Therefore, these cells have a comparatively high beginning of life (BOL) efficiency of 26.8%.
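As a rough, illustrative order-of-magnitude check (the illuminated area and irradiance below are assumed example values, not our actual panel figures), the electrical power from a sunlit panel is roughly the cell efficiency times the solar irradiance times the illuminated area:

\[ P \;\approx\; \eta\, G\, A \;\approx\; 0.268 \times 1361\ \mathrm{W/m^{2}} \times 0.03\ \mathrm{m^{2}} \;\approx\; 11\ \mathrm{W} \]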
Title: Power Management
Author: Archisha Tripathi
Power management is important in any distributed generation system. It is responsible for trading off efficiently between the generated (or stored) energy and the consumed energy, and it helps in lightening the load on the battery. Battery management is cardinal when the energy generated by such a distributed system gets stored in the battery pack.
Thus, factors like energy density, number of charge-discharge cycles, battery capacity, Coulombic efficiency etc. should be pondered upon while choosing a battery. The chosen cells are then arranged in one of the design extremes, PCM (parallel cell module) or SCM (series cell module), keeping in mind that a series arrangement gives us a higher voltage and a parallel arrangement gives us a higher current capacity.
Once the battery pack is chosen, it is embedded with a battery management system (BMS). By incorporating a BMS we get the following advantages:
Once power gets managed at the source level by the BMS, it needs to be managed at the load level as well, especially if the components at the load level are complexly integrated. Thus, the delineation of a certain algorithm, namely the power management algorithm, comes into the picture.
Image describing State of Charge and Depth of Discharge
Power Management Algorithm (PMA)
A PMA has varied implementations in different fields where energy is generated via non-conventional means (distributed generation systems). These algorithms are designed with the intent of optimising the power being consumed, thus keeping the battery's SOC at optimal levels.
Speaking of complexly integrated systems, one such system is a nanosatellite. From here onwards we will talk about PMA implementation with reference to a nanosatellite. Now let us take a look at power system modelling.
POWER SYSTEM MODELLING
Modelling of the power system would help us understand where exactly power loss occurs and the amount of current being drawn so that preventive measures can be taken if it exceeds the threshold.
The on-board loads are first divided into operational modes based on the tasks they perform, so that the instantaneous power drawn can be controlled. There are many ways to logically implement a PMA; we are discussing one such algorithm here. After identification of the operational modes of the satellite system, the peripherals associated with each mode are categorized and a conditional, priority-based model is realised. These modes keep the state of charge of the battery in check and ensure that the depth of discharge doesn't exceed a certain defined percentage. This way the maximum number of charge-discharge cycles can be obtained from the rechargeable battery. All these loads drain the battery periodically rather than simultaneously, ensuring that a low current is drawn from the battery, thus countering over-heating.
These modes are designed such that power-hungry tasks don’t run simultaneously. Hence scheduling is done to balance the charge that goes into and out of the battery.
The mode switching depends on multiple factors such as:
In the case of over-discharging, the system falls back to the mode in which the maximum number of components are switched off, and recharging of the battery becomes the major concern. The threshold DOD is decided by simulation and battery characterization. Regions of interest across the globe are identified in order to switch off the transceiver where HAM operation is not possible.
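A minimal sketch of such conditional, priority-based mode selection, keyed to the battery's state of charge, is shown below. The mode names, thresholds and inputs are invented for illustration; the flight values would come from simulation and battery characterization as described above:

# Illustrative state-of-charge thresholds (fractions of full charge), not flight values
SAFE_SOC = 0.30      # below this, shed almost every load and prioritise recharging
NOMINAL_SOC = 0.60   # above this, power-hungry tasks may be scheduled

def select_mode(soc, in_eclipse, over_region_of_interest):
    # Pick an operational mode from the battery state of charge and orbit context
    if soc < SAFE_SOC:
        return "SAFE"      # maximum number of components switched off
    if in_eclipse or soc < NOMINAL_SOC:
        return "IDLE"      # essential subsystems only
    if over_region_of_interest:
        return "PAYLOAD"   # imaging / downlink tasks allowed
    return "NOMINAL"

mode = select_mode(soc=0.55, in_eclipse=False, over_region_of_interest=True)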
Hence, with more such considerations, a PMA can be efficiently implemented. It can prove to be efficacious in increasing the system's life. While designing such algorithms, the battery is often given the utmost importance, followed by the other components on board. A PMA aids us in improving the SOH (state of health, which quantifies the ageing of a cell) and the SOL (the percentage of calendar time left) of the battery.