On the Performance of MATLAB and Parallel Computing


MATLAB is one of the most powerful scientific computing tools, along with Python. Although Python is my favorite scientific programming language, since it is open source, well documented and has plenty of libraries, I sometimes use MATLAB, especially when dealing with very large matrices: MATLAB is highly optimized for large-scale matrix operations and consequently performs better at processing them.



From a parallel computing perspective, MATLAB strives to utilize all available CPU cores to maximize performance and reduce computation time whenever possible. It therefore performs a kind of implicit parallel computing where it can, for example in matrix operations, since these operations are very well suited to running in parallel. However, this parallel operation can be restricted by poor coding practices, especially the use of for or while loops, because such loops are generally executed serially with an increasing or decreasing index. Of course, MATLAB has advanced parallel processing libraries for more specialized work, but the best way to shorten computing time and improve performance is to avoid loops where possible. For example, instead of performing K separate MxN matrix multiplications in a loop, a single multiplication over one KxMxN array (where K, M and N are large) would be enough; the resulting array can then be sliced if necessary for further operations. This can shorten the computation time by a large factor (10 times or more, depending on the matrix size and operation), because MATLAB can perform the batched operation in parallel using its core libraries.
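Since this blog mixes MATLAB and Python, here is a minimal sketch of the same vectorization idea in Python/NumPy (the principle is identical in MATLAB, e.g. with pagewise multiplication): a single batched multiplication over a stacked array replaces an explicit loop. The sizes below are purely illustrative.

    import time
    import numpy as np

    K, M, N = 1000, 64, 64            # batch count and matrix sizes (illustrative)
    A = np.random.rand(K, M, N)
    B = np.random.rand(K, N, M)

    # Loop version: one small multiplication per iteration.
    t0 = time.perf_counter()
    C_loop = np.empty((K, M, M))
    for k in range(K):
        C_loop[k] = A[k] @ B[k]
    t_loop = time.perf_counter() - t0

    # Vectorized version: one batched multiplication over the whole stack;
    # np.matmul broadcasts over the leading (batch) axis.
    t0 = time.perf_counter()
    C_batch = A @ B
    t_batch = time.perf_counter() - t0

    assert np.allclose(C_loop, C_batch)
    print(f"loop: {t_loop:.4f} s, batched: {t_batch:.4f} s")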


Open-Source or Public Datasets for Machine Learning Studies and Research



Machine learning (ML) techniques have been applied in many areas, from academia to industry, and have started to influence our daily lives, for example in social media applications and online shopping. Hence, many machine learning algorithms have been developed to improve the performance of these applications.



While learning the basics of machine learning or developing new algorithms, it is essential to have reliable and large datasets that include logical connections and labels between data members. Especially in academia, well-known and extensively examined datasets are necessary in order to investigate the performance of newly developed machine learning algorithms and compare them to existing ones.

There is a large number of publicly available datasets that can be used with various machine learning techniques such as deep learning, classification, reinforcement learning, clustering, etc. I would like to present the datasets that I particularly like to use:

1. UC Irvine Machine Learning Repository

Nearly all datasets in this repository have been published by university researchers. It offers a wide range of datasets from various areas, from marketing to wireless communication systems, and most of them are well documented. Link

2. Deeplearning.net Datasets

These datasets are mainly intended for benchmarking deep learning algorithms. Link

3. Wikipedia: List of Datasets


A Wikipedia page lists plenty of datasets with comprehensive details, including their format, creators, reference studies and descriptions. Link




Extra: MIT Lectures on Machine Learning and Deep Learning








MATLAB Phased Array Toolbox and Radar Examples



MATLAB is, in addition to Python, one of the best software packages for scientific and engineering research and computation. The Phased Array System Toolbox of MATLAB provides a solid solution for antenna array analysis and radar research. Furthermore, MathWorks, the company behind MATLAB, provides extensive documentation for this toolbox. Here I would like to present useful examples and documentation for MATLAB radar studies, mainly from the MathWorks website.


  1. Radar Data Cube: the fundamental data structure for received radar data (see the sketch after this list). https://uk.mathworks.com/help/phased/gs/radar-data-cube.html
  2. Building and Processing a Radar Data Cube: https://uk.mathworks.com/company/newsletters/articles/building-and-processing-a-radar-data-cube.html
  3. Designing a Basic Monostatic Pulse Radar: https://uk.mathworks.com/help/phased/examples/designing-a-basic-monostatic-pulse-radar.html
  4. Basic Radar Using Phase-Coded Waveform: https://uk.mathworks.com/help/phased/ug/basic-radar-using-phase-coded-waveform.html
  5. Increasing Angular Resolution with MIMO Radars: https://uk.mathworks.com/help/phased/examples/increasing-angular-resolution-with-mimo-radars.html
  6. Range-Doppler Response Estimation using MATLAB: https://uk.mathworks.com/help/phased/ug/range-doppler-response.html
  7. Matched Filter: https://uk.mathworks.com/matlabcentral/answers/4502-matched-filter
  8. Doppler Shift and Pulse-Doppler Processing: https://uk.mathworks.com/help/phased/ug/doppler-shift-and-pulse-doppler-processing.html
  9. Range-Speed Response Pattern of Target: https://www.mathworks.com/examples/phased-array/mw/phased-ex92695153-range-speed-response-pattern-of-target
  10. Angle-Doppler Response to Stationary Target at Moving Array: https://www.mathworks.com/examples/phased-array/mw/phased-ex13073217-angle-doppler-response-to-stationary-target-at-moving-array
  11. Automotive Adaptive Cruise Control Using FMCW Technology: https://uk.mathworks.com/help/phased/examples/automotive-adaptive-cruise-control-using-fmcw-technology.html
  12. Doppler Estimation: https://uk.mathworks.com/help/phased/examples/doppler-estimation.html
  13. Radar Signal Simulation and Processing for Automated Driving: https://uk.mathworks.com/help/driving/examples/radar-signal-simulation-and-processing-for-automated-driving.html
  14. Waveform Design to Improve Performance of an Existing Radar System: https://uk.mathworks.com/help/phased/examples/waveform-design-to-improve-performance-of-an-existing-radar-system.html
  15. Periodogram Power Spectral Density Estimate: https://uk.mathworks.com/help/signal/ref/periodogram.html
  16. Radar Waveform Analyzer: https://uk.mathworks.com/help/phased/ref/radarwaveformanalyzer.html
  17. Simultaneous Range and Speed Estimation Using MFSK Waveform: https://uk.mathworks.com/help/phased/examples/simultaneous-range-and-speed-estimation-using-mfsk-waveform.html
  18. Concepts of Orthogonal Frequency Division Multiplexing (OFDM) and 802.11 WLAN: http://rfmw.em.keysight.com/wireless/helpfiles/89600b/webhelp/subsystems/wlan-ofdm/content/ofdm_basicprinciplesoverview.htm
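As referenced in item 1, here is a minimal NumPy sketch of the radar data cube layout. The sizes are purely illustrative, and the axis ordering is one common convention rather than the toolbox's exact one: received complex samples are stacked along fast time (range samples within a pulse), slow time (pulse index) and receive channel.

    import numpy as np

    # Illustrative dimensions of a radar data cube.
    num_fast, num_pulses, num_chan = 200, 64, 8

    rng = np.random.default_rng(0)
    # Complex baseband noise standing in for received echoes.
    cube = (rng.standard_normal((num_fast, num_pulses, num_chan))
            + 1j * rng.standard_normal((num_fast, num_pulses, num_chan)))

    # Doppler processing is then simply an FFT along the slow-time axis.
    range_doppler = np.fft.fftshift(np.fft.fft(cube, axis=1), axes=1)
    print(cube.shape, range_doppler.shape)   # (200, 64, 8) for both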

Quantum Computing


Quantum computing is the use of quantum mechanics within computing in order to decrease the number of steps needed to find the solution to a problem. Quantum computers use qubits (quantum bits) instead of bits, and these qubits can be manipulated in ways that classical bits cannot, through quantum entanglement and superposition. A number of physical objects can serve as qubits, such as photons, nuclei or electrons, which means that while classical computers today are limited because their components cannot be shrunk much further, quantum computer components would only have to be a few atoms in size.
                     


Quantum computers hold many advantages over classical computers. One such advantage is that while a classical bit can only be a zero or a one at any given time, a qubit can exist in any superposition of these values until it is measured. When harnessed, this allows quantum computers to process a vast number of calculations simultaneously, since 1s, 0s and superpositions of both are used. This means that certain processes once thought to be impossible become feasible with this new technology. It also means that while n classical bits represent exactly one n-bit value at a time, a register of n qubits can hold a superposition over two to the power of n basis states. So if, sometime in the future, a quantum computer could harness 300 qubits, 2 to the power of 300 would exceed the estimated number of particles in the observable universe.
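Both points can be checked with a few lines of NumPy; this is a hedged toy sketch, not a quantum simulator. A Hadamard gate puts a single qubit into an equal superposition, and the state vector a simulator must track doubles with every added qubit.

    import numpy as np

    # A qubit state is a unit vector in C^2; |0> = (1, 0).
    # The Hadamard gate maps |0> to an equal superposition of |0> and |1>.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    ket0 = np.array([1.0, 0.0])
    print(H @ ket0)                 # ~[0.707 0.707]: equal amplitudes

    # The state space doubles with each qubit: n qubits need 2**n amplitudes.
    for n in (1, 2, 10, 300):
        print(n, "qubits ->", format(2.0 ** n, ".3e"), "basis states")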



Although quantum computing is still in the early stages of research, the technology already has some present-day uses where it trumps classical computing. An algorithm named Shor's algorithm showed that where a classical computer factors integers and extracts discrete logarithms in exponential time, a quantum computer can do so in polynomial time (Warren, 1997) (Bennett et al., 1997). Polynomial time is vastly more efficient than exponential time, and Shor's algorithm has already earned quantum computing a place in today's world of mathematics; this raises the question of whether all other mathematical problems could also be solved efficiently in quantum polynomial time.

Use of Artificial Intelligence Methods in Computer Games



The use of artificial intelligence (AI) has been developing over several years. In its early years AI started off as experiments and scientific research; it is now an inevitable part of our daily lives. The use of artificial intelligence in computer games has also developed tremendously over the past years and is now widely used throughout the industry. This essay will investigate the advancements of artificial intelligence used in computer games and will also explore a range of artificial intelligence methods that have been used, along with the basic theory behind them. Artificial intelligence is the presentation, by computers or anything computerized, of intelligence that resembles human intelligence. It is incorporated into our day-to-day lives in many ways. For example, AI is heavily used for security in banks and homes, in smartphones and electrical devices, and in the driverless vehicles that we now see more often; these use methods similar to the artificial intelligence used in computer games.



In a large number of computer games, artificial intelligence is mainly deployed in non-player characters (NPCs: characters controlled by the computer rather than by a player). Non-player characters show more and more human-like behaviour as time goes on. Many games demonstrate this when their NPCs communicate with each other rather than each being programmed to act independently; they can decide whether certain situations in the game are more important than others, and they can act based on their programmed or learnt judgment (Funge, 2004, p. 13). One advancement game designers have started to use is the Finite State Machine (FSM) algorithm, which improves a non-player character's intelligence by making an immediate decision in reaction to the human player's actions using programmed commands. It works by giving the machine a limited number of states; depending on which state the machine is in, it behaves accordingly (see the sketch below).
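A minimal sketch of such an FSM in Python; the states, events and transition table here are invented purely for illustration.

    from enum import Enum, auto

    class State(Enum):
        PATROL = auto()
        CHASE = auto()
        ATTACK = auto()

    # Transition table: (current state, observed event) -> next state.
    TRANSITIONS = {
        (State.PATROL, "player_spotted"): State.CHASE,
        (State.CHASE, "player_in_range"): State.ATTACK,
        (State.CHASE, "player_lost"): State.PATROL,
        (State.ATTACK, "player_out_of_range"): State.CHASE,
    }

    def step(state, event):
        # Stay in the current state if no transition is defined for the event.
        return TRANSITIONS.get((state, event), state)

    state = State.PATROL
    for event in ("player_spotted", "player_in_range", "player_out_of_range"):
        state = step(state, event)
        print(event, "->", state.name)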



Non-player characters also tend to use fuzzy logic to make decisions. The idea behind fuzzy logic is that it doesn't just recognise something as being true or false, but instead puts a scale on how true or false it is. NPCs also use AI pathfinding, an algorithm used to find the shortest path between two points. These two methods allow the non-player character to make more appropriate decisions based on the situation, and they make the game more exciting for the player, since the NPC's actions become less predictable.
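As a toy illustration of the fuzzy-logic idea (the function and thresholds below are hypothetical): instead of a hard true/false test such as "the player is near", an NPC evaluates a degree of truth between 0 and 1.

    def nearness(distance, max_range=100.0):
        # Fuzzy membership: degree to which "the player is near" holds,
        # falling linearly from 1.0 at distance 0 to 0.0 at max_range.
        return max(0.0, min(1.0, 1.0 - distance / max_range))

    for d in (10, 50, 90, 150):
        print(f"distance {d:>3} -> 'near' to degree {nearness(d):.2f}")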

Many artificial intelligence methods started off in games like chess, an abstract strategy game. Early sports games like Pong and Tennis for Two also introduced the idea of infusing artificial intelligence into their gameplay. Popular games such as Black & White and Halo are well known for using artificial intelligence methods and techniques; these are probably among the first games that developed the phenomenon of using AI, as it made them more challenging and interesting to play. Another example of artificial intelligence methods in use is the computer program AlphaGo, which plays the board game Go against a human opponent. The aim of the game is to conquer a greater part of the board than the opposing player. The player has a choice of positions on the board, and Monte Carlo Tree Search generates these positions and builds a conclusion as to which position on the board gives the best chance of success.

Current CPUs and Their Limitations


Currently, a modern high-end processor can have up to 19 billion transistors, which it utilises as switches with outputs of 0s and 1s to perform logical and arithmetic operations and to control the flow of data. This allows it to execute an enormous number of instructions, which is what lets us use a computer for such a wide variety of tasks. Although processors nowadays are sufficient for the everyday needs of the public, they are not perfect; therefore, throughout this section of my report I will analyse the problems modern processors face that hold back innovation and advancement in the CPU field.




One of the major problems that all CPU manufacturers need to consider is heat and the effect that temperature can have on performance. CPUs generate large amounts of heat, and it is not unusual for them to run at approximately 70°C while performing demanding tasks such as video rendering. This is largely due to the voltage applied to the transistors, whose main purpose is to alternate between states (closed and open, which translate into the numerical values 1 and 0 within the CPU). The higher the voltage, the faster the transistors can switch, and the shorter the time the CPU needs to complete its tasks. However, this conversion of energy is not 100% efficient. The amount of energy lost per transistor is very small, but it is lost in the form of heat, and it adds up across the large number of transistors found in modern CPUs (Science Studio, 2016). High temperatures can quickly become dangerous for a CPU, as they can lead to melting, micro-cracks or permanent damage to its transistors.
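A rough way to see why voltage and frequency drive heat: a standard first-order model (an assumption brought in here, not from this report) puts dynamic switching power at P = C · V² · f. A quick Python check with made-up numbers:

    # First-order CMOS dynamic power model: P = C * V**2 * f, where C is the
    # switched capacitance, V the supply voltage and f the clock frequency.
    # All values below are purely illustrative.
    C = 1e-9  # farads

    def dynamic_power(V, f):
        return C * V ** 2 * f

    base = dynamic_power(1.0, 3e9)       # 1.0 V at 3 GHz
    boosted = dynamic_power(1.2, 4e9)    # higher voltage and clock
    print(f"{base:.2f} W -> {boosted:.2f} W "
          f"({boosted / base:.2f}x the heat to dissipate)")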




Most manufacturers get around this problem by providing or recommending cooling solutions with an appropriate TDP (thermal design power) rating, which allows the user to get a cooler efficient enough to dissipate the amount of heat necessary to prevent damage; this varies from CPU to CPU. Furthermore, many manufacturers implement a feature known as CPU throttling, which purposefully and temporarily reduces the switching speed when temperatures approach dangerous levels, reducing the amount of heat being produced and allowing the cooling system to catch up and bring the temperature back to safer levels. However, these measures are not ideal, as they only control the problem rather than remove it. Manufacturers are therefore usually limited in the number and speed of transistors they can use, since large cooling systems cannot be fitted into certain devices such as smartphones. Also, for the sake of convenience, many manufacturers choose not to use large cooling systems, as they can be expensive and require regular maintenance, which could put buyers off.



DFT and FFT with Python and Their Applications on Various Signals

The Fast Fourier Transform (FFT) is one of the most important algorithms in computer science, electronics and signal processing engineering. It is a fast algorithm for computing the Discrete Fourier Transform (DFT). Basically, the DFT, and hence the FFT, transforms signals from the time-amplitude domain to the frequency-amplitude domain. The reverse form of the FFT is known as the Inverse Fast Fourier Transform (IFFT), which naturally converts signals from the frequency domain back to the time domain.

The FFT is heavily used in communication, radar and computer systems. For example, OFDM (orthogonal frequency division multiplexing) is built on the IFFT and FFT. Since Python is the most commonly used scientific programming language besides Matlab, I would like to present some information about the FFT and its use in Python.
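Before the links, here is a minimal NumPy example of the transform itself (the signal and sampling rate are illustrative): two tones go in, the FFT shows peaks at exactly those frequencies, and the IFFT recovers the original signal.

    import numpy as np

    fs = 1000                          # sampling rate in Hz
    t = np.arange(0, 1, 1 / fs)        # one second of samples
    # A 50 Hz and a 120 Hz tone; the FFT should show peaks at both.
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    X = np.fft.rfft(x)                 # DFT of a real signal via the FFT
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    peaks = freqs[np.argsort(np.abs(X))[-2:]]
    print(sorted(peaks))               # -> [50.0, 120.0]

    # Round trip: the IFFT recovers the original time-domain signal.
    assert np.allclose(np.fft.irfft(X, n=len(x)), x)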



This blog post (https://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/) covers the basics of the FFT and gives a very clear comparison with the direct DFT computation. Another blog post (https://www.ritchievink.com/blog/2017/04/23/understanding-the-fourier-transform-by-example/) includes a very good worked example of the FFT. This page (https://plot.ly/python/fft-filters/) shows FFT filters in Python. An OFDM example which utilizes the FFT and IFFT in Python is presented here (https://dspillustrations.com/pages/posts/misc/python-ofdm-example.html).

An extra link (http://www.music.helsinki.fi/tmt/opetus/uusmedia/esim/index-e.html) offers some .wav sound examples to process using the FFT. An application of the short-time FFT on sounds: Sound Processing with Short Time Fourier Transform.

Another working Python example of the short-time FFT examines .wav files to find the power of the sound in specific frequency and time blocks (a sketch of the idea follows below).
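A minimal, self-contained sketch of that idea, assuming NumPy and a synthetic chirp instead of a .wav file (a real file could be loaded with, e.g., scipy.io.wavfile.read): slide a window over the signal, take an FFT of each frame, and read off the power per time/frequency block.

    import numpy as np

    def stft_power(x, fs, frame_len=256, hop=128):
        # Short-time FFT: window each frame and FFT it; |X|**2 gives the
        # power in each time/frequency block.
        window = np.hanning(frame_len)
        n_frames = 1 + (len(x) - frame_len) // hop
        power = np.empty((n_frames, frame_len // 2 + 1))
        for i in range(n_frames):
            frame = x[i * hop : i * hop + frame_len] * window
            power[i] = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(frame_len, 1 / fs)
        return freqs, power

    # Chirp test signal: its frequency rises over time, so the dominant
    # frequency per frame should rise as well.
    fs = 8000
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * (200 + 1800 * t) * t)
    freqs, power = stft_power(x, fs)
    print(freqs[power.argmax(axis=1)][::10])   # rising dominant frequency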


The Difference Between Artificial Intelligence and Machine Learning


I think the first question which must be answered clearly when starting to teach artificial intelligence and machine learning is the difference between the two.

AI - Artificial Intelligence is a comprehensive concept stating that computers can learn, think and decide what they should do by themselves in every situation. However, a fully general AI is not possible at the moment, as different operations such as image recognition, playing a game or creative thinking each require different algorithms, which strive to solve specific problems and tasks.

ML - Machine Learning is a specific application of AI, which mostly relies on learning from historical data in order to analyze future data and make decisions using these analyses. It can be categorized into supervised and unsupervised learning: the former uses labelled data to train the machine learning core (the "brain"), while the latter looks for structure in unlabelled data. (Learning through an agent that interacts with its environment belongs to a third category, reinforcement learning.) Machine learning algorithms are generally trained to solve one or a few specific problems; consequently, they cannot find answers to every question or problem that you may have. A sketch of the supervised/unsupervised distinction follows below.
[Image: an illustration of a neural network and its nodes]
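A minimal sketch of that distinction, assuming scikit-learn is installed and using toy data: a classifier learns from labels, while a clustering algorithm finds groups without them.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X = np.array([[0.1], [0.2], [0.9], [1.0]])
    y = np.array([0, 0, 1, 1])                # labels available: supervised

    clf = LogisticRegression().fit(X, y)       # learns from labelled examples
    print(clf.predict([[0.15], [0.95]]))       # expected: [0 1]

    km = KMeans(n_clusters=2, n_init=10).fit(X)   # no labels: unsupervised
    print(km.labels_)                          # groups found from structure alone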

More information can be found on the following links with some examples:

What is MIMO Communication in 4G and WiFi Networks?

Recently, wireless communication systems have been transformed: they now provide more robust communication links and higher spectral efficiency. One of the main improvements, which has been implemented in current 4G and WiFi networks, is the MIMO (Multiple-Input Multiple-Output) technique.

MIMO communication networks use more than one transmit and receive antenna in order to use multiple channels over the same time and frequency resources. The idea behind this technique is that each antenna pair can see a separate channel, owing to the reflection and scattering of the microwaves during propagation. These channels are exploited by software-based receivers and equalizers in order to transmit several data streams simultaneously.
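A toy NumPy sketch of that idea, with a made-up 2x2 channel: two symbols are sent at the same time and frequency, mixed by the channel, and separated again by a zero-forcing equalizer (one simple equalizer choice among many).

    import numpy as np

    rng = np.random.default_rng(1)

    # 2x2 MIMO: two streams sent over the same time/frequency resources.
    H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    x = np.array([1 + 1j, -1 - 1j])            # two QPSK-like symbols
    noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

    y = H @ x + noise                          # what the two receive antennas see

    # Zero-forcing equalizer: invert the (known) channel to separate streams.
    x_hat = np.linalg.solve(H, y)
    print(np.round(x_hat, 2))                  # close to the transmitted symbols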


MIMO enhances the spectral efficiency, and thus the capacity of the link, in addition to providing more robust communication links.

Polarisation of Electromagnetic Waves



The polarisation of an electric field describes the orientation and magnitude of its field vectors and how they change over time. Polarisation relates to transverse electromagnetic (TEM) waves, in which the directions and magnitudes of both the electric and magnetic fields vary with time. The polarisation of EM waves from an antenna is classified into three main categories: linear, circular or elliptical. Furthermore, the rotation of the polarisation may be clockwise (CW, right-hand polarisation) or counter-clockwise (CCW, left-hand polarisation). For instance, a circularly polarised wave can be written as E(z, t) = E0 [x̂ cos(ωt − kz) + ŷ sin(ωt − kz)], consisting of two equal-amplitude components in the x and y directions with a 90° phase difference. If the polarisation of the receiving antenna does not match the polarisation of the incoming waves, the amplitude of the received signal decreases. This polarisation mismatch causes polarisation loss and reduces the power of the received signal. On the other hand, polarisation diversity can be employed to transmit two signals simultaneously over the same frequency-time resources using two different polarisations, as in satellite communication.
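A small numerical illustration of that mismatch loss: the polarisation loss factor |⟨e_wave, e_antenna⟩|² for unit polarisation vectors is a standard model, and the example vectors below are made up.

    import numpy as np

    def plf(e_wave, e_ant):
        # Polarisation loss factor between a wave and a receiving antenna,
        # modelled as the squared magnitude of the inner product of the
        # normalized polarisation vectors.
        e_wave = e_wave / np.linalg.norm(e_wave)
        e_ant = e_ant / np.linalg.norm(e_ant)
        return abs(np.vdot(e_wave, e_ant)) ** 2

    vertical = np.array([0.0, 1.0])
    tilted45 = np.array([1.0, 1.0])
    horizontal = np.array([1.0, 0.0])
    circular = np.array([1.0, 1j]) / np.sqrt(2)

    print(plf(vertical, vertical))    # 1.0 -> perfect match
    print(plf(vertical, tilted45))    # 0.5 -> 3 dB mismatch loss
    print(plf(vertical, horizontal))  # 0.0 -> cross-polarised, full loss
    print(plf(vertical, circular))    # 0.5 -> linear into circular loses 3 dB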



Antenna polarisation is another opportunity for MIMO systems, because vertical and horizontal polarisations may be utilized to increase the number of antennas in a given area. Single-polarised and dual-polarised 2x2 MIMO systems have been compared in [1], where the authors show that if there is high correlation between two antennas, a dual-polarised MIMO system outperforms a single-polarised one. In another, similar study, broadband outdoor channel measurements were performed to verify the performance of a 2x3 dual-polarised MIMO system at 2.5 GHz, and higher capacity was achieved using dual polarisation, especially at close range. On the other hand, the two aforementioned studies demonstrate that if the distance between transmitter and receiver is large enough, a single-polarised MIMO system performs better than the dual-polarised one.