Smart Wearables and IoT for Smart Healthcare
The use of smart wearables enables regular monitoring of human activities, which can lower the risk of health complications due to cardiovascular diseases, diabetes, etc. The signal data generated by the accelerometer and gyroscope of the inertial measurement unit (IMU) within wearables aid in recognizing motion. In the past, researchers used signal processing techniques for human activity recognition (HAR). However, these techniques do not adapt to variations of the same activity across different people. Machine learning and deep learning are comparatively tunable but require more computational power, especially the latter. To circumvent these high computational requirements, state-of-the-art solutions typically deploy the learning algorithms on a platform with more compute power, e.g., a smartphone or the Cloud. We are working on a solution suitable for deployment in the resource-constrained environment of smart wearables and IoT.
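A typical resource-light HAR front-end segments the raw IMU stream into fixed windows and computes cheap statistical features before classification. The sketch below illustrates this on simulated 3-axis accelerometer data; the window length, stride, and feature set are illustrative assumptions, not a prescription.

```python
import numpy as np

def extract_windows(signal, window=50, stride=25):
    """Split a (T, 3) accelerometer stream into overlapping windows."""
    return np.stack([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, stride)])

def window_features(windows):
    """Per-window mean, std, and mean magnitude -- cheap enough for a wearable MCU."""
    mean = windows.mean(axis=1)                                        # (N, 3)
    std = windows.std(axis=1)                                          # (N, 3)
    mag = np.linalg.norm(windows, axis=2).mean(axis=1, keepdims=True)  # (N, 1)
    return np.hstack([mean, std, mag])                                 # (N, 7)

# Example: 200 samples of simulated 3-axis accelerometer data
rng = np.random.default_rng(0)
stream = rng.normal(size=(200, 3))
feats = window_features(extract_windows(stream))
print(feats.shape)  # (7, 7): 7 windows, 7 features each
```

The resulting feature matrix would feed a small classifier on-device, keeping both memory and compute within a wearable's budget.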
Application
Implementation
Reliable Large WiFi Networks
We are addressing the problem of automating network troubleshooting for large-scale WiFi networks. One example is unnecessary active scans, which are known to degrade WiFi performance. We collected 340 hours' worth of data, containing several thousand episodes of active scans, to train various machine learning models, using 27 devices across vendors in varied network setups under controlled conditions. We studied unsupervised and supervised machine learning techniques and concluded that a multilayer perceptron is the best model for detecting the causes of active scanning. Further, we performed an in-vivo model validation in an uncontrolled real-world WiFi network. Our proposed mechanisms have the potential to be incorporated into existing WiFi controllers, such as those of Cisco and Aruba.
Application
Implementation
Indoor Localization
During fires, smoke inside a building may become so thick and dense that occupants quickly become disoriented and unable to exit. We propose the development of a mobile app to assist in emergency evacuations from public buildings, made possible by the convergence of three new technologies. The first is WiFi-based indoor localization. GPS accurately locates users outdoors; with the widespread adoption of GPS-equipped smartphones, online mapping and localization on handheld devices (e.g., Google Maps), along with calculating the best routes to a destination, have become routine activities. However, GPS receivers do not work inside buildings. Indoor localization applies positioning principles similar to GPS, but relies on WiFi routers and their received signal strength (RSS) to identify the precise 3D location of a smartphone user within a building.
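To illustrate the underlying idea, the sketch below converts RSS readings to distances with a log-distance path-loss model and then solves a linear least-squares position fix from three routers. The reference power `p0` and path-loss exponent `n` are assumed values that would need per-site calibration; a real deployment would also contend with multipath and noise.

```python
import numpy as np

def rss_to_distance(rss, p0=-40.0, n=2.0):
    """Log-distance path-loss model: p0 = RSS at 1 m, n = path-loss exponent."""
    return 10 ** ((p0 - rss) / (10 * n))

def trilaterate(anchors, dists):
    """Least-squares 2D position fix from >= 3 router positions and distances."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (x, y), d in zip(anchors[1:], dists[1:]):
        # Subtracting the first circle equation linearizes the system
        A.append([2 * (x - x0), 2 * (y - y0)])
        b.append(d0**2 - d**2 + x**2 + y**2 - x0**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Simulated RSS from a user at (3, 4), then inverted back to distances
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
rss = [-40.0 - 20.0 * np.log10(np.linalg.norm(true_pos - np.array(a)))
       for a in anchors]
dists = [rss_to_distance(r) for r in rss]
print(trilaterate(anchors, dists))  # ~[3. 4.]
```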
Application
Implementation
Minority Training Regime for Effective Prediction in Large Microarray Expression Data Sets
Rigorous mathematical investigation of the learning rates used in back-propagation in shallow neural networks has become a necessity, because experimental evidence needs to be endorsed by a theoretical background. Such theory may help reduce the volume of experimental effort needed to accomplish desired results. We leverage the fact that the Mean Square Error loss is Lipschitz continuous to compute the learning rate in shallow neural networks: the learning rate is the inverse of the Lipschitz constant. We aim to propose a method that reduces tuning effort, especially when a significant corpus of data has to be handled. The work results in a novel method for carrying out gene expression inference on large microarray data sets with a shallow architecture constrained by limited computing resources.
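In the simplest case of a linear (single-layer) model with MSE loss, the Lipschitz constant of the gradient can be computed directly from the data, giving a tuning-free step size. The sketch below illustrates only this special case; the shallow-network bounds in the work itself are more general.

```python
import numpy as np

def lipschitz_lr(X):
    """Adaptive step size 1/L, where L = 2*lambda_max(X^T X)/m bounds the
    gradient Lipschitz constant of the MSE loss (1/m)||Xw - y||^2."""
    L = 2 * np.linalg.eigvalsh(X.T @ X).max() / len(X)
    return 1.0 / L

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
w_true = np.arange(1.0, 6.0)
y = X @ w_true

# Gradient descent with the Lipschitz-derived step size needs no manual tuning
w = np.zeros(5)
lr = lipschitz_lr(X)
for _ in range(500):
    w -= lr * 2 * X.T @ (X @ w - y) / len(X)
print(np.round(w, 2))  # recovers w_true = [1. 2. 3. 4. 5.]
```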
Conceptual
Application
Theoretical and Experimental Validation of Lipschitz Adaptive Learning Rate in Regression and Neural Networks
We propose a theoretical framework for an adaptive learning rate policy for the Mean Absolute Error and Quantile loss functions, and evaluate its effectiveness for regression tasks. The framework is based on the theory of Lipschitz continuity, specifically the relationship between the learning rate and the Lipschitz constant of the loss function. We argue that Quantile Regression (QR) can be used to provide prediction intervals, which is one way of quantifying uncertainty. In the context of DNNs, QR models can be fit by minimizing the check loss function; the number of independent regression models fit is equal to the number of quantiles desired.
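A minimal illustration of the check (pinball) loss: minimising it over a constant predictor recovers the tau-quantile of the data, which is the basis for fitting one regression model per desired quantile. The grid search below is purely illustrative.

```python
import numpy as np

def check_loss(y, y_hat, tau):
    """Pinball (check) loss: weights under-prediction by tau, over-prediction by 1 - tau."""
    e = y - y_hat
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

# Minimising the check loss over a constant predictor recovers the tau-quantile
rng = np.random.default_rng(2)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 601)
best = grid[np.argmin([check_loss(y, q, 0.9) for q in grid])]
print(best)  # close to 1.28, the 0.9-quantile of N(0, 1)
```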
Conceptual
ABC-GAN
A novel data generation paradigm called "ABC-GAN" is an enhancement of the popular Generative Adversarial Network (GAN) paradigm. We propose an improvement over the existing GAN paradigm by designing a generative model with the ability to incorporate weakly informative priors about the target data, thereby allowing inferences on the data to lie in a restricted range. The proposed model introduces regularization to the existing GAN paradigm by using principles of Approximate Bayesian Computation (ABC), such as a known, user-supplied generative model, bypassing of likelihood evaluation, and model averaging, to overcome existing problems in the training of GANs.
Conceptual
Implementation
Relational Machine Learning
Inductive Logic Programming (ILP) is a research area at the intersection of Machine Learning and Logic Programming. ILP systems develop descriptions in first-order logic from examples and domain knowledge; the examples, domain knowledge, and final descriptions are all expressed as logic programs. The explicit provision for including prior knowledge when constructing models (in the data-analytic sense) is a distinctive aspect of ILP systems, enabling them to construct models from data that account for domain expertise. This, in addition to the natural comprehensibility of logical descriptions, makes ILP a natural choice for machine learning in domains where substantial prior knowledge exists and explanatory models are necessary. A unifying theory of ILP exists around lattice-based concepts such as refinement, least general generalisation, inverse resolution, and most specific corrections. Recent advances have added to these conceptual foundations by introducing probabilistic semantics and proof-theoretic techniques for meta-interpretation. These allow the discovery from data of complex models with probabilities, recursion, invention of new domain knowledge, and so on.
Ashwin Srinivasan and his colleagues work on aspects related to the theory, application, and implementation of Inductive Logic Programming.
Conceptual
Application
Implementation
Biological Systems as Programs
In principle, the mathematical language of differential equations, the differential calculus, and data about initial conditions provide a uniform way to describe relations between variables of interest and how these change as a function of time, given the initial conditions. But with biological systems, the equations involved are often highly non-linear, making it difficult to obtain analytical solutions. Inevitably, therefore, simplifications of the equations result, and the model progressively reflects less of the underlying biological process being studied. So a biologist has little choice but to resort to verbal or pictorial descriptions, often in the form of annotated graphs, with much of the kinetics being lost in translation.
Ashwin Srinivasan and his colleagues have been looking at a computational approach to biological system identification. In this view, a system comprises a program and data. The behaviour of the system is described by how the program computes in a stepwise fashion and the possible state transformations that result, given the data; system identification then translates to identifying transition systems from data. Transition-system learning is accomplished using a form of meta-interpretive relational learning from data.
Conceptual
Application
Neural-Symbolic Machine Learning
It is not inevitable that machine learning models be either purely neural or purely symbolic, although there are problems for which one or the other is much better suited. Prominent recent examples of successful sub-symbolic learning abound, with deep neural networks used in the automatic analysis of image data, speech recognition, machine translation, and so on. It is equally evident that for learning recursive theories, program synthesis, and learning with complex domain knowledge, symbolic machine learning has proved remarkably effective. But there are also a number of problems that could benefit from an approach that requires not one or the other, but both forms of learning.
Ashwin Srinivasan and his colleagues have been looking at several ways of combining neural and symbolic machine learning. In particular, they have looked at using the results of neural-learning as inputs to a symbolic-learner, and vice-versa; and at generating explanations on-demand for black-box neural-models by constructing symbolic-models as proxies.
Conceptual
Implementation
Deep Learning
Neural networks have had a long and rich history, and the reincarnated viewpoint has shifted towards “Deep Neural Networks” and “Deep Learning”. Deep learning and its ever-evolving implementations have achieved remarkable successes in subareas of science and technology. I am inclined towards conceptual and implementational aspects of deep learning for problems arising in biology, chemistry, computer vision, robotics, and games. This includes deep learning for structured and unstructured data, transfer learning, model compression, adversarial learning, and deep reinforcement learning.
Conceptual
Neuro-Symbolic Machine Learning
Neural networks can learn from non-symbolic or structured data, and they are robust to noise. However, they suffer from two important issues: their learning is severely affected if data is scarce, and they are not interpretable. Symbolic machine learning models, in contrast, are very data-efficient and interpretable. Neuro-symbolic models exploit the advantages of both neural networks and symbolic models. My research focus in this area is the combination of deep neural networks and Inductive Logic Programming (ILP).
Conceptual
Applied Machine Learning
I am interested in the development of machine learning and deep learning systems to address specific learning problems. A learning problem can be characterised by observations comprised of either input-output pairs (supervised learning) or only inputs (unsupervised learning). I am mostly interested in solving problems arising in biology, chemistry, internet-of-things (IoT), and finance. I also like exploring applications of stochastic (and evolutionary) optimisation to machine learning.
Application
Implementation
Software Analytics
IT companies hold a huge wealth of data about software artifacts such as code, design documents, execution logs, and bug reports. With fast-growing advancements in machine learning and related areas, Software Analytics mines software repositories to provide insights that enhance the quality of software and the productivity of software professionals. Defect prediction is a typical example of Software Analytics, wherein prediction models are built using source code, change, and complexity metrics to identify defect-prone source files in software systems. N.L. Bhanu Murthy and his colleagues work on predicting defects, automatic generation of code comments, and related problems to enhance software quality.
Application
Aspect-Based Sentiment Analysis
Opinion mining is a sub-discipline of computational linguistics that analyzes people’s opinions and sentiments about a product or an event. Traditionally, sentiment analysis focused on capturing the overall sentiment towards an entity rather than the sentiment polarity of each aspect or feature of that entity, and so could not provide deep insight into public opinion about an entity's individual aspects. Aspect-based sentiment analysis (ABSA) provides a solution to this problem by doing opinion mining in a fine-grained way. N.L. Bhanu Murthy and his colleagues work on applying deep learning techniques to ABSA.
Application
AI for Health Informatics and Clinical Systems
Jabez and his team primarily focus on the application of intelligent systems in the medical sector in two broad areas: system development and data management. A Health Information System provides the necessities for clinical decision support and also orchestrates data generation, compilation, analysis, synthesis, and communication. With the advancement of intelligent technologies and the availability of computing and networking resources, Jabez and his team are involved in the design and development of such systems, enhanced by XAI and machine learning approaches. Junior clinicians can use the resulting predictions to support their medication advice in the absence of domain experts, and regularly updating comprehensive digitized medical information over time helps reduce the misclassification rate for future cases. In India, most hospitals depend on unstructured paperwork to document patients’ records and medical observations. Advancements in Cloud-based systems can help maintain digitized medical records (Electronic Health Records): patients need not carry hard copies of reports whenever they visit a physician, and the physician gets a complete view of a patient’s medical history. The team works on developing smart systems and applications that save time and reduce redundancy when patients migrate from one physician to another, or from one geographical location to another.
Conceptual
Application
Indexing of Biometric Modalities
What happens when a forensic expert acquires a fingerprint impression from a crime scene? The expert has to determine, with high certainty, to whom the fingerprint belongs. Suppose there is a fingerprint database of one million persons (3 impressions of each of the ten fingers). Even if the fingerprint matching algorithm is fast and takes 0.1 seconds to compute a similarity score for a pair of impressions, it would take around 34 days to produce matching results over the complete database. To make this process fast, we must devise an index structure around the database. We are developing indexing schemes for a few popular biometric traits such as fingerprint, face, iris, and palmprint.
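The 34-day figure follows directly from the numbers above:

```python
# Back-of-the-envelope cost of exhaustive fingerprint matching
persons = 1_000_000
impressions = persons * 10 * 3        # 10 fingers, 3 impressions each
seconds = impressions * 0.1           # 0.1 s per pairwise comparison
days = seconds / 86_400
print(impressions, round(days, 1))    # 30000000 34.7
```

An index structure that prunes most of the database before matching is what collapses this cost to something usable.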
Application
Implementation
Autonomous Vehicle (Driver-less car)
A driverless car is a fascinating idea. An AI agent driving a vehicle can potentially save a lot of human time. The problem is challenging as it involves many sub-tasks that need to be done efficiently in real time. The first task is to determine where the lane is; it could be of different types and could be occluded as well. The second is to determine the indications on signboards and traffic lights. The next task is to assess traffic congestion. There is always an inherent need for goal-based planning to determine the route, and reflex agents are also needed to handle sudden and random events in the traffic. We are working on similar lines in this domain.
Application
Implementation
Human Action Recognition
HAR is an important area in which a computer tries to understand what action a human is performing, and it has various applications. An AI agent could help an elderly person with their movement; an expert Yoga instructor could assist in correcting postures; a clinical agent could deliver medicines to patients; an agent could analyze players' moves in a football match and suggest the best strategy to win the game. We are currently working on some of these problems.
Application
Implementation
One Class Classification Problem
There are many situations in which we do not have data for all the categories in our database, or the database is very heavily biased. Consider a scenario where a database related to a specific disease is collected from the patients coming for treatment in a hospital. Such a database has no instances of healthy individuals. However, the learned model is mostly applied to the general population, where most people are healthy. The question here is: how do we build generalizable models trained only on positive (or only negative) examples?
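A minimal sketch of the one-class setting: fit a simple Gaussian model to the single observed class and flag points whose Mahalanobis distance exceeds a quantile of the training distances. This is an illustrative stand-in, not a method proposed here; the class name and threshold rule are assumptions.

```python
import numpy as np

class OneClassGaussian:
    """Minimal one-class model: fit a Gaussian to the single observed class
    and flag points whose Mahalanobis distance exceeds a training quantile."""
    def fit(self, X, quantile=0.95):
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X.T))
        self.threshold = np.quantile(self._dist(X), quantile)
        return self

    def _dist(self, X):
        diff = X - self.mu
        return np.einsum('ij,jk,ik->i', diff, self.cov_inv, diff)

    def predict(self, X):
        return self._dist(X) <= self.threshold   # True = "same class as training"

rng = np.random.default_rng(3)
patients = rng.normal(loc=5.0, size=(500, 2))    # only "positive" examples seen
model = OneClassGaussian().fit(patients)
outliers = rng.normal(loc=0.0, size=(100, 2))    # unseen healthy population
print(model.predict(outliers).mean())  # near 0: almost all flagged as different
```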
Conceptual
Implementation
Integrating Mathematical Morphology with Machine Learning
Mathematical Morphology (MM) is a theory of image processing based on lattices, offering non-linear operators that contrast with traditional image processing operators. In the pre-deep-learning era, MM-based image processing techniques were popular, a popularity attributable to the simplicity of the theory and the easy adaptability of its operators. Recently, deep learning has become the de facto method for image processing; its popularity can likewise be attributed to the simplicity of the theory and the wide adaptability of the basic techniques. Recall, however, that the fundamental units of deep learning comprise a large number of linear functions together with activation (link) functions connecting layers, so the machinery is predominantly built from linear operators. Hence a natural question arises: can one combine the fundamentally non-linear MM tools with current deep learning techniques to obtain better models? This is broadly the main topic of my research. It is hypothesized that integrating MM with deep learning would help improve the robustness of the techniques, leading to better generalizability and more stable models.
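To make the contrast concrete: grey-scale dilation is a sliding-window maximum, a fundamentally non-linear operator (its dual, erosion, is a sliding minimum), and notably kin to max-pooling in deep networks. A minimal sketch with a flat 3x3 structuring element:

```python
import numpy as np

def grey_dilate(img, k=3):
    """Flat grey-scale dilation: sliding-window maximum over a k x k
    neighbourhood -- a non-linear morphological operator."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

img = np.zeros((5, 5))
img[2, 2] = 1.0            # single bright pixel
print(grey_dilate(img))    # the bright region grows to a 3x3 block
```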
Conceptual
Implementation
Image Processing and Computer Vision
This is interdisciplinary research covering mathematical techniques in image processing and their applications. Numerous innovative handheld devices and visual systems have enabled end users to access high-resolution images conveniently, and to capture, share, and upload them from any place at any time. However, images acquired in real time suffer from distortions and degradations, so it is important to obtain images of good visual quality for use in healthcare and computer vision applications. We aim to propose mathematical models that are robust, efficient, and of low complexity. Our statistical techniques based on polynomial and probability coefficients result in a novel approach and show high correlation with human opinion scores. These models are used in a variety of applications such as biometrics, rigid and non-rigid registration, lung cancer detection, brain tumor detection, retinopathy, and biomarkers.
Conceptual
Implementation
Artificial Intelligence and Machine Learning Applications
We address a variety of applications based on artificial intelligence and machine learning. In one, a novel technique is proposed to predict cement strength using statistical models and an artificial neural network; the effectiveness and feasibility of the approach are demonstrated on three cement strength data sets (2-day, 7-day, and 28-day strength) collected from a cement plant in the UAE. In another application, we propose an efficient and simple technique to analyze electricity consumption using a statistical model and an artificial neural network, improving the feedback provided to consumers about their consumption. The feedback compares real-time electricity consumption with desired consumption patterns from the past and makes well-suited suggestions, raising consumers' awareness about electricity consumption and its optimization. The approach can be applied to microgrid environments, small industrial parks, and residential and commercial buildings.
Conceptual
Implementation
Stability Analysis of Systems
The mathematical modelling of physical systems and processes in any area of engineering leads to complex nonlinear systems. This brings several difficulties to performance analysis and synthesis, and researchers have therefore been seeking efficient and simple approaches for analyzing such systems. Among system performance parameters, stability is one of the most important, for continuous as well as discrete systems. We proposed a new technique to determine the stability margin of a discrete system using reduced conservatism in eigenvalue bounds and the Gerschgorin circle theorem. It is simple and gives an improved stability margin compared to other methods. The approach can be useful in the design and analysis of control systems, filters, amplifiers, and oscillators.
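The basic Gerschgorin test is a sufficient (and generally conservative) stability check: if every disc with centre at a diagonal entry and radius equal to the off-diagonal row sum lies inside the unit circle, all eigenvalues do too. The sketch below shows only this baseline; reducing its conservatism is where the proposed technique goes beyond the plain theorem.

```python
import numpy as np

def gerschgorin_stable(A):
    """Sufficient test for discrete-time stability: every Gerschgorin disc
    (centre a_ii, radius = off-diagonal row sum) lies inside the unit circle."""
    radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.abs(np.diag(A)) + radii < 1))

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
# Compare the disc-based verdict with the actual spectral radius
print(gerschgorin_stable(A), np.abs(np.linalg.eigvals(A)).max() < 1)  # True True
```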
Conceptual
Implementation
Federated Learning, Private AI, Edge AI
Privacy concerns have limited the expansion of conventional centralized machine learning. Federated Learning has emerged as an alternative that not only preserves privacy but also drastically reduces communication costs. A further advantage is that the learned models are more comprehensive and robust. At the same time, it throws up new challenges such as secure model communication and model aggregation. We are working on developing new Federated Learning frameworks for different kinds of applications.
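As an illustration of model aggregation, the standard FedAvg rule averages client parameters weighted by local data size, so raw data never leaves a client. This is a generic textbook sketch, not the frameworks under development here.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters weighted by
    each client's local data size. Raw data never leaves the client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients with different amounts of local data
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 100, 200]
print(fed_avg(weights, sizes))  # [3.5 4.5]
```

In practice the averaging runs per round over secure channels; secure aggregation and robustness to stragglers are exactly the open challenges noted above.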
Application
Implementation
Applications of AI
Developing applications in agriculture, health care, biodiversity, forensics, and cyber security.
Application
Implementation
HPC Solutions for AI
GPU computing is playing an important role in furthering AI research. Parallelizing AI/ML algorithms on a GPU cluster can be a challenging task even for most seasoned programmers. This research is about providing programming and parallelization abstraction to programmers. Our experience of having done similar work for CPU clusters will provide a good head start.
Application
Implementation
Querying Surveillance Videos
Developing models for time series analysis using deep learning for querying surveillance videos. The queries could be in the form of text, image, or video clip.
Application
Implementation
Time Series Analytics
Multivariate time series (MVTS) analytics is at the core of many important applications. Most of the literature on MVTS assumes interdependencies among the MVTS variables, which can be directly observed or latent. Statistical methods exploit linear dependencies, whereas deep-learning-based methods can capture non-linear dependencies.
Application
Implementation
Anytime Mining for Data Streams
A real-time data stream is characterized by continuously arriving data objects at a fast and variable rate, ordered by time. Mining data streams is typically constrained by the limited time available to process, and the limited memory available to store, the incoming data objects. The time available to process each arriving object depends upon the stream speed, and within these constraints, evolving patterns have to be captured. We are developing models able to cope with any stream speed, and we have successfully processed streams of up to 80k data points per second. Higher speeds are handled using deferred insertions and processing; the spare time available while processing lower-speed streams is used to refine the information received. Immediate mining results can be produced, with some compromise in accuracy.
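The deferred-insertion idea can be sketched as a two-phase loop: inserts are always O(1), and refinement consumes whatever processing budget the current stream speed leaves over. The class name and the summary statistic below are illustrative, not the actual mining model.

```python
from collections import deque

class AnytimeMiner:
    """Sketch of anytime stream processing: cheap insert always happens;
    expensive refinement runs only while spare time remains."""
    def __init__(self):
        self.buffer = deque()                     # deferred items awaiting processing
        self.summary = {"count": 0, "mean": 0.0}  # stand-in for the mined model

    def insert(self, x):
        self.buffer.append(x)                     # O(1): never blocks the stream

    def refine(self, budget_items=10):
        """Fold up to budget_items deferred objects into the summary."""
        for _ in range(min(budget_items, len(self.buffer))):
            x = self.buffer.popleft()
            s = self.summary
            s["count"] += 1
            s["mean"] += (x - s["mean"]) / s["count"]

miner = AnytimeMiner()
for x in range(100):        # burst: fast arrivals are only buffered
    miner.insert(x)
while miner.buffer:         # idle time: drain the backlog in small batches
    miner.refine()
print(miner.summary["count"], round(miner.summary["mean"], 1))  # 100 49.5
```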
Application
Implementation
Unsupervised representation of Videos
Learning spatiotemporal representations from videos without human supervision is a well-researched problem. Visual feature learning has been broadly explored using supervised approaches, be it at the pixel level or the object level, but these approaches demand huge annotated datasets, which are costly to produce. This work aims at learning representations that support reasoning at various levels of correspondence. The main idea is to use self-supervised methods to learn visual and temporal features from unlabeled video datasets. We can obtain unlimited supervision for learning correspondence by using cycle-consistency to track objects backward and forward in time and by capturing visual invariance.
Application
Implementation
Multi-modal Knowledge Graphs
We have built a knowledge base (a Visio-Textual KB called VTKB), the first KB to exploit both textual and visual modalities. It is automatically built from reliable sources (dictionaries, Wikipedia, etc., rather than just web articles) and establishes relations that are both textually and visually important, avoiding the noisy patterns that are a major problem in existing automatically built KBs. We have used VTKB to embed knowledge into a corpus of images for image annotation and for image retrieval with complex queries, obtaining remarkable improvements in retrieval and tagging results. VTKB is a generic knowledge base with both visual and textual (semantic) representations of concepts; visual items are better represented using VTKB, thereby improving the performance of applications involving the two modalities.
Application
Implementation
Conceptual and algorithmic machinery for understanding neural circuits in the brain
Over the past decade or so, neuroscience has made extraordinary progress in experimental techniques that now allow us to probe neural circuits in awake, behaving animals with unprecedented spatial and temporal resolution. Already, in some organisms (e.g., larval zebrafish, hydra), we have the capability to simultaneously infer activity from every neuron in the brain. We are also beginning to couple this with the ability to perturb arbitrary subsets of neurons and observe the effects of doing so, both on neural activity and on behaviour. Concomitantly, however, the field lacks canonical techniques -- both conceptual and algorithmic -- that prescribe such experiments in order to distill an understanding of computation in these neural circuits. The goal here is to build such theoretical machinery. In early work, I have shown that a number of such questions are algorithmically intractable: unless P=NP, no sub-exponential sequence of experiments exists that is guaranteed to answer them.
Conceptual
Application
Understanding the role of selectivity in deep neural networks
The Nobel-prize-winning work of David Hubel and Torsten Wiesel on the visual system of the anaesthetised cat in the 1960s demonstrated the presence of neurons that selectively fire in response to certain features in the visual stimulus. This led to the hypothesis that such features are important for visual processing -- a hypothesis driven by David Marr and others, and one that was very influential in Computer Vision. Classical Computer Vision, however, had limited practical success in tackling many of its core problems, in spite of pursuing these and other ideas extensively. With the success of Deep Learning on many of these core problems, such as object recognition -- coming as it does without an adequate understanding of its inner workings -- these questions have emerged yet again. In contrast to nervous systems, however, deep networks can be accessed and manipulated at will. As a result, it is, in principle, possible to determine what precise role selectivity plays in computation. Early work by others on this question has been inconclusive: in a variety of networks (e.g., AlexNet, Inception-v1, VGG-19), neurons selective to many nontrivial types of features have been found, while other work suggests that ablating selective neurons often leads, counter-intuitively, to a gain in classification accuracy, and that ablating non-selective neurons can be detrimental to accuracy. This direction therefore requires further work, including conceptual work. One also hopes that progress here will help address the corresponding questions in the neuroscience setting.
Conceptual
Application
Applications of AI in advancement of human category learning theory
Human category learning studies show that the nature of the experimental procedure affects the category representation that is learned. Current theories posit that in classification learning participants process information in an analytical manner, which leads to a preference for a category representation based on a single stimulus dimension. We conduct category learning behavioural studies to test the current theories of human category learning and perform various statistical tests on the behavioural data. Our results show that the preference for the unidimensional rule could be due to these rules being less effortful, and our Bayesian modeling results reveal that this preference is stronger when only some information is learned accurately. We are conducting further studies to confirm these preliminary findings. Another set of studies looks at how humans sort items into two categories, known as the array-based classification task. Two particular studies report opposite results for array-based classification tasks, and both have since been replicated. The difference in results appears related to the nature of the stimuli; we have hypothesized that it could be because some stimulus dimensions have more effect on the global structure of the stimuli. Future studies are being planned to explore the reasons for the difference, and we will use data analysis tools and Bayesian modeling techniques to test whether the behavioural data supports our hypothesis.
Conceptual
Application
Implementation
Analysis of Hyperspectral Images using Watershed
Hyperspectral image classification is an area of active research thanks to its applications in earth observation, land cover classification, and agriculture management. The problem has a unique set of challenges: unlike other datasets, hyperspectral images are both high-dimensional and spatially structured, so efficient processing needs novel approaches that exploit both aspects. These are well tackled by watershed classifiers and related methods. Watersheds are a tool from Mathematical Morphology (MM) that has been used extensively for image segmentation. Recently, watersheds have been adapted to the classification problem, where they are shown to have properties akin to maximum-margin classifiers and to perform on par with other state-of-the-art approaches such as random forests and SVMs. Moreover, as watershed classifiers are derived from watersheds, they are well suited to problems with spatial structure, like hyperspectral images. The main objective of this project is to use and adapt watershed classifiers and related MM methods to solve problems in the domain of hyperspectral imaging.
Conceptual
Application
Prof. Snehanshu Saha
Professor, CS&IS, and Head, APPCAIR,
BITS Pilani, K.K. Birla Goa Campus
Senior Member-IEEE, Senior Member-ACM, Fellow-IETE
snehanshus@goa.bits-pilani.ac.in
+91 832 2580 855