
Applications of Computer Vision in Healthcare

Artificial intelligence systems in healthcare are the use of complex algorithms and software to approximate human cognition in the analysis of complicated medical data. Specifically, AI is the ability of computer algorithms to approximate conclusions without direct human input. What distinguishes AI technology from traditional technologies in healthcare is its ability to gather information, process it and give a well-defined output to the end user. AI does this through machine learning algorithms.

The primary aim of health-related AI applications is to analyze the relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care.

What Is Computer Vision?

Computer vision is a type of artificial intelligence in which computers can “see” the world, analyze visual data and then make decisions from it or gain understanding about the environment and situation. One of the driving factors behind the growth of computer vision is the amount of data we generate today, which is then used to train and improve computer vision systems. Our world holds countless images and videos from the built-in cameras of our mobile devices alone.

However, while images can include photos and videos, the term can also cover data from thermal or infrared sensors and other sources. Along with a tremendous amount of visual data (more than 3 billion images are shared online every day), the computing power required to analyze that data is now accessible and increasingly affordable.

As the field of computer vision has grown with new hardware and algorithms, so have the accuracy rates for object identification. In less than a decade, today's systems have gone from 50 percent accuracy to 99 percent, making them more accurate than humans at quickly reacting to visual inputs.

Applications of Computer Vision

Examples Of Computer Vision

Google Translate application

All you have to do to read signs in a foreign language is point your phone's camera at the words and let the Google Translate app tell you what they mean in your preferred language, almost instantly. By using optical character recognition to read the image and augmented reality to overlay an accurate translation, it is a convenient tool built on computer vision.
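To make the idea concrete, here is a minimal, hedged sketch of the OCR step only. It is not Google's actual pipeline; it assumes the open-source Tesseract engine with the pytesseract and Pillow packages installed, a Spanish language pack, and a placeholder image file name.

```python
# Minimal OCR sketch (not Google's pipeline): extract text from a photo of a sign.
# Assumes Tesseract plus the pytesseract and Pillow packages are installed and the
# Spanish language pack is available; 'street_sign.jpg' is a placeholder file name.
from PIL import Image
import pytesseract

image = Image.open("street_sign.jpg")                        # photo from a phone camera
extracted = pytesseract.image_to_string(image, lang="spa")   # recognize Spanish text
print("Recognized text:", extracted.strip())

# A real app would now send `extracted` to a translation service and overlay the
# result on the live camera frame using augmented reality.
```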

Facial recognition

China is certainly at the forefront of using facial recognition technology. It is used for police work, payment portals, security checkpoints at airports and even to dispense toilet paper and prevent paper theft at Tiantan Park in Beijing, among many other applications.

Healthcare

Since 90 percent of all medical data is image based, there are plenty of uses for computer vision in medicine. From enabling new diagnostic methods that analyze X-rays, mammograms and other scans, to monitoring patients to identify problems earlier and assist with procedures, expect medical institutions, professionals and patients to benefit from computer vision today, and even more so in the future as it is rolled out across healthcare.

Role of Computer Vision in Healthcare

1. Computer Vision for Predictive Analytics and Therapy

Computer vision has shown great promise in healthcare procedures and in the treatment of certain diseases. Recently, three-dimensional (3D) modeling and rapid prototyping technologies have driven improvements in medical imaging modalities such as CT and MRI. P. Gargiulo et al. in Iceland, in “New Directions in 3D Medical Modeling: 3D-Printing Anatomy and Functions in Neurosurgical Planning”, combine CT and MRI images with DTI tractography and use image segmentation protocols to build 3D models of the skull base, tumor, and five eloquent fiber tracts. The authors present a promising approach for advanced neurosurgical planning.

Human activity recognition (HAR) is one of the most widely studied computer vision problems. S. Zhang et al. in China, in “A Review on Human Activity Recognition Using Vision-Based Method”, present an overview of various HAR approaches as well as their evolution, together with the representative classical literature. The authors highlight advances in image representation approaches and classification methods for vision-based activity recognition. Representation approaches generally include global representations, local representations, and depth-based representations. They likewise divide and describe human activities into three levels: action primitives, actions/activities, and interactions.

They also summarize the classification techniques used in HAR applications, covering seven kinds of methods ranging from classic DTW to the latest deep learning. Finally, they note that applying current HAR approaches in real systems or applications still poses great challenges, even though recent HAR methods have made remarkable progress. In addition, three future directions are suggested in their work.

2. Analysis of Healthcare Images

This topic addresses recent developments and new techniques in analysis methods for medical images. To begin with, the integration of multimodal information derived from different diagnostic imaging techniques is essential for a comprehensive characterization of the region under examination. Consequently, image coregistration has become crucial, both for qualitative visual assessment and for quantitative multiparametric analysis in research applications.

S. Monti et al. in Italy, in “An Evaluation of the Benefits of Simultaneous Acquisition on PET/MR Coregistration in Head/Neck Imaging”, compare and evaluate the performance of conventional coregistration methods applied to PET and MR acquired as single modalities against the results obtained with the implicit coregistration of a hybrid PET/MR scanner, in complex anatomical regions such as the head/neck (HN). The experimental results show that hybrid PET/MR provides higher registration accuracy than retrospectively coregistered images.

At present, the conventional approach to reducing colorectal cancer-related mortality is to perform regular screening in search of polyps, an approach that suffers from a polyp miss rate and an inability to visually assess polyp malignancy. D. Vazquez et al. in Spain and Canada, in “A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images”, propose an extended benchmark of colonoscopy image segmentation and establish a new strong baseline for colonoscopy image analysis. By training standard fully convolutional networks (FCN), they show that in endoluminal scene segmentation the performance of FCNs surpasses the results of previous research.
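As a rough illustration of what such a network looks like, the sketch below builds a tiny fully convolutional encoder-decoder in Python with tf.keras. It is not the architecture from the cited benchmark, and the 256x256 RGB input size and the commented training call are assumptions.

```python
# A minimal fully convolutional network (FCN) sketch for binary endoluminal
# segmentation, written with tf.keras. Illustrative only; not the benchmark's model.
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(256, 256, 3))                 # assumed input size
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)                               # downsample to 128x128
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D()(x)                               # downsample to 64x64
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)     # per-pixel polyp probability

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(images, masks, epochs=10)   # images: (N,256,256,3), masks: (N,256,256,1)
```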


3. Key Algorithms for Healthcare Images

Most of this issue centers on research into improved algorithms for medical images. Organ segmentation is essential for CAD systems. In fact, the segmentation algorithm is the most significant and fundamental step in image processing, and it also raises the quality of disease prediction and treatment. A positive feedback module based on EPELM focuses on the fixation area to enhance objects, suppress noise, and promote saturation in detection. Tests on several standard image databases show that the novel algorithm outperforms traditional saliency detection algorithms and also segments nucleated cells effectively under various imaging conditions.
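For contrast with the saliency-based method described above, here is a simple classical baseline, hedged as an illustration only: Otsu thresholding plus connected components in OpenCV, with a placeholder image path.

```python
# A simple classical baseline for nucleated-cell segmentation (not the EPELM-based
# saliency method described above): Otsu thresholding plus connected components.
# 'blood_smear.png' is a placeholder image path.
import cv2

gray = cv2.imread("blood_smear.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                 # suppress fine noise
_, binary = cv2.threshold(blurred, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # dark nuclei -> white
num_labels, labels = cv2.connectedComponents(binary)        # label each candidate cell
print(f"Detected {num_labels - 1} candidate nucleated cells")
```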

Medical ultrasound is widely used in the diagnosis and assessment of internal body structures and also plays a key role in treating various diseases thanks to its safety, noninvasiveness, and good tolerance in patients. However, the images are always corrupted by speckle noise, which hinders the identification of image details.

4. AI Algorithms for Healthcare Images

The growth of the older adult population in the world is striking, and it will greatly affect the healthcare system. The elderly often lack the capacity for self-care, so healthcare and nursing robots have attracted a lot of attention in recent years. Although somatosensory technology has been introduced into activity recognition and healthcare interaction for the elderly, traditional recognition methods usually rely on a single modality. To build an efficient and helpful interaction assistant system for healthcare nurses and patients with dementia, X. Dang et al. in China, in “An Interactive Care System Based on a Depth Image and EEG for Aged Patients with Dementia”, propose two novel multimodal sparse autoencoder structures based on motion and mental features. First, motion is extracted after preprocessing of the depth image, and then an EEG signal is recorded as the mental feature. The proposed system is built on multimodal deep neural networks for dementia patients with special needs.

The input features of the networks include (1) motion features extracted from the depth image sensor and (2) EEG features. The output layer is the recognized type of the patient's assistance requirement. Experimental results show that the proposed algorithm simplifies the recognition process and achieved 96.5% and 96.4% (precision and recall rate), respectively, on the shuffled dataset, and 90.9% and 92.6%, respectively, on the continuous dataset. The proposed algorithms also simplify acquisition and data processing while maintaining a high activity recognition ratio compared with the traditional method.
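A hedged sketch of the multimodal idea is shown below: two input branches (motion features from the depth image and EEG features) are encoded with a sparsity penalty, fused, and mapped to an assistance-requirement class. It is not the authors' exact sparse-autoencoder design; the feature sizes and the five output classes are assumptions.

```python
# Sketch of multimodal fusion: a motion branch and an EEG branch are encoded,
# concatenated, and classified into an assistance-requirement type. Not the
# authors' exact architecture; feature sizes (64, 32) and 5 classes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

motion_in = layers.Input(shape=(64,), name="motion_features")
eeg_in = layers.Input(shape=(32,), name="eeg_features")

# Sparsity is approximated here with an L1 activity penalty on each encoding.
motion_enc = layers.Dense(16, activation="relu",
                          activity_regularizer=regularizers.l1(1e-4))(motion_in)
eeg_enc = layers.Dense(8, activation="relu",
                       activity_regularizer=regularizers.l1(1e-4))(eeg_in)

fused = layers.Concatenate()([motion_enc, eeg_enc])
outputs = layers.Dense(5, activation="softmax")(fused)      # assistance-requirement type

model = tf.keras.Model([motion_in, eeg_in], outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([motion_x, eeg_x], labels, epochs=20)
```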

Recently, deep learning has become very popular in artificial intelligence. Q. Song et al. in China, in “Using Deep Learning for Classification of Lung Nodules on Computed Tomography Images”, apply a convolutional neural network (CNN), a deep neural network (DNN), and a stacked autoencoder (SAE) to support specialists in the early diagnosis of lung cancer. The experimental results suggest that the CNN achieved the best performance of the three.
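The following compact sketch shows what a CNN classifier for nodule patches might look like in tf.keras. It is illustrative only, not the exact CNN, DNN, or SAE from the cited study, and the 64x64 grayscale patch size is an assumption.

```python
# A compact CNN sketch for classifying lung-nodule CT patches as benign vs.
# malignant. Illustrative only; the 64x64 single-channel patch size is assumed.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),              # one CT patch per example
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # probability the nodule is malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(patches, labels, validation_split=0.2, epochs=15)
```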

Development tools for AI and ML

Artificial Intelligence, a popular branch of computer science, is also known as machine intelligence. Machine Learning is the systematic study of algorithms and statistical models.

AI creates intelligent machines that react like humans because they can interpret new data. ML enables computer systems to perform learning-based actions without explicit instructions.

The global AI market is predicted to reach $169 billion by 2025. Artificial Intelligence will see increased investment for the implementation of advanced software, and organizations will plan their technology strategies around it.

Various platforms and tools for AI and ML empower the developers to design powerful programs.

Tools for AI and ML:

Google ML Kit for Mobile:

This software development kit for Android and iOS phones enables developers to build robust applications with optimized and personalized features. The kit allows developers to embed machine learning technologies with cloud-based APIs, and it is integrated with Google's Firebase mobile development platform.

Features:

  1. On-device or Cloud APIs
  2. Face, text and landmark recognition
  3. Barcode scanning
  4. Image labeling
  5. Detect and track object
  6. Translation services
  7. Smart reply
  8. AutoML Vision Edge

Pros:

  1. AutoML Vision Edge allows developers to train image labeling models for the more than 400 categories it can identify.
  2. Smart Reply API suggests response text based on the whole conversation and facilitates quick reply.
  3. The Translation API can translate text into up to 59 languages, and the language identification API takes a string of text to identify the language before translating it.
  4. Object detection and tracking API lets the users build a visual search.
  5. Barcode scanning API works without an internet connection. It can find the information hidden in the encoded data.
  6. Face detection API can identify the faces in images and match the facial expressions.
  7. Image labeling recognizes objects, people, buildings, etc. in images, and for each matched item ML returns a score as a label to show the system's confidence.

Cons:

  1. Custom models can grow to huge sizes.
  2. Beta release status can affect the cloud-based APIs.
  3. Smart Reply is only useful for general discussions with short answers like “Yes”, “No”, “Maybe”, etc.
  4. The AutoML Vision Edge tool only works well when plenty of image data is available.

Accord.NET:

This machine learning framework is designed for building applications that require pattern recognition, computer vision, machine listening, and signal processing. It combines audio and image processing libraries written in C#. Statistical data processing is possible with Accord.Statistics. It can work efficiently for real-time face detection.

Features:

  1. Algorithms for Artificial Neural networks, Numerical linear algebra, Statistics, and numerical optimization
  2. Problem-solving procedures are available for image, audio and signal processing.
  3. Supports graph plotting & visualization libraries.
  4. Workflow automation, data ingestion, and speech recognition.

Pros:

  1. Accord.NET libraries are available from the source code and through the executable installer or NuGet package manager.
  2. It ships with 35 hypothesis tests, including one-way and two-way ANOVA tests and non-parametric tests useful for reasoning based on observations.
  3. It comprises 38 kernel functions e.g. Probabilistic Newton Method.
  4. It contains 40 non-parametric and parametric statistical distributions for the estimation of cost and workforce.
  5. Real-time face detection
  6. Swap learning algorithms and create or test new algorithms.

Cons:

  • Support is available only for .NET and its supported languages.
  • It slows down under heavy workloads.

TensorFlow:

TensorFlow provides a library for dataflow programming. Its JavaScript library helps with machine learning development, and its APIs help in building new models and training systems. Developed by Google, TensorFlow is an open-source machine learning library that aids in developing ML models and in numerical computation using dataflow graphs. You can use it by installing it through NPM or via script tags.
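As a small, hedged example of the dataflow style, the snippet below traces a training step into a graph with @tf.function and fits a line to synthetic data by gradient descent; the data and hyperparameters are made up for illustration.

```python
# TensorFlow dataflow example: a training step is traced into a graph with
# @tf.function and gradients flow through it automatically. Synthetic data only.
import tensorflow as tf

x = tf.random.uniform((200, 1))
y = 3.0 * x + 2.0 + tf.random.normal((200, 1), stddev=0.05)   # y = 3x + 2 plus noise

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

@tf.function                       # compiles the step into a dataflow graph
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * x + b - y))
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))
    return loss

for epoch in range(200):
    loss = train_step(x, y)
print(f"learned w={w.numpy():.2f}, b={b.numpy():.2f}, loss={loss.numpy():.4f}")
```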

Features:

  1. A flexible architecture allows users to deploy computation on one or multiple desktops, servers, or mobile devices using a single API.
  2. Runs on one or more GPUs and CPUs.
  3. Its flexible ecosystem of tools, libraries, and resources allows researchers and developers to build and deploy machine-learning applications effortlessly.
  4. High-level APIs make it easy to build and train ML models efficiently.
  5. Runs existing models using TensorFlow.js, which acts as a model converter.
  6. Train and deploy the model on the cloud.
  7. Has a full-cycle deep learning system and helps in the neural network.

Pros:

  1. You can use it in two ways, i.e. by script tags or by installing through NPM.
  2. It can even be used for human pose estimation.
  3. It includes a variety of pre-built models, and model sub-blocks can be used together with simple Python scripts.
  4. It is easy to structure and train your model depending on your data and the models you are training the system with.
  5. Training other models for similar activities is simpler once you have trained a model.

Cons:

  1. The learning curve can be quite steep.
  2. It is often doubtful if your variables need to be tensors or can be just plain python types.
  3. It restricts you from altering algorithms.
  4. It cannot offload all computations to the GPU.
  5. The API is not that easy to use if you lack knowledge.

Infosys Nia:

This self-learning knowledge-based AI platform accumulates organizational data from people, business processes and legacy systems. It is designed to engage in complicated business tasks to forecast revenues and suggest profitable products the company can introduce.

Features:

  1. Data Analytics
  2. Business Knowledge Processing
  3. Transform Information
  4. Predictive Automation
  5. Robotic Process Automation
  6. Cognitive Automation

Pros:

  1. Organizational Transformation is possible with enhanced technologies to automate and increase operational efficiency.
  2. It enables organizations to continually use previously gained knowledge as they grow and even modify their systems.
  3. Faster data processing adds to the flexibility of data visualization, analytics, and intelligent decision-making.
  4. Reduces human efforts involved in solving high-value customer problems.
  5. It helps in discovering new business opportunities.

Cons:

  1. It is difficult to understand how it works.
  2. Extra effort is needed to make optimum use of this software.
  3. It has fewer Natural Language Processing features.

Apache Mahout:

Apache Mahout mainly aims at implementing and executing statistical and mathematical algorithms. It is primarily based on Scala, also supports Python, and is an open-source Apache project. Mahout provides a mathematically expressive Scala DSL (Domain-Specific Language).

Features:

  1. It is a distributed linear algebra framework and includes matrix and vector libraries.
  2. Common maths operations are executed using Java libraries
  3. Build scalable algorithms with an extensible framework.
  4. Implementing machine-learning techniques using this tool includes algorithms for regression, clustering, classification, and recommendation.
  5. Run it on top of Apache Hadoop with the help of the MapReduce paradigm.

Pros:

  1. It is a simple and extensible programming environment and framework to build scalable algorithms.
  2. Best suited for large datasets processing.
  3. It eases the implementation of machine learning techniques.
  4. Run-on the top of Apache Hadoop using the MapReduce paradigm.
  5. It supports multiple backend systems.
  6. It includes matrix and vector libraries.
  7. Deploy large-scale learning algorithms with relatively little code.
  8. Provides fault tolerance when processing fails.

Cons:

  1. Needs better documentation to benefit users.
  2. Several algorithms are missing, which limits developers.
  3. No enterprise support makes it less attractive for users.
  4. At times it shows sporadic performance.

Shogun:

Shogun provides various algorithms and data structures for unified machine learning methods. It is written in C++ and designed for large-scale learning, and its machine learning libraries are useful in education and research.

Features:

  1. The capacity to process huge numbers of samples is its main strength for programs that involve heavy data processing.
  2. It provides support to vector machines for regression, dimensionality reduction, clustering, and classification.
  3. It helps in implementing Hidden Markov models.
  4. Provides Linear Discriminant Analysis.
  5. Supports programming languages such as Python, Java, R, Ruby, Octave, Scala, and Lua.

Pros:

  1. It processes enormous data-sets extremely efficiently.
  2. It links to other AI and ML tools and to several libraries such as LibSVM and LibLinear.
  3. It provides interfaces for Python, Lua, Octave, Java, C#, C++, Ruby, MatLab, and R.
  4. Cost-effective implementation of all standard ML algorithms.
  5. Easily combine data presentations, algorithm classes, and general-purpose tools.

Cons:

Some may find its API difficult to use.

Scikit-learn:

It is an open-source tool for data mining and data analysis, developed in the Python programming language. Scikit-Learn's key features include clustering, classification, regression, dimensionality reduction, model selection, and preprocessing.
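A minimal example of that consistent fit/predict workflow, using the iris dataset that ships with the library:

```python
# Minimal scikit-learn workflow: load data, split it, train a classifier, score it.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                   # same .fit/.predict API as other models
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
```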

Features:

  1. A consistent and easy-to-use API that is also easily accessible.
  2. Switching between models in different contexts is easy once you learn the basic usage and syntax of Scikit-Learn for one kind of model.
  3. It helps in data mining and data analysis.
  4. It provides models and algorithms for support vector machines, random forests, gradient boosting, and k-means.
  5. It is built on NumPy, SciPy, and matplotlib.
  6. Its BSD license allows commercial use.

Pros:

  1. Documentation is readily available.
  2. Parameters for any specific algorithm can be changed while calling objects, so there is no need to build ML algorithms from scratch.
  3. Good speed while performing different benchmarks on model datasets.
  4. It easily integrates with other deep learning frameworks.

Cons:

  1. Documentation for some functions is slightly limited hence challenging for beginners.
  2. Not every implemented algorithm is present.
  3. It needs high computation power.
  4. Recent algorithms such as XGBoost, Catboost, and LightGBM are missing.
  5. Scikit-learn models can take a long time to train, and they require data in specific formats to process accurately.
  6. Customizing machine learning models is complicated.

Final Thoughts:

Twitter, Facebook, Amazon, Google, Microsoft, and many other medium and large enterprises continuously use improved development tactics. They extensively use tools for AI and ML technology in their applications.

Various tools for AI and ML can ease software development and make solutions effective at meeting customer requirements. They help build user-friendly mobile applications and other software that is potentially unique. Using Artificial Intelligence and Machine Learning, developers can create intelligent solutions that improve human life. Creating new algorithms, using computer vision and other technologies, and training AI all require skill, and developing innovative solutions calls for powerful tools.

Relationship between Big Data, Data Science and ML

Data is everywhere. In fact, the amount of digital data that exists is growing at a rapid rate, roughly doubling every two years, and changing the way we live. Reportedly, 2.5 billion GB of data was generated every day in 2012.

An article by Forbes states that data is growing faster than ever before, and that by 2020 about 1.7 MB of new data will be created every second for every person on the planet, which makes it important to know at least the basics of the field. After all, this is where our future lies.

Machine Learning, Data Science and Big Data are growing at an astronomical rate, and organizations are now searching for professionals who can sift through this goldmine of data and help them drive fast business decisions efficiently. IBM predicts that by 2020, the number of jobs for all data professionals will increase by 364,000 openings to 2,720,000.


Big Data

Big data is data, but of enormous size. Big Data is a term used to describe a collection of data that is huge in volume and yet growing exponentially with time. In short, such data is so large and complex that none of the traditional data management tools can store it or process it efficiently.

Kinds Of Big Data

1. Structured

Any data that can be stored, accessed and processed in a fixed format is termed structured data. Over time, expertise in computer science has had great success in developing techniques for working with this kind of data (where the format is well known in advance) and in deriving value from it. However, nowadays we are foreseeing problems as the size of such data grows to a huge extent, with typical sizes reaching the range of multiple zettabytes.

2. Unstructured

Any data with an unknown form or structure is classified as unstructured data. In addition to its sheer size, unstructured data poses multiple challenges in terms of processing it to derive value from it. A typical example of unstructured data is a heterogeneous data source containing a mix of simple text files, images, videos and so on. Organizations today have a wealth of data available to them, but unfortunately they do not know how to derive value from it, since this data sits in its raw, unstructured form.

3. Semi-Structured

Semi-structured data can contain both forms of data. Semi-structured data appears structured in form, but it is not actually defined by, for example, a table definition in a relational DBMS. An example of semi-structured data is data represented in an XML file.
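The short sketch below illustrates semi-structured data: an XML fragment with recognizable tags and attributes but no rigid relational schema, parsed with Python's standard library. The record fields are invented for the example.

```python
# Semi-structured data illustration: the XML has structure (tags, attributes) but
# no relational schema, and it also carries free text. Fields are made up.
import xml.etree.ElementTree as ET

xml_data = """
<patients>
  <patient id="1"><name>Ann</name><note>Complains of mild headache.</note></patient>
  <patient id="2"><name>Raj</name><note>Follow-up after knee surgery.</note></patient>
</patients>
"""

root = ET.fromstring(xml_data)
for patient in root.findall("patient"):
    # Structured parts: the id attribute and name tag. Unstructured part: the note.
    print(patient.get("id"), patient.find("name").text, "-", patient.find("note").text)
```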

Data Science

Data science is a concept used to handle big data and includes data cleansing, preparation, and analysis. A data scientist gathers data from multiple sources and applies machine learning, predictive analytics, and sentiment analysis to extract critical information from the collected data sets. Data scientists understand data from a business perspective and can provide accurate predictions and insights that can be used to drive critical business decisions.

Applications of Data Science:

  • Internet search: Search engines use data science algorithms to deliver the best results for search queries in a fraction of a second.
  • Digital advertisements: The entire digital marketing spectrum uses data science algorithms, from display banners to digital billboards. This is the main reason digital ads get a higher CTR than traditional advertisements.
  • Recommender systems: Recommender systems not only make it easy to find relevant products among the billions available but also add a great deal to the user experience. Many companies use these systems to promote their products and suggestions in line with the user's demands and the relevance of the information. The recommendations are based on the user's previous search results.

Machine Learning

Machine learning is an application of AI that gives systems the ability to automatically learn and improve from experience without being explicitly programmed. ML focuses on the development of computer programs that can access data and use it to learn for themselves.

The process of learning begins with observations or data, such as examples, direct experience, or instruction, in order to look for patterns in the data and make better decisions in the future based on the examples that we provide. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and to adjust their actions accordingly.

ML is the scientific study of the algorithms and statistical models that computer systems use to perform a specific task without explicit instructions, relying instead on patterns and inference. It is considered a subset of artificial intelligence. ML algorithms build a mathematical model based on sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to perform the task.
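As a tiny, hedged illustration of building a mathematical model from training data, the snippet below fits a straight line to a handful of made-up points with NumPy and then predicts an unseen value.

```python
# Fit a simple model to "training data" and predict an unseen value. The numbers
# are made up purely for illustration.
import numpy as np

x_train = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # training inputs
y_train = np.array([2.1, 4.0, 6.2, 7.9, 10.1])      # observed outputs (roughly 2x)

slope, intercept = np.polyfit(x_train, y_train, deg=1)   # learn the model parameters
print(f"model: y = {slope:.2f} * x + {intercept:.2f}")
print("prediction for x = 6:", slope * 6 + intercept)    # decision without explicit rules
```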

The relationship between Big Data, Machine Learning and Data Science

Since data science is a broad term covering multiple disciplines, machine learning fits within data science. ML uses various techniques, such as regression and supervised clustering. On the other hand, the data in data science may or may not come from a machine or a mechanical process. The main difference between the two is that data science, as the broader term, not only covers algorithms and statistics but also takes care of the entire data processing pipeline.

Data science can be viewed as the combination of several parent disciplines, including data analytics, software engineering, data engineering, machine learning, predictive analytics, data analysis, and more. It covers the retrieval, collection, ingestion, and transformation of large amounts of data, collectively known as big data.

Data science is responsible for bringing structure to big data, searching for compelling patterns, and helping decision-makers adopt changes effectively to suit business needs. Data analytics and machine learning are two of the many tools and processes that data science uses.

Data science, big data, and ML are among the most in-demand domains in the industry right now. A combination of the right skill sets and real-world experience can help you secure a strong career in these trending domains.

In today's world of big data, data is being updated much more frequently, often in real time. Moreover, there is much more unstructured data, such as speech, emails, tweets, blog posts, and so on. Another factor is that a lot of this data is often generated independently of the organization that needs to use it.

This is problematic, because if data is captured or generated by an organization itself, then it can control how that data is formatted and set up checks and controls to ensure that the data is accurate and complete. However, if data is being generated by outside sources, then there are no guarantees that the data is correct.

Externally sourced data is often “messy.” It requires a lot of work to clean it up and to get it into a usable format. Moreover, there may be concerns over the stability and ongoing availability of that data, which presents a business risk if it becomes part of an organization's core decision-making capability.

This means that the traditional computer systems (hardware and software) that organizations use for things like processing sales transactions, maintaining customer account records, billing and debt collection are not well suited to storing and analyzing most of the new and diverse kinds of data that are now available.

Therefore, over the last few years, a whole host of new and interesting hardware and software solutions have been developed to deal with these new kinds of data.

In particular, big data computer systems are good at:

  • Storing massive amounts of data: Traditional databases are limited in the amount of data they can hold at a reasonable cost. New ways of storing data have allowed an almost limitless expansion in cheap storage capacity.
  • Data cleaning and formatting: Diverse and messy data needs to be transformed into a standard format before it can be used for machine learning, management reporting, or other data-related tasks.
  • Processing data quickly: Big data isn't just about there being more data. It needs to be processed and analyzed quickly to be of greatest use.

The issue with traditional computer systems was not that there was any theoretical barrier to their undertaking the processing required to exploit big data, but in practice they were too slow, too cumbersome and too expensive to do so.

New data storage and processing paradigms have enabled tasks that would have taken weeks or months to process to be completed in just a few hours, and at a fraction of the cost of more traditional data processing approaches.

The way these paradigms achieve this is by allowing data and data processing to be spread across networks of cheap desktop computers. In principle, many thousands of computers can be connected together to deliver enormous computational capabilities that rival the largest supercomputers in existence.
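The toy sketch below, a MapReduce-style word count using Python's multiprocessing module, illustrates the idea of splitting data across workers and merging partial results; real platforms distribute this across many machines rather than local processes.

```python
# MapReduce-style word count as a toy illustration of distributed processing.
# Real big-data platforms spread the work over a cluster; here the "cluster" is
# a pool of local worker processes.
from multiprocessing import Pool
from collections import Counter
from functools import reduce

def map_count(chunk):                 # "map": count words in one chunk of text
    return Counter(chunk.split())

def reduce_counts(a, b):              # "reduce": merge partial counts
    a.update(b)
    return a

if __name__ == "__main__":
    chunks = [
        "big data needs fast processing",
        "fast processing needs many machines",
        "many machines hold big data",
    ]
    with Pool(processes=3) as pool:
        partial = pool.map(map_count, chunks)        # mapped in parallel
    totals = reduce(reduce_counts, partial, Counter())
    print(totals.most_common(3))
```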

ML is the critical tool that applies algorithms to all of that data, producing predictive models that can tell you something about people's behavior based on what has happened in the past.

A good way to think about the relationship between big data and ML is that the data is the raw material that feeds the ML process. The tangible benefit to a business comes from the predictive model(s) that emerges at the end of the process, not from the data used to build it.

Conclusion

ML and big data are therefore often talked about in the same breath, but it is not a one-to-one relationship. You need ML to get the best out of big data, but you do not need big data to be able to use ML effectively. If you have just a few items of data about a few hundred people, that is enough to begin building predictive models and making useful predictions.