
Revolutionizing Aquaculture with Machine Learning: real-world applications and success stories

Satellite view of Breizh/Brittany

Article by Mikael Dautrey (ISITIX) and Kilian Delorme (KAMAHU), originally published on www.isitix.com/en/blog

Machine learning is transforming aquaculture with groundbreaking applications in image classification, image segmentation, video analysis, and object detection.

In this post, we explore four innovative projects that demonstrate real results in the field:

  1. Image classification using ultrasound – to detect frog maturity, which is essential for breeding programs, improving breeding cycles and increasing yield.
  2. Image segmentation – to identify malformations in fish, aiding in early health interventions.
  3. Video analysis and scene understanding – to provide continuous monitoring of fish farms, detecting anomalies in behavior and environmental conditions.
  4. Object detection algorithms – to identify fish cages from satellite images, optimizing farm management and resource allocation.

Image classification: detecting frog maturity using ultrasound

In a frog hatchery, optimizing tadpole production requires estimating frog maturity in order to separate the spawning females. To do this, the breeder performs ultrasound scans on the frogs. Initially, this work required a highly qualified biologist to decide on frog maturity. Agnes Joly, who runs the AQUAPRIMEUR hatchery, wanted anyone on her farm to be able to carry out the operation, through the use of an automatic classification system.

Designing a device to automatically classify ultrasound scans in an industrial environment is quite a challenge.

Key challenges and solutions:

  • Image collection and validation: we validated the feasibility of using machine learning to classify ultrasound images, despite having only a few hundred classified scans initially.
  • Ultrasound scanner interface: we interfaced with the ultrasound scanner to automate the task, using auxiliary electronics to read and process the video signal.
  • Ergonomic solutions: in the environment of an aquaculture farm, we implemented an audio interface, enabling hands-free operation, and complemented it with a touch screen.
  • Reliability and performance: we tackled issues like peripheral interfacing, noise reduction, and power supply.

Collecting images and designing an ML model, the easy part of the project

When we began this project in 2020, we faced significant challenges. Traditional models were hard to optimize, and deep learning required thousands of images, while we had only a few hundred classified ultrasound scans. Today, advances in ML tools and our growing expertise have made this task routine. It remains an important part of the project, especially in collaborating with experts, whose maturity classifications can vary with their subjective judgment.
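To make this concrete, one common way to work from only a few hundred labelled images is transfer learning: fine-tuning a classifier pretrained on a large generic dataset rather than training from scratch. The sketch below illustrates the idea; the two-class setup (mature/immature), the folder layout, and the hyperparameters are illustrative assumptions, not the production system.

```python
# Fine-tune a pretrained CNN on a few hundred labelled ultrasound scans.
# The folder layout (scans/mature, scans/immature) and hyperparameters
# are illustrative assumptions.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # scans are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("scans", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: mature / immature

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone and training only the small final layer keeps the number of learned parameters proportionate to a dataset of a few hundred images.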

Interfacing with the ultrasound scanner, a matter of electronics and DIY

So far, we’ve only done this for one scanner model, but we’re confident in our ability to interface with any ultrasound scanner on the market. After testing several methods, we decided to read the video signal directly and process it via an auxiliary electronics box. There’s still plenty of room for improvement, but it works.
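For illustration, here is a minimal sketch of the frame-grabbing side, assuming the electronics box exposes the scanner's video output as a standard capture device (device index 0 is an assumption):

```python
# Grab frames from the ultrasound scanner's video output, exposed by the
# auxiliary electronics box as a standard capture device (index 0 assumed).
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("capture device not found")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ...hand the frame to the classification model here...
    cv2.imshow("ultrasound", gray)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press q to stop
        break

cap.release()
cv2.destroyAllWindows()
```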

Finding ergonomic solutions to facilitate the operator’s work, still a work in progress

In an aquaculture farm, the environment is obviously challenging; workers often need both hands to carry loads or handle animals. Solutions that work in an office, like a mouse and keyboard, are no longer suitable. We opted for an audio interface complemented by a touch screen.

The audio interface works in both directions: the user wears a headset with a microphone and earphones, so they can both speak to the system and listen to it.

The touch screen provides visual feedback, letting the operator validate commands sent to the system, read information, and perform certain actions too complex to handle via audio alone. This was one of the most difficult parts of the project, and we are still working to improve the solution.
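As a rough illustration of the hands-free loop, here is a sketch using the SpeechRecognition and pyttsx3 libraries as stand-ins for whatever speech stack a production system would use; the command vocabulary is hypothetical.

```python
# Minimal hands-free command loop: listen for a spoken command, confirm it
# back over the operator's earphones. Libraries and vocabulary are stand-ins.
import speech_recognition as sr
import pyttsx3

COMMANDS = {"scan", "mature", "immature", "repeat"}  # hypothetical vocabulary

recognizer = sr.Recognizer()
tts = pyttsx3.init()

def say(text: str) -> None:
    """Speak a confirmation into the operator's earphones."""
    tts.say(text)
    tts.runAndWait()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)      # farm environments are noisy
    say("Ready.")
    while True:
        audio = recognizer.listen(source)
        try:
            word = recognizer.recognize_google(audio).lower().strip()
        except sr.UnknownValueError:
            say("Please repeat.")
            continue
        if word in COMMANDS:
            say(f"Command {word} confirmed.")
            # ...dispatch to the classification pipeline here...
        else:
            say(f"Unknown command {word}.")
```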

Addressing various reliability and performance issues, the 80/20 law

A complex project like this brings its share of time-consuming little problems, such as headset peripheral interfacing, noise reduction, power supply, shock and humidity protection, model and response time optimization, and application architecture. This has a direct impact on development costs, and at some point you have to ask yourself whether it’s worth it.

A long-term investment, beyond bottom-line improvement

Automation and work simplification were the primary drivers for this project. The system will eliminate the access barrier to using the ultrasound scanner, ensuring consistent use and standardized results, regardless of who performs the operations.


Image segmentation: detecting malformations

A tedious but essential task

Research laboratories studying fish deformities need to measure various dimensions of fingerlings, such as head size, body size, camber, yolk sac appearance and eye shape. The specific measurements often depend on the research focus or the requirements of a particular publication. Traditionally, this involves taking photos and measuring dimensions in pixels against a scale shown in the photo. This method is slow, tedious, and prone to inaccuracies.

A model for this task

A research laboratory that had supervised a doctoral thesis on the direct and transgenerational ecotoxicity of glyphosate on the health of rainbow trout approached us to test the feasibility of using machine learning to measure fish deformities. Using a sample of around 100 measured images, we developed a model tailored to this task.

Annotations reloaded

The images had been annotated by drawing lines with an image editor and then recording the values in an Excel file. Although the information was available, it couldn't be easily integrated into a machine learning pipeline. We therefore had to redo all the annotations with a dedicated tool. For this project, we used VGG VIA, a lightweight image annotation tool developed by the Visual Geometry Group (VGG) at the University of Oxford.
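For readers unfamiliar with VIA, here is a minimal sketch of how its JSON export can be loaded into a training pipeline. It assumes the flat per-image export with polygon or polyline regions and a hypothetical "part" region attribute; a real project may use a different schema.

```python
# Load a VGG VIA JSON export into (filename, label, points) records.
# The "part" region attribute is a hypothetical label key; adjust to
# your project's annotation schema.
import json

with open("via_export.json") as f:
    via = json.load(f)

annotations = []
for entry in via.values():
    for region in entry["regions"]:
        shape = region["shape_attributes"]
        points = list(zip(shape["all_points_x"], shape["all_points_y"]))
        label = region["region_attributes"].get("part", "unknown")
        annotations.append((entry["filename"], label, points))

print(f"{len(annotations)} annotated regions loaded")
```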

From pixels to geometry

We then trained a segmentation model that allowed us to easily measure the main characteristics monitored by the researchers. Next, we had to convert the pixel measurements into physical units (mm). This task was complicated by the absence of any documentation about the device used to take the photos, including the camera optics and mounting, making it difficult to accurately estimate the geometric deformations of the images.
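The pixel-to-mm conversion itself is simple when a scale bar of known length is visible in the photo, as the sketch below shows with illustrative coordinates; the hard part, as noted above, is correcting for lens distortion when the optics are unknown.

```python
# Convert pixel measurements to millimetres using a scale bar of known
# physical length visible in the photo. All coordinates are illustrative,
# and this assumes lens distortion is negligible or already corrected.
import math

def pixel_distance(p1, p2):
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

scale_px = pixel_distance((120, 840), (472, 842))  # endpoints of a 10 mm bar
mm_per_px = 10.0 / scale_px

# A head-length measurement returned by the segmentation model, in pixels:
head_len_mm = pixel_distance((310, 220), (395, 231)) * mm_per_px
print(f"head length ≈ {head_len_mm:.2f} mm")
```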

A well-functioning model, but with a limited scope

The model worked well, but it was limited to the specific research topic for which it had been designed; each research topic would require a different model. We don't see a cost-effective opportunity in creating customized deep learning models for biology researchers, as each research topic comes with its own set of measurements and therefore a dedicated model that will be used only once, on a few thousand images.

It would be more advantageous to develop a software environment dedicated to customizing ML models for researchers. This environment would provide a standardized photography tool, standardize the image annotation process and automatically produce customized models tailored to their specific research topics.


Video analysis and scene understanding: monitoring fish farms

Many aquaculture start-up projects use underwater cameras to monitor fish, providing information on fish quantity, size, and health status. While this idea is excellent in theory, operating underwater cameras is difficult due to water turbidity, the need for frequent glass cleaning, and the high cost of watertight housing. To address these issues, we tested a different solution by positioning aerial cameras above the farm’s tanks. Since birds excel at spotting fish from above, we thought, why not us?

Positioning cameras

A fish farm with a small but efficient facility on the banks of a river in Brittany wanted to monitor its tanks more effectively. Our test site breeds trout in moderately sized raceways, approximately 50 meters by 5 meters, using a covered recirculating aquaculture system (RAS). We strategically positioned cameras around the tank, taking into account the constraints of the location, available height, power supply, and network. This setup allowed us to collect data on a NAS, which we then processed.

Processing videos: more data, but not much more information, except the time dimension

Processing videos brought more data, but the information varies slowly from frame to frame. When we started this project in 2021, we had primarily been working with still images, and we were initially overwhelmed by the volume of data produced by just a few cameras. Gradually, we developed our tools and methods to better handle the specifics of video data: a large quantity of similar images over temporalities ranging from seconds to minutes, hours, days, and months.
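A first practical step when frames are nearly identical is temporal subsampling: keeping one frame per interval rather than every frame. A minimal sketch with OpenCV; the file name and interval are illustrative assumptions to tune per use case.

```python
# Keep one frame per sampling interval from a recorded video, since the
# scene changes slowly and consecutive frames are nearly identical.
import cv2

def sample_frames(path: str, every_s: float = 60.0):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0       # fall back if FPS unreported
    step = max(1, int(fps * every_s))
    kept, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            kept.append(frame)
        index += 1
    cap.release()
    return kept

frames = sample_frames("tank_cam01.mp4")          # one frame per minute
print(f"kept {len(frames)} frames")
```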

Going further: a territory to explore

Guided by Robert Le Coat and Amaury Guet, we now have a solid understanding of the production process. We can detect simple activities such as movement around the tanks, feeding, and aerator activation, as sketched below. However, there is still much work to be done in optimizing camera positions, selecting the right optics, and defining what truly constitutes valuable information in images of an aquaculture farm. This ongoing exploration promises to refine our approach and enhance the effectiveness of our monitoring solutions.
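As an illustration of the simplest of these detections, movement, here is a sketch that scores activity by differencing consecutive frames from the sampling sketch above; the blur size, pixel threshold, and 2% trigger are assumptions to tune on site, not our production parameters.

```python
# Score activity between two frames by the fraction of pixels that changed.
# `frames` comes from the sampling sketch above; parameters are assumptions.
import cv2
import numpy as np

def activity_score(prev, curr, blur=21):
    a = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
    b = cv2.GaussianBlur(cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
    changed = cv2.threshold(cv2.absdiff(a, b), 25, 255, cv2.THRESH_BINARY)[1]
    return np.count_nonzero(changed) / changed.size

events = [i for i in range(1, len(frames))
          if activity_score(frames[i - 1], frames[i]) > 0.02]  # 2% of pixels moved
print(f"activity detected around frames: {events}")
```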


Object detection: detecting cages from satellite images

Satellite imagery offers a vast potential for monitoring aquaculture installations globally.

The globe at your fingertips

We were put in touch with a satellite data operator who was looking for new fields of application and new opportunities to resell its data. Satellites enable us to work on an unimaginable scale, covering the surface of the globe in just a few clicks. 

A very simple challenge: detecting fish cages

To test the potential of satellite data, we started with a very simple problem: detecting fish cages installed by offshore fish farmers around the world. This initial challenge allowed us to explore the capabilities of satellite imagery in identifying and monitoring aquaculture installations on a global scale.
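In practice, a large satellite scene must be cut into small tiles before a detector can process it. A minimal sketch using the 240×240 tile size mentioned later in this article; the scene array is a placeholder for real satellite data.

```python
# Cut a large scene into fixed-size tiles, keeping each tile's pixel offset
# so detections can be mapped back to scene coordinates.
import numpy as np

def tile_scene(scene: np.ndarray, size: int = 240):
    h, w = scene.shape[:2]
    return [((y, x), scene[y:y + size, x:x + size])
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

scene = np.zeros((4800, 7200, 3), dtype=np.uint8)  # placeholder scene
tiles = tile_scene(scene)
print(f"{len(tiles)} tiles of 240x240")            # 20 x 30 = 600 tiles
```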

Satellite Data Sets: a complex field

Each satellite operator manages its own constellations, comprising specific types of satellites equipped with sensors that differ in capabilities, measured wavelengths, measurement precision, and spatial resolution. Additionally, a constellation's route, and hence the area of the globe it covers, is determined by its orbit; these routes and coverage areas can also vary with the seasons. Furthermore, some measurements may or may not be available depending on the sun's position and cloud cover when the satellite passes over the area.

Starting with products of Level 3 or higher

The raw data thus produced, representing thousands of small squares covering the surface of the globe at close but different times, are then reprocessed to produce the satellite images we consult, such as those in Google Maps, which are in fact mosaics of many images assembled to reconstruct a global view of the Earth.

Satellite operators sell products at five main levels, numbered 0 through 4, some with sub-levels.

We’ve started our development work with products of Level 3 or higher.

Level 0  – Raw telemetry data; received signals without any correction
Level 1  – Data corrected for instrumental errors
Level 1A – Radiometrically corrected data
Level 1B – Georeferenced and calibrated data
Level 2  – Data derived from Level 1 with specific products
Level 3A – Data synthesized on regular grids (mosaics); used for global or regional analyses
Level 4  – Modeled or assimilated data; a combination of observational data and numerical models

GeoJSON and image annotation

We first used GeoJSON data from the farms we wanted to track. It turned out that reconciling GeoJSON and satellite data was complex.

Reconciling GeoJSON data with satellite imagery presented challenges, particularly due to differences in projections and coordinate systems. GeoJSON typically uses the WGS 84 coordinate reference system (CRS) with geographic coordinates (latitude and longitude). Satellite imagery might use different projections (e.g., UTM, Mercator) or datums, leading to discrepancies.
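Reconciling them typically means reprojecting one into the other's coordinate reference system. A minimal sketch with pyproj, assuming the scene is delivered in UTM zone 30N (EPSG:32630), which covers Brittany; the coordinates are illustrative.

```python
# Reproject GeoJSON farm coordinates (WGS 84 longitude/latitude) into the
# satellite scene's projected CRS. UTM zone 30N and the cage coordinates
# are illustrative assumptions.
from pyproj import Transformer

to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32630", always_xy=True)

farm_lonlat = [(-3.98, 48.73), (-3.97, 48.73)]     # illustrative cage corners
farm_utm = [to_utm.transform(lon, lat) for lon, lat in farm_lonlat]
print(farm_utm)                                    # easting/northing in metres
```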

As we didn’t have much time, we reverted to the classic image annotation approach that we had already mastered.

Conclusion: model training

Training a model is also a real challenge. To cover a coastal region such as Brittany, several tens of thousands of 240×240 images are required, and only about 1 in 10,000 of them contains an aquaculture farm. The model must therefore be adapted to this severe class imbalance.
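One standard way to adapt training to such imbalance is to oversample the rare positive tiles. A minimal sketch with PyTorch's WeightedRandomSampler, not necessarily the adaptation we used; the label tensor and the commented-out dataset are placeholders.

```python
# With roughly 1 positive tile per 10,000, a uniformly sampled batch almost
# never contains a cage. Oversample positives with a weighted sampler.
import torch
from torch.utils.data import WeightedRandomSampler

labels = torch.zeros(50_000, dtype=torch.long)     # 0 = background tile
labels[::10_000] = 1                               # 1 = tile containing a cage

class_counts = torch.bincount(labels).float()
weights = 1.0 / class_counts[labels]               # rare class gets large weight
sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# loader = DataLoader(tile_dataset, batch_size=64, sampler=sampler)
```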

We are still fine-tuning the model and hope to produce other significant results soon.

For more insights and updates on our innovative projects, connect with us on LinkedIn:
Mikael Dautrey, Kilian Delorme

The AQUASCAN project led by ISITIX and KAMAHU initially received partial funding from the Brittany Region. KAMAHU received partial funding from the European Space Agency BIC Nord France for a project using satellite imagery in its farm management software.