EERS
Category
Engineering, Science / Research
About EERS 2 Live
EERS 2 Live develops enabling hardware and software for noise management, hearing protection, audio clarity, and voice intelligibility through devices that are comfortable and secure. We are experts in real-time signal processing, building applications for augmented hearing in industrial, medical, and consumer use cases.
Our biomechanical expertise allows us to map and address the complexity of the human ear canal to build novel, comfortable, usable, and manufacturable earpieces that are acoustically efficient.
The EERS team is made up of experts in audiology, acoustics, embedded software, hardware design, biomechanical engineering, industrial design, and prototype assembly—capable of working on the smallest form factors.
Job Description
Reporting to the Principal Audio R&D Scientist, the Audio R&D Scientist is responsible for carrying out acoustical measurements, conducting data collection, and participating in design processes pertaining to in-ear acoustics and audio signal processing algorithms.
The ML & Applied DSP role focuses on applying classical DSP together with modern machine-learning techniques to improve voice pickup and audio clarity for EERS’ in-ear communication headsets, particularly in high-noise environments. The successful candidate will survey relevant literature, curate and label audio datasets, prototype algorithms in Python, train and evaluate ML models (e.g., for adaptive denoising, voice activity detection, or speech enhancement), and assist in porting validated solutions to C++ for real-time execution on embedded platforms.
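To give a flavor of the Python prototyping this role involves, here is a minimal, hypothetical energy-based voice activity detector. It is a toy sketch, not an EERS algorithm: a production VAD would add smoothing, an adaptive noise floor, and spectral features.

```python
import numpy as np

def energy_vad(signal, frame_len=256, threshold_db=-30.0):
    """Flag frames whose RMS energy exceeds a fixed dB threshold.

    Toy energy-based VAD sketch: the signal is assumed to be
    normalized to [-1, 1]; frames above the threshold are "voice".
    """
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    # dB relative to full scale; epsilon avoids log of zero on silence.
    rms_db = 20.0 * np.log10(rms + 1e-12)
    return rms_db > threshold_db

# Synthetic example: half a second of silence, then a tone "burst".
fs = 16000
t = np.arange(fs // 2) / fs
sig = np.concatenate([np.zeros(fs // 2), 0.5 * np.sin(2 * np.pi * 220 * t)])
flags = energy_vad(sig)  # False for early frames, True for late ones
```

In practice a prototype like this would be evaluated against labeled field recordings before any port to C++.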
The successful candidate must be detail-oriented, organized, and possess strong signal processing knowledge. This is a dynamic role requiring collaboration to integrate multiple development streams into commercially viable products.
Responsibilities
Support the end-to-end R&D process of real-time audio processing algorithms:
• Research, design, and prototype DSP- and ML-based speech enhancement, denoising, and audio classification algorithms.
• Develop experiment protocols, help organize field trials and internal evaluation sessions, and participate in data collection (lab and field).
• Evaluate and tune audio algorithms using subjective listening and objective metrics.
• Optimize DSP algorithms and ML models to run on battery-powered, resource-constrained embedded systems.
• Summarize, document, and communicate findings, solutions, and strategies. Collaborate with cross-functional teams to inform firmware and earpiece designs.
• Conduct code reviews. Ensure research reproducibility and knowledge sharing within the Audio Team.
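The objective-evaluation step above can be illustrated with the simplest such metric, a global SNR against a clean reference. This is a sketch for illustration only; real evaluation pipelines would also use perceptual measures such as PESQ or STOI.

```python
import numpy as np

def snr_db(clean, estimate):
    """Global SNR of an estimate against a clean reference, in dB."""
    noise = clean - estimate
    return 10.0 * np.log10(np.sum(clean ** 2) / (np.sum(noise ** 2) + 1e-12))

# Simulated comparison: a noisy input vs. a (pretend) denoiser output.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
enhanced = clean + 0.1 * rng.standard_normal(clean.shape)  # stand-in for a denoiser
# snr_db(clean, enhanced) should exceed snr_db(clean, noisy)
```

Tracking such metrics across algorithm revisions is what makes tuning sessions comparable and reproducible.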
Desired Profile
Qualifications
• Master’s degree in Electrical Engineering, Computer Science, Audio/Music Technology, or a related field, or Bachelor’s degree with relevant internships and project experience in audio research.
• Approximately 5 years of relevant professional or research experience in real-time audio signal processing and ML model design and deployment.
• Strong system identification and modeling skills. Strong knowledge of DSP concepts such as adaptive filtering, Wiener filtering, multi-rate processing, spectral estimation, and time-frequency analysis, with expertise in one or more of the following:
◦ Echo or crosstalk cancellation
◦ Denoising
◦ Dynamic processing: automatic gain control, compressors, noise gates, etc.
◦ Active noise control
◦ Adaptive feedback suppression
◦ Perceptual speech/audio codecs (G.729, Opus, etc.)
◦ Adaptive equalization
◦ Bandwidth extension
• Strong proficiency in Python and working knowledge of C++; the ability to read MATLAB code is a strong asset.
• Production experience with machine-learning and optimization frameworks (e.g., scikit-learn, PyTorch, TensorFlow) applied to audio or speech on low-power devices, including dataset preparation, augmentation, model training, optimization, evaluation, and deployment.
• Awareness of the constraints of embedded audio platforms (fixed-point arithmetic, memory and compute budgets, real-time scheduling) and ability to collaborate effectively with an embedded engineering team to hand off algorithms for deployment.
• Experience with DAWs such as REAPER and Audacity.
• Hands-on experience with sound and recording equipment (microphones, audio interfaces, loudspeakers, digital mixing consoles).
• Experience building automated and efficient workflows for analyzing and processing audio data.
• Basic knowledge of statistical analysis.
• Familiarity with Git for version control.
• Ability to communicate with a bilingual team (English/French).
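As a small illustration of the adaptive-filtering and system-identification skills listed above, here is a textbook LMS sketch that learns an unknown FIR system. It is a minimal example, not production code: a real deployment would use NLMS or RLS and guard against divergence.

```python
import numpy as np

def lms_identify(x, d, n_taps=8, mu=0.05):
    """Identify an unknown FIR system with the LMS algorithm.

    x: input signal, d: desired output (the unknown system's response).
    Returns the learned tap weights.
    """
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)          # most recent inputs, newest first
    for xi, di in zip(x, d):
        buf = np.roll(buf, 1)
        buf[0] = xi
        e = di - w @ buf            # a priori estimation error
        w += mu * e * buf           # LMS weight update
    return w

rng = np.random.default_rng(1)
h_true = np.array([0.8, -0.4, 0.2])           # "unknown" system to recover
x = rng.standard_normal(20000)                # white excitation
d = np.convolve(x, h_true)[: len(x)]          # unknown-system output
w = lms_identify(x, d, n_taps=3, mu=0.01)     # converges toward h_true
```

The same update, ported to fixed-point C++, is the kind of algorithm that ends up running on the battery-powered platforms mentioned earlier.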
Nice to Have
• Experience with the Linux audio ecosystem (JACK, PipeWire, ALSA, or PulseAudio).
• Familiarity with audio plugin frameworks such as VST3, LV2, CLAP, DPF, or JUCE, and immediate-mode UI libraries (ImGui, egui) for quick prototyping.
• Knowledge of audio neural network architecture designs and optimization techniques.
• Experience with ML deployment frameworks such as TensorFlow Lite (LiteRT) or ONNX.
• Basic knowledge of psychoacoustics.
• Strong interpersonal skills, including the ability to provide and receive constructive feedback.
• Structured, detail-oriented technical approach.
• Excellent analytical and problem-solving skills.
• Strong time management and organizational skills; ability to handle multiple tasks and meet deadlines.
• Ability to work both autonomously and collaboratively within multidisciplinary teams.
• Self-motivated and focused.
If you have most, but not all, of the skills listed and are interested, we encourage you to apply.
Website
www.eers.ca
Phone
514-375-0378
Join EERS and take the next step in your career.