Published:
This is a recent paper submitted to the Journal of the Audio Engineering Society. In this paper, we take word embeddings and map them directly onto EQ parameters using a fully-connected neural network. We show that a neural network can learn equaliser settings for completely unknown words, producing EQ results that are both intuitive and perceptually plausible. Further subjective evaluations are required to validate these results, but in principle this demonstrates that semantic word descriptors can be mapped directly onto audio effect parameters. In future, this approach could be extended to a number of different audio effects to create a suite of semantically driven effects.
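As a rough sketch of the idea, the toy example below pushes a word-embedding vector through a small fully-connected network to produce EQ parameters. The embedding size, layer widths, five-band output layout, and random (untrained) weights are all illustrative assumptions, not the published model.

```python
import numpy as np

def init_mlp(rng, sizes):
    """He-initialised weights for a fully-connected network (hypothetical sizes)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def predict_eq(layers, embedding):
    """Map a word embedding to raw EQ parameter predictions."""
    h = embedding
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:          # ReLU on hidden layers only
            h = np.maximum(h, 0.0)
    return h

rng = np.random.default_rng(0)
# assumed layout: 300-d embedding -> 64 hidden units -> 5 bands x (freq, gain, Q)
layers = init_mlp(rng, [300, 64, 15])
embedding = rng.standard_normal(300)     # stand-in for the embedding of a word
eq_params = predict_eq(layers, embedding)
print(eq_params.shape)                   # (15,)
```

In the real system the network would be trained on pairs of word embeddings and engineer-set EQ parameters; here the weights are random, so only the shape of the mapping is meaningful.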
Published:
This post covers the recent work of Satvik Venkatesh: the YOHO paper. In this recently published paper, we present a neural network approach for audio event detection. Transition points, or sonic objects, are identified directly through the neural network design, rather than via the traditional approach of block-based processing of audio with a classification per block. The traditional approach quantizes the classification of the signal and relies on accurate classification of every time step, which can be problematic in noisy environments. In our approach, the model regresses the transition points directly, which means the predictions are much less likely to oscillate and are generally more robust. A rigorous review of this approach in noisy environments was presented in a paper at NeurIPS. The full paper is available here.
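As a hedged illustration of the regression formulation, the snippet below decodes YOHO-style outputs, where each time bin carries a presence score plus relative start and end positions per class, into discrete events. The array shapes, names, and threshold are assumptions for the sketch, not the paper's exact implementation.

```python
import numpy as np

def decode_events(outputs, bin_duration, threshold=0.5):
    """Turn (num_bins, num_classes, 3) regression outputs into events.

    Per bin and class, the last axis holds [presence, rel_start, rel_end],
    with rel_start/rel_end in [0, 1] relative to the bin.
    Returns a list of (class_index, onset_seconds, offset_seconds).
    """
    events = []
    num_bins, num_classes, _ = outputs.shape
    for b in range(num_bins):
        for c in range(num_classes):
            presence, rel_start, rel_end = outputs[b, c]
            if presence >= threshold:
                events.append((c,
                               float((b + rel_start) * bin_duration),
                               float((b + rel_end) * bin_duration)))
    return events

# one event in the second 1-second bin, spanning 1.25 s to 1.75 s
out = np.zeros((2, 1, 3))
out[1, 0] = [0.9, 0.25, 0.75]
print(decode_events(out, bin_duration=1.0))  # [(0, 1.25, 1.75)]
```

Because onsets and offsets are regressed as continuous values within each bin, the decoded boundaries are not quantized to the block grid, which is the key advantage over per-block classification described above.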
Published:
David Moffat, Rod Selfridge, Joshua Reiss, "Sound Effect Synthesis." In the proceedings of Foundations in Sound Design for Interactive Media: A Multidisciplinary Approach, Chapter 13, 2019. https://doi.org/10.4324/9781315106342
Download here
Published:
David Moffat, "AI Music Mixing Systems." In the proceedings of Handbook of Artificial Intelligence for Music, Chapter 13, 2021. https://doi.org/10.1007/978-3-030-72116-9_13
Published:
Clive Mead, David Moffat and Gary Bromham, "A History of Distortion." In Distortion in Music Production, The Soul of Sonics. Chapter 1, Editors Gary Bromham and Austin Moore. Focal Press, June 2023. https://doi.org/10.4324/9780429356841
Published:
M. Nyssim Lefford, David Moffat and Gary Bromham, "From Intelligent Digital Assistant to Intelligent Digital Collaborator." In Innovation in Music Performance: Technology and Creativity. (pp. 278-291) Editors Jan-Olof Gullö, Russ Hepworth-Sawyer, Justin Paterson, Rob Toulson and Mark Marrington. Focal Press. March 2024 https://doi.org/10.4324/9781003118817
Published:
Use Google Scholar for full citation
Nicholas Jillings, Brecht De Man, David Moffat, Joshua Reiss, "Web Audio Evaluation Tool: A Browser-Based Listening Test Environment." In Proceedings of the Sound and Music Computing Conference, 2015.
Published:
Use Google Scholar for full citation
David Moffat, David Ronan, Joshua Reiss, "An Evaluation of Audio Feature Extraction Toolboxes." In Proceedings of the 18th International Conference on Digital Audio Effects (DAFx-15), 2015.
Published:
Use Google Scholar for full citation
David Ronan, David Moffat, Hatice Gunes, Joshua Reiss, "Automatic Subgrouping of Multitrack Audio." In Proceedings of the 18th International Conference on Digital Audio Effects (DAFx-15), 2015.
Published:
David Moffat, Joshua Reiss, "Implementation and Assessment of Joint Source Separation and Dereverberation." In Proceedings of the Audio Engineering Society Conference: 60th International Conference: DREAMS (Dereverberation and Reverberation of Audio, Music, and Speech), 2016. https://dx.doi.org/10.17743/aesconf.2016.978-1-942220-07-7
Published:
Use Google Scholar for full citation
Lucas Mengual, David Moffat, Joshua Reiss, "Modal Synthesis of Weapon Sounds." In Proceedings of the Audio Engineering Society Conference: 61st International Conference: Audio for Games, 2016.
Published:
Use Google Scholar for full citation
Nicholas Jillings, Brecht De Man, David Moffat, Joshua Reiss, "Web Audio Evaluation Tool: A framework for subjective assessment of audio." In Proceedings of the 2nd Web Audio Conference, 2016.
Published:
Use Google Scholar for full citation
Brecht De Man, Nicholas Jillings, David Moffat, Joshua Reiss, Ryan Stables, "Subjective comparison of music production practices using the Web Audio Evaluation Tool." In the 2nd AES Workshop on Intelligent Music Production, 2016.
Published:
Use Google Scholar for full citation
David Moffat, Joshua Reiss, "Perceptual evaluation of synthesised sound effects." In the proceedings of DMRN+11: Digital Music Research Network Workshop, 2016.
Published:
Use Google Scholar for full citation
Rod Selfridge, David Moffat, Joshua Reiss, "Real-Time Physical Model for Synthesis of Sword Swing Sounds." In Proceedings of the 14th International Conference on Sound and Music Computing (SMC), 2017.
Published:
Use Google Scholar for full citation
Rod Selfridge, David Moffat, Joshua Reiss, Eldad Avital, "Real-Time Physical Model for an Aeolian Harp." In Proceedings of the 24th International Congress on Sound and Vibration, 2017.
Published:
Use Google Scholar for full citation
Rod Selfridge, David Moffat, Joshua Reiss, "Physically Derived Sound Synthesis Model of a Propeller." In the proceedings of ACM Audio Mostly Conference, 2017.
Published:
Use Google Scholar for full citation
David Moffat, David Ronan, Joshua Reiss, "Unsupervised Taxonomy of Sound Effects." In Proceedings of the 20th International Conference on Digital Audio Effects (DAFx-17), 2017.
Published:
Use Google Scholar for full citation
David Moffat, Joshua Reiss, "Objective Evaluations of Synthesised Environmental Sounds." In Proceedings of the 21st International Conference on Digital Audio Effects (DAFx-18), 2018.
Published:
Use Google Scholar for full citation
David Moffat, Florian Thalmann, Mark Sandler, "Towards a Semantic Web Representation and Application of Audio Mixing Rules." In Proceedings of the 4th Workshop on Intelligent Music Production (WIMP), 2018.
Published:
Use Google Scholar for full citation
David Moffat, Mark Sandler, "Adaptive Ballistics Control of Dynamic Range Compression for Percussive Tracks." In the proceedings of Audio Engineering Society Convention 145, 2018.
Published:
Use Google Scholar for full citation
Gary Bromham, David Moffat, György Fazekas, Mathieu Barthet, Mark Sandler, "The impact of compressor ballistics on the perceived style of music." In the proceedings of Audio Engineering Society Convention 145, 2018.
Published:
Use Google Scholar for full citation
David Moffat, Mark Sandler, "Automatic Mixing Level Balancing Enhanced through Source Interference Identification." In the proceedings of Audio Engineering Society Convention 146, 2019.
Published:
Gary Bromham, David Moffat, Mathieu Barthet, Anne Danielsen, György Fazekas, "The Impact of Audio Effects Processing on the Perception of Brightness and Warmth." In the proceedings of ACM Audio Mostly Conference, 2019. https://doi.org/10.1145/3356590.3356618
Published:
Use Google Scholar for full citation
David Moffat, Mark Sandler, "An Automated Approach to the Application of Reverberation." In the proceedings of Audio Engineering Society Convention 147, 2019.
Published:
Use Google Scholar for full citation
David Moffat, Mark Sandler, "Machine Learning Multitrack Gain Mixing of Drums." In the proceedings of Audio Engineering Society Convention 147, 2019.
Published:
Use Google Scholar for full citation
Satvik Venkatesh, David Moffat, Eduardo Miranda, "RadioMe: Artificially Intelligent Radio for People with Dementia." In the proceedings of DMRN+14: Digital Music Research Network One-Day Workshop, 2019.
Published:
Use Google Scholar for full citation
Gary Bromham, David Moffat, Mathieu Barthet, György Fazekas, "Retro in Digital: Understanding the Semantics of Audio Effects." In the proceedings of DMRN+14: Digital Music Research Network One-Day Workshop, 2019.
Published:
Use Google Scholar for full citation
M. Lefford, Gary Bromham, David Moffat, "Mixing with Intelligent Mixing Systems: Evolving Practices and Lessons from Computer Assisted Design." In the proceedings of Audio Engineering Society Convention 148, 2020.
Published:
Use Google Scholar for full citation
Clive Mead, David Moffat, Eduardo Miranda, "Composing, Recording and Producing with Historical Equipment and Instrument Models." In the proceedings of Audio Engineering Society Convention 139, 2020.
Published:
Use Google Scholar for full citation
Gözel Shakeri, Stephen Brewster, Satvik Venkatesh, David Moffat, Alexis Kirke, Eduardo Miranda, Sube Banerjee, Alex Street, Jörg Fachner, Helen Odell-Miller, "RadioMe: Challenges During the Development of a Real Time Tool to Support People With Dementia." In the proceedings of Conference on Human Factors in Computing Systems (CHI), 2021.
Published:
Use Google Scholar for full citation
Satvik Venkatesh, David Moffat, Alexis Kirke, Gözel Shakeri, Stephen Brewster, Jörg Fachner, Helen Odell-Miller, Alex Street, Nicolas Farina, Sube Banerjee, Eduardo Miranda, "Artificially synthesising data for audio classification and segmentation to improve speech and music detection in radio broadcast." In the proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021.
Published:
Di Campli San Vito, P., Brewster, S., Venkatesh, S., Miranda, E., Kirke, A., Moffat, D., Banerjee, S., Street, A., Fachner, J. and Odell-Miller "RadioMe: Supporting Individuals with Dementia in Their Own Home... and Beyond?" In the proceedings of 2022 CHI Conference on Human Factors in Computing Systems (CHI 22) Workshop 32, New Orleans, LA, USA, 30 Apr 2022.
Published:
Luca Turchet, David Moffat, Ana Tajadura-Jiménez, Joshua Reiss, Tony Stockman, "What do your footsteps sound like? An investigation on interactive footstep sounds adjustment." Applied Acoustics, 2016. https://doi.org/10.1016/j.apacoust.2016.04.007
Download here
Published:
Rod Selfridge, David Moffat, Joshua Reiss, "Sound Synthesis of Objects Swinging Through Air Using Physical Models." Applied Sciences, 2017. https://doi.org/10.3390/app7111177
Download here
Published:
David Moffat, Joshua Reiss, "Perceptual Evaluation of Synthesized Sound Effects." ACM Transactions on Applied Perception (TAP), 2018. https://doi.org/10.1145/3165287
Download here
Published:
Rod Selfridge, David Moffat, Eldad Avital, Joshua Reiss, "Creating Real-Time Aeroacoustic Sound Effects Using Physically Derived Models." Journal of the Audio Engineering Society, 2018. https://doi.org/10.17743/jaes.2018.0033
Download here
Published:
David Moffat, Mark Sandler, "Approaches in Intelligent Music Production." Arts, 2019. https://doi.org/10.3390/arts8040125
Download here
Published:
Thomas Wilmering, David Moffat, Alessia Milo, Mark Sandler, "A History of Audio Effects." Applied Sciences, 2020. https://doi.org/10.3390/app10030791
Download here
Published:
Marco Martínez, Daniel Stoller, David Moffat, "A Deep Learning Approach to Intelligent Drum Mixing with the Wave-U-Net." Journal of the Audio Engineering Society, 2021. http://doi.org/10.17743/jaes.2020.0031
Download here
Published:
M. Lefford, Gary Bromham, György Fazekas, David Moffat, "Context Aware Intelligent Mixing Systems." Journal of the Audio Engineering Society, 2021. https://doi.org/10.17743/jaes.2020.0043
Download here
Published:
Satvik Venkatesh, David Moffat, Eduardo Miranda, "Investigating the Effects of Training Set Synthesis for Audio Segmentation of Radio Broadcast." Electronics, 2021. https://doi.org/10.3390/electronics10070827
Download here
Published:
Satvik Venkatesh, David Moffat, Eduardo Miranda, "You Only Hear Once: A YOLO-like Algorithm for Audio Segmentation and Sound Event Detection." Applied Sciences, 12, 3293, 2022. https://doi.org/10.3390/app12073293
Download here
Published:
David Moffat, Brecht de Man and Joshua D. Reiss, "Semantic Music Production: A Meta-study." Journal of the Audio Engineering Society, vol. 70, no. 7/8, pp. 548-564, July 2022 https://doi.org/10.17743/jaes.2022.0023
Download here
Published:
Satvik Venkatesh, David Moffat, Eduardo Miranda, "Word Embeddings for Automatic Equalization in Audio Mixing." Journal of the Audio Engineering Society, Vol 70 no.9 pp. 753-763; September 2022 https://doi.org/10.17743/jaes.2022.0047
Download here
Published:
Phil Wilkes, Mathias Disney, John Armston, Harm Bartholomeus, Lisa Bentley, Benjamin Brede, Andrew Burt, Kim Calders, Cecilia Chavana-Bryant, Daniel Clewley, Laura Duncanson, Brieanne Forbes, Sean Krisanski, Yadvinder Malhi, David Moffat, Niall Origo, Alexander Shenkin, Wanxin Yang, "TLS2trees: a scalable tree segmentation pipeline for TLS data." Methods in Ecology and Evolution. Wiley. October 2023. https://doi.org/10.1111/2041-210X.14233
Download here
Published:
Timothy J. Smyth, David Moffat, Glen A. Tarran, Shubha Sathyendranath, François Ribalet and John Casey "Determining drivers of phytoplankton carbon to chlorophyll ratio at Atlantic Basin scale." Frontiers in Marine Science. Volume 10, July 2023 https://doi.org/10.3389/fmars.2023.1191216
Download here
Published:
Aser Mata, David Moffat, Sílvia Almeida, Marko Radeta, William Jay, Nigel Mortimer, Katie Awty-Carroll, Oliver R. Thomas, Vanda Brotas, Steve Groom "Drone imagery and deep learning for mapping the density of wild Pacific oysters to manage their expansion into protected areas." Ecological Informatics. 102708, July 2024 https://doi.org/10.1016/j.ecoinf.2024.102708
Download here
Published:
Ming-Xi Yang, David Moffat, Yuanxu Dong and Jean-Raymond Bidlot "Deciphering the variability in air-sea gas transfer due to sea state and wind history." PNAS Nexus, pgae389. September 2024 https://doi.org/10.1093/pnasnexus/pgae389
Download here
Published:
The application of AI and machine learning is rapidly growing across environmental research. State-of-the-art machine learning techniques can be used to analyse and exploit environmental data, producing greater insight into the data captured and enabling better understanding of the environment.
Published:
The detrimental effects of harmful algal blooms (HABs) on the marine ecosystem, human health, and shellfish and aquaculture industry are well known. Anthropogenic activities have led to an increase in frequency, extent and magnitude of HAB activity. As a result, the detection, monitoring and forecasting of HABs are key to agencies and marine managers, allowing them to implement prevention and remediation strategies.
Published:
The application of artificial intelligence (AI) and machine learning (ML) is rapidly growing across Earth Observation (EO). State-of-the-art ML techniques can be used to analyse and exploit vast quantities of data, to produce greater insight into data, and enable better understanding of environmental issues. Over the past year, NEODAAS have been utilising their MAGEO GPU cluster to work with users on a range of projects including harmful algal bloom detection, tree monitoring, monitoring global mangroves, ocean oil-spill detection, road vehicle and ship exhaust tracking and underwater image maerl detection. This range of projects has provided opportunities and insights into common challenges in ML with EO data, and how best to overcome them.
Published:
The detrimental effects of harmful algal blooms (HABs) on the marine ecosystem, human health, and the shellfish and aquaculture industries are well known. Anthropogenic activities have led to an increase in the frequency, extent and magnitude of HAB activity. As a result, the detection, monitoring and forecasting of HABs are key for agencies and marine managers, allowing them to implement prevention and remediation strategies. However, HAB events are relatively rare and can be challenging to detect. HAB detection with satellite image data improves the coverage and efficiency of tracking HABs. Existing remote sensing-based methods frequently rely on statistical classification algorithms, and comparison with in situ cell concentration data has identified two issues: reduced accuracy for the detection of certain species, and a dependency of accuracy on the availability of satellite training data. This talk will present a deep-learning technique to improve the performance of existing models for HAB detection from ocean colour. To this end, we developed a machine learning (ML) system using a few-shot learning approach for the detection of Phaeocystis and Pseudo-nitzschia HABs across the French-English channel. We assessed the performance of the ML model against in situ cell abundance data. The ML system showed better performance than the S-3 EUROHAB model, with results for the detection of Phaeocystis blooms being particularly promising.
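To illustrate the few-shot flavour of this kind of approach, below is a minimal nearest-prototype classifier: class prototypes are the mean embeddings of a handful of labelled support examples, and a query is assigned to the nearest prototype. The embeddings, shapes, and names are hypothetical stand-ins; the actual HAB detection system is described in the talk.

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Nearest-prototype (few-shot) classification.

    support: (n, d) embedded support examples; support_labels: (n,) labels;
    query: (d,) embedded query. Returns the label of the nearest class
    prototype (the mean embedding of each class's support examples).
    """
    labels = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in labels])
    dists = np.linalg.norm(protos - query, axis=1)
    return labels[np.argmin(dists)]

# toy 2-d "embeddings": class 0 clusters near the origin, class 1 near (5, 5)
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
support_labels = np.array([0, 0, 1, 1])
print(prototype_classify(support, support_labels, np.array([4.9, 5.0])))  # 1
```

The appeal for rare events like HABs is that only a few labelled examples per class are needed to form usable prototypes, rather than the large training sets conventional classifiers require.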
Published:
Connecting Music and Environmental Science: from philosophical approaches to fluid mechanics, and how computer science research can span a range of domains. In this talk we discuss a range of different computer science domains and approaches, and demonstrate how non-traditional career paths can be both more interesting and more advantageous than traditional software development houses.
Published:
In this talk, we provide an overview of Machine Learning (ML) and its uses in Earth Observation (EO). The talk introduces the concepts of AI and ML, with the aim of explaining the differences and the fundamental approaches that can be taken in the field, with some simple examples. We then discuss a variety of potential approaches that can be taken, how ML can be used to overcome some consistent challenges in EO, and where it is not necessarily appropriate to use.
Published:
In this talk, we provide an overview of recent ML developments from PML, discussing how ML has allowed us to extract further insight and meaningful information from Earth observation data.
Published:
I was a panel member at the opening of the inaugural Machine Learning for Earth Observation Remote Sensing with the Environment Intelligence Network workshop at the University of Exeter. The panel discussed examples of how machine learning can enhance Earth observation science, emerging trends in AI and machine learning that are particularly exciting, the greatest challenges of using machine learning for Earth observation studies, the ethical and societal considerations associated with using AI and machine learning for Earth observation, and future research priorities in using AI and machine learning for remote sensing in Earth observation science.
Published:
In this talk, we provide an overview of Machine Learning (ML) and its uses in Earth Observation (EO). The talk highlights a range of different ML methodologies and how they can be applied to different remote sensing and environmental monitoring problems, demonstrating where ML can be applied to remote sensing with positive impact, with a focus on where the benefits of using ML lie.
Teaching Associate, Queen Mary University of London, Teaching, 2018
I was involved in a range of teaching courses, including
Workshop, University of Plymouth, PhD Student Supervision, 2019
I supervised three PhD students through their studies.
Workshop, University of Plymouth, Research Masters Student Supervision, 2019
I supervised Research Masters students through their studies.
Workshop, University of Plymouth, Computing Audio and Music Technology BSc., 2019
As a lecturer at the University of Plymouth, I was the Programme Leader for the Computing Audio and Music Technology BSc., teaching on the following modules
One day, 17 participants, Hybrid. Presented and delivered remotely, with colleagues supporting in person, 2021
Half day, 21 participants, Remote. Supported delivery of training for a five-day course, 2021
Half day, 350 participants, Remote. Co-led training course, 2021
Two day, 34 participants, In Person. Co-led training course and all delivery, 2022
One day, 23 participants, Hybrid. Co-led training course and all delivery, 2022
Two day, 37 participants, In Person. Co-led training course and all delivery, 2022
Five day, 9 participants, In Person. Delivered ML4EO training and led five-day training course, 2022
Half day, 11 participants, Hybrid. Attendees supported in person by colleagues; I presented and supported remotely, 2022
Two day, 16 participants, In Person. Supported additional training on atmospheric correction and EO, 2022
Two hour, 120 participants, Remote, 2023
Two hour, 200 participants, Remote, 2023
Half day, 45 participants, In Person. Supported one day of training on general EO and project-based supervision, 2023
Two day, 28 participants, In Person. Supported additional training on research techniques, atmospheric correction and EO, 2023
Two hour, 24 participants, In Person, 2024