The next TechTake will take place on 26 October at 12:00pm AEDT. You can register for upcoming TechTake sessions here: https://anu.zoom.us/webinar/register/WN_sn_d0jdOQ9mIr7EX-8fBCA
NCI Presents: TechTake is an exciting opportunity for international computational and data science leaders to discuss and demonstrate how technology supports research.
Taking place on the last Tuesday of each month, this event will run online in order to reach diverse audiences across the globe and from all fields.
TechTake is designed to prompt engaging and in-depth conversations about both the current state and potential futures of technology to broaden and deepen understanding.
Dr Raffaella Demichelis is a Senior Lecturer at Curtin University. She currently holds an ARC Future Fellowship and leads an emerging team conducting research in computational materials chemistry and geochemistry. Her most significant achievements are in the fields of mineral structure and crystal growth. She led landmark research supporting a new and more comprehensive theory of how minerals form from aqueous solutions, and provided models that can explain the structure of debated mineral phases.
Raffaella also contributes to the development of software and potential models that are used in laboratories conducting research in chemistry, materials science and earth science worldwide.
She spreads her enthusiasm for science through outreach and community-building activities, and actively advocates for a mentally safe, flexible and inclusive research environment that allows for more sustainable and diverse career paths.
She has received national and international recognition for her research and community engagement, including the 2015 Caglioti Prize for Early Career Chemists (Italian Academy of Science) and a 2020 WA Young Tall Poppy Award.
Chemistry is small and fast. The atomic details underlying a specific chemical process, or making one material more stable than another, may not be easy to access with experimental techniques.
Computational methods based on quantum and classical physics are now able to address chemical complexity and provide atomic-level information that is otherwise inaccessible. They are powerful tools for interpreting experiments, providing quantitative support to qualitative observations, making predictions, and designing new materials and chemical processes.
This talk will provide a general introduction to computational materials science for a broad audience: where the field came from, where we are now, and where we want to go. Specific examples and applications will be drawn from the work of Dr Demichelis' group, which uses Australia's latest supercomputing facilities at NCI and the Pawsey Supercomputing Centre to investigate mineral structure and crystal growth.
Click here to watch the recording
Louvere Walker-Hannon is a MathWorks Application Engineering Team Leader who provides direction on technical workflows for various applications and leads a team of Application Engineers. She has a bachelor's degree in Biomedical Engineering and a master's degree in Geographic Information Technology with a specialization in Remote Sensing. Drawing on her fields of study, she assists with technical workflows in the following areas: image processing, computer vision, machine learning, deep learning, geospatial analysis, parallel computing on clusters, deployment to cloud and edge devices, and data analytics. Louvere has worked in three different engineering roles over her 20-plus-year career at MathWorks.
Heather Gorr holds a Ph.D. in Materials Science Engineering from the University of Pittsburgh and a Master's and Bachelor of Science in Physics from Penn State University. Since 2013, she has supported MATLAB users in the areas of mathematics, data science, deep learning, and application deployment. She is currently a Senior Product Marketing Manager for MATLAB, leading technical marketing content in data science, AI, deployment, and advanced MATLAB and Python programming. Prior to joining MathWorks, she was a Research Fellow focused on machine learning for the prediction of fluid concentrations.
Peter Brady is an application engineer with MathWorks, striving to accelerate customers' engineering and scientific computing workflows across maths, statistics, and machine learning. Prior to joining MathWorks, Peter worked in computational fluid dynamics and thermodynamics as well as high-performance computing for a number of defence and civil contractors and several universities. He has worked in fields as diverse as cavitation, wave/turbulence interactions, rainfall and runoff, nano-fluidics, HVAC and natural convection, including scale-out cloud simulation techniques. Peter holds a Doctorate in free-surface computational fluid dynamics and a Bachelor of Civil Engineering, both from the University of Technology Sydney.
Bradley Horton is a member of the Academic Customer Success team at MathWorks, helping faculty members better utilize MATLAB and Simulink for education and research. Bradley has supported and consulted for clients on projects in process control engineering, power systems simulation, military operations research, and earthquake impact modelling. Before joining MathWorks, Brad spent five years as a systems engineer with the Defence Science & Technology Organisation (DSTO), working as an operations research analyst. Bradley holds a B.Eng. in Mechanical Engineering and a B.Sc. in Applied Mathematics.
Learn how engineers and scientists working on AI projects decide when to move some, or all, of their development to clusters and clouds. You'll also learn about MathWorks' cloud options and the advantages they offer across the development stages of an AI-based system, illustrated through a case study of training, tuning, and deploying a semantic segmentation model.
31 August 2021 at 11:00am AEST
As a researcher in Professor Andrea Morello's group, Benjamin and his UNSW peers are setting out to design and build a large-scale quantum computer: a device that uses quantum mechanics to solve otherwise intractable computational problems. The group was the first to demonstrate a qubit, the smallest building block of a quantum computer, in silicon, paving the way for the CMOS-foundry-compatible approaches pursued today.
Last year, the tech giant Google made headlines by showing that a quantum computer can solve a problem intractable for even the fastest supercomputers. Research in Australia today focuses on building qubits in silicon to pave the way for a foundry-compatible approach. While silicon is the preferred material for scaling, its crystal structure plays a crucial role in the processor layout. A successful design requires a good understanding of the system and accurate models.
Simulating quantum systems is a formidable task, one of the reasons Richard Feynman originally saw the need to build a quantum computer. Many approaches are computationally heavy and cannot produce the large data sets needed to find optimal qubit configurations, while others have relied on unjustified approximations. In this talk, I will show that our multi-valley effective mass theory can efficiently model donor-based qubits in silicon with high accuracy. Full configuration interaction simulations on a compact but precise basis set allowed us to demonstrate the scalability of this qubit implementation and provide invaluable data for future chip designs.
Click here to review the recording of Ben's presentation.
Click here to view slides from Ben's presentation.
Michael has been active in virtualization and infrastructure for well over a decade, as a partner, a vendor, and always as an advocate for improved business outcomes. Whether it's with a mining, government or defence customer, or an architectural firm looking to solve challenges or increase productivity, he brings a wealth of knowledge and experience to his customers. He is also the Intelligent Video Analytics Solutions Architect for deep learning-based solutions at NVIDIA ANZ. Michael holds multiple technical certifications, is an NVIDIA Certified Deep Learning Instructor, and has a Master's in IT Security and degrees in Psychology and Philosophy.
Gabriel Noaje has more than 14 years of experience in accelerator technologies and parallel computing. He gained a deep understanding of users' requirements for manycore architectures through roles in both the enterprise and public sectors. Prior to joining NVIDIA, he was a Senior Solutions Architect with SGI and HPE, developing solutions for HPC and deep learning customers in APAC. Previously, he was a Senior Computational Scientist at the A*STAR Computational Resource Centre (A*CRC) in Singapore, supporting users in deploying their applications on GPUs and large HPC systems. Gabriel was also involved in the commissioning of the first petaflop supercomputer in Singapore for the National Supercomputing Centre (NSCC), providing his expertise at all stages from specification drafting to production. Gabriel holds a PhD in Computer Science from the University of Reims Champagne-Ardenne, France, and a BSc and MSc in Computer Science from the Polytechnic University of Bucharest, Romania.
Michael and Gabriel will cover the latest announcements and updates from NVIDIA's recent virtual GTC event, providing a high-level view of the latest innovations in GPU computing, from the infrastructure level through to CUDA and NVIDIA's AI developments.
Click here to view the recording of NVIDIA's presentation.
Click here to view the slides from NVIDIA's presentation.
Peter is the Coordinator of Machine Learning and AI activities at the European Centre for Medium Range Weather Forecasts (ECMWF), and is a Royal Society University Research Fellow in the Research Department of ECMWF. He has professional interests in high-resolution weather and climate simulations, high-performance computing for weather and climate models, and machine learning for weather and climate predictions.
The next decade of Machine Learning at ECMWF – an introduction to the machine learning roadmap
Click here to view the slides from Dr Peter Dueben's presentation.
Click here to view the full recording of Dr Peter Dueben's presentation.
Richard is Chair of Computational Mechanics in the Department of Mechanical Engineering at the University of Melbourne. His main interest is in high-fidelity simulation of turbulent flows and the associated noise generation in order to gain physical understanding of flow and noise mechanisms. He also uses the data to help assess and improve low-order models that can be employed in an industrial context, in particular by pursuing novel machine-learning approaches.
He received his PhD in Aerospace Engineering from the University of Arizona in 2004. Prior to joining the University of Melbourne, he was a Professor of Fluid Dynamics and Aeroacoustics in the Aerodynamics and Flight Mechanics research group at the University of Southampton and headed the UK Turbulence Consortium (www.turbulence.ac.uk), coordinating the work packages for compressible flows and for flow visualisations and databases. He was awarded a veski Innovation Fellowship in July 2015 for the project "Impacting Industry by enabling a step-change in simulation fidelity for flow and noise problems", and holds an Australian Research Council Future Fellowship for 2020-2023.
How can high-performance computing help design more efficient jet engines: physical insight and machine learning
CFD predictions are becoming increasingly important in the design of aircraft engines because correlation-based methods are unable to further improve efficiency and laboratory experiments with the required fidelity are prohibitively expensive and often cannot provide the details needed.
This presentation will show how physical insight relevant to designers can be extracted from high-fidelity simulations enabled by the latest HPC systems. It will also discuss how the HPC-generated data can be used to develop better predictive models using a machine-learning approach that is based on gene-expression programming. It will be shown that the machine-learnt models outperform traditional models both for the cases they were trained on and for cases not seen before.
Click here to view the slides from Prof Sandberg's presentation.
If you would like to present your work as part of TechTake, get in touch with the NCI Training team: email@example.com