alexander-soare/README.md

I started my career in experimental quantum computing, with a Master's degree and a few publications in noise-tolerant quantum control of trapped-ion qubits. I then spent 5 years trying my hand at the world of business and startups, part of which involved launching the international expansion of a Spanish logistics startup in the UK. It was a blast, but not enough to keep me from being drawn back to my technical/analytical roots. In 2019 I watched the AlphaGo documentary, trained some reinforcement learning agents in OpenAI Gym, and trained an MNIST classifier. The ML bug got hold of me, and I haven't looked back since.

Projects

In between running my own machine learning consultancy and heading up perception for Dextrous Robotics (which unfortunately had to wind down in late 2023), I love to explore and contribute to the ML ecosystem. See below for some highlights. For a summary of my professional work please see my LinkedIn.

Consistency Policy

I distilled Diffusion Policy into consistency models. This was part of a push to understand diffusion models in depth.

Feature extraction for TorchVision models

This contribution leverages PyTorch's symbolic tracing (FX) toolkit to provide a compact, intuitive API for extracting hidden-layer activations from TorchVision models.

I authored a related post on the official PyTorch blog.

I also made a YouTube tutorial.

Contributions to timm

timm is the go-to library for SOTA vision backbones in PyTorch, and I've made a number of contributions to it.

Educational content on YouTube

I believe in teaching to learn, so I occasionally record a screencast of myself explaining an ML concept. Check out my YouTube channel. This video on understanding attention in transformers has been particularly popular.
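As a taste of that video's topic, here is a minimal, generic scaled dot-product attention sketch (my own illustration, not code from the video):

```python
import torch

def attention(q, k, v):
    # Scaled dot-product attention: softmax(q @ k^T / sqrt(d)) @ v
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    return scores.softmax(dim=-1) @ v

torch.manual_seed(0)
q = k = v = torch.randn(1, 4, 8)  # self-attention: (batch, seq_len, dim)
out = attention(q, k, v)          # same shape as v: (1, 4, 8)
```

Each output position is a convex combination of the value vectors, weighted by how strongly its query matches each key.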

Kaggle competitions

Kaggle was a great resource for spinning up my ML knowledge.

In the Bristol Myers Squibb - Molecular Translation competition I landed 27th place (9th among solo competitors). In an accompanying GIF, I visualize one of the attention maps of my vision transformer + text decoder as it transcribes a molecule's International Chemical Identifier (InChI).

I also took 30th place in Kaggle's Global Wheat Detection competition.

Interactive web demo of GANSpace

After taking a short introductory course on Angular, I flexed my new skills by building a web-based front-end that lets users flexibly tune attributes of a GAN's output. At the time this was mind-blowing stuff for the general population and computer vision practitioners alike (can you believe that was just 2019!).

A tutorial on the Variational Quantum Eigensolver

Just before jumping into ML, I took a quick detour back to quantum computing to see what I'd missed. I'm a strong believer in teaching to learn, so I made a tutorial on the Variational Quantum Eigensolver (VQE). Check it out here.
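For context, VQE rests on the standard variational principle (a generic statement, not tied to the tutorial's notation): any parameterized trial state can only overestimate the ground-state energy,

```latex
E(\theta) = \langle \psi(\theta) \,|\, \hat{H} \,|\, \psi(\theta) \rangle \;\ge\; E_0
```

so a classical optimizer tunes $\theta$ to minimize $E(\theta)$, while a quantum device prepares $|\psi(\theta)\rangle$ and estimates the expectation value.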
