**PhD dissertation defense (Jul 2020)**

I have defended my PhD dissertation; a PDF can be found here. It was a great pleasure to have Stephen Wright, Tong Zhang, Raúl Tempone, and Bernard Ghanem on my committee. I owe big thanks to my PhD advisor, Peter Richtárik, for his great support during those 3 years and before!

**Paper accepted to IEEE Transactions on Signal Processing (Jul 2020)**

Title: *Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit*, joint work with Aritra Dutta, Jingwei Liang and Peter Richtárik

**SIAM MDS session and talk (Jun 2020)**

With Konstantin and Peter, we organized a session at the SIAM MDS conference titled Optimization for Deep Learning. I gave a talk on the following topic: The Real Reason Why LARS Works and More (preliminary results)

**Federated Learning One World (FLOW) seminar (Jun 2020)**

I gave a talk at the FLOW seminar on the following topic: Federated Learning of a Mixture of Global and Local Models (video, slides)

**2 Papers accepted to ICML 2020 (Jun 2020)**

Title: Stochastic Subspace Cubic Newton Method, joint work with Nikita Doikov, Peter Richtárik and Yurii Nesterov

Title: Variance Reduced Coordinate Descent with Acceleration: New Method with a Surprising Application to Finite-Sum Problems, joint work with Dmitry Kovalev, Peter Richtárik

**Paper accepted to UAI 2020 (May 2020)**

Title: 99% of Worker-Master Communication in Distributed Optimization is Not Needed, joint work with Konstantin Mishchenko, Peter Richtárik

**Research Assistant Professor, TTIC (April 2020)**

I have accepted an offer for a Research Assistant Professor (RAP) position at the Toyota Technological Institute at Chicago (TTIC). Most likely, I will start at some point in Fall 2020.

**Machine Learning Meetup (MLMU) Košice (April 2020)**

I gave a talk at the Machine Learning Meetup in Košice, Slovakia on the following topic: Optimization for ML: From Theory to Practice and Back (video, slides)

**Paper out (Feb 2020)**

Title: Stochastic Subspace Cubic Newton Method, joint work with Nikita Doikov, Peter Richtárik and Yurii Nesterov

**Paper out (Feb 2020)**

Title: Federated Learning of a Mixture of Global and Local Models, joint work with Peter Richtárik

**Paper out (Feb 2020)**

Title: Variance Reduced Coordinate Descent with Acceleration: New Method with a Surprising Application to Finite-Sum Problems, joint work with Dmitry Kovalev, Peter Richtárik

**Paper accepted to AISTATS 2020 (January 2020)**

Title: A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent, joint work with Eduard Gorbunov, Peter Richtárik.

**Visit to Alexandre D’Aspremont, INRIA (SIERRA), Paris (6. – 10. January 2020)**

I have visited Alex D’Aspremont at INRIA. I gave a seminar talk at the SIERRA seminar on the following paper: One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods (slides).

**Visit to Martin Jaggi, EPFL (15. – 20. December 2019)**

I have visited Martin Jaggi at EPFL. I gave a seminar talk at the MLO seminar on the following paper: One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods (slides).

**KAUST NeurIPS meetup (10. – 12. December 2019)**

I have attended the KAUST NeurIPS meetup and gave a talk on the following topic: Better optimization for deep learning (ongoing work).

**Visit to Yurii Nesterov (11. – 15. November 2019)**

I have visited Yurii Nesterov at UCLouvain. I gave a seminar talk at the Center for Operations Research and Econometrics (CORE) seminar on the following paper: One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods (slides).

**Google Research (8. July – 4. October 2019)**

I did an internship at Google Research (New York), trying to speed up neural network training. I had a chance to collaborate with Sashank Reddi, Srinadh Bhojanapalli and Sanjiv Kumar.

**Berkeley (26. May – 18. June 2019)**

I am attending the Deep Learning Boot Camp at the Simons Institute, as well as visiting the group of Prof. Michael Mahoney.

**3 papers out (May 2019)**

Title: One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods, joint work with Peter Richtárik

Title: A Unified Theory of SGD: Variance Reduction, Sampling, Quantization and Coordinate Descent, joint work with Eduard Gorbunov, Peter Richtárik

Title: *Best Pair Formulation & Accelerated Scheme for Non-convex Principal Component Pursuit*, joint work with Aritra Dutta, Jingwei Liang and Peter Richtárik

**Paper out (Jan 2019)**

Title: 99% of Parallel Optimization is Inevitably a Waste of Time, joint work with Konstantin Mishchenko, Peter Richtárik

**AISTATS 2019 (16. – 18. April 2019)**

I have attended the AISTATS conference (Okinawa), where I presented a poster on the following paper: *Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches*.

**AAAI (27. Jan – 1. Feb 2019)**

I am attending the AAAI conference (Honolulu). We have both an oral and a poster on the paper: *A Nonconvex Projection Method for Robust PCA*.

**Paper accepted to AISTATS 2019 (Dec 2018)**

Title: *Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches*, joint work with Peter Richtárik

**NeurIPS (2. Dec – 9. Dec 2018)**

I am attending NeurIPS. We are presenting two posters in the main track and one poster at the PPML workshop.

**Microsoft Research (21. Oct – 16. Nov 2018)**

I have visited Lin Xiao in Seattle for almost a month. During the stay, I gave a talk at Microsoft Research on the following paper: *Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization*

**INFORMS Annual Meeting (3. Nov – 7. Nov 2018)**

I have attended the INFORMS Annual Meeting in Phoenix, where I organized a session with the following speakers: Naman Agarwal and Majid Jahani. I also gave a talk on *Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches* (joint work with Peter Richtárik).

**Paper accepted to NIPS 2018 Privacy Preserving Machine Learning workshop (31. Oct 2018)**

Title: *A Privacy Preserving Randomized Gossip Algorithm via Controlled Noise Insertion*, joint work with Jakub Konečný, Nicolas Loizou, Peter Richtárik and Dmitry Grishchenko

**Paper accepted to AAAI (31. Oct 2018)**

Title: *A Nonconvex Projection Method for Robust PCA*, joint work with Aritra Dutta and Peter Richtárik

**NIPS Travel award ($1500) (Oct 2018)**

I have received a travel award to attend NIPS (2. – 8. Dec 2018) in Montreal.

**2 papers accepted to NIPS 2018 (5. Sep 2018)**

Title: *SEGA: Variance Reduction via Gradient Sketching*, joint work with Konstantin Mishchenko, Peter Richtárik

Title: *Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization*, joint work with Robert Gower, Sebastian Stich and Peter Richtárik

**Paper out (10. Aug 2018)**

Title: *Accelerated Bregman Proximal Gradient Methods for Relatively Smooth Convex Optimization*, joint work with Peter Richtárik and Lin Xiao

**Amazon Research (15. Jun – 30. Sep 2018)**

I did an internship with an Amazon research team in Berlin for 3.5 months, under the supervision of Rodolphe Jenatton. It was also my pleasure to work with Matthias Seeger and Cédric Archambeau. This is how the Amazon stock dropped after I left:

**3 papers out (May 2018)**

Title: *A Nonconvex Projection Method for Robust PCA*, joint work with Aritra Dutta and Peter Richtárik

Title: *Accelerated Coordinate Descent with Arbitrary Sampling and Best Rates for Minibatches*, joint work with Peter Richtárik

Title: *SEGA: Variance Reduction via Gradient Sketching*, joint work with Konstantin Mishchenko, Peter Richtárik

**Microsoft Research (Mar 2018)**

I am visiting Lin Xiao for a week. We are working on the following project: *Accelerated Relative Gradient Descent*

**INFORMS Optimization Society conference (Mar 2018)**

I am attending the INFORMS Optimization conference (Denver, CO), chairing one session. Talk title: *Randomized and Accelerated Algorithms for Minimizing Relatively Smooth Functions*.

**New paper out (Mar 2018)**

Title: *Fastest Rates for Stochastic Mirror Descent*, joint work with Peter Richtárik

**New paper out (Feb 2018)**

Title: *Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization*, joint work with Robert Gower, Sebastian Stich and Peter Richtárik

**Optimization and Big Data (Feb 2018)**

I am attending the OBD conference, giving a short talk together with a poster presentation on the following topic: *Randomized and Accelerated Algorithms for Minimizing Relatively Smooth Functions*

**End of Semester (Dec 2017)**

I have passed the KAUST qualifying exams, which cover the following courses: *Probability and Statistics, Numerical Linear Algebra, and Partial Differential Equations*.

**OMS conference (Dec 2017)**

I am attending the Optimization Methods and Software conference in Cuba. I am co-organizing one minisymposium and giving a talk on: *Randomized and Accelerated Algorithms for Minimizing Relatively Smooth Functions*

**Group Seminar (Sep 2017 – Jun 2018)**

I am organizing a group seminar at KAUST jointly with Aritra Dutta and Peter Richtárik.

**KAUST (Sep 2017 – Jul 2020)**

I am joining KAUST. Officially, I am starting a fresh PhD; practically, I am transferring my PhD from the University of Edinburgh.