I am a doctoral student at LSM@EPFL, working on memristive
devices as components in neuromorphic machine learning. My research focuses on
neuromorphic engineering: bridging the physical models of memristive devices
with the theory of machine learning and information theory, and
applying AI to speed up research and technological development.
Further professional interests are data science, automation, high-performance software engineering, and decentralized/distributed computing.
Private interests include programming languages, economics, history, politics, math, singing, gymnastics, ballroom dancing, writing and parkour.
There is more, but you have to stop at some point. I like to keep active and stimulated.
When not otherwise occupied (i.e., not during my PhD) I work as a freelance consultant, with experience in rapid prototyping, developing machine learning stacks for information retrieval and automation, embedded systems firmware development in C, and more.
Currently, I have restricted my non-PhD activities to working with EA Geneva (as their resident techie, contributing to AI risk research and giving workshops that carefully teach quantitative models; at some point I need to write a blog post on why I say "carefully") and some moonlighting (second meaning) explaining AI, blockchain and other new technologies to non-technical audiences.
You are always invited to send me an email if you have something interesting, especially if it touches any of my interests listed above or involves working with Rust or policy work involving AI/technology.
You can also take a look at my CV.
I wrote a two-part blog post trying to give people an intuitive feel for how much heuristics and approximation matter, and how this connects to my opinions about AGI/superintelligence risk.
I'm not totally happy with the second part, where I try for the first time to condense thoughts that have been swirling around my head for a while now, but hey, I can always rewrite it. I'm sure the internet would never be vicious to me about not getting things perfect on the first try....
Anxiety is a complex thing. Part of its complexity is a discrepancy between what we call anxiety and how we seem to experience anxiety. The emotion itself is well defined and doesn't sound that debilitating (at least if you don't have an anxiety "disorder" (APA) or "medical" anxiety (MW), which most people don't want to think of themselves as having). But if we look at the Urban Dictionary definitions, the language used is much stronger than that of the APA or the Merriam-Webster....
…can be best summarized by this xkcd and this keynote by Charles Stross. Maciej Cegłowski also has some good stuff.
In my own words: I think AGI risk in the sense of alignment and controllability is an interesting field of research, but I also think that

- alignment is identical to, or smaller than, the problem of governance in politics
- control is identical to, or smaller than, the problem of controllability of agent-based optimization algorithms, two examples being society and capitalism
- superintelligence is a red herring
- human misuse of AI is a problem

Why do I think that superintelligence/AGI is not a problem?...
While it was developed with more leisure than agilentpyvisa, it is still very much in alpha. Currently only simple rule-based automation is implemented, but I am working on adding a Bayesian estimator and more sophisticated planners....
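To give a flavor of what "simple rule-based automation" means here, below is a minimal hypothetical sketch, not the library's actual API: rules pair a predicate over the current measurement state with an action that updates it, and the loop applies the first matching rule until none fires. All names (`Rule`, `run_rules`, the state keys) are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    condition: Callable[[dict], bool]  # predicate over the current state
    action: Callable[[dict], dict]     # returns an updated state


def run_rules(state: dict, rules: list, max_steps: int = 100) -> dict:
    """Apply the first matching rule repeatedly until none fires."""
    for _ in range(max_steps):
        for rule in rules:
            if rule.condition(state):
                state = rule.action(state)
                break
        else:
            return state  # no rule fired: stable state reached
    return state


# Toy example: step the source voltage up until the measured
# current (modeled as voltage * conductance) reaches the target.
rules = [
    Rule(
        lambda s: s["current"] < s["target"],
        lambda s: {
            **s,
            "voltage": s["voltage"] + 0.1,
            "current": (s["voltage"] + 0.1) * s["conductance"],
        },
    ),
]
final = run_rules(
    {"voltage": 0.0, "current": 0.0, "target": 0.25, "conductance": 1.0},
    rules,
)
```

A Bayesian estimator would replace the hard-coded predicate with a posterior over the device state, and a planner would choose actions by lookahead instead of first-match.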
Currently, this library is the result of a few weeks of furious hacking while simultaneously learning the more advanced functions of the tester, and its quality reflects that. It is useful to me, and I intend to continue working on it, so any critique and bug reports are welcome....