Here is a small collection summarizing some of the work I have completed in my personal time as well as during my work experience. You can find some of the source code for these projects on my GitHub (ppublicgit).
This project is ongoing. I have posted some of my code to my GitHub (finance repository). The code consists of first-version tests of what I want to implement. Due to the nature of stock trading, I have not shared all of my code, ideas or final versions, only early versions and simple concept sketches, mainly for back-testing purposes.
I find the stock market an intriguing system of global interaction. Many processes common to mathematics and even physics surprisingly show up in financial markets. One of the most important and famous of these patterns is the movement of stock prices. Louis Bachelier, a French mathematician, is credited with discovering the Brownian motion of stock prices in his Ph.D. dissertation, "The Theory of Speculation". Burton Malkiel's book, "A Random Walk Down Wall Street", brought this theory to the public. Nowadays, with the advent of ever-faster computers and simple, easy-to-use programming languages, the popularity of quantitative trading has taken off. Practices such as high-frequency trading and statistical arbitrage have become common for hedge funds and day traders. Furthermore, large datasets help traders formulate trading ideas and back-test their performance against historical data.
I decided to dip my feet into quantitative trading to learn more about the subject and to practice my Python and math skills. I read several books, including my favorite, E.P. Chan's "Algorithmic Trading," to improve my understanding of the field. Following this guide, as well as posts on various blogs and forums, I created my own tools for trading and for finding edges in the market. Among these tools is a webscraper for Yahoo Finance key statistics, Twitter and Stocktwits that takes a stock symbol as input and returns a pandas dataframe of key information. I also have various tests for mean reversion and cointegration (the Hurst exponent, ADF, Johansen and variance ratio tests) and an asset allocation optimizer based on the Kelly formula.
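As an illustration of one of these tests, the Hurst exponent can be estimated from how the spread of lagged price differences scales with the lag. This is a simplified, pure-Python sketch, not the exact code in my finance repository; the function and variable names here are illustrative only:

```python
import math
import random
from statistics import stdev

def hurst_exponent(series, max_lag=20):
    """Estimate the Hurst exponent from the scaling of lagged differences.

    stdev(x[t+lag] - x[t]) grows roughly like lag**H: H ~ 0.5 for a
    random walk, H < 0.5 for mean reversion, H > 0.5 for trending.
    """
    xs, ys = [], []
    for lag in range(2, max_lag):
        diffs = [series[i + lag] - series[i] for i in range(len(series) - lag)]
        xs.append(math.log(lag))
        ys.append(math.log(stdev(diffs)))
    # least-squares slope of log(stdev) versus log(lag) is the H estimate
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check on a simulated random walk (H should land near 0.5)
random.seed(42)
walk = [0.0]
for _ in range(2000):
    walk.append(walk[-1] + random.gauss(0, 1))
```

A value near 0.5 indicates a random walk, a value below 0.5 suggests mean reversion (a candidate for pairs trading), and a value above 0.5 suggests trending behavior.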
Beating the house by card counting and winning millions in a single night of blackjack is an exciting dream. How feasible is this scheme really? Programming allows you to simulate and solve the question once and for all.
I had not used Python's class structure before and thought this would be a great way to get practice with object-oriented programming (OOP). The code I created is a little rough, as it was my first ever attempt at OOP in Python, but while it is not pretty, it gets the job done. I developed Python code to simulate many hands of blackjack to see if counting cards is an effective and reliable means of making money. I used OOP to create classes for cards, hands, decks and a game of blackjack. The user can call on the blackjack class and set the rules for the game. These include whether the dealer hits on a soft 17, how many decks the dealer uses, the blackjack payout rate and more. The code then uses the strategy tables that fit the rules chosen by the user and simulates whatever number of hands the user specifies. The code and some simple statistical analysis of a few simulations can be found on my GitHub (Blackjack Simulation Repository).
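A minimal sketch of the kind of class structure involved looks something like this; the names and details are illustrative, not necessarily those in the repository:

```python
import random

class Card:
    """A single playing card; value follows blackjack rules (face cards = 10)."""
    RANKS = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
    SUITS = ["hearts", "diamonds", "clubs", "spades"]

    def __init__(self, rank, suit):
        self.rank = rank
        self.suit = suit

    @property
    def value(self):
        if self.rank == "A":
            return 11          # aces start as 11; Hand downgrades them to 1
        if self.rank in ("J", "Q", "K"):
            return 10
        return int(self.rank)

class Deck:
    """A shoe built from one or more 52-card decks, as a casino dealer uses."""
    def __init__(self, n_decks=1):
        self.cards = [Card(r, s) for _ in range(n_decks)
                      for s in Card.SUITS for r in Card.RANKS]
        random.shuffle(self.cards)

    def deal(self):
        return self.cards.pop()

class Hand:
    """A blackjack hand that scores each ace as 11 or 1 to avoid busting."""
    def __init__(self):
        self.cards = []

    def add(self, card):
        self.cards.append(card)

    @property
    def value(self):
        total = sum(c.value for c in self.cards)
        aces = sum(1 for c in self.cards if c.rank == "A")
        while total > 21 and aces:
            total -= 10        # count an ace as 1 instead of 11
            aces -= 1
        return total
```

A game class then deals from the shoe into player and dealer hands and applies the chosen rule set each round.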
What I found was that the margins for error in card counting are incredibly slim. First, the blackjack payout must be generous; if it is too small, the house can always expect to win. Another key rule is the depth of penetration into the deck (the point at which the dealer reshuffles all the cards, resetting the count). For card counters to expect to win money, they need a large amount of capital (a serious drawdown period must be survivable, with enough money left over to place large bets when the count is in the player's favor) and must never make a mistake on strategy or the count. To make matters worse, most casinos these days have an eye-in-the-sky watching all players and looking for counters. To hide that one is counting cards, one must occasionally bet against the count to better avoid detection. The prospects of winning money via card counting are therefore dim; even playing perfectly, the expected winnings are slim. The best method, and in my humble opinion the only real method, of winning lots of money counting cards requires a team of counters. Several people can sit at tables and bet small the whole time to maintain a count, then signal a teammate to join the table whenever the count is strongly in the player's favor. That way, the incoming player can bet big early and often, without having to absorb losses while establishing and timing the count. This brings more risk of being caught, and the winnings are split among a larger group. In the end, there are probably easier means of obtaining financial wealth than the high-risk environment of card counting and casino blackjack.
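For the curious, the bookkeeping behind a count is simple. Here is a sketch of the Hi-Lo system, the most common counting scheme (my simulation's details may differ):

```python
def hi_lo_count(seen_ranks):
    """Running Hi-Lo count: 2-6 score +1, 7-9 score 0, 10/J/Q/K/A score -1.

    A positive count means the remaining shoe is rich in tens and aces,
    which favors the player.
    """
    count = 0
    for rank in seen_ranks:
        if rank in ("2", "3", "4", "5", "6"):
            count += 1
        elif rank in ("10", "J", "Q", "K", "A"):
            count -= 1
    return count

def true_count(running_count, decks_remaining):
    """Bets are sized off the true count: running count per remaining deck."""
    return running_count / decks_remaining
```

The counter bets big only when the true count is comfortably positive, which is exactly the behavior the eye-in-the-sky looks for.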
I have completed several of the common beginner machine learning problems and uploaded the results to my GitHub (Machine Learning Repository). These problems included the Iris dataset, the Boston housing dataset and a height/weight predictor. The results and code are presented in Jupyter notebooks. The problems were good practice for implementing some of the common techniques I have learned through various online courses and technical literature. The main packages used were pandas and scikit-learn. I demonstrated the effectiveness of several machine learning techniques, such as lasso regression, ridge regression and random forests, and compared their performance.
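To give a flavor of what ridge regression does, here is a from-scratch, single-feature version showing the shrinkage effect of the penalty. The notebooks themselves use scikit-learn; this closed form is just for illustration:

```python
def ridge_slope(xs, ys, alpha=0.0):
    """Closed-form ridge regression slope for a single centered feature.

    Minimizes sum((y - w*x)**2) + alpha * w**2, which gives
    w = sum(x*y) / (sum(x*x) + alpha). alpha = 0 recovers ordinary
    least squares, and larger alpha shrinks the slope toward zero.
    """
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + alpha)

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # exactly y = 2x
ols = ridge_slope(xs, ys)                    # unregularized fit: slope 2
shrunk = ridge_slope(xs, ys, alpha=14.0)     # penalty halves the slope
```

Lasso replaces the squared penalty with an absolute-value one, which can drive weak coefficients exactly to zero and thus doubles as feature selection.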
I had two major stints at Sandia National Laboratories in Livermore, CA: as an intern following the completion of my BS in Physics from Arizona State University (ASU), and then as a private contractor to continue my work and research after an extended trip abroad. As an intern, I was tasked with creating a graphical user interface (GUI) in MATLAB for controlling test equipment and collecting data. I spent several months designing the GUI and setting up the various protocols used to communicate with all the instruments in the research lab. The goal of the project was to create a solenoid pressure system to simulate an engine's cycle, furthering the lab's research into nozzle spray characteristics. By the end of the internship I had completed a working GUI with full, automated control of all the experimental instruments, along with initial visualization of each run's data, which was saved for further analysis. The instrument communication was built primarily on object-oriented programming, giving future users an easy means of updating and upgrading the communications. Links to a PDF of the internship research paper and poster are provided below.
As a private contractor, I was tasked with creating a GUI in LabVIEW for a different experiment, as well as updating the MATLAB GUI I had previously designed. I added several new features to the MATLAB GUI to improve its efficiency and usability. I also spent time training my coworkers in MATLAB programming and in GUI control and maintenance. The LabVIEW GUI was trickier than the MATLAB one, as it involved multiple data collection speeds and times of interest and required precise timing and synchronization between high- and low-speed data acquisition units. Eventually, I implemented a system of precise digital triggering to synchronize all data acquisition. I also trained fellow lab workers in LabVIEW to make the GUI easier to use and understand.
Lastly, I collected data using my MATLAB GUI in different pressure system environments to research the effects of variable pressure on nozzle spray characteristics. Data were collected on the system pressure, along with high-speed imaging of the bubble characteristics inside an injector nozzle, in order to investigate injector performance. I wrote MATLAB code to perform high-speed image processing and recognition to determine how the bubbles in the nozzle were affected by the pressure cycling in the system. I then visualized the data in MATLAB and authored a paper on the topic with the assistance of my principal investigator/supervisor and colleagues. The paper is currently pending publication.
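The image recognition itself was done in MATLAB, but the core idea of segmenting and counting bubbles can be sketched in Python: threshold each frame, then flood-fill connected components. This is a simplified stand-in, not the actual analysis code:

```python
def count_blobs(image, threshold=128):
    """Count connected bright regions ('bubbles') in a grayscale frame.

    Threshold the image into a binary mask, then flood-fill 4-connected
    components. (Sketch only; the real pipeline worked on high-speed
    frames with more careful segmentation.)
    """
    rows, cols = len(image), len(image[0])
    mask = [[px >= threshold for px in row] for row in image]
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]       # flood-fill this component
                while stack:
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and mask[i][j] and not seen[i][j]):
                        seen[i][j] = True
                        stack.extend([(i + 1, j), (i - 1, j),
                                      (i, j + 1), (i, j - 1)])
    return blobs
```

Tracking blob counts and sizes frame by frame is what lets you correlate bubble behavior with the pressure cycle.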
During my junior and senior years at Arizona State University, I worked as an undergraduate research assistant in an astrophysics lab. The position doubled as a job and a class and allowed me to complete my honors thesis (required for graduation from Barrett, the Honors College at ASU). I worked on the design, construction and testing of a low-temperature (~2.5 K) cryostat. It was similar to the work I completed at Lawrence Berkeley National Lab, but this time I was able to test and characterize the noise of microwave kinetic inductance detectors (MKIDs). My honors thesis focused on the analysis of the MKIDs' noise data. I used Python to automate the data collection from the MKIDs. The MKIDs were cooled to a temperature of 2.9 K, and a vector network analyzer (VNA) was used to send signals to the MKIDs and record their responses. Optical fiber was run from the VNA outside the vessel to the MKIDs inside to limit thermal heating from conduction, as copper wiring would have heated the MKIDs to well above 2.9 K. The collected data were analyzed in Python using various mathematical and physical techniques to characterize the noise of the MKIDs. The results of the research and the thesis can be viewed in the links below.
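To sketch the kind of data involved: an MKID appears as a sharp dip in the transmission magnitude |S21| as the VNA sweeps frequency. Below is a hypothetical, deliberately simplified toy model of locating such a resonance; it is not the fitting code from the thesis, and the numbers are invented for illustration:

```python
def s21_magnitude(f, f0, q, depth):
    """Toy model of an MKID resonance dip in |S21|: a Lorentzian notch.

    f0 is the resonant frequency, q the quality factor, depth the dip
    depth. (Hypothetical simplified model, not the full fit used in
    the thesis analysis.)
    """
    x = 2 * q * (f - f0) / f0
    return 1 - depth / (1 + x * x)

# Sweep a VNA-style frequency grid and locate the resonance at the minimum
freqs = [4.9990 + i * 1e-5 for i in range(201)]   # 4.999 to 5.001 (GHz)
trace = [s21_magnitude(f, f0=5.0, q=50000, depth=0.8) for f in freqs]
f_res = freqs[trace.index(min(trace))]
```

Once the resonance is located, the noise analysis looks at how that dip jitters in frequency and depth over time.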
After my junior year in college, I spent my summer as a research intern at LBNL in Berkeley, CA. My work focused on the design and construction of a cryostat (a low-temperature, low-pressure vessel used to test superconducting detectors). I was tasked with designing and constructing the milliKelvin stage used as a testbed for research and development of detectors that observe microwave radiation from space. My work consisted of three main areas. The first was CAD modeling: I used SolidWorks and other tools to design various parts for the system and to create the engineering drawings sent to machinists. I was also part of the review process for quality control of incoming and outgoing drawings and parts. The second phase was constructing the apparatus itself, which consisted of installing parts upon arrival and researching the proper materials and equipment needed for the experimental setup and data collection. The last portion was testing the vessel. Not all the parts could be installed by the time I completed the internship, due to long lead times, but I was able to successfully test the performance of what had arrived. I wrote Python code to communicate with an assortment of sensors and detectors to automate the data collection, and subsequently used Python to analyze the outputs. The vessel passed all tests.
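The data collection automation followed a familiar pattern: poll each instrument on a schedule and log the readings. Here is a hardware-free sketch of that pattern; the real code talked to actual sensors over lab interfaces, not the stand-in callables used here:

```python
import csv
import io
import time

def log_readings(sensors, n_samples, interval_s=0.0, out=None):
    """Poll each named sensor n_samples times and log the rows as CSV.

    'sensors' maps a column name to a zero-argument read function; real
    hardware reads (e.g. over serial or GPIB) would replace these
    callables. (Illustrative sketch, not the lab's acquisition code.)
    """
    out = out or io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["sample"] + list(sensors))
    for i in range(n_samples):
        writer.writerow([i] + [read() for read in sensors.values()])
        time.sleep(interval_s)   # pacing between polls
    return out
```

Keeping each instrument behind a plain callable makes it easy to swap a mock for the real device when testing the logging logic on a laptop.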
I decided I wanted to learn HTML/CSS, and what better way to learn than by applying the code to something? Which project did I build with HTML/CSS? Why, this very website. I designed this website from scratch after reading through some online tutorials, texts and Stack Overflow posts, and looking at other websites for inspiration. I am quite pleased with how it all turned out. Not bad for my first ever HTML/CSS project. And it only took a couple of weeks, starting from no experience, to boot. You can find all my code for this website on my GitHub page (github.com/ppublicgit/Personal-Website). I hope you have enjoyed perusing my website.
Another skill I decided to learn was SQL. I did so by completing several MOOCs online as well as working through Alan Beaulieu's book, "Learning SQL". SQL is, to be honest, a rather simple language to learn, and while I do not have much daily use for it, it was still fun to pick up. You can see some of the practice problems I worked through on my GitHub page (github.com/ppublicgit/SQL_Queries).
When setting up my laptop to run Linux, and while reading a textbook on the Linux OS, I decided to also learn some Bash scripting. I read and worked through most of the problems in Machtelt Garrels' book, "Bash Guide for Beginners". It had some fun problems to work through and spends a lot of time on awk, sed and other common tools in the Bash environment. It is definitely worth a read if you ever decide to run Linux. My work on the practice problems can be found on my GitHub page (github.com/ppublicgit/Bash-Scripts).
I decided it would be worthwhile to learn some R for statistical analysis. I read through Hadley Wickham's book, "R for Data Science", to begin my journey, and also went through a Udemy course online and did some practice problems in statistical analysis and data exploration. My work on a sample dataset can be found on my GitHub page (github.com/ppublicgit/R_Movies_EDA).