Book Review: Computer Age Statistical Inference


It took me a while, but I finally found some time to write my review of Efron and Hastie’s new book Computer Age Statistical Inference: Algorithms, Evidence and Data Science.

Bradley Efron is probably best known for his bootstrap resampling technique. With his new book Computer Age Statistical Inference, he and Trevor Hastie provide a rather short overview of statistics as a whole. The book covers topics from the early beginnings of statistics to the new era of machine learning. As one can imagine, covering such a huge amount of content is not an easy task, and the two authors did their best to focus on a number of interesting aspects.

The book is divided into three major parts: classic statistical inference, early computer-age methods, and twenty-first-century topics. Hence, I will review each part individually as well. Despite the great number of topics the book covers, it is definitely not meant for beginners. The authors assume a fair amount of algebra, probability theory and statistics. Nevertheless, I found it a great way not only to refresh my knowledge but also to delve deeper into various aspects of classical and modern statistics.

Classic Statistical Inference

Overall I think this is the strongest part of the book. The authors did not go into extensive detail but covered interesting aspects of frequentist and Bayesian inference. In addition, Efron and Hastie put emphasis on Fisherian inference and maximum likelihood estimation, and demonstrated parallels between these different approaches as well as their historical connections. This really helped me to classify and interconnect all of these different methods. However, I found it a bit surprising how little space is dedicated to frequentist and Bayesian inference compared to Fisherian inference. On the one hand, I really appreciated reading more about Fisher’s ideas and methods, since they are insufficiently covered in most textbooks. On the other hand, I would have hoped for some new insight into Bayesian statistics.

Overall, I really enjoyed this part of the book. It helped me to get a deeper understanding of classical statistical methods.

Early Computer-Age Methods

This part of the book covers quite a variety of topics, from empirical Bayes and generalized linear models (GLMs) to cross-validation and the bootstrap. The bootstrap in particular is covered extensively and pops up in a number of chapters. While this is not particularly surprising given the background of the authors, it does feel a bit too much. Furthermore, I find that GLMs are covered insufficiently (only 20 pages), considering the importance of linear models in all areas of statistics. However, given the extensive scope of this part of the book, the authors do a fairly good job of discussing each topic in detail while not being too general.

I especially liked the notes at the end of each chapter, which provide additional historical and mathematical annotations. I often enjoyed these notes more than the actual chapter.

Twenty-first century topics

This is probably the weakest part of the book. While topics such as the local false-discovery rate (FDR), sparse modeling and the lasso are covered clearly and in detail, topics such as neural networks and random forests feel thin and are, in my view, insufficiently discussed. The treatment of neural networks feels especially rudimentary. Again, this is not particularly surprising given that neither author is an expert in machine learning. However, the book is good enough without venturing into machine learning topics; the additional space could have been used for more extensive discussions of FDR or GLMs.

Hence, if you are interested in learning more about machine learning, this book might not be ideal for you. That does not mean that the individual chapters are bad: topics such as support vector machines (SVMs) and the lasso are discussed very well. Nevertheless, although I enjoyed refreshing my knowledge of these methods, I did not feel that I gained as deep an understanding as I did from the previous parts of the book.

Conclusion

Overall I really enjoyed reading the book. It gave me a great view of current and past statistical applications. It was especially rewarding to discover and understand connections between various methods and ideas. Furthermore, the book is filled with nice examples (the data and code for each example are also available on the authors’ website).

If you want to refresh or update your knowledge of general statistics, Efron and Hastie’s Computer Age Statistical Inference is an excellent choice. You can download the free PDF from the authors’ website.

Unhackathon #4, December 10th

Here is our next event, coming up on December 10th!
This time, on top of the usual “coding day” where people propose their projects and form teams to work on them, we have added two features:
– a beginners’ corner, for those starting out with Python, R or data science itself.
– a talks corner, with 30-minute slots to share some thoughts or an experience, or to introduce your project in depth. Three talks are already planned for December 10th. If you feel like giving one, just let us know!
All details, including the location and the list of talks, are on the Eventbrite page.
See you on the 10th!

November Unhackathon

Our 3rd event!

Once again, a small crowd of data scientists was courageous enough to resist the impulse to just chill out in the wonderful Sunday weather in Hong Kong, and instead came to hone their skills on two topics:

  • An exploration of HKEX data and its links to HK financial markets
  • A study of the much-hyped cryptocurrencies

Crypto-currency correlations

This topic is a follow-up to the previous “Coindex” project.
Studying correlations should give an idea of how important diversification would be in a portfolio or index of crypto-currencies; in other words, how well an index would reflect the true performance of the currencies in the crypto world.

Here the focus was on a classical-flavored study of correlation among the currencies available on the Poloniex exchange on September 16th, 2017.
First of all, a ridge plot shows the shapes of the return distributions for many currencies:
[Figure: ridge plot of return distributions]
Some currencies, such as OMG (OmiseGo) and CVC (Civic), are too new: their short history makes their returns not at all normally distributed, so they were treated as outliers and removed from the scope.

Then we moved on to the correlation calculations proper:

[Figure: correlation heatmap]

We get a global average correlation of 36% (the average of all pairwise correlations), hinting that diversification could be an important driver of portfolio efficiency.
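For concreteness, here is a minimal sketch of that calculation in Python. The `prices` DataFrame of daily closing prices (one column per currency) is a hypothetical input; the loading step is omitted, and this is an illustration rather than the exact code used at the event:

```python
# Minimal sketch: global average pairwise correlation of daily returns.
# `prices` is assumed to be a pandas DataFrame of closing prices, one
# column per currency (loading from the Poloniex data is omitted here).
import numpy as np
import pandas as pd

def average_pairwise_correlation(prices: pd.DataFrame) -> float:
    returns = prices.pct_change().dropna()  # daily simple returns
    corr = returns.corr()                   # pairwise correlation matrix
    # Average over the strictly upper triangle, so each pair counts once
    # and the diagonal of ones is excluded.
    upper = corr.values[np.triu_indices_from(corr.values, k=1)]
    return float(upper.mean())
```

The analysis above reports roughly 0.36 for this quantity.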

If we plot this measure over time, we see that the correlation tends to increase, suggesting some re-correlation of crypto markets.

[Figure: average pairwise correlation over time]
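One natural way to produce such a curve is to average the pairwise correlations within a rolling window of returns; here is a sketch under the same hypothetical `prices` assumption as above (the original analysis may have used a different windowing):

```python
# Sketch: average pairwise correlation within a rolling window of returns.
import pandas as pd

def rolling_average_correlation(prices: pd.DataFrame, window: int = 30) -> pd.Series:
    returns = prices.pct_change().dropna()

    def mean_offdiag(win: pd.DataFrame) -> float:
        corr = win.corr()
        n = corr.shape[0]
        # Off-diagonal mean: subtract the n ones on the diagonal.
        return (corr.values.sum() - n) / (n * (n - 1))

    values = [mean_offdiag(returns.iloc[i - window:i])
              for i in range(window, len(returns) + 1)]
    return pd.Series(values, index=returns.index[window - 1:])
```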
A next step might be to understand why this re-correlation happens.

The complete analysis, including the data used, can be found on GitHub.

September Unhackathon


Our second event!

Following the success of our first event, we again met up at the MakerHive in Kennedy Town for our un-hackathon. This is our term for a hackathon where the agenda is set by the participants and people have fun coding together, instead of competing. It’s a way to improve your skills and share projects you are passionate about with the community.

Some projects from our previous event were pitched again, while a number of new projects were also started. After teams were formed, the coding quickly got under way. At the end of the day, attendees gathered for the presentations as the teams showed off their results.

Web scraping

An initiative to scrape public data with Python and R; Scrapy was used to pull HKEX data.
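To give a flavour of the approach, here is a minimal Scrapy spider sketch. The start URL and the CSS selectors are placeholders, not the actual HKEX page structure the team scraped:

```python
# Minimal Scrapy spider sketch. The URL and table selectors below are
# hypothetical placeholders, not HKEX's real page structure.
import scrapy

class HkexSpider(scrapy.Spider):
    name = "hkex"
    start_urls = ["https://www.example.com/hkex-securities"]  # placeholder URL

    def parse(self, response):
        # Assume each table row holds a stock code followed by a name.
        for row in response.css("table tr"):
            cells = [c.strip() for c in row.css("td::text").getall()]
            if len(cells) >= 2:
                yield {"code": cells[0], "name": cells[1]}
```

Such a spider can be run with `scrapy runspider hkex_spider.py -o hkex.csv`.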

Visualisation of the blockchain

On 12th May 2017, computers worldwide were hit by the WannaCry ransomware attack. The attackers demanded that ransom payments be made to a number of Bitcoin wallets. Blockchain data about these wallets from the period of the attack was sourced and visualised using D3.
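The visualisation itself was built in D3, but the sourcing step can be sketched in Python. The blockchain.info endpoint and its `txs`/`time` field names below are assumptions about the public API, and the wallet address is a placeholder:

```python
# Sketch: fetch transaction timestamps for a wallet via blockchain.info.
# The `rawaddr` endpoint and its "txs"/"time" fields are assumptions
# about the public API; WALLET is a placeholder, not a ransom address.
from datetime import datetime, timezone
import requests

WALLET = "WALLET_ADDRESS_HERE"  # placeholder

resp = requests.get(f"https://blockchain.info/rawaddr/{WALLET}", timeout=30)
resp.raise_for_status()
txs = resp.json().get("txs", [])

times = [datetime.fromtimestamp(tx["time"], tz=timezone.utc) for tx in txs]
print(f"{len(times)} transactions touching this wallet")
```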

Horse racing prediction

“Anomalies” in the betting market for horse racing suggest that the outcome of a horse race can be predicted to some extent. RapidMiner and Python were used to scrape the data and create a predictive model.
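As an illustration of what such a model might look like, here is a hedged scikit-learn sketch; the `races` DataFrame, its binary `won` column and the feature names are all hypothetical, not taken from the team’s actual code:

```python
# Sketch: a simple win/lose classifier for race runners. All column
# names here are hypothetical illustrations.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FEATURES = ["draw", "carried_weight", "recent_form", "odds"]  # hypothetical

def fit_win_model(races: pd.DataFrame) -> LogisticRegression:
    X_train, X_test, y_train, y_test = train_test_split(
        races[FEATURES], races["won"], test_size=0.2, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    return model
```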

[Photo: the horse racing team]

The team were well organised and even produced a presentation of their results!

Traffic analysis

This team scraped data on traffic incidents using Scrapy (Python) and then visualised it using R.

[Figures: cleaned incident data and correlation plots]

Crypto-currency investment strategies

This project is a follow-up to the previous unhackathon, at the end of which we remained puzzled by some unexplainable moves in certain currencies.
This time we had a better grasp of the subject, and we went on to analyse correlations and the properties of simple indices made of a basket of currencies.

The global correlation among the top 20 currencies has amounted to 36% since the start of 2017. This is low enough to hope for some diversification effect to take place.

Building an index where each currency has the same weight does indeed provide real outperformance if we take BTC/USD as the benchmark.
Moreover, scaling down the index so that its volatility, or risk, matches that of Bitcoin vs USD still produces a significant gain of 15% over BTC.

[Figure: index performance vs BTC/USD]

On top of this, the skew, while negative for Bitcoin, becomes positive for the index: the frequent small losses encountered by the index are compensated by less frequent but much bigger gains!
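A minimal sketch of the construction described above, assuming a hypothetical DataFrame `returns` of daily returns with one column per currency, including a “BTC” column for the benchmark (an illustration, not the slides’ actual code):

```python
# Sketch: equal-weight crypto index, scaled to Bitcoin's volatility.
# `returns` is assumed to hold daily returns, one column per currency,
# with a "BTC" column for the BTC/USD benchmark.
import pandas as pd

def vol_matched_index(returns: pd.DataFrame, benchmark: str = "BTC") -> pd.Series:
    index_ret = returns.mean(axis=1)                    # equal-weight index returns
    scale = returns[benchmark].std() / index_ret.std()  # match benchmark volatility
    scaled = index_ret * scale

    # Skew comparison: the claim above is that the index flips it positive.
    print("index skew:    ", round(float(scaled.skew()), 2))
    print("benchmark skew:", round(float(returns[benchmark].skew()), 2))
    return (1 + scaled).cumprod()                       # cumulative index level
```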

This is encouraging for building further indices and strategies, and the project could lead to promising applications:

  • Trading strategies, either short or medium term, dynamic or static, including machine learning algorithms for the discovery of alpha in this market
  • The development of algorithmic trading tools following these strategies
  • Online analytics on single currencies or portfolios of them
  • Potentially some advisory services for portfolio construction

Our first event: Unhackathon at the Hive


What is an Unhackathon anyway?

Data Science Hong Kong was set up as a way for people interested in data science to network and share ideas. We have an active public Slack group where people regularly share articles and discuss all things tech and data science. The group has organised a number of informal meetups before, but we wanted to start a regular event based around coding and presenting, not just talking and networking.

There are many IT, tech and data science events in Hong Kong, but they are infrequent and often serve primarily as marketing or recruitment tools. Not satisfied with the state of tech events in Hong Kong, we set out to create an event built from the bottom up, one that focuses on who knows the most rather than who speaks the loudest: inviting to beginners, though perhaps not to those uninterested in technical details.

We have therefore started a regular unhackathon. This is our term for a hackathon where the agenda is set by the participants and people have fun coding together, instead of competing. It’s a way to improve your skills and share projects you are passionate about with the community.

Our first event gets under way

Our first gathering was made possible by The Hive. They were very keen on supporting the data science community in Hong Kong and let us use the MakerHive in Kennedy Town, which was a fantastic venue for our first event.

The event started with the floor being opened to pitches. After signing up for a slot by putting up a post-it, pitchers were given 5 minutes to convince others to work on their project.


There were many great ideas and teams were formed around those that attracted enough interest. Discussions were soon under way on what each team wanted to achieve by the end of the day.

Of course, being a hackathon, there was coding, coding and more coding!

As it became time for lunch, teams headed out to Kennedy Town centre to find a restaurant. Any loss of coding output was more than made up for by the opportunity for people to get to know their teammates better. Real data scientists don’t skip lunch!

Presentation time

Four hours and much coding later, the deadline for presentations loomed. All the teams gladly accepted a 20-minute grace period to put the final touches on their work.

Some of the projects presented were:

  • Address mapping in Hong Kong
  • Twitter topic analysis
  • Crypto-currency analysis
    This team aimed at building an index of crypto-currencies similar to the usual financial market indices, to be used as a benchmark or refined to explore portfolio strategies.
  • Facial Expression Recognition using Keras
    The team of three took an MNIST-style convolutional neural network model and retrained it on facial expression data from Kaggle, reaching 55% accuracy over 7 categories (see the sketch below).
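Here is a minimal sketch of such a model in Keras. The 48×48 grayscale input shape is an assumption based on Kaggle’s FER2013 facial expression dataset, and the team’s exact architecture may well differ:

```python
# Sketch: a small MNIST-style CNN retargeted to 7 expression classes.
# The 48x48x1 input assumes FER2013-style grayscale face crops.
from tensorflow.keras import layers, models

def build_expression_cnn(num_classes: int = 7) -> models.Sequential:
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```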

Everyone had made great progress on their projects, and a common theme across presentations was that so much more could have been accomplished with just a bit more time. It’s good, then, that we have already started planning for our next event in September!

Just because the event is over does not mean the coding stops! If you enjoyed the project you worked on, or more importantly enjoyed the people you worked with, then do continue collaborating and share what you did at our next event!

If this event sounds interesting, please contact us by email or social media, or join our Slack group. We’ll keep you updated there about any future events.

Data Science Hong Kong


Welcome to Data Science Hong Kong

Data science is starting to become embedded in Hong Kong. We are a community for all data scientists in Hong Kong — from beginners to multi-decade practitioners of BI, artificial intelligence and data warehouse design, and from students to professors — and their fellow travellers from business, government and academia. We want to create an environment where data scientists can learn from each other and share their stories, and a community that non-data scientists can turn to when they want to understand more about what data science can do for them. We will organise monthly events such as unhackathons and lectures, as well as host social media platforms.

Join us on our other social platforms to keep up-to-date with our activities and to become part of our community:


  • Slack
  • Meetup
  • Facebook
  • LinkedIn