April 15 Unhackathon #7


We are organizing another Unhackathon on April 15th! You can sign up here! We have organized a number of talks and planned a day of collaborative, hands-on problem solving.

Details:

  • 9:30am – Arrival, registration
  • 10:00am – Welcome
  • 10:15am – Talks begin
  • 11:30am – Pitch session, recruitment
  • 12:00pm – Work on projects
  • 5:30pm – Present results of work session

Location:

11F, 40-44 Bonham Strand, Sheung Wan, Hong Kong

Requirements:

  • Laptop and charger for those joining the coding.
  • Prepared data and project pitches for those submitting projects.
  • If presenting, send us your presentation slides ahead of time so we can prepare them.
  • 50 HKD in cash for admin and organisation.

Recommendations for project submissions:

  • Prepare your data in advance as much as you can; spending the day cleaning or retrieving data won’t attract a crowd of data scientists! Contact the organisers if you need a data repository to share data with all your team members.
  • If the project is already underway, prepare an introduction to it so that people can join (if you’re presenting slides, send them to us before you arrive). Make sure the task you propose is feasible within the time of the event, and describe the skills you expect your team to have: R or Python? AWS, Spark? etc.

For final presentations:

  • Start writing the final presentation right at the start and add to it little by little throughout the day. Recall the context of the project and frame the presentation so that it is understandable to the non-specialist audience around you.
  • If you wish, your work can be published on the website datasciencehongkong.com along with your name, bio, etc.

Other details:

  • 50 participants max.
  • Food / drink: only water, coffee, and snacks are provided. Attendees can order their own food to the venue, take a break to find a restaurant in Kennedy Town, or bring their own lunch.
  • Price: 50 HKD. We charge a fee to cover costs. We are not a for-profit organisation and will aim to keep the costs of our events as low as possible to make them accessible to all.

Book Review: Computer Age Statistical Inference


It took me a while, but I finally found some time to write my review of Efron and Hastie’s new book Computer Age Statistical Inference: Algorithms, Evidence and Data Science.

Bradley Efron is probably best known for his bootstrap re-sampling technique. With this new book, Computer Age Statistical Inference, he and Hastie provide a rather short overview of statistics as a whole, covering topics from the early beginnings of statistics to the new era of machine learning. As one can imagine, covering such a huge amount of content is not an easy task, and the two authors did their best to focus on a number of interesting aspects.
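For readers unfamiliar with the bootstrap, the idea fits in a few lines: resample the data with replacement many times and use the spread of a statistic across resamples to estimate its standard error. The following is my own minimal sketch (hypothetical data, not code from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sample: 30 draws from an exponential distribution.
data = rng.exponential(scale=2.0, size=30)

# Bootstrap: resample with replacement B times and measure the
# spread of the resampled means.
B = 2000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(B)
])
se_boot = boot_means.std(ddof=1)

# For the mean, the classical estimate s / sqrt(n) should be in
# the same ballpark; the bootstrap's appeal is that it works the
# same way for statistics with no simple formula.
se_classical = data.std(ddof=1) / np.sqrt(data.size)
print(se_boot, se_classical)
```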

The book is separated into three major parts: classic statistical inference, early computer-age methods, and twenty-first-century topics, and I will review each part individually. Despite the great number of topics the book covers, it is definitely not meant for beginners: the authors assume a fair amount of algebra, probability theory, and statistics. Nevertheless, I found it a great way not only to refresh my knowledge but also to delve deeper into various aspects of classical and modern statistics.

Classic Statistical Inference

Overall, I think this is the strongest part of the book. The authors do not go into extensive detail but cover interesting aspects of frequentist and Bayesian inference. In addition, Efron and Hastie put emphasis on Fisherian inference and maximum likelihood estimation, and demonstrate parallels between these different approaches as well as their historical connections. This really helped me classify and interconnect all of these different methods. However, I found it a bit surprising how little space is dedicated to frequentist and Bayesian inference compared to Fisherian inference. On the one hand, I really appreciated reading more about Fisher’s ideas and methods, since they are insufficiently covered in most textbooks. On the other hand, I would have hoped for some new insight into Bayesian statistics.

Overall, I really enjoyed this part of the book. It helped me to get a deeper understanding of classical statistical methods.

Early Computer-Age Methods

This part of the book covers quite a variety of topics, from empirical Bayes and generalized linear models (GLMs) to cross-validation and the bootstrap. The bootstrap in particular is covered extensively and pops up in a number of chapters. While this is not particularly surprising given the background of the authors, it does feel like a bit too much. Furthermore, I find that GLMs are covered insufficiently (only 20 pages), considering the importance of linear models in all areas of statistics. However, given the extensive scope of this part of the book, the authors do a fairly good job of discussing each topic in detail while not being too general.
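Cross-validation, one of the topics above, is simple enough to sketch directly: hold out each fold in turn, fit on the rest, and average the held-out errors. Below is my own illustration with simulated data and an ordinary least-squares fit (an assumption for the example, not the book's code):

```python
import numpy as np

def kfold_mse(X, y, k=5):
    # K-fold cross-validation estimate of a least-squares model's
    # prediction error: fit on k-1 folds, score on the held-out fold.
    n = X.shape[0]
    folds = np.array_split(np.arange(n), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return float(np.mean(errs))

# Simulated linear data with noise standard deviation 0.3.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(60), rng.normal(size=(60, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.3, size=60)

cv_err = kfold_mse(X, y)
print(cv_err)  # should land near the noise variance, 0.09
```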

I especially liked the notes at the end of each chapter, which provide additional historical and mathematical annotations. I often enjoyed these notes more than the actual chapter.

Twenty-first century topics

This is probably the weakest part of the book. While topics such as the local false-discovery rate (FDR), sparse modeling, and the lasso are covered clearly and in detail, topics such as neural networks and random forests feel sparse and are, in my view, insufficiently discussed. The discussion of neural networks feels especially rudimentary. Again, this is not particularly surprising given that neither author is an expert in machine learning. However, the book would have been good enough without venturing into machine learning topics; the additional space could have been used for more extensive discussions of FDR or GLMs.

Hence, if you are interested in learning more about machine learning, this book might not be ideal for you. That does not mean, however, that the individual chapters are bad; topics such as support vector machines (SVMs) and the lasso are discussed very well. Nevertheless, although I enjoyed refreshing my knowledge of these methods, I did not feel that I gained as deep an understanding as in the previous parts of the book.
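The lasso's defining feature, which the book discusses at length, is that the L1 penalty shrinks many coefficients exactly to zero. A minimal coordinate-descent sketch of this (my own toy implementation on simulated data, not the authors' code) makes the sparsity visible:

```python
import numpy as np

def soft_threshold(rho, lam):
    # The soft-thresholding operator: shrinks toward zero and
    # sets small values exactly to zero.
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    # Plain coordinate-descent lasso: cycle through coefficients,
    # updating each one with a soft-thresholded least-squares fit
    # to the partial residual.
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid
            beta[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j])
    return beta

# Simulated data: 10 predictors, only the first three truly active.
rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:3] = [3.0, -2.0, 1.5]
y = X @ true_beta + rng.normal(scale=0.5, size=n)

beta_hat = lasso_cd(X, y, lam=50.0)
print(np.round(beta_hat, 2))  # a sparse estimate: most inactive coefficients are exactly zero
```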

Conclusion

Overall, I really enjoyed reading the book. It gave me a great view of current and past statistical applications, and it was especially rewarding to discover and understand the connections between various methods and ideas. Furthermore, the book is filled with nice examples (the data and code for each example are also available on the authors’ website).

If you want to refresh or update your knowledge of general statistics, Efron and Hastie’s Computer Age Statistical Inference is an excellent choice. You can download the free PDF from the authors’ website.