MIST101 Workshop #1 Event Review

Author: Alex Li, IEEE

Reviewer: Justin Yuan, UTMIST & IEEE


This was the first machine learning workshop put on by UTMIST in association with IEEE. As the first of six workshops, it served as an introduction to many of the topics that will be covered in more detail in future workshops. This post will give an overview of the topics covered in this workshop.



There are traditionally two approaches to problem-solving: following a set of instructions or learning from experience. A computer program traditionally falls in the former category: it is a set of instructions, a recipe tailor-made by a human expert for a single problem. For some problems, however, it is impossible or extremely difficult to cover every potential circumstance by hand. Machine learning takes the second approach: learning from experience. By designing an algorithm that can learn from examples and solve a family of tasks, we can tackle a wide variety of problems that were previously inaccessible. The many applications of machine learning include classification, pattern recognition, recommender systems, self-driving cars, natural language processing, computer vision, robotics, and playing games better than any human being ever could.






In general, there are three types of machine learning tasks: supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning attempts to learn the underlying function behind given input/output pairs. These functions can take a variety of forms: for example, an image classifier maps pixels to object categories, and a translator maps one language to another. Machine learning creates models to approximate these functions, including decision trees, graphical models, Gaussian processes, SVMs, and KNNs. Recently, artificial neural networks have supplanted many of these methods and will therefore be a major topic of discussion in the workshops. The different types of artificial neural networks include feedforward neural networks, convolutional neural networks, and recurrent neural networks. More specifics will be given in later workshops.
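As an illustration (not taken from the workshop slides), the idea behind KNN can be sketched in a few lines of Python. The `nearest_neighbour_predict` helper and the toy data below are invented for this example; a 1-nearest-neighbour classifier simply copies the label of the closest known input/output pair:

```python
import math

def nearest_neighbour_predict(train, query):
    """Classify `query` by copying the label of the closest training point."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, query)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy labelled data: 2-D points tagged with their known category.
train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

print(nearest_neighbour_predict(train, (0.1, 0.0)))  # -> A
print(nearest_neighbour_predict(train, (5.1, 4.9)))  # -> B
```

The labelled pairs play the role of the "experience", and the distance computation is the (very simple) model that approximates the underlying function.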







In general, supervised learning is marked by the presence of known data-label pairs, which are fed into an algorithm so that it can learn an effective model. Unsupervised learning, on the other hand, starts with no labels at all. Some common applications of unsupervised learning are clustering, which can, for example, group similar images together; semi-supervised learning, which applies concepts from both supervised and unsupervised learning; and dimensionality reduction, which reduces the number of variables under consideration in a problem.
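To make clustering concrete, here is a minimal k-means sketch in Python. The `kmeans` function and the toy 2-D points are illustrative, not material from the workshop; note that no labels are supplied, yet the algorithm discovers the two groups on its own:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means sketch: alternately assign points to their nearest
    centroid, then move each centroid to the mean of its cluster."""
    centroids = points[:k]  # deterministic init: use the first k points as seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: (p[0] - centroids[j][0]) ** 2
                                      + (p[1] - centroids[j][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                     if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

# Two clumps of 2-D points with no labels attached.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
          (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
print(kmeans(points, 2))  # centroids settle near the two clumps
```

Real implementations typically use random restarts and a convergence check rather than a fixed iteration count, but the assign-then-average loop is the whole idea.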









Finally, there is reinforcement learning, which is used for a very different type of situation. It essentially involves an agent playing a game and improving its performance by reinforcing better behaviors. This type of learning doesn’t require correct or ideal examples; it only needs some form of judgment as to whether it is performing well. It works by exploring a set of actions, randomly or under proper guidance, evaluating how well they perform in terms of a “reward”, adjusting its behavior, and then repeating the entire process. Reinforcement learning requires little human expertise yet can produce extremely intelligent behavior. However, it requires extensive training and doesn’t adapt easily to new situations.
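The explore-evaluate-adjust loop described above can be sketched with a multi-armed bandit, one of the simplest reinforcement-learning settings. The `epsilon_greedy_bandit` function and its payout means below are invented for illustration; the agent is told only the reward of each pull, never which arm is "correct":

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Learn which slot-machine arm pays best purely from reward feedback."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n       # how often each arm has been pulled
    estimates = [0.0] * n  # running estimate of each arm's average reward
    for _ in range(steps):
        # Explore a random arm occasionally; otherwise exploit the best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(n)
        else:
            arm = max(range(n), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)  # noisy reward signal
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # update mean
    return estimates

estimates = epsilon_greedy_bandit([0.0, 0.5, 2.0])
print(estimates.index(max(estimates)))  # index of the arm the agent found best
```

The epsilon parameter captures the exploration-versus-exploitation tradeoff mentioned above: explore too little and the agent may lock onto a mediocre arm, explore too much and it wastes pulls on arms it already knows are poor.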







In future workshops, all of these topics will be covered in more depth and rigor. We hope you come and attend! The next workshop, on supervised learning and neural networks, will take place during the week of Oct. 2-6.


You may find the slides for the workshop here!
