The torch package contains data structures for multi-dimensional tensors (N-dimensional arrays) and defines mathematical operations over them.
In this blog post, we seek to cover some of the useful functions that the torch package provides for tensor manipulation, by looking at working examples for each, along with an example of when the function doesn't work as expected.
OUTLINE
TORCH.CAT - Concatenates the given sequence of tensors along the given dimension
TORCH.UNBIND - Removes a tensor dimension
TORCH.MOVEDIM - Moves the dimension(s) of input at the position(s) in source to the position(s) in destination
TORCH.SQUEEZE - Returns a tensor with all the dimensions of input of size 1 removed
TORCH.UNSQUEEZE - Returns a new tensor with a dimension of size one inserted at the specified position. …
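The five functions in the outline can be sketched quickly; the tensors below are illustrative, and the shapes noted in the comments are what each call produces:

```python
import torch

# torch.cat: concatenate tensors along an existing dimension
a = torch.ones(2, 3)
b = torch.zeros(2, 3)
c = torch.cat((a, b), dim=0)    # shape (4, 3)

# torch.unbind: remove a dimension, returning a tuple of slices
rows = torch.unbind(c, dim=0)   # 4 tensors, each of shape (3,)

# torch.movedim: move a dimension from a source to a destination position
d = torch.rand(2, 3, 4)
e = torch.movedim(d, 0, 2)      # shape (3, 4, 2)

# torch.squeeze / torch.unsqueeze: drop / insert size-1 dimensions
f = torch.rand(1, 3, 1)
g = torch.squeeze(f)            # shape (3,)
h = torch.unsqueeze(g, 0)       # shape (1, 3)
```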
One of the key challenges in the entire job search process is to get past the resume screening and receive an invite to the subsequent rounds of technical interviews. Therefore, drafting a good resume that best aligns with the job description is very important!
This blog post is inspired by 'Engineering your resume', a webinar for Women Who Code NYC by Jessie Newman, currently an Algo Engineer at Hudson River Trading and formerly a Software Engineer at Google. ✨🎉
- What goes on a resume?
- What's the preferred format of a resume?
- How to structure the Skills section?
- How to describe technical experience? …
Julia is a general-purpose, high-level, dynamic programming language with several features suited for scientific computing and numerical analysis.
The best way to get started is to install Julia on your local machine and set up your working environment. Over the next few minutes, let's walk through how to install Julia and also set up the interactive Pluto notebook environment! 😊
The latest release, v1.5.3, is available for download.
Download the latest stable release for your operating system; the download ideally takes a few minutes. Once done, run the installer, finish the installation and launch Julia 😎
Part 1 of the series, NLP: Concepts and Workflow, covered the introductory aspects of NLP, techniques for text pre-processing and basic EDA to understand certain details about the text corpus. This part covers linguistic aspects such as Syntax, Semantics, POS tagging, Named Entity Recognition (NER) and N-grams for language modeling.
- Understanding Syntax & Semantics
- Techniques to understand text
-- POS tagging
-- Understanding Entity Parsing
-- Named Entity Recognition(NER)
-- Understanding N-grams
As we know, one of the key challenges in NLP is the inherent complexity in processing natural language: understanding the grammar and context (syntax and semantics), resolving ambiguity (disambiguation), co-reference resolution, etc. …
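Of the topics listed, N-grams are the easiest to sketch without any library at all; a minimal illustration (the tokenizer is a plain whitespace split, and the sentence is made up for the example):

```python
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "natural language processing is fun".split()
bigrams = ngrams(tokens, 2)
# [('natural', 'language'), ('language', 'processing'),
#  ('processing', 'is'), ('is', 'fun')]
```

Consecutive bigrams like these are the building blocks of simple language models, which estimate the probability of a word given its preceding word(s).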
Data Cleaning is an important step in the generic ML pipeline and a common approach to deal with missing data is to use a suitable imputation strategy.
Very often, we use scikit-learn's SimpleImputer, which imputes missing values of a particular feature by replacing them with the mean, median or mode of the remaining values of that feature. But, is this good enough? 🤔
As we know, such imputation is univariate and does not take into consideration the correlation between the different feature columns. Let’s consider a simple example.
import pandas as pd
df = pd.read_csv('http://bit.ly/kaggletrain', …
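To make the contrast concrete, here is a small sketch on toy data (not the Kaggle dataset above) comparing SimpleImputer with scikit-learn's multivariate IterativeImputer, which models each feature from the others:

```python
import numpy as np
from sklearn.impute import SimpleImputer
# IterativeImputer is experimental and must be enabled explicitly
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy data where the second column is exactly 2x the first
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, np.nan],
              [4.0, 8.0]])

# Univariate: fills the NaN with the column mean, ignoring the correlation
simple = SimpleImputer(strategy="mean").fit_transform(X)

# Multivariate: regresses the missing feature on the other column,
# so its fill respects the 2x relationship rather than the column mean
iterative = IterativeImputer(random_state=0).fit_transform(X)
```

With the univariate strategy, the missing entry becomes the column mean of the observed values, even though the row's first column strongly suggests a different value; the multivariate imputer can exploit that correlation.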
It’s still the early hours of the day…and I’m startled by the sudden ringing of my mobile phone… “It’s the alarm! My wake-up call for the day, yet another tiring and excruciating day!”. “The ambience is pleasant and welcoming enough. The dew drops glistening in the soothing light of the morning sun and the merry birds chirping around!” This perfect morning now forces me to haul myself out of bed rather reluctantly. The day has begun and I am off the mark. I’m quite sure this would be the case with most of us.
“We may feel that these are probably the toughest days ever.” …
Topic Modeling is a very useful NLP task that helps identify latent topics that are present across a corpus of text documents. It’s an unsupervised learning problem aimed at finding abstract topics in a collection of documents.
A topic can be thought of as a collection of words that are similar in context and are indicative of the information in the text.
As Machine Learning algorithms cannot work on raw text directly, we need to convert the text to numerical representations that can be used in further steps. Commonly used representations are the Document-Term Matrix and the Term-Document Matrix. …
As the sun went down the horizon, painting the evening sky with intriguing shades of blue and pink, I turned around to see the traversed path, the path treaded thus far; the familiarly unfamiliar path!
Struggles accrued over the past two decades showed up as hard rocks, their coarse surface and jagged edges quantifying the ordeals. There were a few dangerous pitfalls that hid themselves so well, wise little things…huh! The taunts and judgements of the society had manifested as thorny shrubs, sometimes fairly lethal!
The trail as you can imagine was dark. Of course, there were these little moments of happiness that impinged upon the trail, trying to illuminate it with their light & goodness; their soft sheen slowly but steadily spreading over the surface, only to reveal the vagaries of the path in greater detail. The otherwise dark trail was now a dimly lit path!
As an ignoramus, I stand oblivious to the path that lies ahead, unsure of where even the next few steps would take me. At times, I find myself enveloped in hopes and positivity only to be engulfed in a sphere of despondence and emptiness moments later. Although these may seemingly alternate, the low phase marked by self-doubt, anxiety about the impending uncertainty preponderates!
I could now resonate so well and appreciate the line, “It was the spring of hope; it was the winter of despair”, my favorite line from the very first chapter of “A Tale of Two Cities”. …
With the huge influx of unstructured text data from a plethora of social media platforms, different forums and a whole wealth of documents, distilling the information that these sources contain is challenging because of the inherent complexity involved in processing them. Natural Language Processing (NLP) helps greatly in processing, analyzing and understanding these sources to gain information and meaningful insights. With the recent advances in computing and easier access to computing resources, certain Deep Learning models have achieved SOTA results on some of the most challenging NLP tasks. The NLP series by the Women Who Code Data Science track gives learners a comprehensive learning path, starting from the basics of NLP and gradually introducing advanced concepts like Deep Learning approaches to solving NLP tasks. …
It was a cloudy, contemplative night. She put down her pen near the pile of assignment sheets. Walking wearily to the window, she stared at the night sky tessellated with flossy clouds. SILENCE prevailed…all that she could hear was the ticking of the clock. The orchestrated silence of the night triggered a wave of reminiscence. She knew what was coming! The journey so far; an eclectic mix of accolades, irony, distress and loads of affection unfolded before her.
The clock was set back; it was roughly two decades ago. She saw a cute little girl strolling by, flaunting a beaming smile on her face. Epitome of happiness the child was! …