title | text | url | authors | timestamp | tags
---|---|---|---|---|---|
How Does the Moon Really Cause Waves? | Look, I know that the reason why the ocean has waves is because of the Moon. I learned that back in elementary school, and I’ll state it with full confidence.
But if you asked an innocent follow-up question, my confidence would unravel. I know that there’s some connection, that there’s some sort of link between the phases of the moon and the strength of the tides. Didn’t sailors use to know about the tides by looking at the moon?
How are tides caused? Does the Moon pull harder at some times to make stronger waves? Are tides in freshwater lakes, or just in the ocean? Why does the tide go in and out once per day in some places, but multiple times per day in other places? What about the Sun? Shouldn’t the Sun’s gravity also cause tides?
Let’s learn about tides! Here’s how they are caused, what the Moon (and maybe the Sun) have to do with shifting water, and what else can contribute to the splashy waves that we see on the beach.
1. How are tides caused?
The first thing to understand is that not all waves are equal.
There are two types of waves: surface waves, which are what we see crashing on the shores of the ocean every few seconds, and long-period waves, which take hours to slowly hit the land.
The waves that we see on the surface aren’t caused by the pull of gravity, but are driven by wind. These surface waves can still travel very long distances, even across an entire ocean, but they’re mostly on the surface. These waves don’t actually carry water over long distances, but instead transmit the energy of the wind through the rise and fall of the water.
Surface waves, caused by the wind, not by gravity. Photo by Jeremy Bishop on Unsplash
Long-period waves, on the other hand, are the waves caused by the pull of other celestial bodies on the Earth. These waves move the entire ocean, and reach deep below the surface.
When a long-period wave hits the shore, it takes hours for the full wave to exhaust itself against the land, and it pushes the entire ocean against the shore, making the water rise. We call the ebb and crash of these long-period waves against the shore tides.
2. Does the Moon pull harder at some times to make stronger waves?
Some tides are stronger than others — the strongest tides, with the highest rise and the lowest fall of the ocean, occur during a full moon or during a new moon.
However, this isn’t because of just the Moon itself, but instead is due to the interaction of the Moon and the Sun.
Long-period waves, which we know as tides when they hit the shore, are caused by the gravitational pull of other celestial bodies, most notably the Moon (because it’s closest) and the Sun (because it’s very large and heavy).
And when the Moon and Sun happen to be pulling in the same direction, we get the strongest tides. This happens during a new moon (when the Moon is directly between the Sun and Earth), and during a full moon (when the Earth is directly between the Moon and the Sun).
In this new Moon situation, where the Moon and Sun are aligned, their gravitational pulls are both in the same direction and we get stronger tides. Source.
The majority of the strength of the tides comes from the Moon, as it is much closer, but the Sun still contributes somewhat to influencing tidal strength.
3. Are tides in freshwater lakes, or just in the ocean?
Since tides are caused by the gravitational pull of celestial bodies, mainly the Moon, they aren’t just limited to oceans! Theoretically, a lake can have tidal effects as well.
However, because there’s a lot less water in a lake, pond, or other freshwater body, the gravitational pull on that water produces a much smaller effect, and the tides are much less noticeable. Very large lakes, like the Great Lakes along the northern United States, do see a regular tidal effect, but it’s much smaller than ocean tides.
4. Why does the tide go in and out once per day in some places, but multiple times per day in other places?
Ah, now things start getting more complex. Someone may ask this question in Britain, for example, where vacationers at seaside resorts enjoy a regular two tides per day.
The Earth and Moon also spin around a common center of mass, and that motion generates a centrifugal effect which pushes water outward on the side of the Earth facing away from the Moon. This leads to additional tides.
The first tide occurs when Britain faces the Moon, and the pull of the Moon’s gravity is strongest. The second tide occurs when Britain faces away from the Moon, and centrifugal force pulls the water higher.
Essentially, as the Earth rotates, this means that sometimes multiple tides stack up in some regions, while other regions may have less frequent tides, or sometimes almost no tidal activity at all!
5. What about the Sun? Shouldn’t the Sun’s gravity also cause tides?
As mentioned above, in point #2, the Sun does have an effect on tides as well! But despite being much more massive, the Sun is much further away than the Moon, and so its effect is reduced.
The strength of the Sun’s gravity is about 177 times that of the Moon — but it’s also 390 times as far away. When these two numbers are combined, we end up with the Sun having about half the gravitational strength of the Moon, when it comes to tides.
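To see the arithmetic behind that “about half” (a quick back-of-the-envelope check using the figures above): the tide-raising force weakens with the cube of distance, while plain gravitational pull weakens with the square, so the Sun’s tidal effect is its gravitational advantage divided by its extra distance.
Sun’s tidal pull / Moon’s tidal pull ≈ 177 / 390 ≈ 0.45
That works out to a little under half, which matches the figure above.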
As mentioned above, this is why tides are strongest during a full moon or a new moon! At those times, the Sun’s gravitational pull is along the same line as the Moon’s pull, and thus we get the strongest tides.
These tides, by the way, are called spring tides — not named because they occur in the spring, but because this is when the water most strongly “springs forth”. | https://medium.com/a-microbiome-scientist-at-large/how-does-the-moon-really-cause-waves-9ff7b45691e3 | ['Sam Westreich'] | 2020-12-28 12:02:47.483000+00:00 | ['Environment', 'Astronomy', 'Nature', 'Science', 'Oceans'] |
6 Imposter Syndrome Triggers to Watch Out For | 6 Imposter Syndrome Triggers to Watch Out For
Know the signs so you can rise above them
Photo by airdone on iStock.
Imposter syndrome is described as the psychological pattern in which a person doubts their abilities despite evidence of their competence. It is probably one of the issues most commonly faced by women in tech. The unfortunate thing about imposter syndrome is that it never truly goes away. It does get better with time, though: you learn to manage it and even use it to fuel the majority of your learning and growth.
Just like a parasite, imposter syndrome uses up all of your resources to feed itself. Whenever it flares up, it uses all your brain’s resources, and instead of focusing on the task at hand, you go into a spiral about all the reasons you can’t do the task. Most of the time, the parasite is dormant, but every now and then, something triggers the activation and ruins everything.
Recognizing what triggers your imposter syndrome can be the first step to fighting it off. They differ from person to person, but here are some of the triggers for my imposter syndrome. | https://medium.com/better-programming/6-imposter-syndrome-triggers-to-watch-out-for-b18dc60e9adc | ['Angela Mchunu'] | 2020-09-28 15:11:36.353000+00:00 | ['Mental Health', 'Imposter Syndrome', 'Software Engineering', 'Programming', 'Women In Tech'] |
Don’t Wait for Opportunity — Work Hard to Achieve What You Want | Hard work is the key
In his book, Outliers: The Story Of Success, Malcolm Gladwell digs deep to determine why some people succeed in this life while others don’t. The primary component that came out was — opportunity.
Opportunity in life is as significant as oxygen when it comes to success. It’s true. But my problem is that hard work is often underestimated in the success equation.
Some believe that, no matter how hard you work, if an opportunity doesn’t present itself, it would be hard to succeed. That is not always true because many people fail to be successful despite having all the opportunities available to them, while some others succeed without having much luck.
To those who, without opportunities, have become successful, what’s the secret? What things did they do that set them apart from their peers? What’s the success factor that they employed that the others didn’t?
Was there even any factor? And the answer is yes! It’s hard work. | https://medium.com/the-masterpiece/dont-wait-for-opportunity-work-hard-to-achieve-what-you-want-2a73b6b56aff | ['Emmanuel A. Anderson'] | 2020-12-09 15:16:35.359000+00:00 | ['The Masterpiece', 'Motivation', 'Success', 'Psychology', 'Self Improvement'] |
How Identity—Not Ignorance—Leads to Science Denial | How Identity—Not Ignorance—Leads to Science Denial
Changing the minds of Covid-19 deniers may require a lot more than sound reasoning
During the first months of the novel coronavirus outbreak, many rural parts of the U.S. did not experience the swell in caseloads or hospital admissions that threatened to overwhelm cities like New York, Detroit, and New Orleans. West Texas was one of these comparatively fortunate places. And considering the Lone Star State’s long-running antipathy toward government oversight, it made sense that some there would choose to ignore or downplay warnings from federal and local health officials.
But elements of the script have since flipped, and Covid-19 case numbers are now spiking in many counties across West Texas. One might assume that, in the face of rising caseloads, many there would abandon their prior insouciance and embrace masks and other common-sense measures recommended by the nation’s top public health officials. But that doesn’t seem to be happening; if anything, the resolve of many Covid-19 skeptics appears to be stiffening. Even state officials who can no longer ignore the virus continue to lash out at public health authorities. (Last week, Texas Lt. Gov. Dan Patrick criticized Dr. Anthony Fauci, saying that Fauci “has been wrong every time on every issue” and “I don’t need his advice anymore.”)
Anyone who has ever butted heads with a friend, a family member, or a colleague about one of science’s hot button issues — be it global warming, the safety of vaccines, or the gravity of the current pandemic — has likely walked away from the experience frustrated and exasperated at the other person’s stubborn and apparently nonsensical refusal to consider the facts.
But psychologists say that the denial of facts is often rooted in identity and belonging, not in ignorance and that changing minds may require a lot more than sound reasoning.
“The people who deny science are often trying to uphold membership in something that they find meaningful,” says Nina Eliasoph, PhD, a professor of sociology at the University of Southern California. That meaningful thing could be a political or religious affiliation or some other group that prizes certain ideas or ideals. Whatever shape that group takes, the important thing is that it has other members — it’s a community.
Once a community absorbs an idea into its collective viewpoint, rejecting that idea becomes akin to rejecting the whole community, Eliasoph says. And that sort of rejection is a very, very difficult thing for any of its members to do. “This is why you talk with people who deny science and the goalposts are always changing,” she says. “What really matters is the membership in the thing that has meaning, and to keep that membership you have to ignore certain ideas and pay attention to others.”
“The people who deny science are often trying to uphold membership in something that they find meaningful.”
The causes and correlates of denial
Denial, in a nutshell, is the rejection or diminution of a phenomenon that has a large — and sometimes overwhelming — body of supporting evidence.
When it comes to science denial, global warming may be the most conspicuous example. Science’s case that the planet is warming, that people are contributing heavily to this warming, and that this warming — if not addressed — will imperil billions of lives is almost unassailable. And yet huge chunks of the American electorate evince some form of climate-change denial. Even people who are worried about global warming are often unwilling to make even small personal sacrifices that, collectively, could make a meaningful difference.
Why do people do this? Experts say that our aversion to cognitive dissonance is one explanation. “Cognitive dissonance is a negative emotional state characterized by discomfort or tension, or maybe feelings of anxiety or guilt, that’s produced from beliefs or behaviors that are inconsistent with one another,” says April McGrath, PhD, an associate professor of psychology at Mount Royal University in Canada who has published work on cognitive dissonance. For example, a person who believes the planet is warming may also want to drive a gas-guzzling SUV, and these competing interests create cognitive dissonance.
Because cognitive dissonance is unpleasant, people tend to want to get rid of it. And McGrath says that there are generally two ways that people can do this: change a behavior — that is, ditch the SUV for an electric vehicle — or change a belief. Most people go with option B. “Changing a behavior is usually difficult because most behaviors are rewarding,” she says. Changing a belief is often easier, and that’s where some element of denial comes into play. “This could mean trivializing the source of the dissonance” — telling yourself that switching to an electric car won’t make any difference in the grand scheme — “or adding some new belief or idea that supports or rationalizes your choice,” she says. The latter could entail embracing conspiracy theories that argue climate-change consensus is some kind of nefarious ploy.
Before any of us gets too judgy, McGrath says that everyone engages in denial. “We are all constantly bombarded by decisions or choices that create dissonance or conflicts, so we can’t always act in accordance with our ideals,” she says.
Once a community absorbs an idea into its collective viewpoint, rejecting that idea becomes akin to rejecting the whole community.
Along with cognitive dissonance, there are many other scenarios or psychological states that tend to produce denial. “These are all related to each other — they’re not totally independent,” says Craig Anderson, PhD, a distinguished professor of psychology at Iowa State University. He terms one “belief perseverance,” which refers to people’s attachment to ideas or conceptions that they’ve held in the past. We don’t like to change our minds, Anderson explains, and we tend to ignore new information that challenges our long-held views. (Confirmation bias — seeking out and retaining only the information that supports one’s view — is a related concept.)
“Reactance” is another, he says. This refers to the negative feelings that people experience when their freedom is somehow threatened — like if state or local government officials tell them that they can’t shop, dine, travel, or congregate as usual. “Fear is also a big one,” he says. If someone finds a belief or idea to be scary — both global warming and Covid-19 are ready examples — that fear is a powerful motivator of denial.
While all of these overlapping factors can feed into denial, some who study human psychology say that group dynamics — coupled with every person’s vital need to belong — are at the root of many science deniers’ seemingly inscrutable beliefs and behaviors.
Scratching a deep psychological itch
Rebekka Darner, PhD, is director of the Center for Mathematics, Science, and Technology at Illinois State University. Much of her work has focused on improving science literacy and combatting science denial among the general public.
Darner says that a key element of effective science teaching and communication involves “self-determination theory.” This theory holds that people have three basic psychological needs that undergird their motivation to engage in any behavior.
“The first is a need for autonomy, or the belief that an action came from the self,” she says. The second is the need for competence. “This doesn’t mean that a person actually is competent,” she clarifies. What’s important is that the person believes that they are competent and capable of achieving their goals. “The third one is the need for relatedness — a sense of belonging and that other people need you and value your input,” she says.
For those hoping to weaken a friend or loved one’s science denial, Darner says that it’s necessary to start from a place of respect and amity.
The social groups that people identify with tend to satisfy all three of these basic psychological needs, Darner says. And because of this, people are strongly motivated to accept their group’s ideas or to engage in behaviors that are valued within their social spheres. For example, she says that some social groups may place a high value on bucking authority (“You’re not going to control me”) and this attitude and its associated behaviors — like not wearing a mask — can supersede all others.
Self-determination theory helps explain why the widespread adoption of anti-science or anti-expert views is so dangerous. If a person’s group identity motivates them to deny one element of science — like the person who rejects the theory of evolution on religious grounds — then that can be a problem, but at least it’s somewhat contained. If huge numbers of Americans decide that a core element of their group identity is the rejection of science or of creditable expertise, then that’s a problem of a whole other magnitude.
The good news, Darner says, is that beliefs linked to group identities are not intractable. “Humans are complex, which works in our favor,” she says. “No person associates with a single identity, and we all have a variety of different communities with which we interact.” When people are regularly exposed to diverse groups and ideas that clash with their own, the resulting contradictions create uncertainty. And while people tend to find uncertainty uncomfortable, Darner says that uncertainty is often the precursor of learning and idea reassessment.
Unfortunately, she says that some elements of contemporary life may steer people away from these helpful, perspective-balancing encounters with other viewpoints. The ideological myopia — as well as the us-against-them vitriol — that characterizes much of today’s media, both traditional (newspapers, cable news) and new (social media, online message boards), tends to strengthen a person’s opinions and their feeling of being part of a large and like-minded community. Pushing back against all that can be a Sisyphean endeavor.
For those hoping to weaken a friend or loved one’s science denial, Darner says that it’s necessary to start from a place of respect and amity. “People need to feel like you value them and their opinion,” she says. “This kind of relationship has to be there first.” It may help to ask questions — rather than offer counter-arguments — and to respond with interest and noncritical feedback to articles or viewpoints the other person shares. Once you do that and you’ve established more congenial footing, your counterpart may be more willing to consider your side of things. It goes without saying that, however satisfying it may be, telling someone that they’re ignorant and brandishing facts or articles that back your case is the kind of “I’m right and you’re wrong” approach that’s almost certain to fail, and is likely to solidify the person’s opposition to your viewpoints.
But even if you say and do all the right things, your odds of success are probably slim. “Individuals very seldom fulfill basic psychological needs for other individuals,” Darner says. “That fulfillment comes from a larger community and identifying with them and being a part of them.”
The science denier in your life may eventually come around, but it’s unlikely that you’re going to reel that person back in on your own. | https://elemental.medium.com/how-identity-not-ignorance-leads-to-science-denial-533686e718fa | ['Markham Heid'] | 2020-07-09 05:31:01.391000+00:00 | ['Identity', 'Life', 'The Nuance', 'Psychology', 'Science'] |
The Mosaic | The Mosaic
A poem about sadness and anxiety
Photo by Ashkan Forouzani on Unsplash
we all have become a story
and learned to part away with memories
in a quiet way.
each poem is a layer that we shed quietly
we all have become a mirror that knows
how to shatter without making a noise.
each piece which shatters
part of the mosaic. | https://medium.com/scribe/the-mosaic-fd5769e15249 | ['Priyanka Srivastava'] | 2020-12-28 08:42:42.712000+00:00 | ['Poetry', 'Mental Health', 'Sadness', 'Anxiety', 'Writing'] |
What Happens If You Realize You’re Writing the Wrong Book? | Be brave enough to see your mistakes
Knowing you have to start again can be a little scary.
You just barely got through the fear of starting one book, and now you have to go through it again to start another?
I could’ve chosen to ignore my gut feeling that I was doing something wrong. I could’ve continued forcing myself to write.
I would’ve ruined my love for writing if I did that.
The process would be grueling and miserable, and perhaps I wouldn’t want to try writing a book again for a long time after that.
Admitting you made a mistake takes courage. You can’t place the blame on anyone else because writing a book, whether it’s the right or wrong one, is all on you.
No one likes to confess they screwed up, especially in this era where everyone on social media seems to have perfect lives. They don’t mess up, how could I possibly admit I did?
But we all fuck up. A lot. Every day.
Sometimes we trip over nothing, say the wrong thing, or write the wrong books. Screwing up is a part of life, so own it. | https://itxylopez.medium.com/what-happens-if-you-realize-youre-writing-the-wrong-book-4d0f4a984216 | ['Itxy Lopez'] | 2019-12-15 19:47:35.036000+00:00 | ['Writing Tips', 'Motivation', 'Self', 'Advice', 'Writing'] |
ARK Says Goodbye to Marketing Adviser Jeremy Epstein | Six months ago, ARK signed Author, Marketing Expert, and CEO of Never Stop Marketing, Jeremy Epstein, to a contract to serve as a Marketing Adviser to our Chief Marketing Officer, Travis Walker. Jeremy’s goal was to help inform our team and develop a marketing strategy that would fill the gaps in awareness we were seeing within the industry.
Having years of experience in marketing and a strong understanding of the blockchain space, Jeremy helped to analyze and understand the areas in which ARK needed to expand our outreach and highlighted a need to improve our collaboration with influencers in the space. Working with the team, Jeremy helped to put together a list of influencers to target for both the interoperability space, as well as blockchain at large.
We have already started to implement some of these strategies, and over the course of the next several months, as we launch Core v2 and other major developments for the ARK Ecosystem, the entire community will see a massive increase in outreach, interviews, podcasts, and articles as we push to make ARK a leader in the world’s fastest-growing industry.
As our contract comes to a close, we wanted to thank Mr. Epstein for the insight he has brought into our blossoming project and to let him know that we appreciate the time and energy he has put forth towards ARK and the ARK community. We will continue to welcome him around our Slack and ecosystem as an important ARK community member and supportive hodler, forever. As he transitions his focus to his upcoming book release and a renewed passion for his marketing agency and writing projects, everyone in the ARK Crew wishes Mr. Epstein much success in his future endeavors. | https://medium.com/ark-io/ark-says-goodbye-to-marketing-adviser-jeremy-epstein-203d2123f163 | ['Matthew Dc'] | 2018-06-12 22:38:48.819000+00:00 | ['Arkecosystem', 'Development', 'Marketing', 'Blockchain', 'Bitcoin'] |
The economics of Airbnb | Airbnb just went public in a debut that’s been widely celebrated. We dug into the company’s prospectus and learned some interesting facts about their business and travel trends.
Cash flow is highly seasonal. ABNB makes all its money in Q3. As you can see, that’s pretty much the only quarter in which the company is comfortably EBITDA positive in any given year. This is because the bulk of ABNB’s customers travel in Q3, so that is when revenue is earned.
Airbnb generated EBITDA. During 2017 and 2018, the company did generate material positive EBITDA. In 2019, they swung back to a loss due to rising costs across the board, and it looks highly likely they’ll burn again in 2020.
Airbnb generates significant free cash flow. Thanks to unearned fees, which is the payment customers make when they book a reservation, Airbnb does generate free cash flow even in years where they are unprofitable. Cash flow is especially strong in Q1 and Q2 when customers book their stays, and then declines in Q3 when many of the stays actually occur (see seasonality above) and Airbnb has to pay those reservations out to the host. Airbnb keeps 15% of every booking.
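To make those mechanics concrete, here is a stylized sketch with hypothetical numbers; the 15% take is the figure cited above, and the exact split between guest and host fees is simplified away.
# Hypothetical $1,000 booking made in Q1 for a stay that happens in Q3.
gross_booking_value = 1_000
take_rate = 0.15  # the ~15% of every booking that Airbnb keeps

cash_collected_at_booking = gross_booking_value              # sits as unearned fees until the stay
host_payout_at_stay = gross_booking_value * (1 - take_rate)  # cash goes back out around Q3
revenue_kept_by_airbnb = gross_booking_value * take_rate

print(cash_collected_at_booking, host_payout_at_stay, revenue_kept_by_airbnb)  # 1000 850.0 150.0
That timing gap between collecting cash at booking and paying the host at the stay is what makes Q1 and Q2 cash flow look strong and Q3 look weaker.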
Strong founder ownership. The three founders (Brian, Nathan, Joseph) own 14.1% to 15.3% each. This is an extraordinary level of ownership for a 3-founder company going public. This is due to two things: strong free cash flow generation and more importantly the ability for Airbnb to raise capital at consistently high valuations.
Covid hurt a lot. Bookings in Q2 fell to $3.2bln whereas in Q2 2019, bookings were $9.8bln. That’s a 67% decline. The rebound in Q3 of 2020 however was dramatic as demand for travel exploded.
Short trips are in. Thanks in part to covid, people are taking shorter trips. “Short-distance travel within 50 miles of guest origin has been highly resilient, even at the peak of the business interruption in April. Short-distance stays were one of the fastest growing categories prior to the COVID-19 pandemic. This growth was further bolstered by the COVID-19 pandemic, as many guests chose short-distance trips instead of long-distance travel.”
Airbnb’s prospectus provides a very interesting look into travel trends and the economics of running a marketplace.
Visit us at blossomstreetventures.com and email us directly with Series A or B opportunities at sammy@blossomstreetventures.com. Connect on LI as well. We invest $1mm to $1.5mm in growth rounds, inside rounds, small rounds, cap table restructurings, note clean outs, and other ‘special situations’ all over the US & Canada. | https://blossomstreetventures.medium.com/the-economics-of-airbnb-dd2ed4828bf7 | ['Sammy Abdullah'] | 2020-12-17 13:41:52.505000+00:00 | ['Airbnb', 'Founders', 'Startup', 'Entrepreneurship', 'Venture Capital'] |
What If I’m the Narcissist and Not the Victim? | When I realised that I was in a relationship with a narcissist, I started to read a lot about Narcissistic Personality Disorder. I devoured books and articles — to find some meaning in the chaos that I experienced, to find explanations where there were none and to confirm that I am not crazy.
But, just like medical students who suffer from severe hypochondria and diagnose themselves with all sorts of illnesses they are learning about, I caught myself finding a lot of narcissistic traits in my otherwise normal personality.
I already knew that something was off in my relationship. I knew that we were both suffering. I knew that I was suffering a lot. But everything I read made me start to doubt myself.
What if I am the narcissistic one, and not him? What if the problem lies with me and it is all on me that we are both in this turmoil?
It was quite disturbing to consider that I may have changed into someone I never wanted to be. I felt I was selfish — because I was told that I was selfish, not caring about his needs. I was told that I was abusive — when I was trying to have two-way communication and I wanted to express my opinions.
I felt that I was a terrible person, who always wants attention, who is clingy and demanding and impossible to satisfy.
And it was true. I wanted the attention that he used to give me but he decided to take away to punish me only to show me some random glimpses of affection as breadcrumbs. I started to become selfish, and I tried to get him to care about me too — instead of always dealing with his problems.
According to studies, it is quite common for victims of narcissistic abuse to start questioning themselves, wondering whether they are the narcissistic one or whether it is their partner.
If you have to ask yourself whether you are narcissistic or not, odds say you’re not. Let me explain. | https://medium.com/mind-cafe/what-if-im-the-narcissist-and-not-the-victim-88ccde8fe62d | ['Zita Fontaine'] | 2020-05-12 06:37:51.246000+00:00 | ['Mental Health', 'Communication', 'Relationships', 'Psychology', 'Narcissism'] |
Bat Coronavirus Rc-o319 Found in Japan: New Relative of SARS-CoV-2 | Bat Coronavirus Rc-o319 Found in Japan: New Relative of SARS-CoV-2
This study tells us there’re other undiscovered bat coronaviruses, even outside of China.
Background vector created by articular — www.freepik.com
The Centers for Disease Control and Prevention (CDC) has released a study from Japan, titled “Detection and Characterization of Bat Sarbecovirus Phylogenetically Related to SARS-CoV-2, Japan,” this month. In this study, a new bat coronavirus called Rc-o319 is discovered, which belongs to the same evolutionary clade as SARS-CoV-2 and RaTG13. This article will discuss the significance of this finding.
(SARS-CoV-2 is the novel coronavirus that causes Covid-19. RaTG13 is a bat coronavirus that is the closest known relative of SARS-CoV-2. SARS-CoV-2 and RaTG13 belong to the beta genus of coronaviruses, within the sarbecovirus clade, i.e. genus Betacoronavirus, subgenus Sarbecovirus. So, Rc-o319, RaTG13, and SARS-CoV-2 will be called sarbecoviruses from now on.)
The study’s rationale
Horseshoe bats of the genus Rhinolophus are infamous for being reservoirs of betacoronaviruses. RaTG13 is one such bat sarbecovirus that is 96% identical to SARS-CoV-2 at the genetic level. Current evidence suggests that SARS-CoV-2 evolved from a common ancestor shared with RaTG13.
RaTG13 was first sampled from a bat cave in the Yunnan Province of China. In fact, most bat coronavirus studies are from China. But Rhinolophus species and other bats are also found in other parts of Asia, Europe, and Africa, and not much is known about the coronaviruses they harbor.
“We provide a hypothesis that a bat sarbecovirus with zoonotic potential might exist even outside China, because Rhinolophus spp. bats inhabit Asia, Europe, and Africa.”
Thus, Shin Murakami, associate professor at the Department of Veterinary Medical Sciences of the University of Tokyo, led a study to characterize the complete genome of a bat sarbecovirus called Rc-o319 in Rhinolophus cornutus, a bat species endemic to Japan.
What the study did and found
In 2013, the researchers captured four R. cornutus from a cave in the Iwate prefecture of Japan. They then extracted RNA genetic material from the bats’ feces to screen for any presence of betacoronaviruses. Once candidates were identified, they proceeded to sequence the full genome in 2020.
Sequence analyses revealed that the new bat sarbecovirus, Rc-o319, is 81.47% genetically identical to SARS-CoV-2. While an 18.5% genetic difference is massive, the full genome and key genes (spike protein and ORF1ab) of Rc-o319 still place it in the same clade as SARS-CoV-2 and RaTG13.
The study also showed that Rc-o319 could not infect human cells expressing the human ACE2 receptor. Another distinction of Rc-o319, the study found, is that it does not require TMPRSS2 to complete cell infection. Thus, the bat’s ACE2 receptor alone is sufficient for Rc-o319, whereas human ACE2 and TMPRSS2 are required for human SARS-1 and SARS-CoV-2.
Adapted from Murakami et al. (2020). Phylogenetic tree of full genomes of Rc-o319, SARS-CoV-2, RaTG13 (highlighted in yellow), and others. Phylogenetic trees of other genes (spike protein and ORF1ab) can be found in the main paper.
“Among R. cornutus bats in Japan, we detected sarbecovirus Rc-o319, which is phylogenetically positioned in the same clade as SARS-CoV-2. Sarbecoviruses belonging to this clade previously were detected from other Rhinolophus spp. bats and pangolins…in China and could have played a role in the emergence of SARS-CoV-2,” the authors concluded. “We provide a hypothesis that a bat sarbecovirus with zoonotic potential might exist even outside China, because Rhinolophus spp. bats inhabit Asia, Europe, and Africa.”
With the current phylogenetic tree, at least five ancestors are standing in between Rc-o319 and SARS-CoV-2. So, while Rc-o319 is related to SARS-CoV-2, it’s very distantly related.
The study also admitted that Rc-o319 is unlikely to jump directly to humans as it cannot bind to the human ACE2 receptor, unlike RaTG13 that also uses the human ACE2 receptor. However, as R. cornutus live in caves or tunnels with other bat species, and interact with other wild animals during the daytime, Rc-o319 may transmit to coinhabitant animals.
A closer look at Rc-o319
First, the study did not suggest that Rc-o319 is involved in the origin of SARS-CoV-2. Rather, the study tells us that other undiscovered sarbecoviruses could still change the current phylogenetic tree — just like the Japanese study added a new member, Rc-o319, into the sarbecovirus clade.
Rc-o319 is only 81.47% genetically identical to SARS-CoV-2, compared to RaTG13 with 96% identity. Scientists have predicted that the 4% genetic differences between RaTG13 and SARS-CoV-2 represent about 50 years of evolutionary time gap. Indeed, a published study in Nature suggests that the most recent common ancestor of RaTG13 and SARS-CoV-2 arose around 1950–1980.
It follows that the most recent common ancestor of Rc-o319 and SARS-CoV-2, as well as the other sarbecoviruses in between, would date back even further. With the current phylogenetic tree, at least five ancestors stand in between Rc-o319 and SARS-CoV-2. So, while Rc-o319 is related to SARS-CoV-2, it’s very distantly related. The biological differences between Rc-o319 and SARS-CoV-2 further support this notion. To restate: compared to SARS-CoV-2, Rc-o319 uses a different form of the ACE2 receptor and does not need the TMPRSS2 co-factor to complete cell infection.
Is it possible that the Covid-19 pandemic started somewhere outside of China? Perhaps so, if a sarbecovirus very closely related to SARS-CoV-2 is discovered outside of China, and Rc-o319 is certainly not that virus. At this point, the Yunnan Province of China, where RaTG13 was sampled, is still the leading candidate region where Covid-19 started.
Adapted from Murakami et al. (2020). Cropped portion of the phylogenetic tree depicting the associated common ancestors.
Short abstract
Japanese researchers discovered a new bat coronavirus called Rc-o319 that belongs to the same evolutionary clade (betacoronavirus, sarbecovirus) as SARS-CoV-2 and its closest known relative, RaTG13. But Rc-o319 is only 81.47% genetically identical to SARS-CoV-2. By contrast, RaTG13 and SARS-CoV-2 are 96% identical, and that 4% difference entails about 50 years of evolution. Thus, while Rc-o319 is related to SARS-CoV-2, it’s very distantly related. Still, this study tells us that other uncharted coronaviruses — even outside of China — may alter our current knowledge of the SARS-CoV-2 evolutionary tree. | https://medium.com/microbial-instincts/bat-coronavirus-rc-o319-found-in-japan-new-relative-of-sars-cov-2-d6221d90e8d2 | ['Shin Jie Yong'] | 2020-11-22 11:54:15.117000+00:00 | ['Innovation', 'Life', 'Technology', 'Coronavirus', 'Science'] |
I’m Only Superhuman | All you ever gave me was a 1000 and 1 reasons to leave.
But I never did.
I stayed.
Against all odds or hope for a better day,
I stayed with you.
Because I knew — You needed me. | https://medium.com/know-thyself-heal-thyself/im-only-superhuman-fe500c57cec6 | ['Audrey Malone'] | 2020-12-29 20:51:09.740000+00:00 | ['Storytelling', 'Self-awareness', 'Life Lessons', 'Love', 'Poetry'] |
Cracking the handwritten digits recognition problem with Scikit-learn | Sklearn Hello World!
The example we’ll run is pretty simple: learn to recognize digits. Given a dataset of digits, learn their shapes and predict unseen digits.
This example is based on the Sklearn basic tutorial.
Verify your Python configuration
Before we move forward, just run a simple Python file to make sure you have configured everything properly.
1. Open PyCharm
2. Create a new project
3. Create a Python file
4. Add the following line into it:
print("Running Sklearn Hello World!")
5. Run the file. You should see that string in the console.
Import datasets
Sklearn has some built-in datasets that allow you to get started quickly. You could download the dataset from somewhere else if you want to, but in this blog, we’ll use Sklearn’s datasets.
Note: How digits are transformed from images into pixels is out of the scope of this blog. Assume that someone did a transformation to get pixels from scanned images, and that’s your dataset.
1. Edit your Python file and, before the print command, add the following import:
from sklearn import datasets
2. Explore the dataset:
digits = datasets.load_digits()
print(digits.data)
3. Run your Python file. You should see the following output in the console:
[[ 0. 0. 5. ... 0. 0. 0.]
[ 0. 0. 0. ... 10. 0. 0.]
[ 0. 0. 0. ... 16. 9. 0.]
...
[ 0. 0. 1. ... 6. 0. 0.]
[ 0. 0. 2. ... 12. 0. 0.]
[ 0. 0. 10. ... 12. 1. 0.]]
What you’re seeing in that output are all the digits (or instances) in the dataset, along with the features each instance has: in this example, the pixel values of each digit. If we printed the value digits.target instead, we would see the real values (classifications) for those digits: array([0, 1, 2, …, 8, 9, 8]).
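For instance, here is a quick way to poke at the rest of the dataset object. This is my own small sketch rather than part of the original walkthrough; the printed shapes are what load_digits returns in recent Sklearn versions.
from sklearn import datasets

digits = datasets.load_digits()

# Each row of digits.data is one digit flattened into 64 pixel intensities (an 8x8 image).
print(digits.data.shape)   # (1797, 64)

# digits.target holds the true label for each instance.
print(digits.target[:10])  # [0 1 2 3 4 5 6 7 8 9]

# digits.images keeps the unflattened 8x8 version of each digit, handy for plotting.
print(digits.images[0].shape)  # (8, 8)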
Features are attributes of an instance. A person may have attributes like nationality, skills, etc. Instead of calling them attributes, they’re called features. In our case, our instances (digits) have the brightness levels of each pixel as attributes or features.
Learn from our dataset
ML is about generalizing the behavior of our dataset. It’s like taking a look at the data and saying something like “yes, it seems that next month we’ll increase our sales”. That’s because based on what happened, you’re trying to generalize the situation and predict what may happen in the future.
There are basically two ways of generalizing from data:
1. Learning by heart: this means “memorizing” all the instances and then trying to match new instances to the ones we already know. A good example of this is explained in [1]: if we had to implement a spam filter, one way could be flagging all emails that are identical to emails already flagged as spam. The similarity between emails could be the number of words they have in common with a known spam email.
2. Building a model to represent the data: this implies building a model that generalizes from the known values to unseen ones. The general idea is that if we know that instances A and B are similar and A has a target value of 1, then we can guess that B may have a target value of 1 as well. The difference with the first approach is that by building a model, we adjust it to represent the data and can then forget about the individual instances.
A cats-vs-dogs classifier. In our case, we’ll classify by digit: 0, 1, 2, etc. Source
Let’s create a model that represents our data’s behavior. As this is a classification problem (given some instances, we want to classify them based on their features and predict the digit they represent), we will call our component a classifier, and we’ll choose a Support Vector Machine (SVM). There are many other classifiers in Sklearn, but this one will be enough for our use case. For further details on when to use certain components depending on the problem, you can follow this cheat-sheet: | https://medium.com/overfitted-microservices/cracking-the-handwritten-digits-recognition-problem-with-scikit-learn-b5afc28e2c24 | ['Ariel Segura'] | 2019-01-05 01:43:24.570000+00:00 | ['Machine Learning', 'Python', 'Data Science', 'Software Engineering', 'Scikit Learn'] |
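To round out the Scikit-learn example above, here is a minimal end-to-end sketch of training that classifier on the digits dataset and predicting digits it has never seen. This is my own illustration rather than the article’s code; the hold-out split and the gamma value are assumptions made in the spirit of Sklearn’s basic tutorial.
from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

digits = datasets.load_digits()

# Hold some digits back so we can test the model on instances it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A Support Vector Machine classifier; gamma=0.001 is just an illustrative setting.
clf = svm.SVC(gamma=0.001)
clf.fit(X_train, y_train)

print(clf.predict(X_test[:5]))    # predicted digits for five unseen images
print(y_test[:5])                 # their true labels
print(clf.score(X_test, y_test))  # overall accuracy on the held-out digits
Swapping in a different classifier later only means changing the two clf lines, which is part of what makes Sklearn’s estimator interface convenient.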
Bokeh 2.0.1 | Today we release Bokeh 2.0.1: a collection of improvements in automation, documentation, and other minor fixes following the recent 2.0 release.
The full list of changes can be seen in the milestone list on GitHub. Some of the highlights include:
Addressing a Cross-Origin Resource Sharing (CORS) issue seen in Chrome and Chromium-based browsers #9773
Adding multi-file support for FileInput widgets #9727
Bokeh server can now serve custom extension code #9799
A handful of documentation clarifications, corrections, and expansions
As of 2.0.1, Bokeh’s FileInput widget supports multiple file selections.
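Here is a minimal sketch of what that looks like in a Bokeh server app. It assumes the multiple flag added by this feature and the list-valued filename property that comes with it; the accept filter and the callback are illustrative choices of mine, not code from the release notes.
from bokeh.io import curdoc
from bokeh.models import FileInput

# Let the user select several files at once; their contents arrive base64-encoded in value.
file_input = FileInput(accept=".csv,.json", multiple=True)

def report_upload(attr, old, new):
    # With multiple=True, filename (like value) holds one entry per selected file.
    print(f"received {len(file_input.filename)} file(s): {file_input.filename}")

file_input.on_change("value", report_upload)
curdoc().add_root(file_input)
Running it with bokeh serve --show app.py (any file name works) shows the widget in the browser.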
If you have questions after upgrading, we encourage you to stop by the Bokeh Discourse! Friendly project maintainers and a community of Bokeh users are there to help you navigate any issues that arise.
If you are using Anaconda, Bokeh can most easily be installed by executing conda install -c bokeh bokeh. Otherwise, use pip install bokeh.
Developers interested in contributing to the library can visit Bokeh’s Zulip chat channels for guidance on best practices and technical considerations.
As always, we appreciate the thoughtful feedback from users and especially the work of our contributor community that make Bokeh better! | https://medium.com/bokeh/bokeh-2-0-1-362eb5d0729a | [] | 2020-06-10 00:13:12.811000+00:00 | ['Python', 'Python3', 'Visualization', 'Data Science', 'Bokeh'] |
How I Escaped My Corporate Fate and Decided to Choose Myself | “Once you realise you deserve a bright future, letting go of your dark past is the best choice you will ever make.”
― Roy T. Bennett, The Light in the Heart
We all make bad choices but sometimes they only become apparent when life is pulled so far off course you don’t recognise who you are anymore.
For whatever reason, some of us double down, grit our teeth and push on through, and never stop making the wrong choice, even when truth stares us in the face.
I realise that now. Corporate work gave me a stark vision of a possible future and made me realise I had been making the wrong choice for 20 years.
It all came to a head this year. I had reached 42, still working in an office, looking at my manager, eight years my senior, who was cranky as hell. He was miserable, under pressure, pale and balding; the archetype of middle-aged misery.
He talked lots about how much TV he watched. Boasted, even.
He had an app that clocked up the hours of mindless escapism he had indulged in, a measurement of existential despair in primary colours and interactive graphs.
He’d made a bad choice. He didn’t seem to realise he wasn’t meant to be there, in that office, but I saw him and I saw his mistake. Somewhere down the line, he’d chosen to stay small, and now, there he sat.
He was unhappy and stressed and about every two weeks, it erupted out of him.
He shouted, berated his team, swore, told people to “fuck off” — the rage spilled over as incongruence gnawed at his soul.
He didn’t seem to realise this was the cause of his woe, I think he felt the irritation but didn’t connect it to anything deeper.
Still, it couldn’t be kept in, the rage, it bubbled just below the surface, the existential angst caused him to hot-foot from steady leader to explosive child, back and forth and forth and back as he held on to a crust of sanity between two extremes, never quite managing to be either.
He’d made a mistake. He shouldn’t be in an office but he’s 50 and this is his bed now. A bed of nails. Lie down, get comfortable, this is yours, you chose it.
Someone once told me “if you can’t find someone you want to be in the place you work, then you should quit.”
These words rattled in my head as he loomed, one singular rung above me on the corporate hierarchy, a walking, talking half-man on autopilot, who dared not think too deeply about concepts such as happiness and meaning else the walls of reality would come crashing down and a tsunami of truth would sweep him away like a bamboo beach hut.
During this time, there has been a great deal of political quarrelling above us, the corporate gods argued and it rippled down the chain of command in gentle lapping waves of agitation.
Sooner or later, it came to fruition and our departmental director was unceremoniously ousted, a sacrificial lamb slaughtered to appease someone, somewhere, I suppose.
With expediency that raised more questions than it resolved, a new man arrived to fill the vacant post.
He came with smiles, big ideas, ugly PowerPoint presentations and buzzwords of encouragement from the book of ‘How to be a Leader’.
Everyone had seen the book, no one told him.
Immediately, he displayed signs of cracking under pressure.
It’s easy to understand why, his predecessor didn’t play ball and there he now sat, in the vacated throne, a Damoclesian sword dangling above his hairless head. He’d been brought in to solve the political machinations above him and everyone considered him “their man” but because of this, he could be no one’s and the stress of this inevitability was his burden to bear.
He began looking more stressed with each appearance he made. He arranged his face to talk to others, but I saw him, I saw him because he was my manager, only another rung up, just one choice ahead, a little older, a little balder, a little fatter.
He too had made a mistake. He too shouldn’t have been there.
Perhaps he knew something my manager did not, he was more self-aware after all, but whatever his wisdom, it did not matter, he was tied in, committed, he’d made promises, this was his only egg, his only basket.
At his age — I’m guessing mid-50’s — his life could have been anything, but it was crumpled suits from long commutes, high-pressured meetings, weekend working and the endless toil of trying to please everyone, but instead pleasing no one, simply disappointing one person here, one person there, watch how it unfolds, watch how it unravels, watch his undoing as the corporate gods rattle him as they have rattled us.
I watched these two men, both my superiors, live out their bad choices. I watched as they chose them over and over, I watched them lose the war of attrition on their spirits.
Regret hadn’t consumed them yet, not wholly, it had only frayed their edges, but I saw it coming. One day, it will be all they feel.
It is in my reflection of these men that I realised it never stops. The mistake, the bad choice, it never ceases to be made, each day, each minute, each hour, unless you choose again, unless you choose differently.
I was these men also, just a bit younger, just a bit slimmer, just with a bit more hair.
The three of us were all on the same road, on the same conveyor belt, on the same mouse wheel.
Their mistake was my mistake, it was our mistake.
The only difference between the three of us was how far down the wrong road we had decided to travel, how much we had gritted our teeth and doubled down on our wrong choices.
It doesn’t need to be this way. Life is not a trap.
The stars in the night sky shine down from the past and looking up at them is looking up at what once was.
In the corporate world, looking up at those above us is looking at what is to come. At what could be.
Above me, all I saw were the ghosts of Christmas Future, but those two men didn’t yet know they were ghosts. They seemed alive, they walked, talked, made tea, buttoned-up shirts, put on ties, and sent emails, but were simply not there.
These two men had long abandoned a search for meaning. Instead, they found themselves in the reflection of a gleaming axel, a shiny cog, a turning wheel, seeking answers in the grinding gears of corporate machinery.
They didn’t realise they too were made from cold, hard steel.
For almost 20 years I had wandered down the corporate road, for almost two decades I was empty and miserable, sacrificing my dreams on the cross of certainty.
It took me that long for my bad choice to come into sharp focus. There it is, I see it now, my inevitable future shown to me in the reflection of two broken men.
That day — when I saw my reflection and a painful epiphany arrived — I left the office and never went back.
The reality, of course, was messier, less impulsive, less romantic, but that day I spiritually checked out.
It was the first day I did something right. It was the first day I saw my bad choice and decided to choose differently because it never stops unless you stop choosing it.
And that’s all it takes, one new choice for one new life. | https://medium.com/the-ascent/how-i-escaped-my-corporate-fate-and-decided-to-choose-myself-247e58c865b3 | ['Jamie Jackson'] | 2020-12-15 18:03:43.070000+00:00 | ['Work', 'Self-awareness', 'Spirituality', 'Life Lessons', 'Entrepreneurship'] |
By (Non)Design: The Connections Between Generic Packaging and Creative Life | 19 Feb 2006 from On Kawara’s “Today” painting series
Like many kids who attended state-run American elementary schools in the 1980s, I have barely any recollection of anything that I learned in the actual confines of a classroom, being mostly dependent on family support and autodidactic ability to acquire and retain knowledge. I do, however, have a comparatively vivid recall of all the “extra-curricular” rituals of violence and status-jockeying cruelty that were the rule, not the exception, in these institutions. In among the rapidly decaying memories of this time, I can still remember one popular insult that was hurled around on playgrounds and school buses with sadistic glee: “generic.” In terms of initiating either clumsy fistfights or defeated sobbing fits, it wasn’t as cruelly effective as other period barbs, i.e. “retard” or “L.D.” [an acronym for one placed in ‘learning disabled’ classes], but it often enough managed to strike a nerve and prompt intense, prolonged fits of self-doubt. While someone being mocked as a “retard” was simply being called inept, to be on the receiving end of a “generic” claim was to be simultaneously accused of low status and to be totally devoid of any distinguishing personality traits. The thing about such insults is that they often prompt their victims to come out on the other side of the aforementioned fits of self-doubt with a desire to throw the detested personality flaws back in their tormentors’ faces: in fact, an entire paradoxically vivid subculture eventually formed around the adoption of “generic” aesthetics and ideals.
To understand the relevance of that both then and now, it is necessary to look back at the retailing landscape as it existed in the late 1970s and 1980s. But first, some more clarification about the pejorative nature of “generic” is in order. While “generic” seems semantically identical to more contemporary insults like “basic,” each is a clear product of its own time: the latter refers to a lack of imagination in the face of an unprecedented opportunity for self-individuation; an inability to make a passably original expression in an Information Age supposedly defined by endless difference. It is an insult that maintains its edge by denying others a capability for producing a memorable self-image during a time in which anyone recording himself screaming at a video game live stream can ostensibly become a world-renowned “content creator”. By contrast, a “generic” person in the ’80s was, essentially, someone incapable of “correctly” consuming.
The hidden implication, I always felt, was that “generic” people were forced into their lot not by external circumstances, but by inadequate levels of ingenuity and poor decision-making skill. They were not only, as the story went, disgraces to themselves, but possible detriments to the national character as well, at a time when free-wheeling American vivacity and spontaneity were still being touted as the cultural forces that would turn Soviet citizens against their masters during the late stages of the Cold War. “Generic” people were incapable of rising to their task as cultural liberators; shuffling automatically and incuriously through life, showcasing such a total deficit of ambition that they practically necessitated the creation of a unique line of consumer goods that responded to their flat affects and self-reduced personal standards.
Enough said.
That product line, in a not-so-distant past, was unmistakeable when encountered in the pre-WalMart era of suburban American supermarkets. Entire “generic aisles” were comprised from solid walls of black text on opaque yellow packaging, with individual products from cookies to breakfast cereal to beer, blending together in a single uniform mass only interrupted by the exposed white strips of metal shelving. It was an artless spectacle that nevertheless could compete with the most sublime works of the Minimalist masters (Donald Judd, Carl Andre etc.) in terms of memorability and semantic clarity. A closer look at the product offerings revealed little more than what could be ascertained from a distance: the already stark two-color printing process was given more imposing weight by the total absence of any additional graphic elements aside from purely functional ones (e.g. UPC codes and ‘nutritional information’ charts), and the black block text announcing the packages’ contents contained no listing of the products’ benefits to the consumer, no elaborative pictograms, nor really any additional subtext to persuade or entice. Elements that might have contributed to both visual and haptic distinction, like the universally recognizable fluted surface and grooves of the Coke bottle, were also ignored.
Given what has already been said about the gravitational pull of full-color American exuberance as a cultural force, this anti-marketing strategy couldn’t have lasted forever (one notable modern holdout is the wholly “generic” Canadian supermarket No Frills), and such generic products eventually made way for somewhat less austere “house brands” featuring an actual modicum of graphic design work. In the absence of such packaging, iconoclastic graphic designers like Art Chantry are left to unironically lament how “everything is so ‘pretty’ now in the grocery aisle.”[i]
The memorable placement of these generic packages in Alex Cox’ 1984 film Repo Man (see above) begins to hint at the erstwhile omnipresence of these products, while re-purposing them as visual signifiers of cultural cynicism (though I occasionally meet Repo Man viewers who are under the impression that the no-brand “BEER” props were comical anomalies fashioned solely for the film). Numerous subcultural outliers in the U.S., and the Western world as a whole, would do generic packaging one better by apparently embracing it: the “generic” album from San Franciscan ‘art-damaged’ punks Flipper, which replicated the sterile black-on-yellow package design of generic foodstuffs to a “T,” was one watershed design that pointed towards a subcultural adoption of the generic anti-aesthetic as something superior to “proper,” element-rich graphic design. For one, appropriating generic design style for one’s own creative output communicated a certain resistance to being propagandized, and particularly in accepting the propaganda that consumer choices alone provided the molecular structure of a distinct identity (especially as it became steadily more obvious that the preemptively limited choices, in consumer goods as well as broadcast media and political candidates, did not represent everything really available or possible in the marketplace).
It is not that bold of an assertion to say that some kind of “genericism as resistance” has manifested in every multi-media, d.i.y. subculture to have existed from the late 1970s to present. This tenacity has existed in spite of repeated lessons from market researchers, such as Orth and Malkewitz, whose findings implied that “nondescript designs score low on sincerity, excitement and ruggedness, and average on competence and sophistication…these designs further generate impressions of ‘corporate’ and ‘little value for money’, and do not evoke happy memories.”[ii]
Then again, the above is not an exhaustive list of criteria for the appreciation of a given object, and generic packaging appropriated for artistic statements plays upon a different set of cultural impulses ranging from a distrust of arbitrariness to the many varieties of societal fatigue. For those inundated with other eye-popping pleas for attention defined by dancing typefaces and hyperreal graphic novelty, the attitude of “take it or leave it” challenge implied by mock-generic cultural products must have had (as it did for me) an attraction akin to the romantic curiosity one might feel for disengaged, aloof loners after being breathlessly propositioned by dozens of other prospective partners. Everyone from the “white label” underground of techno music to the more institutional (if just barely) culture of avant-garde classical have gambled on this psychological quirk with decent enough results: see, for example, the Swiss Hat Art label’s series of ‘modern classical’ masterworks on CD. Elsewhere, the packaging for my CD copy of the late Glenn Branca’s ecstatic Symphony №2 (The Peak of the Sacred) would be almost indistinguishable from a generic product bought at a Kroger supermarket in the early 1980s, save for the deviation of two contrasting text colors being featured on the cover.
Genericism re-envisioned as culture also telegraphs a commitment to essentiality, which is at the core of any ethical statement that this style hopes to make. Chantry, in his musing on the ‘house brand,’ notes that a key to their strategy was “to make the labeling look like they weren’t ‘wasting’ your precious grocery money on elaborate (i.e. expensive packaging…) it all got tossed out anyway, right?”[iii] In doing so, he touches upon a stance that was both ethic and aesthetic, and one which applies to many other non-musical creative artifacts of the late 20th century and beyond, executed in media that did not require packaging. While not consciously attempting to appropriate or comment on generic packaging, some major works of the avant-garde do capture something of this same contrarian attractiveness and ethical essentialism. One of conceptual artist On Kawara’s most noted works, his Today series of paintings consisting only of the painting’s date of completion rendered in white block lettering on a single-color background, effectively served as “packages” or framing devices for the artist’s own continual self-development: they were a kind of “embodied time” demonstrating aspects of Merleau-Ponty’s phenomenology (and, unfortunately, doing so in a way too complex to be fully laid out in this short article). Elsewhere, something like Aram Saroyan’s ultra-minimalist poems, e.g. lighght (the entirety of which you have now just read), arguably took the “generic” quality of stark non-descriptiveness into the field of poetry. In the process, they reduced that field’s complex relationship with language to a purely declarative function, and in a way that was shocking enough to become the National Endowment for the Arts’ first bona fide funding controversy.
Rather than traveling further down this road, though, it would be wise to put on the brakes and state the perhaps obvious fact that a simple, nondescript, purely declarative design style has been the very lifeblood of corporate logos and luxury consumer goods for decades now. As to the latter, the marketing aims of the designer fragrance industry are much better encapsulated by something like the austere layout of the CK One bottle (arguably the first truly popular unisex fragrance in the U.S.) than by Katy Perry’s hilariously cloying, cat-shaped Meow container. It’s fascinating to consider how, simply by altering color schemes and diluting some of the blunt force of a bold / block typeface by going “lowercase”, changing the degree of kerning, etc., one can create “genericism” that exudes a much higher degree of “competence and sophistication” while also paying lip service to the essentialist “waste nothing” ethic. With the classical age of actually existing generic packaging behind us, a kind of carefully sculpted generic quality is a valuable weapon in the hands of marketing departments everywhere, and is a reliable alternative to adopt when humans’ limitless capacity for boredom and fatigue with established aesthetics comes into play once again. As in Dr. Seuss’ brilliant children’s fable The Sneetches, where a master salesman pits “star-bellied” and generically non-starred creatures against one another in a cyclical divide-and-conquer scheme, alternating acceptance and loathing of the nondescript seems to be an eternal recurrence.
Yet there is one relatively new feature of our current cultural and media landscape that is altering the rules of this game: the simple fact that the relevance of “packaging” itself is eroding. This is certainly true for the music business, as claimed by the Royal Designer for Industry Malcolm Garrett — himself the designer of the notorious “generic” carrying bag design for the Buzzcocks’ Another Music in A Different Kitchen LP:
Packaging is just one interface to the music. The application of creative energy, which once saw physical expression in record sleeves, posters, and club flyers, is now realized in ‘soft’ ways. The interface is now digital, but no less compelling. The point of access is the package, and consequently, identity is expressed in ways that complement rather than define the music.[iv]
Garrett’s invocation of the “interface” brings us right back to the present age of social media, and the “internet of everything,” and their attendant imperatives for all to sacrifice their privacy in order to become recognizable creators of “content.” Musing upon these things also, after a fashion, brings us to what was initially so rewarding about announcing one’s creative presence to the world with a strictly uninformative data set. For some, this may have come from nothing else than a contrarian urge, but this was also informed by anonymity as a strategy, i.e. the hope that an austere interface would force prospective fans, supporters or friends to engage in direct contact and communicate unhindered by symbolic distractions, while also repelling those who could not be bothered to do so.
The new equivalent of “genericist” counter-cultural revolt might be nothing other than a voluntary refusal of the dopamine rush of recognition provided by social media networks, and limitation of personal disclosure to the most purely declarative: something like the Geneva Convention injunction that captured combatants provide captors with no information other than “name, rank, and serial number.” To be sure, there will be a whole new repertoire of schoolyard insults ready to be launched when this strain of non-conformity finally becomes perceived as a genuine force, and when an individual’s level of usefulness to society becomes defined not by their skill in production or consumption, but in their degree of commitment to omnipresence (read: constant ability to be monitored and administered). As always, insults will be loudly bleated by schoolchildren, but only in imitation of those adults who have been successfully propagandized to see any degree of independent thought and action as existential threats.
[i] Chantry, A. (2015). Art Chantry Speaks. Port Townsend: Feral House.
[ii] Orth, U. & Malkewitz, K. (2008). “Holistic Package Design and Consumer Brand Impressions.” Journal of Marketing, 72(3).
[iii] Chantry (2015).
[iv] Garrett, M. (2015). “Bsolete?” Royal Society for the Encouragement of Arts, Manufactures and Commerce, 161. | https://thomasbeywilliambailey.medium.com/by-non-design-the-connections-between-generic-packaging-and-creative-life-f9ad735891a3 | ['Thomas Bey William Bailey'] | 2019-10-04 03:14:16.377000+00:00 | ['Anonymity', 'Marketing', 'Alternative Music', 'Design', 'Content Creation'] |
JIT fast! Supercharge tensor processing in Python with JIT compilation | At Starschema, we’re constantly looking for ways to speed up some of the computationally intensive tasks we’re dealing with. Since a good amount of our work involves image processing, this means that we’re in particular interested in anything that makes matrix computations — sometimes over fairly large tensors, e.g. high-resolution satellite or biomedical imagery––easier and faster. Because imagery often comes in multi-channel or even hyperspectral forms, anything that helps process them faster is a boon, shaving valuable seconds off that over large data sets can easily make days of difference.
Until relatively recently, it was not uncommon to write development code in a high-level language with good data science and machine learning support, like Python, but rewrite and deploy it in C or C++, for raw speed (indeed, one of the motivations behind Julia was to develop a language that would be fast enough not to require this!). Python is great for putting your quantitative ideas clearly and succinctly, but interior loops in Python have always been slow due to the absence of type information. Python’s duck typing system really comes to bite when this absence of typing creates unnecessary code and indirection, leading to relatively slow inner loops. Recently, however, solutions were envisaged to get around this problem. The first of these was Cython — injecting C types into your Python code. It is, on the whole, a rather painstaking method of speeding up your code, albeit a lot of computationally intensive code is written in Cython, including code you’ve almost definitely used — much of the SciPy stack, for instance, and almost all of SageMath, were written in Cython.
The problem is that ‘Cythonising’ your code can be time consuming, and often fraught with challenges that require a profound knowledge of C to solve. What if we had a better way to get efficient bytecode from our slow-but-intelligible Python code?
Enter Numba.
Numba is what is called a JIT (just-in-time) compiler. It takes Python functions designated by particular annotations (more about that later), and transforms as much as it can — via the LLVM (Low Level Virtual Machine) compiler — to efficient CPU and GPU (via CUDA for Nvidia GPUs and HSA for AMD GPUs) code. While in Cython, you got the tools to use C types directly, but had to go out of your way to actually be able to do so, Numba does most of the heavy lifting for you.
The simplest way to get started with Numba is to affix the @numba.jit decorator to your function. Let’s consider the following function, performing a simple and pretty clumsy LU factorisation:
import numpy as np

def numpy_LUdet(A: np.ndarray):
    y = [1.0]
    n = A.shape[0]
    with np.errstate(invalid = 'ignore'):
        for i in range(n):
            y[0] = y[0] * A[i, i]
            for j in range(i+1, n):
                A[j][i] = A[j][i]/A[i][i]
                A[j][i+1:] = A[j][i+1:] - (A[j][i] * A[i][i+1:])
Note that, as this is a benchmarking function, it does not return a value; it merely calculates the decomposition. As you can see, for an n x n square matrix, the runtime will be on the order of n², due to the nested iteration. What’s the best way to speed up this code?
We could, of course, rewrite it in Cython. Numba, on the other hand, offers us the convenience of simply imposing a decorator:
import numpy as np
import numba

@numba.jit()
def numba_LUdet(A: np.ndarray):
    y = [1.0]
    n = A.shape[0]
    with np.errstate(invalid = 'ignore'):
        for i in range(n):
            y[0] = y[0] * A[i, i]
            for j in range(i+1, n):
                A[j][i] = A[j][i]/A[i][i]
                A[j][i+1:] = A[j][i+1:] - (A[j][i] * A[i][i+1:])
Through that simple decoration, the code already runs significantly faster (once, that is, the code has had a chance to compile in the first run) — approximately 23 times faster than NumPy code for a 10 x 10 matrix.
OK, so how does it work?
Unlike for Cython, we did not have to re-cast our code at all. It’s almost as if Numba knew what we wanted to do and created efficient precompiled code. It turns out that’s largely what it does: it analyses Python code, turns it into an LLVM IR (intermediate representation), then creates bytecode for the selected architecture (by default, the architecture the host Python runtime is running on). This allows additional enhancements, such as parallelisation and compiling for CUDA as well––given the near-ubiquitous support for LLVM, code can be generated to run on a fairly wide range of architectures (x86, x86_64, PPC, ARMv7, ARMv8) and a number of OSs (Windows, OS X, Linux), as well as on CUDA and AMD’s equivalent, ROC.
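If you are curious what Numba actually inferred, the decorated function object exposes a few introspection helpers. A quick sketch, assuming the function has been called at least once so that compilation has happened (the matrix A here is just a stand-in, and the printed output is verbose):

import numpy as np

A = np.random.rand(10, 10)
numba_LUdet(A)                # the first call triggers compilation
numba_LUdet.inspect_types()   # prints the source annotated with the inferred types

Where compilation fell back to object mode, the annotations show generic Python objects rather than concrete machine types, which makes this a handy way of spotting the parts of a function that Numba could not fully optimise.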
The drawback is that Numba by definition only implements a strict subset of Python. Fortunately, Numba handles this in two ways:
Numba has very wide support for NumPy functions (see list here) and Python features (see list here) — although notably, it does not support context managers (with statements) or exception handling (try, except, finally).
Unless running in nopython mode (see below), Numba will attempt to generate optimised bytecode and, failing to do so, simply try to create a Python function (this is known as ‘object mode’ within Numba).
Object mode vs nopython mode
In general, the biggest boon of Numba is that unlike with Cython, you don’t need to rewrite your whole function. All you need to do is to prefix it with the jit decorator, as seen above. This puts Numba on autopilot, allowing it to determine whether it can do something about the code, and leave the function as it was written if it cannot. This is known as ‘object mode’ and means that if JIT compilation fails because some or all of the function body is not supported by Numba, it will compile the function as a regular Python object. Chances are the result will still be faster, however, as it may be able to optimise some loops using loop-lifting, so it’s definitely worth a try.
But where Numba really begins to shine is when you compile using nopython mode, using the @njit decorator or @jit(nopython=True) . In this case, Numba will immediately assume you know what you’re doing and try to compile without generating Python object code (and throw an exception if it cannot do so). The difference in terms of execution time between object and nopython mode can range from 20% to 40 times (!).
In practice, I’ve found the best approach is to refactor and extract purely optimisable code, and optimise it in nopython mode. The rest can be kept as pure Python functions. This maximises overall optimisation gains without expending compilation overhead (more about which in the next section) unnecessarily.
Where object code is generated, Numba still has the ability to ‘loop-lift’. This means to ‘lift out’ a loop automatically from otherwise non-JITtable code, JIT-compile it, and treat it as if it had been a separate nopython JITted function. While this is a useful trick, it’s overall best to explicitly do so yourself.
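As an illustration of that advice, here is one way the LU example above could be refactored by hand. This is a sketch only, not a benchmarked implementation, and the helper names are mine: the hot inner loop moves into its own nopython function, while the unsupported context manager stays in plain Python.

import numpy as np
from numba import njit

@njit
def _eliminate_rows(A, i):
    # hot inner loop, kept free of constructs Numba cannot compile
    n = A.shape[0]
    for j in range(i + 1, n):
        A[j, i] = A[j, i] / A[i, i]
        A[j, i+1:] = A[j, i+1:] - (A[j, i] * A[i, i+1:])

def refactored_LUdet(A: np.ndarray):
    # plain Python wrapper: context managers, exceptions etc. are fine here
    y = 1.0
    with np.errstate(invalid='ignore'):
        for i in range(A.shape[0]):
            y *= A[i, i]
            _eliminate_rows(A, i)
    return y

The wrapper stays readable and flexible, and all of the work that matters for performance happens inside the compiled helper.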
Compilation overhead
Because Numba’s JIT compiler has to compile the function to bytecode, there will be an inevitable overhead — often indicated by a pretty slow first run followed by tremendously faster subsequent runs. This is the time cost of JIT compiling a function. While compilation is almost always worth it and needs to be done only once, in performance-critical applications it makes sense to reduce compilation overhead. There are two principal ways to accomplish it with Numba: caching and eager compilation.
The @jit decorator accepts a cache boolean argument. If set to True, it will cache the function it compiled into a file-based cache. In general, every time you open and run a Python script, everything that needs to be compiled by Numba gets compiled at that time. However, if you cache the compilation result, subsequent runs will be able to read the bytecode from the cache file. In theory, you can also distribute the cache file, but since Numba optimizes to your specific architecture (and supports a bewildering array of architectures, as described above), it may not work portably on other machines. It nonetheless remains a good idea to cache functions, compile them once and use them all the time.
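In code, caching is a single keyword argument. A minimal sketch:

from numba import njit

@njit(cache=True)
def sum_of_squares(arr):
    # compiled once, written to a file-based cache next to the source file,
    # and reused on later runs of the same script
    total = 0.0
    for x in arr:
        total += x * x
    return total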
Eager compilation is a different way of solving the same problem. Admittedly, the naming is a little misleading — most of the time, these terms are used to indicate when something is compiled (at call time, i.e. lazy, vs. well in advance, i.e. eager). In this case, it refers to a related notion, but one that stretches over what is being compiled, too. Consider the following example:
import math
import numba

@numba.njit
def lazy_hypotenuse(side1: int, side2: int) -> float:
    return math.sqrt(math.pow(side1, 2) + math.pow(side2, 2))
This is lazy compilation because––the Python typing annotations notwithstanding––we have not provided any information to Numba about the function’s possible arguments, and therefore it will compile code at the time of the call, depending on the types of the values side1 and side2 take. Eager compilation, on the other hand, rests on telling Numba well ahead of time what types to expect:
import math
from numba import njit, float32, int32

@njit(float32(int32, int32))
def eager_hypotenuse(side1: int, side2: int) -> float:
    return math.sqrt(math.pow(side1, 2) + math.pow(side2, 2))
The format @jit(<return>(<argument1>, <argument2>,...)) (or its @njit equivalent) will allow the Numba JIT compiler to determine types (check out the documentation for the type system in Numba), and based on that, pre-generate compiled bytecode. Note that if you have an eager compiled function and your arguments cannot be coerced into the format you specify, the function will throw a TypingError .
Invoking other JITted functions
As a general rule, Numba will not do recursive optimisation for you. In other words, if you invoke other functions you yourself defined from a JITted function, you must mark those for JITting separately — Numba will not JIT them just because they’re invoked in a JITted function. Consider the following example:
import numpy as np
from numba import njit, float32
from typing import List

def get_stdev(arr: List[float]):
    return np.std(np.array(arr))

@njit(float32(float32[:]))
def get_variance(arr: List[float]):
    return get_stdev(arr)**2
In this case, the computationally inexpensive second function will benefit from JIT, but all it does is a simple exponentiation. The computationally more expensive first function has not been annotated, and therefore will be run as a Python function — that is, much slower. To get the most out of Numba, the get_stdev() function should also have been given a JIT decorator (preferably @njit, since NumPy’s numpy.std() is supported by Numba); a fully JIT-compiled version is sketched below.
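For completeness, a fully JIT-compiled version of that example might look like the following sketch. The _jit suffixes are mine, and I have dropped the explicit signatures and passed NumPy arrays directly so that Numba can infer the types; note that in strict nopython mode, calling an un-JITted Python function from a JITted one will typically fail with a TypingError, which is another reason to decorate both.

import numpy as np
from numba import njit

@njit
def get_stdev_jit(arr):
    return np.std(arr)

@njit
def get_variance_jit(arr):
    # calls between @njit functions stay inside compiled code
    return get_stdev_jit(arr) ** 2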
Just how fast is it?
To demonstrate the benefits of JIT, I’ve run a benchmark, in which I used a somewhat clumsy LU decomposition of square matrices from 10 x 10 to 256 x 256 . As you can see, Numba-optimised NumPy code is at all times at least a whole order of magnitude faster than naive NumPy code and up to two orders of magnitude faster than native Python code. Directly invoked LAPACK code, written in FORTRAN 90 (via SciPy’s scipy.linalg.lu_factor() , a wrapper around the *GETRF routine in LAPACK ), emerges as the clear winner at larger matrix sizes, and Cython’s performance turns out to be only slightly inferior to the optimised NumPy code.
LU decomposition benchmark for native Python, NumPy, optimised NumPy, LAPACK and Cython code. LAPACK was invoked using a SciPy wrapper. Optimised NumPy code is about an order of magnitude faster than ordinary NumPy code throughout, while up to two orders of magnitude faster than native Python. Cython code, on the other hand, is not significantly faster, whereas FORTRAN code only begins to lap optimised NumPy at relatively large matrix sizes. The ‘bang for a buck’ factor of optimising with Numba is clearly the highest — the NumPy code (orange) and the optimised NumPy code (crimson) differ only by the application of a single decorator.
Of course, Numba has its limitations. Importantly, it only helps to optimise a particular kind of problem — namely, processes where loops or other repetitive structures are included. For tensor operations and other nested loop/high cyclomatic complexity workloads, it will make a significant difference. Even where you need to restructure your code to fit in with Numba’s requirements, such restructuring is a lot easier in my experience than having to rewrite the whole thing in Cython. Acting at the same time as an interface to quickly generate not just faster CPU code but also GPU enabled code (via PyCuda) for a slightly more limited subset of functionalities (NumPy array math functions are not supported on CUDA, nor are NumPy math functions in general), Numba is worth exploring if your work involves nested loops and/or large or repetitive tensor operations. For writing numerical code, image processing algorithms and certain operations involving neural networks, it is rapidly becoming my tool of choice for writing heavily optimised, fast code.
There’s more to Numba than speed
Numba’s main job, of course, is to speed up functions. But it also does an excellent job at several other things. Perhaps my favourite among these is the @vectorize decorator, which can turn any old function into a NumPy universal function (often just called a ‘ufunc’). If you have a background in R, you might from time to time find yourself wistfully reminiscing about R’s ability to vectorise functions without much ado. A ufunc is a vectorized wrapper that generalises a function to operate on tensors represented as n-dimensional NumPy arrays (ndarrays), supporting tensor logic like broadcasting, internal buffers and internal type casting. An example is the function numpy.add(), which generalises the addition function (invoked via the addition operator, +) for tensors of any size — including tensors that are not the same size, where NumPy’s broadcasting logic is used to reconcile tensors of different sizes.
Magic with Numba’s vectorisation decorator: a simple elementwise function can be generalised to higher-order tensors by nothing more than wrapping it in a decorator. This is, of course, rather inefficient, as failing to specify a signature for possible vectorisations means some type-specific optimisation cannot be carried out. For details on writing good vectorizable code in Numba, please refer to the documentation’s chapter on the vectorisation decorator.
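Since the figure itself is not reproduced here, the same idea can be sketched in code, this time with an explicit signature to address the efficiency caveat mentioned in the caption (the name vec_log10 is my own):

import math
import numpy as np
from numba import vectorize, float64

@vectorize([float64(float64)])
def vec_log10(x):
    # written for a single value, but usable as a ufunc over whole arrays
    return math.log10(x)

a = np.array([[1.0, 10.0], [100.0, 1000.0]])
print(vec_log10(a))   # elementwise log10, broadcasting like any NumPy ufunc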
Consider, for instance, the math.log10 function. This is an unvectorised function, intended to operate on single values (size-1 arrays, as the error message quoth). But by simply prepending Numba’s @numba.vectorize decorator, we can generalise the math.log10 function into a function operating elementwise over NumPy ndarray s representing tensors of pretty much any order (dimensionality). | https://medium.com/starschema-blog/jit-fast-supercharge-tensor-processing-in-python-with-jit-compilation-47598de6ee96 | ['Chris Von Csefalvay'] | 2019-03-25 09:16:03.439000+00:00 | ['Machine Learning', 'Data Science', 'Python', 'Artificial Intelligence', 'Deep Learning'] |
Stay In Your Overlaps | Stay In Your Overlaps
Your competitive advantage as a maker and/or an investor is your clarity and discipline around the overlaps of your interests and beliefs.
Aspiring entrepreneurs wonder, “how do you decide which idea is worth committing 5–10 years of your life to build?” Similarly, investors ask, “among all the pitches you get, how do you decide where to invest your energy and money?”
In 2005, I asked myself this question as an entrepreneur when founding Behance and 99U. And starting with my first angel investments in 2010, I have pondered the same question as an investor. By no means is my thesis complete, but at this point it is pretty refined. It can be summed up quickly (and graphically):
My best attempt at my own Maker/Investor Thesis. (made with iPad apps Photoshop Sketch and Paper, working together via Creative Cloud!)
The idea of Behance (to connect and empower the creative world) and the team we assembled were squarely in the overlap of the type of company I aspired to work for and the team I aspired to work with. This was true in 2005 and remains true today. We are focused on empowering careers in the creative industry using technology (among other mediums) to accomplish the mission. As for the team, we deeply value design and hire for raw talent and initiative over experience. No doubt, the experience building Behance helped develop my perspective as an investor.
In 2010, I became an accidental angel investor. I was heads down in the ~5 bootstrapping years of Behance before we raised our own first round of investment. Circumstantially, I had gotten to know other entrepreneurs with similar interests. One, in particular, was Ben Silberman, who invited me to be an early Advisor for what became Pinterest. When Ben raised his seed round, I made my first ever angel investment. I didn’t know any better, so I applied the same thesis behind evaluating Behance as an entrepreneur to evaluating Pinterest as an investor. And I have done so with most of my angel investments since.
As I reflect upon my projects and investments that have either succeeded or failed, I realize the importance of playing within “the overlaps.” When some opportunity lures me beyond the overlaps of my interests and beliefs (as displayed in the graphic above), it feels like gambling with my resources rather than investing and leveraging them.
There’s no playbook for this stuff, and chances are whatever formula works for someone else won’t work for you. The world is not advanced by people replaying another person’s playbook. My advice for building your own playbook: Invest your energy and money in the overlap of what excites you (the opportunity), and who you respect (the team). | https://medium.com/positiveslope/makers-investors-stay-in-your-overlaps-5295ad920d17 | ['Scott Belsky'] | 2016-12-11 05:33:08.921000+00:00 | ['Management', 'Design', 'Investing', 'Entrepreneurship', 'Venture Capital'] |
7 Running Quotes To Help You Hack Writer’s Block | Photo by Derick Santos from Pexels
7 Running Quotes To Help You Hack Writer’s Block
“The moment my legs begin to move my thoughts begin to flow.” Henry David Thoreau
Let’s face it- writing can be tough. It can, on some days seem to be more of a marathon than a sprint. On other days the words flow from mind to keyboard effortlessly.
I write and I run and I have noticed similarities between the two ventures. Running, is a process of discovery, of healing and growth. It is magical, hypnotic and mind-expanding. And so is writing.
You can run in solitude or with a group, you can run a sprint or endure a marathon. You can write when you are sad, happy, inspired and inflated. Both activities leave you drained, help you sleep better and enhance your relationship with yourself. Any problem can be solved by a good run as well as a good writing session.
From Haruki Murakami to Ryan Holiday, Jeff Goins and Joyce Carol Oates all appreciate the importance of running to help clear the fog and move their stories along.
Below is a collection of quotes from great writers to help you finish your article.
1.“As a runner, the real race is getting up and running every single day. Life is the marathon. The same is true in writing”. Ryan Holiday
If you want to see results, you need to show up every single day. Each day you show up builds your muscles, strengthens your resolve and helps you develop a focus for the long run.
2. “The moment my legs begin to move my thoughts begin to flow.”Henry David Thoreau
One foot after another, deep breath in and out, sometimes it can be difficult and sometimes it can be easy. You can’t question whether you are doing it right or wrong, you just have to keep going. The same is true with writing; you need to type one word after the other for the ideas to flow.
3.“A problem with a piece of writing often clarifies itself if you go for a long walk.”Helen Dunmore
Stepping away from your copy helps you find new connections to ideas, to structure a thought differently and tighten sentences. As you are out running your mind is busy at work forming connections you might have missed as you were writing. Running acts as the catalyst to the ideas that were marinating in your mind.
4.“In long-distance running the only opponent you have to beat is yourself, the way you used to be.”― Haruki Murakami, What I Talk About When I Talk About Running
There is only one person you need to compete with: yourself. You need to compete with the version of you that showed up yesterday, to tweak the process and learn new ways of getting better. Each day is an opportunity to better yourself.
5.“The twin activities of running and writing keep the writer reasonably sane and with the hope, however illusory and temporary, of control.” Joyce Carol Oates
Life can be unpredictable, messy and dark. Your best-laid plans might flop in ways you had not foreseen. But in between the stimuli and your response you get the choice to control your reaction. And therein lies your power. In writing and running you get to step away from the heat of the moment; to find solutions to the problems you are facing.
6.“If you don’t acquire the discipline to push through a personal low point, you will miss the reward that comes with persevering. Running taught me the discipline I need as a writer”. Jeff Goins
The challenges we face can feel insurmountable and we might be tempted to give up. But in pushing past the pain and discomfort, we are building resilience and patience. Through running, writers deepen their ability to focus on a single, consuming task and enter a new state of mind entirely. The deliberate act of moving forward each day reminds you that everything will work out in the end.
7.“For me, running is both exercise and a metaphor. Running day after day, piling up the races, bit by bit I raise the bar, and by clearing each level I elevate myself. At least that’s why I’ve put in the effort day after day: to raise my level…The point is whether or not I improved over yesterday.”Haruki Murakami
Word by word, mile by mile. All you can do is trust the process and put in the work despite your doubts, excuses, and fears. Once you start the fear begins to dissipate. You realize that the only way to finish an article or a race is to start. Just take one step and keep at it. | https://medium.com/illumination-curated/7-running-quotes-to-combat-writers-block-962d64206634 | ["Margaret'S Reflections"] | 2020-09-25 10:39:39.981000+00:00 | ['Inspiration', 'Productivity', 'Life Lessons', 'Running', 'Writing'] |
Watson Personality Insights Introduction and How to access Watson without SDK | In short: The Holy Grail for marketing campaign authors. Are we happy? Keep on reading and likely you shouldn´t. Enter some concerns.
“Does it work?” And some Concerns.
I am not qualified to answer this question, because I am not a psychologist. The closest I have come to checking this service is this test: I wrote some thoughts in a document and uploaded it to IBM Personality Insights. Next, I took a conventional personality test (https://www.truity.com/test/big-five-personality-test) and compared the outputs with Watson's. The Big 5 values are very close in both cases.
This is not a serious test; if you want a more accurate opinion, you must ask marketing and psychology experts.
On the other side, the tool raises plenty of moral and legal issues. A good read about these questions: https://medium.com/taraaz/https-medium-com-taraaz-human-rights-implications-of-ibm-watsons-personality-insights-942413e81117 In this post the author talks a lot about the service's concerns and its background. Very interesting.
The tech stuff: Accessing without SDK.
The first question is: what is an SDK? An SDK is an additional module that IBM gives us so that we can access its services more easily. We import the SDK into our programming language and access the service through that module.
The second: why do I want to access Watson without the SDK? There are two main reasons:
I work with Microsoft Business Central AL. This programming language is not able to import modules like the IBM Watson SDK, so I have to access the Watson API directly by making an HTTP request. My code could be useful for other people in dev environments that likewise do not allow the use of modules.
The other reason is that IBM doesn't provide an SDK for all its services. Some beta services, such as Natural Language Understanding, have no SDK.
All JavaScript node code is in my GIT repo: https://github.com/JalmarazMartn/Watson-personal-insights-node-whithout-SDK
Remarks:
var request = require("request");
var auth = require('./ApiKey.json');        // credentials kept out of the source
var transUrl = "https://gateway-lon.watsonplatform.net/personality-insights/api/v3/profile?version=2017-10-13&consumption_preferences=true";

var data2 = require('./profile.json');      // the social media entries to analyse

request.post(
    {
        url: transUrl,
        auth,
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(data2)
    },
    function (err, response, body) {
        console.log(body);                  // the personality profile returned by Watson
    });
We make an HTTP request to the service, with two files:
ApiKey. These are the access keys to Watson. I leave an example in the repo (a sketch of its likely shape is shown after the profile example below).
Profile. That's the file with the social media entries. It looks like this:
{
  "contentItems": [
    { "content": "Trump impeachment conclusion is unpredictable due to lack of antecedents.",
      "contenttype": "text/plain", "created": 1447639154000, "id": "666073008692314113", "language": "en" },
    { "content": "I have serious doubts about Spain basket team, due important players refusing: Rodríguez Ibaka Mirotic",
      "contenttype": "text/plain", "created": 1447638226000, "id": "666069114889179136", "language": "en" },
    { "content": "Surprising win over Serbia. The keys: defense and Claver performance.",
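As for ApiKey.json, the repo has the real example. Since the file is passed straight into request's auth option, my assumption is that it holds an HTTP basic auth pair along these lines (IBM's IAM-based services use the literal username "apikey"); check the repository if in doubt:

{
  "user": "apikey",
  "pass": "YOUR_PERSONALITY_INSIGHTS_API_KEY"
}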
That’s all. Have a nice day and be careful: some people are watching us (put on a silver paper hat to avoid it). | https://medium.com/analytics-vidhya/watson-personality-insights-introduction-and-how-to-access-watson-without-sdk-89eb8992fff2 | ['Jesus Almaraz Martin'] | 2019-11-12 11:45:27.318000+00:00 | ['Data Science', 'Artificial Intelligence', 'Ibm Watson', 'Sdk', 'Big Data'] |
An Overview of Python’s Datatable package | “There were 5 Exabytes of information created between the dawn of civilization through 2003, but that much information is now created every 2 days”:Eric Schmidt
If you are an R user, chances are that you have already been using the data.table package. Data.table is an extension of the data.frame package in R. It’s also the go-to package for R users when it comes to the fast aggregation of large data (including 100GB in RAM).
The R’s data.table package is a very versatile and a high-performance package due to its ease of use, convenience and programming speed. It is a fairly famous package in the R community with over 400k downloads per month and almost 650 CRAN and Bioconductor packages using it(source). | https://towardsdatascience.com/an-overview-of-pythons-datatable-package-5d3a97394ee9 | ['Parul Pandey'] | 2019-06-02 06:20:22.673000+00:00 | ['Python', 'Data Science', 'Pandas', 'Big Data', 'H2oai'] |
Interview with Tony Xu | Interview with Tony Xu
Chief Executive Officer & Co-Founder at DoorDash
Hi Tony, for the audience who may not be familiar with you, tell us who you are!
I am the CEO and co-founder of DoorDash.
What is your day-to-day like as a CEO at DoorDash?
It changes day-to-day but I would say there are a few categories that I’m spending most of my time in.
The first category is the operating reviews, and it probably takes the largest portion of my days. It’s about reviewing and tracking the health of our major audiences (consumers, merchants, and dashers) on the top 5–6 priorities of the company.
The second is spending time with customers, which is usually done in two forms: one is spending time in merchant calls. My calls with merchants range from larger national merchants all the way down to mom-and-pop businesses. In fact, I was just on the phone today with one of the original mom-and-pop merchants I signed up 6–7 years ago! The other form of connecting with customers is actually doing customer support for 15–30 minutes daily. I get dozens of emails per day from all sides of the audience and support some select cases myself.
The third would be in recruiting. I believe that recruiting is one of the most leveraged uses of any manager’s time. I recruit for all roles across the company, not necessarily limited to the roles on my direct team.
The fourth is in talent development through 1:1s with not only my directs but also many others on various levels. I like to give them a sense of what’s going on in the company, what can we be doing better, as well as anything that might be top of mind for them that I can help clarify.
And then, of course, there’s time spent with external teams occasionally, such as investors, the Board, and the press.
How are you engaged in the product development process these days?
Until Nov 2017 when Rajat, our Head of Product, joined us, I used to lead product hands-on, involved in every product review, every design review, and some technical architecture discussions.
Today, it’s a little bit different, and my job has evolved. I still attend major product reviews, but my focus is more on asking questions about the teams’ choices and whether they map to the strategic context of the company over the next 2–3 years. I also still read and respond to each one of our product review/update emails. Hope I’m adding more value than I subtract most of the time!
I’m always amazed by how attentive you are in responses to every product review. How do you find the time to do it despite your busy days?
Haha, I have an advantage of a 3.5-year head start (having run product during DoorDash’s early years), which gave me a lot of historical context and processing time in advance. Frankly, some of the bad decisions were created by me in those early years. And it’s been amazing to see the team driving the evolution of those ideas over the years.
Having seen many successes and failures of different products, what do you care most about when building a product?
First, it’s important to deeply think about the actual problem you’re solving, before getting into features and wireframes. You need to understand the customers’ mental model and their natural behavior. You’d know that you’ve created the best products when the product feels invisible because it removes the friction so seamlessly.
In marketplace businesses like ours, it’s also critical to think about the interplaying effects. Every single decision on one audience impacts other audiences and that becomes increasingly important as the marketplace grows larger. I’m always thinking about how to increase the healthiness of the liquidity in the marketplace.
And the last part is how each product will scale in the long run. It’s impossible to ship a perfect product that solves all problems overnight. You need to make choices on sequencing, make tradeoffs, and plan for the long-term evolution of the different problems.
What were some recent products that you were proud of seeing shipped?
I’ll give one old and one recent example…
When DoorDash just got started in the summer of 2013, we made a decision to ship the driver app (AKA Dasher app) first before we shipped a consumer app. I know it sounds like a counter-intuitive decision as a consumer business.
Our consumer’s expectation was simple then: they get something delivered on time and as described. It wasn’t necessarily about how they can order something in the most efficient way or how nice photos of food should look. Since it’s impossible to solve all of the complex problems at once, we made a choice to prioritize making Dashers successful first. In order to do that, we had to take care of a lot of the basics of the complex logistics system. I’m glad we did, and it’s a decision that I still stand by today.
A recent product that I found really interesting was Convenience. We accelerated to launch our first non-restaurant category during the COVID pandemic given the high demand. Convenience includes pantry items and household goods, and we’re dealing with a significantly different inventory catalog. While restaurants carry 150 to maybe 200 items, an average supermarket sells tens of thousands of different items.
Our team made sure to get the quality of storage and delivery operations successful before getting all of the consumer-interfaces right. This is another good example of designing a product for a scalable system, with a mission to deliver the convenience goods in a matter of minutes, not hours or days.
[ Tony’s speaking at a biweekly company all-hands, pre-COVID ]
What are the challenging problems that you’re excited to solve at DoorDash?
I’ve always been so excited about digitizing what’s happening in the physical world. For example, how long it takes to make something inside of the restaurant, whether the item is available on the shelf, etc. We’ve been working on it for years already, and it remains a perennial problem to solve.
Solving this problem truly will serve our mission of empowering local economies, and enabling merchants to participate in the convenience economy. No one has really done that successfully and that’s why the goods inside of the city can’t easily transport electronically today. Pioneering to build this piping is really exciting.
Where do you think the DoorDash product will be in 5 years?
We have two products that will continue to grow both in terms of directions and magnitude.
One is a marketplace where we sit in between consumers and merchants. This is the service we’re most well known in the industry today. The merchants could be a restaurant, a convenience store, a grocery store, a retail store, etc.
The other product is a platform where we provide tools we’ve built for ourselves to our merchants. For example, DoorDash Drive is our “logistics as a service” platform product that fulfills the delivery of the goods at merchants like Walmart.com, 1–800-flowers, or Little Caesars Pizza.
Recently we also announced DoorDash Storefront, which is another platform that provides merchants e-commerce capability, especially for the 40% of businesses that aren’t online today.
Five years from now, while each of these two products grows bigger, they need to be built on the same set of protocols and reinforce one another.
What do you think the design’s role is in DoorDash? And where do you think the team is at now?
Good designers are great problem solvers, who start with a very deep understanding of the problem and customers’ needs before jumping into pixel execution. Their process often includes collecting anecdotes from customers and laying out systemic questions as to what is required to address the customers’ pain points. So I believe the role of design is articulating all of those challenges and asking the right questions. In the end, in collaboration with the product, engineering, and business counterparts, we — as designers — must deliver the simplest solution for the customer while understanding and hiding all of the complexity.
I know you have a strong philosophy in hiring and you always provide great feedback on design candidates. What are the things you typically look for, when you approve the offers?
What I’ve learned is that regardless of discipline–whether it’s design or engineering, or business functions–the best people share very similar attributes. These are the attributes of excellence that have made people successful at DoorDash.
The first I’d say is having a very strong bias for action. This is difficult because it requires the willingness to be wrong when they act quickly. They’re probably going to make more mistakes than they necessarily want to. But this is truly how to create the future when they take risks and put their reputation at stake.
The second is the ability to hold two opposing ideas at the same time, especially in the world of product and design. People often love holding strong points of view which I think is really important, but the best people also look for discomforting evidence to argue against themselves. That way, they can involve more people into the problem and get to a better outcome.
The third, best people are trying to get 1% better every day. And they put effort to learn things quickly–whether it’s a professional or a personal goal–and every effort adds up quickly.
The fourth is the ability to operate at the lowest level of detail. Particularly in Design function, while the output looks simple, typically the inputs to get to the output can be very complicated. It’s not the customer’s job to decode a messy menu or order functionality. It’s the designers and the researchers to do heavy-lifting of distilling the most simple solution for the customers.
The final one is that they have strong followership. It doesn’t mean that they necessarily run big teams but they’re the individual whom everyone else is drawn to. This ability requires a lot of emotional maturities. They usually have the ability to recruit other great people too.
Great. Thanks so much for your time Tony!
=======
Please learn more about other leaders at DoorDash:
Christopher Payne — Chief Operating Officer
Kathryn Gonzalez — Manager for Design Infrastructure
Radhika Bhalla — Head of UX Research
Rajat Shroff — VP of Product
Sam Lind — Sr Manager for Core Consumer Design
Tae Kim — UX Content Strategist Lead
Will Dimondi — Manager for Merchant Design | https://medium.com/design-doordash/interview-with-tony-xu-f27121c33ed1 | ['Helena Seo'] | 2020-06-13 01:09:31.628000+00:00 | ['Leadership', 'Design', 'DoorDash', 'Product', 'Startup'] |
130 Common Design Terms to Know | A
A/B Testing
A/B testing is where you are comparing two different layouts, such as webpages or an application, with a single variable changed, to see which one performs the best.
Accessibility
This is where you are designing the layout of a webpage or mobile app and taking into account people with disabilities who need to interact with your product easily. This includes designing for people who are blind, color blind, deaf, and other sensory disorders.
Adaptive
Adaptive means designing something that fits well on multiple devices, such as on an i-phone, tablet, or desktop computer. When designing, you have to take into account that people will be viewing information on different platforms.
Affordance
Affordance is there to help give clues or signals to the user on what to do next. For instance, designing buttons to show a user that if they want to get somewhere, they will need to tap or click on that icon or bit of text.
Ajax
Ajax stands for Asynchronous JavaScript and XML. It's used to create dynamic web applications and allows for asynchronous data retrieval without having to reload the page a visitor is on.
Alignments
Is a process of making sure text and images are aligned in a way that visually makes sense to the user. This helps with everything staying organized, visual connections are made, and improves the overall experience for the user. For example, left, right, or center would all be different types of alignments.
Analogous
Are colors that are next to each other on the color wheel. They are often colors you find naturally in nature and are pleasing to the eye.
Anchor Text
Text that is linked to a site and is commonly used for SEO (Search Engine Optimization).
Animation
Creating images that look like they are moving through computer-generated imagery.
Ascender
Ascenders are the vertical, upwards strokes that rise above the x-height. For instance, letters h, b, and d.
Aspect Ratio
This is the proportional ratio between an images width and height or W:H. For instance, a square box will have an aspect ratio of 1:1.
Avatar
As the name suggests these are usually images that are used to represent a person but in a different visual form. You can usually see these on games or when you are setting up your profile on some website.
B
Balance
Balance involves the placement of elements on the page so that text and other elements on a page are evenly distributed. Three ways to achieve balance are symmetrically, asymmetrically and radially.
Baselinegrid
Is a series of invisible vertical units that can be used to create consistent vertical spacing with your typography and page elements.
Below-The-Fold
The term ‘below the fold’ refers to the portion of a webpage that a user must scroll to see. A holdover from newspaper publishing, the term ‘below the fold’ was established when there was a physical fold in the middle of the page.
Body Copy
The main text that people will read on a design. The body copy refers to the paragraphs, sentences, or other text that are the main content on any website. In design terms, the body copy of a website is the main text rather than the titles, or subtitles.
Blur
Creating a soft or hazy affect around an image.
Brand
Every business needs something that makes them identifiable. Branding is a way of using color, names, and symbols in design that represent the company as a whole.
C
Cap Height
Back to our friend the baseline — the cap height is the height of the top of a capital letter in any given font above the baseline. Cap height refers specifically to letters with a flat top, such as H and I. Round letters like O and pointed ones like A may rise above the cap height in their uppercase form.
Case Study
A case study outlines the success of a particular problem or project you undertook. Here you are showing the problem, the solution behind solving it and why you went that route.
Complementary
Think of these as the best friends of the color world — complementary colors are the colors that sit directly opposite of one another on the color wheel. Examples of complementary colors are red and green, blue and orange and purple and yellow. Using complementary colors ten to make a design more aesthetically pleasing.
Compression
Compression is where you are minimizing the size of bytes in a graphic file without harming the quality of the image or written text.
Contrast
Contrast is the arrangement of opposite elements on a page — in other words, when two things on a page are different. This can be light vs. dark colors, smooth vs. rough textures, text color vs. background color.
Color Theory
Rules and guidelines that designers use to make sure all the colors used work together properly.
Copy
Every website or mobile app needs copy or published text that a user will see once they visit your site. This text will inform the user on what the page is about and direct them to where they need to go.
Crop
Cropping is taking an image and cutting off the excess part if it appears too big or not important enough to include in the design. Depending upon what you are trying to emphasize more in an image, you may need to crop part of it out.
CSS
CSS (Cascading Style Sheets) describes how HTML is supposed to be laid out. CSS ensures developers have a clean, organized and uniform look for their pages. Once the style is created, it can be replicated across all other pages, making consistency much easier.
D
Debt
When a designer makes short term goals or decision in order to meet a deadline, but often what will happen is that later on the person using the end product might not have the best experience due to the designer making rushed decisions or shortcuts.
Descender
A descender is the part on the letter where it descends below the baseline of that particular character. You will commonly see this with the letters: g, y, q, and p.
Display Typeface
Text that usually displays the header on a page before the subtext or body underneath.
DPI
Dots Per Inch (DPI) is the number of printed dots per linear inch in digital design or print. Depending on the density of dots in an image, it can have a higher or lower viewing resolution.
Drop Shadow
In design drop shadow is an affect that you give to an element that makes it look like there is a shadow or that the image is elevated. You will see drop shadows with buttons or arrows on applications or web pages.
E
Elements
Elements are what make up an image like the size, color, shape, texture, position, density, and direction are all components that make up an object.
End User
The person you are designing the end product for.
EPS
EPS stands for Encapsulated PostScript and is used when you want to print high resolution illustrations. EPS files are usually created in Adobe Illustrator.
Eye Tracking
Eye tracking is when you are measuring a users eye motion and where they focus most when viewing a webpage or other design format.
F
Feathering
It’s another way of creating transparency to a design. This is usually applied to the outside portion of an object so that you can get a glimpse of an image underneath.
Figma App
A common product designer tool used to create designs for websites and mobile apps. Once the designs are finished, developers can use the files to create the end product.
Flat
Flat design is a minimalistic approach that focuses on being very simple. It tends to feature plenty of open space, crisp edges, bright colors, and two-dimensional images.
Flowchart
A process in wire-framing that shows what a user will do next as they are navigating through a mockup of a website or app.
Font
This refers to the text style you will see on any website or anything written online. The type of text or font that Google tends to use is Google Sans.
G
Gamification
Gamification is adding elements to a design that mimic game-like qualities to drive more user interaction and engagement. An example would be receiving a gold star after completing your 5k run on an app that tracks your distance. This helps incentivize the user to interact with the app more often.
Golden Ratio
First discovered by the Greeks, it's when a line is divided into two parts and the longer part is divided by the smaller part to get approximately 1.618. The idea behind following the golden ratio is that it makes designs visually pleasing to the eye.
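Written as a formula: if the line is split into a longer piece a and a shorter piece b, the proportion is golden when (a + b) / a = a / b, which works out to roughly 1.618.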
GIF
GIF stands for Graphics Interchange Format and is an image format that is often used for short animations.
Gradient
A color gradient is also known as a color ramp or a color progression where you start with your first initial color in a defined area and move to another. The gradient tool creates a gradual blend of several colors.
Grid
A ruler like system used to align your objects. They are made up of vertical and horizontal lines that create an easy what to make sure objects or text are positioned properly.
GUI
GUI stands for Graphical User Interface or images that represent a certain action that will take place once you tap or click on it. An example would be scroll bars, menus, icons, pointers, etc.
H
Hex
A six digit code used to represent a certain color. For example, black has a hex code of 000000 and white a hex code of FFFFFF. These are commonly used in Sketch and Figma when designing.
Hierarchy
A process of creating what is most important to the least important. It helps give order to a design and what a user should focus on.
Hue
Hue is the pure color. Basically, it’s just away to describe a color. Yellow, blue, and orange are all different hues.
High Fidelities
High fidelities refer to when actual color gets added to what was once a wireframe or general outline of a design. This is where things start to come to life and it looks like a functioning webpage or app.
I
Icon
A small image used to represent an action a user is supposed to take to get them to their destination. An example is the search icon you will see next to asearch engine box online.
iOS
A mobile operating system that was created by Apple.
Iteration
Iteration in design is where you are constantly changing, testing, and reiterating a particular design layout until it makes sense to the user.
J
JPEG
A compressed digital image that makes the file smaller and is commonly used for photo storage.
K
Kerning
The adjustment of the spacing between individual pairs of letters in a word.
Knolling
Knolling is where a you are arranging objects so that they are either at a 90 degree angle or are parallel to each other.
L
Landing Page
A landing page or what is commonly referred to as the home page is the first page a user will see once they visit a website or application.
Leading
The line height or spacing between two lines of text.
Logo
A symbol or graphic created that represents or promotes your business.
Logo Mark
This is just a design that is centered around a brands actual name. For instance, the swoosh image for Nike would be the logo mark for that company.
Lorum Ipsum
Lorem Ipsum is basically dummy text used in design that will eventually get replaced with the actual text later on once you get the proper copy established.
Lossy
When you compress an image some of the quality is lost resulting in what is referred to as lossy.
M
Navigation/Menu
A series of linked items that helps direct the user between the different pages on an application or webpage. The navigation is usually located at the top of any app or webpage.
Margins
Margins are the spacing between important elements in a design, such as on a website. Usually you will see this between the outer most part of a website and where you have the main hero image.
Microcopy
Bit sized content on a webpage or application that helps guide the user. This can be text in buttons, thank you pages, captions, tooltips, error messages, small print below pages, etc. Good microcopy is compact, clear, and easily delights the user.
Midline
Midline or mean line is the imaginary line where all non-ascending letters stop.
Mockup
A mockup is a prototype that provides at least part of the functionality of a system and helps with testing a design.
Monochrome
Monochrome is a color palette made up of various different shades and tones of a single color. It’s important to note that while grayscale is monochrome, monochrome is not necessarily greyscale — monochrome images can be made up of any color, like the different shades of orange.
Monospace
A monospaced typeface is a typeface where each character is the same width, all using the same amount of horizontal space. They can be called fixed-width or non-proportional typefaces.
Moodboard
The starting point for a lot of designers, a moodboard is a way for designers to collect together lots of visual references or ideas for a new design project. Photos, images or typography would all be elements you could use to create a moodboard. They are used to develop the project’s aesthetic, for inspiration or to help communicate a particular idea.
MVP
This stands for Minimum Viable Product. The main purpose of an MVP is to collect enough information about a product to help the designer flesh out the project later on. It defines the bare minimum a product needs to get into production.
O
Opacity
Often discussed alongside “transparency,” opacity is the amount of light you let travel through an object. Adjusting opacity allows you to fade, blend, brighten, or layer within an element.
Open Source
Open source means the material, whether software, fonts, or images, is released under a license that allows you to freely use and modify it to fit your needs.
OpenType
A cross-platform, scalable font file format.
Orphan
A single line or letter that is by itself at the end of a paragraph, page, or column.
P
Palette
In design it’s a particular range of colors that you will use for a website or application.
Pantone
The Pantone Matching System is a standardized color scheme used for printing and graphic design. It’s used in a number of other industries including product and fashion design and manufacturing. Each color has its own individual number and name.
PDF
Portable Document Format is a file format used to represent text and images and is used when you need to save and share with another person.
Persona
A persona is a fictional character that represents the target audience you are designing a product for.
Persona Mapping
Persona mapping is the creation of fictional characters that represent realistic people and what they would want out of a product. Here you would design a road map based on the target audience's preferences and why they would take those actions.
Pixel
Pixels are the smallest components that make up your screen: the tiny squares in the images you see on your laptop or mobile phone. In design, especially if you are using Sketch or Figma, you will be using these as your base for sizing different objects.
Plug-In
A third-party extension, commonly used in design, that adds functionality to your design tool and helps streamline your design process.
PNG
PNG stands for Portable Network Graphics and is a compressed raster graphic format. It’s used on the web and is also a popular choice for application graphics.
PPI
Pixels Per Inch (PPI) describes the pixel density of an image or screen.
Prototype
A prototype is an early model or sample of what a product might look like. Generally, you will design multiple prototypes and test the concept during the beginning phases of designing and building a product.
Proximity
How objects are grouped or spaced on a page. Images that relate to each other will be closer, while ones that are not related will be spaced further apart.
Q
QA
QA stands for Quality Assurance and is a chance for the designer to review the product before it goes out to be officially tested by a user.
R
Raster
Raster images are constructed out of a set grid of pixels. Meaning, when you change the size or stretch a raster image, it will get a little blurry.
Resolution
Resolution is the detail of an image. Images with low resolution have little detail while high-resolution images have more detail. High-resolution images tend to be crisper looking, since they have more pixels per square inch compared to low-resolution images.
Responsive
An approach to web development where the layout changes or adjusts to the screen size of whatever device a user is viewing on. For example, when a user flips their screen horizontally, the images adjust to that orientation, and when a user zooms in on something, that object appears bigger.
RGB
RGB stands for red, green, and blue. These three colors are typically used to show images on a digital screen. The colors can be mixed to create any color you want.
Rule Of Thirds
The rule of thirds is a helpful way of aligning the subject of an image and making it as aesthetically pleasing as possible. It involves dividing up your image using 2 horizontal lines and 2 vertical lines to create 9 equal sections. You then position the important elements along those lines, or at the points where they intersect.
S
Sans-Serif
Sans means “without,” and a sans serif font has no serifs or hooks at the end of some letters.
Saturation
The intensity of a color.
Script
Script typefaces are fonts or type based upon historical or modern handwriting styles and are more fluid than traditional typefaces.
Scale
Refers to the relative size of a design element in comparison to another one.
Serif
Serifs are the tiny lines and hooks at the end of the strokes in some letters of the alphabet.
Sketching
A quick drawing done by hand to get an idea onto paper fast; it is not the end product.
Skeuomorphism
A term often used in user interface design to describe interface objects that mimic their real-world counterparts in how they appear and how a person can interact with them.
Slab-Serif
Slab serif is identified by thick, block-like serifs.
Sprint
Sprints are the main feature of the Scrum/Agile framework. Sprints are short periods of time in which goals are laid out for a scrum team to complete by the end of the sprint. They usually last no more than a few weeks and move through stages such as planning, design, development, implementation, testing, and deployment, followed by review, and then the cycle repeats.
Stem
A vertical stroke in a letterform. Can be found in both lowercase and uppercase letters.
Stock Photo
A place you can go to retrieve licensed images for mass use when designing websites, blogs, mobile apps, etc. If you don’t have an onsite photographer, popular sites to visit for stock photos are Unsplash, Pixabay, and Pexels.
Storyboard
A visual representation of a user’s experience with a product or problem. They are usually frames laid out in such a way that documents the overall journey a user takes to a final destination.
Stroke
A feature used to adjust the thickness, width, color, or style of a line's path.
Style Guide
A style guide is an established rule book that has certain colors, fonts, and icons used for a particular design. This helps make sure everything stays consistent and uniform brand-wise.
SVG
SVG stands for Scalable Vector Graphic. It’s a file format that helps display vector images on a website. Developers will commonly ask designers for SVG files so that they can easily show that image in their codebase.
Symmetry
Symmetry refers to a sense of harmonious balance and proportion to an overall design when viewed.
T
Template
A template is a reusable set of consistent designs. When designing a website you want to make sure everything stays on brand, and templates provide the framework for this.
Texture
The surface characteristic of a particular design, such as smooth or rough.
Thumbnail
A smaller version of an image that gives the person reviewing the design a quick preview.
Thumbnail Sketch
Sketches or drawings that are done very quickly to get an idea on paper with no corrections.
Tint
A lighter version of a particular color, created by adding white. (A darker version, made by adding black, is called a shade.)
Tracking
Tracking is the uniform loosening or tightening of the spacing across a selected range of text.
Triadic
A color scheme made of three colors evenly spaced around the color wheel, creating strong contrast.
Typeface
A set of characters, such as letters and numbers that all share the same design.
Typography
Typography is the art of arranging groups of letters, numbers, and characters that share the same typeface into something that is pleasing to the eye.
U
UI
UI, or User Interface, refers to the actual assets or buttons a user interacts with to get to a specific destination within an app or website. This is the more physical journey rather than the more psychological experience of UX.
Usability
Usability is how easily a user can interact with a certain design. Is the app or website you designed intuitive, safe, and effective at helping the user navigate through it easily?
User flow
The journey the user takes from start to finish, for instance purchasing an item at a checkout successfully.
UX
UX stands for User Experience and refers to the series of steps a user takes to accomplish a goal within a website or app. It's more about the psychology of why they do what they do with a piece of digital technology and what that overall experience is like for them.
V
Vector
An image made up of points, lines, and curves that are based upon mathematical equations, rather than solid colored square pixels. The beauty of a vector image is that when you zoom in you are not seeing pixels but clean smooth lines.
W
Watermark
A watermark is a semi-transparent version of a company's logo or name overlaid on an image or document to show ownership and discourage unauthorized use. You will commonly see watermarks on stock photos and draft designs.
Weight
Adding weight to an object makes it appear heavier. Different ways to add weight are giving thickness to a line, or deepening the color of an object. All these varied factors can make an image look fuller.
Whitespace
The open space between objects, commonly called negative space. There are no elements occupying that area.
Widow
A widow is a very short line or one word, that is located at the very end of a paragraph or column.
Wire Frames
The outline or bare bones of what a website or app might look like. Little to no color is added, and its sole purpose is to show where each element sits and what it is for. Think of a house being built where all you have is the skeleton but none of the fixtures that make it an actual place to inhabit.
X
X-height
Refers to the vertical distance between the baseline and the top of a lowercase letter without an ascender (such as the letter x); essentially, how tall a typeface's lowercase letters are.
Z
ZIP
A zipped file is a compressed version of a file. To zip a file, or to send a zip, is to send a smaller, compressed version of a file so it can be transferred more quickly and easily, such as by email.
All definitions were crafted with help from the following sites: https://careerfoundry.com/en/blog/ux-design/ux-design-glossary/ https://99designs.com/blog/tips/15-descriptive-design-words-you-should-know/ https://buffer.com/library/53-design-terms-explained-for-marketers/ https://www.smashingmagazine.com/2009/05/web-design-industry-jargon-glossary-and-resources/
As ever, QuarkWorks is available to help with any software application project — web, mobile, and more! If you are interested in our services you can check out our website. We would love to answer any questions you have! Just reach out to us on our Twitter, Facebook, LinkedIn, or Instagram. | https://medium.com/quark-works/130-common-design-terms-to-know-37849a0e7104 | ['Cassie Ferrick'] | 2020-10-26 19:19:23.634000+00:00 | ['Technology', 'Education', 'Design', 'Startup', 'Self Improvement'] |
An Easy Way for Writers to Move From Skinny Ideas to Rock Solid First Drafts | I used to believe that writing was a kind of romance. I would wake in the early hours of the morning, steaming hot cup of coffee in my hand, and light a candle. Just me and the dark and my muse. The scene was set. The wooing had begun. Now all I had to do was wait for her arrival.
So, I would sip my coffee and wait. And wait.
Usually, my beloved muse would show up. But she was a mess. Unfortunately, she was almost always drunk. All she could give me were mutterings I could barely make out. Incoherent rambles. Slurred words and vague statements.
And usually, if I listened to her muddled monologues long enough, she would throw me a bone. A word. A random thought. Kind of like those movies where the guy is dying and he knows where the killer is hiding the abducted child but all he can do is whisper some cryptic words as he takes his last breath.
That was where she left me. With a word and a puzzle. And it was infuriating.
It wasn’t supposed to be this way. I was supposed to sit down at the computer and be inspired. Be driven. Instead, I found myself stuck. I had the seed of an idea, but I didn’t know how to make it grow.
So I tried what everyone seems to think is the right way to get the juices flowing. Just start writing. That was a “no-go” too.
Then, I started thinking about all the strategies I used in the past. What were the common threads that ran through my most successful articles? What were the common threads that ran through writers I enjoyed reading?
I made a list. And then I created a pre-draft template of subheadings which I filled in before I began actually writing the article.
The results were amazing. By filling out this template, I found ways to practically make my articles write themselves and satisfy my reader as well. And I am hoping my strategy can help you too.
The pre-writing structure that helps me write fuller, more engaging articles
On my blank documents, I write the following headings:
The “Why”
The “How”
Supporting Research
Personal Anecdotes and Examples
Start by brainstorming the “Why”
You must always provide a “why” in your writing. People want to know the reason they should listen to what you have to say. How will you add value to their lives? Will you give them insights on how to improve their relationships, their finances, their health, or their career? Will you validate their opinions, feelings, or ideas, or will you change their minds so that they can lead a happier, more rewarding existence?
Once you brainstorm the “why,” the next step is elaborating on its importance. Help the reader imagine the direction their lives will take if they listen to your advice or choose to not follow it. The best way to do this is to conjure one or both of the following emotions: excitement or fear.
For example, if you are writing on qualities they need for a successful relationship, make them excited by helping them imagine how much more fulfilling, fun, and intimate their connection with their partner will be.
On the other hand, you can create a bleaker future for them to imagine. For example, what will happen if they don’t listen to what you have to say? Will their relationship continue to fall apart or their communication with their partners dwindle away to nothing? Will they continue to live lives with their significant others more as friends than lovers?
Create a scenario for readers to envision. A “picture” for them to hold on to as they read. This fear or excitement is what hooks your reader.
Now outline the “How”
Now that this picture is in your reader’s mind, tell them how to either make this picture a reality or ensure it doesn’t happen. Some rules to follow when you do this?
Isolate your tips into separate headings
Readers like you to make their lives easier. One of the ways you can do this is to separate each of your ideas into separate bullets or subheadings. The subheading itself tells the reader what they can do in simple terms. If used correctly, it also manages to create more curiosity in your reader.
For example, a good subheading is clear enough to let your readers know your general idea or tip but not explicit enough to keep them from reading more. The result of this type of subheading is that readers will feel compelled to continue because they want to know exactly how to put your advice to use.
Make your advice specific
Using the relationship example from before, let’s say you tell your readers that they need to take more time to connect with their partners. Give them specific ideas on how to do this. Don’t just say you need to find time to talk. Give them ways to implement this advice.
Maybe you suggest that they take their kids to play in the park so that they can talk together without interruption. Maybe you suggest a daily walk together so that the temptation of entertainment at home or the urge to complete household tasks is eliminated.
Specific tips such as these will not only give your readers actionable advice, they will also serve as a catalyst for their own brainstorming. In other words, maybe your tip won't exactly fit their lifestyle, but it will jumpstart their own ideas on how they can fit your advice into their specific routine or circumstances.
Collect research
The more proof you can give your readers that your advice is sound, the more they will trust you. So do the research to back up what you say. And when you do so, look for reputable sites that are related to the topic at hand.
For example, if you are elaborating on the importance of communication in your article’s “Why” section, you could use the following information from the Forum for Family and Consumer Issues.
“In one study of couples, both men and women agreed that the emotional connection they shared with their partner was what determined the quality of their relationships and whether they believed they had a good marriage or not.”
So now, instead of you alone saying how important communication is, your readers have additional proof for what you say. Also, by providing this hard and fast evidence, readers trust you more overall, as it is obvious you have done your “homework” on the topic.
Make a list of personal anecdotes that bond you to your reader
One easy form of “research” you can give your readers is facts learned through personal experience. For example, a psychological study such as the one mentioned above is valuable, but first-hand experience carries equal if not more weight.
Not only that, when you share your experiences with readers, a bond is built. You’ve been there. You “get” them. And they see you as not only a giver of advice but as a “friend” of sorts.
All readers come to writing seeking commiseration or validation of their feelings, struggles, or beliefs. They want to know you have been there too. They’re not necessarily looking for a time you experienced the exact same situation (although it’s certainly a plus if you have), but they do desire hearing you’ve been in a related situation or have indirectly witnessed or been affected by the same experiences they’ve been through.
So give this to them. Brainstorm personal examples or examples from others with whom you come into contact as it concerns the topic. You may find yourself using this as the “glue” between your claim, hard evidence, and your “how to.”
The bottom line:
Good writing is not always inspired writing. Sometimes it’s more like a grocery list, a list of objects your readers absolutely can’t do without. So fill their refrigerator by planning ahead and giving them the emotional connection, rock-solid facts, and simple tips they desperately desire. | https://medium.com/the-brave-writer/an-easy-way-for-writers-to-move-from-skinny-ideas-to-rock-solid-first-drafts-b6e911a84699 | ['Dawn Bevier'] | 2020-12-18 13:02:09.121000+00:00 | ['Marketing Strategies', 'Marketing', 'Writing Tips', 'Freelance Writing', 'Writing'] |
Applying AI to Group Collaborations. | AI applications, Poetry.
Applying AI to Group Collaborations.
Applying AI to my research in group collaborations. Sharing some poetic thoughts.
Groups collaborate and drink coffee. Can you apply AI to help? Photo by Nikita Vantorin on Unsplash
I was thinking,
Always dangerous,
Could distract me,
From drinking coffee,
Time to write,
Something different,
AI is always touted,
Greatest good and,
Greatest evil by,
Unknowing journalists,
Ranging from,
Benevolent robots,
Big love-me eyes to,
Ratbag bots with,
Glowing red eyes,
Guns and,
Terrifying missiles.
I thought to,
Give you insights,
Some things about,
Systems Thinking so,
You could explore,
AI concepts and,
Make your,
Own judgements,
Also thought to,
Write this,
Story in poetry,
Why not?
Artificial Intelligence,
Is about matching,
Patterns such as,
Facial recognition,
Fingerprint identification,
Voice recognition,
Translating Languages,
Pandemic spread,
Vaccination simulations,
Disease infections,
Diagnostics even,
Car-tyre wear,
Anything that can,
Generate a pattern,
Such as crowd or,
Group and,
Individual behaviours,
May be ripe,
For AI application.
But before you,
Click your fingers,
For AI magic,
Some poor sod,
Like me,
Has to analyse,
The system,
What makes,
It tick,
What happens,
When I push,
This ‘ere,
Red button?
I studied,
How people,
Work together,
Examined banking,
Systems and,
Large retailers,
Universities,
Good thinking,
Ho-hum coffee,
What do you,
Expect from,
Carnivorous,
Pinch-penny,
Management?
Collaborative Wellness.
Term I invented,
PhD research,
Had one of my,
Research Reviews,
Coming up,
Needed a name for,
All my research,
Including PhD,
Totalled Nineteen years,
Came to me,
While drinking,
Rare single-origin,
Ridiculously priced,
But after all,
It was coffee,
In conversation with,
Another nutter,
Said I was trying,
To answer,
“How well people,
Worked together?”,
Like a flash,
“Collaborative Wellness”,
Sprang into,
Coffee-starved Mind,
Determined my fate.
Asking Questions
People working together,
First step in our quest,
Ask questions to discover,
Who for social connections,
How for identifying processes,
With What for discovering means,
Big bit of paper,
Covering table,
Felt-tipped pens,
Multiple colours,
Make a,
Rich picture,
Describing system,
Share with managers,
Workers and Kookaburras.
Diagram 1: Questions to ask when discovering how collaborations work. Research by John Rose.
In essence,
Discovery yields,
Anatomy of,
Linked collaborations,
Flows of knowledge,
Production line with,
Work stations,
Each stop along,
The line is a,
Collaborative,
Wellness Unit,
Our investigation,
One step deeper,
Examining how,
Purpose is fulfilled,
How value is,
Delivered to,
CWU Stakeholders.
Diagram 2. Explores purpose fulfillment of Collaborative Wellness Unit (CWU). Research by John Rose.
Now investigator,
Step back,
Holistic View,
Look at Groups,
Working and,
Exchanging Knowledge,
Notice boundaries defined,
Create closed systems,
So if you’re inclined,
You can estimate,
Flows of,
Knowledge entropy,
Useful for,
Considering the,
Declining value of,
Knowledge over time,
Not to mention,
Head banging.
Diagram 3: Groups Working Together. Published Research by John Rose
Lastly,
Time to put it,
All together,
Each group or,
Process can be,
Described as,
Being CWU’s,
Interacting and,
Adapting to,
Changing market,
Requirements.
Goodness I said,
“Market”,
Didn’t mean it,
Obviously need coffee,
Nerds forgive me.
Diagram 4: Abstracted System now Basis for AI application. Research by John Rose.
System Described, Now What?
It takes some time,
To build this overview,
Knowledge flows,
Collaborations,
Interacting,
Gathering data,
Time to assemble,
Input data for,
AI Tensorflow,
Analysis.
As you can see,
Applying AI to,
Existing systems is,
No easy task,
Time consuming,
Demands attention,
Detail and accuracy,
Approximate data and,
“She’ll be right”,
Attitudes upset,
AI analysis,
Invalidating results.
You will find it,
Most difficult to,
Explain findings and,
Predictions especially to,
Sceptical managers and,
Unsmiling stakeholders.
On some,
Occasions,
I have been as,
Popular as,
Mud soup,
“Sorry guys”,
I exclaimed,
“Your data is,
Just manure,
Not fit for,
Purpose”,
Another story,
I’ll keep for,
Grandchildren and,
Kookaburras.
Blessed be,
I’m a tree,
Not AI.
Systems Thinking References.
Python Language References.
I use Python for,
Experimenting and
Prototyping in,
Deep Learning,
If you really,
Want to blast ahead.
AI References.
If you want to,
Play around,
With proper AI,
See for yourself,
What can be and,
Can’t be done,
Stop engaging with,
Popular empty heads and,
Do something yourself,
Suggest you start at,
Getting your head,
Around Deep Learning.
Modelling References.
I have used Netlogo,
For modelling over,
Many years,
Gives quick insights,
Minimum of work,
Don’t need much,
By way of,
Programming skills,
Allows curiosity to,
Flourish.
Learning Some Basics with Ready built libraries.
Admittedly sometimes,
Jumping into,
Deep Learning,
Is a bit overwhelming,
Try scikit-learn,
Gives you quick,
Access to validated,
Data and tools,
See what it’s,
All about.
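For instance, a first experiment can be as small as this (a generic sketch on scikit-learn's built-in iris dataset, nothing to do with my collaboration data):
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)                  # small, validated toy dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.score(X_test, y_test))                 # accuracy on held-out data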
AI Background Reading
Alan Turing was,
In my opinion,
Greatest pioneer of,
Concepts in,
Artificial Intelligence. | https://medium.com/technology-hits/applying-ai-to-group-collaborations-b6eaa950fba1 | ['Dr John Rose'] | 2020-10-18 10:33:16.012000+00:00 | ['Python', 'Artificial Intelligence', 'Deep Learning', 'Systems Thinking', 'Poetry'] |
How to Create a New Year’s Resolution That Sticks | How to Create a New Year’s Resolution That Sticks
Your first mistake is calling it a New Year’s resolution
Photo by Kylo on Unsplash
My New Year’s resolution is to remember everything about 2020 and to never take anything in my life for granted again.
Yep, I said it. I want to remember every detail of this past year. I want to take the lessons I’ve learned over the past nine months and engrave them into my brain.
Many of you probably want to forget 2020. You want to leave this year in the dust behind you. This year has been horrible, and it’s undone the lives of so many families.
What is a New Year’s resolution supposed to look like in 2021 anyway?
Things are shaping up to be a little different this year, and I think my interpretation of resolutions will live on: they don't work, and I'll explain why.
Making a resolution is like making a promise without any consequences. What’s the worst that’ll happen if you don’t read two books every month in 2021? Probably nothing.
You’ll just feel bad about yourself for a little while but then move on with your life. “There’s always next year,” you’ll say as you mentally close the chapter on the resolution you said you’d fulfill.
I want to say that New Year’s resolutions work. On paper, they should have a higher success rate. They are exciting opportunities to better oneself. Yet, it’s a commonly held notion that New Year’s resolutions fail.
You can still accomplish your goal without putting the “resolution” label on it, and I’ll explain how. | https://medium.com/illumination/how-to-create-a-new-years-resolution-that-sticks-615e576d5170 | ['Ryan Porter'] | 2020-12-10 06:43:10.425000+00:00 | ['Inspiration', 'Productivity', 'Motivation', 'Ideas', 'Self Improvement'] |
“Never Give Up: You May Be Closer to Success Than You Think.” | Looking at my last MPP payout data, I tried to conjure the good feels and confidence of the comic that’s kept me moving ever-forward all these years, but I know too much now because I’m a member of two Facebook groups that support and encourage Medium writers.
What I’ve learned from this generous community is that several writers who have a similar number of followers to me (800) and post a similar number of stories each week (5–8), are earning 5 to 12 times more than I am from their effort. Where my MPP payout for April was $85, other comparable contributors posted earnings of up to $1,000 for the 5-week period.
Now, not only was I questioning the wisdom of diverting so much of my potential income-earning hours to the pursuit of making money writing about topics that interest me on Medium, I became concerned that I’d been faking it as a technical ghost-writer for the last 20 years.
I even drafted an email to let my core clients know they'd been duped. But I didn't send it since I worried it wasn't well-enough written.
I can’t stop imagining my confidence-building comic, now a little bit altered. The man in the top frame is still enthusiastically pickaxing his way to success, but it’s a woman in the lower frame—one who bears an uncanny resemblance to me. She is joyfully hacking away on the wrong wall.
Image by Gerd Altmann from Pixabay
This doesn’t surprise me since I am 90% consistent in turning left when right is called for.
Write what you know? But, what if what you know is, you know, wrong?
There is actual neuroscience that proves there are people (like my husband) who have an intuitive sense of direction and people (like myself) who feel like the operating system that governs navigation was never uploaded to their brain.
What if that broken navigational O/S is at play with my Medium efforts?
That is the question that was plaguing my sleep.
What if I was taking a left turn with my stories when I should have stayed the course or veered slightly right?
What if my story pick-axe was smashing away at the wrong wall?
What if, instead of chipping away at the Wall that Pays, my stories have been working toward busting a hole in the Pay Wall? I don’t even know what that means but I know that a) just because I don’t understand something doesn’t mean it’s not true and b) that would be just like me to hear “Wall that Pays” and set my GPS for “Pay Wall.”
After dropping my axe and hitting the wall with my skull for three weeks, I did something unthinkable: I popped my head out of my cave and asked for help from the wonderfully supportive and successful writers here on Medium.
They were specific—brutal and loving all wrapped up in honest feedback. They spun me around, handed me a clear map, and said, “Good luck!”
Three days later, I was curated. Twice. And then a third time. That was nine uncurated stories ago. I haven’t lost the map but I have put it down to write some things that feel right for right now.
I’m no longer feeling the desperate need for validation from the anonymous, faceless elevators. I have been seen now. Perhaps like a shooting star, now gone from their view. But seen once means I can find my direction again—to crack more holes in that wall—maybe even enough to reach the diamonds that are waiting on the other side.
And, if I don’t, these hours and months sharing on Medium will become part of my future success story with Enterprise Idea number 9 or 14 or even 26. | https://medium.com/love-and-stuff/never-give-up-you-may-be-closer-to-success-than-you-think-532c101c8e6 | ['Danika Bloom'] | 2019-06-21 23:06:59.876000+00:00 | ['Advice', 'Life Lessons', 'This Happened To Me', 'Entrepreneurship', 'Writing'] |
How I learned 15 programming languages, and why your kids will too | When I was 12 years old, I got sick and had to go to the hospital for a check-up. No need to worry, turned out I only had angina. But a couple of hours I spent in the hospital back then did, in some way, predetermine my whole life.
While I was waiting for the examination, someone gave me a book — just to cheer me up. It turned out to be about BASIC, an early programming language that was and still is among the simplest and most popular programming languages. I loved the whole idea of creating new things from scratch so badly that I picked my life journey after reading that book.
I started with writing code on pieces of paper, and over time, learned another 14 programming languages to truly understand how the world around us work. It might seem unrealistic, but your kids might follow my lead.
Anticipating Mortal Kombat
The tale takes place in Moscow at the beginning of the 90s. There were not too many people interested in IT and programming back then. More so, there were not many people who had personal computers. When I first read the book, I didn’t have a computer, so I tried to write codes on pieces of paper.
Soon enough my father — a professor in a military academy — managed to get me one of those amazing devices, and I started using my newfound skills of BASIC programming to create simple apps, like calendars, planners, and even music apps. Well, apps that made sounds when I would command them so.
However, my biggest pride was the first game I ever created. It resembled the oldest versions of Mortal Kombat. I managed to come up with a brand new script language that let me code how characters would fight, fall down, stand up and win. I did all that when I was 13 years old, by the way.
My first consulting jobs
Apparently, not everyone could do what I could with PCs. When I learned how to code in PASCAL, a more efficient language that encouraged using structured programming, my father brought me to his office in the academy where I showed his colleagues my skills. The academics were shocked by what a teenager with a computer could do and even asked me to help them out, so I began consulting them on programming.
Want to know how I learned PASCAL? Dad once told me that at his work people are already using it, and I’m very old-fashioned with my skills in BASIC. His words triggered me to go and learn how to code and become a multilingual programmer. Dad also was the reason why I learned C and C++. He used these techniques to persuade me to develop myself further quite a lot.
Soon after, my mom thought that she could find a use case for my skills as well. She worked for an insurance company and asked me to write a program that helped to optimize their work. The program basically generated automated documents and emails — before that, the company did everything manually. I created a pattern language for optimizing parameters. And whoa — I earned my first 50 bucks! Not bad for a 15 years old, right?
Java Changed it all
The end of the 90s was marked, and I'm not afraid of this word, by a revolution in the IT and programming world, when Java was finally born. It was somehow simple and familiar, yet opened new horizons for programmers around the world. With automatic memory management and an architecture-neutral, portable nature, Java made all the previous programming languages look like a manual car next to the newest model with an automatic gearbox.
I did not have to worry about memory, Java was covering me up. It also made accessing code from different devices possible with its “write once, run anywhere” principle. Moreover, by the time Java has appeared, the Internet became a little bit more common and I got access to the like-minded enthusiasts that were interested in the same things as I was. Now I was not alone, and learning and developing myself became much easier.
Money Maker: languages vs connections
Once I’ve learned Java, I got my first job in one of the countries biggest banks — a place where people with connections worked. I didn’t have any connections, and I have just entered the university, but I knew the magic language that opened many doors.
I’m not going to reveal any names, but over the last 15 years, I have worked for 10 different banks where my skills were a perfect fit. I specialized in building processing centres, but it was never that simple. IT guys haven’t been just coders for any of those banks, but real problem solvers.
The new languages, like PHP (scripting language used to create interactive HTML Web pages) and Perl (seems to me like this language was actually created to confuse people, but there are coders who actually like it), kept coming, but none of them really won my heart. Meanwhile, I was dealing with the real world problems, like creating the first utility bills payment system for ATMs, or bringing these ATMs to small cities.
I even had to stop a riot once! One of the machines we had installed in the suburbs was supposed to give out salaries to the workers of one of our clients (a factory) but broke down instead, causing mass protests. Even though it wasn't my fault, I managed to fix all the issues and even had to talk to the press. That language was really new to me.
Kotlin or why I joined Crypterium
As you can see, working for banks was fun. Yet, the whole system is so deeply rooted in tradition and backward that I just had to move forward to see for myself how far the technology can go and what I can do for the world of the future to appear faster.
Back in 2013, I mastered another language — Kotlin, one of the most fascinating things I've learned since Java. It was created by my fellow Russian coders and is now supported by Google as the preferred programming language for Android development. I personally like Kotlin so much that I use it to write everything and regard it as a breath of fresh air after almost a decade of standstill.
I brought the Kotlin culture to the blockchain startup that I joined last year. What we’re trying to build is a whole new layer on the existing financial infrastructure. We like digital assets, mostly cryptocurrencies, so much that we decided to make them as easy to spend as cash. As a result, Crypterium was born and recognized as one of the most promising fintech projects by KPMG and H2 Ventures.
Unlike banks, we are on the edge of the newest technology, and when at Crypterium, we tell candidates that we use Kotlin for our general ledger, they get inspired and motivated while their willingness to work for us increases with a geometrical progression.
The kids will join the IT crowd
It might seem like I’m writing this story just to brag, but learning new programming languages is not about looking smart, it’s about getting things done with the best possible tools. Of course, you can try to move all your belongings from one house to another using a bike, but it’s not the best solution, especially when you can use a truck instead.
Over time I was just looking for the tools to get things done, and it doesn't matter if it's Delphi or x86 Assembler, Python or JavaScript, I just couldn't help but wonder how those languages could help us code new things in new ways.
It might sound odd for you, but it wouldn’t for your kids, believe me. Technology will shape the future, and whether or not to learn how to code would not be a question in a couple of years. The new generation will discover, just like I did, the whole new galaxy that “talks” to us. They will look deeper into the way things work. When I ride a bus I wonder how the automatic tickets system or PayPass work. When I am in an elevator I wonder how it manages to go to all the floors it is commanded so.
Technology is getting its hands on everything and is going everywhere, even to the most remote places of our planet — and that is the beauty of tomorrow. In order to understand how this world of future will function, the kids of today will need to learn technologies from an early age because knowing even 15 languages of programming today will be just the tip of an iceberg tomorrow. | https://medium.com/crypterium/how-i-learned-15-programming-languages-5c54d3ca0383 | [] | 2019-02-19 10:57:42.720000+00:00 | ['Python', 'Mobile App Development', 'Cryptocurrency', 'Programming', 'Blockchain'] |
Should I Self-Publish? | Pros, Cons and a Curveball for bypassing Publications and self-publishing on Medium
Photo by x ) on Unsplash
Considering the pros and cons of self-publishing on Medium just got much more exciting, thanks to the October 2020 desktop interface update. If you’re a writer of micro fiction or poetry you know what I mean…
The October 2020 desktop update made two serious changes to the personal homepages of Medium Partner Program participants:
“Infinite scroll,” meaning that a writer's work is displayed one right after the next on our homepages
Personal URLs, which are not located behind the paywall
The two changes raised alarm bells for writers, especially producers of works that are less than 200 words, who feared that the interface was now giving away their words for free.
Medium has since clarified that the upgrade is programmed to track the “lingering” of members on parts of a home page as reads, even when they don’t click “read more.”
On the other hand, if a user is not logged into Medium and is not a paying member, they have access to home pages without interference of a paywall.
Pros and Cons of Publications
For those writers who are certain that “infinite scroll” and free previews are not in their best interest, there is a simple solution.
Publications.
When you publish your work with a publication, even a personal one, the story lives first and foremost on that publication’s wall. The share link you generate when promoting the post is to the publication, even though the story does still display on your own homepage as well.
Unlike personal profiles, Publications still have the option to opt out of the new interface and most (in my Medium Ecosystem at least…) still are. Therefore, for the time being, a writer who wants to prevent infinite scroll and free previews can create a personal publication and publish their work there, rather than on their home page, at least for a little while longer.
Pros
In the Suggestion Box, we talk a lot about how important good relationships with publications are in moving up the Medium food chain, from ecosystem to ecosystem, all the more so following the October 2020 upgrades.
Plain and Simple: If your reach is limited but the publication’s reach is broad, you extend your reach when your work is featured by a high traffic publication.
Cons
Self-published stories live ownerless on Medium. As a result, they are suggested by tag in the “More from Medium” footer when it displays, and I have a hunch (just a hunch!) that they are reviewed for curation (still happens, we’re just not told the categories) more expediently than those stories that are not granted immediate curation by publications (large ones can do this).
There are reasons to believe that some good stories can gain traction faster when they are self-published than when they are published on small pubs or pubs with disengaged readerships.
It is important that writers understand that Medium made these changes because they believe that they represent system upgrades that will improve the user experience of the platform over time. Indeed, though we may have panicked (prematurely…) about our compensation being impacted if readers don’t have to click into our stories, we have to acknowledge that Medium removing all barriers that stand between our readers and their desire to read more of our work is a good thing. Likewise, just as the music industry found with radio, often the very best way to gain new readers, users or followers is to give them a taste of our work for free.
A Curveball
My genre on Medium is personal essay, which means that my holy grail publication is Human Parts.
Human Parts has 255,000 followers.
It’s tagline is a “publication about humanity.”
Its editors work with writers to improve their work prior to publication.
But most of all, unlike the Ascent, GEN, or Curious, all great Medium publications, Human Parts embraces the genre that is personal essay. They welcome nuance. Their submission guidelines (back when they had submission guidelines…) didn't ask writers to do things like clearly list actionable takeaways for their readers. Human Parts welcomes literature, and that's what I'm really here to write.
In 2020, Human Parts shifted their editorial focus to center minority voices, a move that I celebrate. As a privileged white woman, albeit an expat, I'm not who they've been looking for, and that's okay. However, in 2019, before this pivot, Human Parts replaced their submission guidelines with a "don't call us, we'll call you" notice, and since the Oct 2020 upgrades, even that is now gone.
But, Human Parts is still putting out new work, and it’s no longer focused only on things like race and culture. No doubt hundreds of historic Medium writers still have the ability to directly submit to Human Parts, thereby sourcing some of the publication’s featured stories. I’m just not one of those lucky ducks.
Beyond those lucky few, Human Parts is doing what they promised in the submission guidelines that came down a few days ago (apologies that I don’t have it here to quote for you…). They are scouring Medium for great well-written, compelling personal essays, reaching out to those writers and inviting them to publish with Human Parts. In the last few weeks, I’ve messaged with a handful of writers like these who confirm that this has been their experience.
The Holy Grail for Personal Essays
While generally the best way for a writer with less than 1,000 followers to expand their reach is to publish with an active publication that has greater than 1,000 followers, the holy grail of Human Parts changes the calculation for personal essayists like me.
Whereas most personal essays that I self-publish simply get pushed out to my followers and perhaps chosen for distribution, there is always a chance that a lucky, truly excellent essay could be picked up and distributed by Human Parts. | https://medium.com/suggestion-box/should-i-self-publish-a054b277c6a5 | ['Sarene B. Arias'] | 2020-10-21 15:19:30.854000+00:00 | ['Tips', 'Marketing', 'Medium', 'Blogging', 'Writing'] |
The 6-Week Void in My Identity | Six Weeks
That’s the length of time unaccounted for. Forty-two days of life. Those are the completely dark days. Six weeks is the hole in my heart. The entire month of October and then some. I do not know where I was. I do not know who cared for me. I do not know what happened to me.
In his book, “The Body Keeps the Score,” Bessel Van Der Kolk argues extensively for a somatic understanding of trauma. When we face trauma, he suggests, our bodies encode the adverse experience deep within our nervous systems, and far below the level of conscious awareness. Increasingly, research surrounding PTSD is pointing toward a similar thesis, namely, that when we relive a traumatic moment, the memory is much more visceral, since it has been buried in our bodies, often never even having been explicitly recounted by consciousness. Our bodies remember, even if our minds do not recall.
This view of trauma has led therapists and other mental healthcare practitioners to rethink their approach. We should stop asking what’s wrong with you and instead ask what happened to you, if we really want to help people heal.
The problem is, I don’t know what happened to me for the first six weeks of my life and I might never find out.
I do know I was given up for adoption the day I was born, and then I was in the foster system until I was adopted by the people I now call Mom and Dad. Since mine was a closed adoption and all records were sealed, I grew up knowing next to nothing about my biological family, though I was always insatiably curious. I’ve located many of them, and have cobbled together a lot about my origins. I even reconnected with the social worker who handled my case nearly forty years ago. Though he is getting up there in age, he helped me tremendously. Still, no one can tell me what my first six weeks of life were like.
Related to Van Der Kolk’s work on trauma, researchers have also begun focusing on the ways early childhood traumas — adverse childhood experiences (ACEs), as they are called — impact development and health, both physiologically and psychologically. Abuse and neglect, for example, can manifest later in life as chronic illness. Growing up with an alcoholic parent can lead a child to develop all sorts of unhealthy coping strategies and in turn, those can become psychologically debilitating conditions. The body indeed keeps the score, as Nadine Burke Harris eloquently describes in this TED talk:
An adverse childhood experience (ACE) that is less often acknowledged is adoption. Not the part where the adoptive parents take you home and you might finally begin to feel settled and start forming trusting and secure attachments. That is, of course, if you are fortunate enough to be adopted by good parents. Countless adoptees out there suffered just as much or even more by the hands of their adoptive parents, some even being murdered by them. This fact alone renders the sweeping claim that adoption is always in the best interest of the child patently absurd.
Let me be clear: I had an amazing childhood and my parents absolutely did an admirable job. Nevertheless, the experience of being taken from my first mother and father on the day of my birth absolutely counts as an ACE. There are plenty of studies showing that babies placed in the NICU, for instance, suffer extreme stress and their nervous systems go into overdrive trying to compensate for being ripped apart from the only source of safety they have ever known. Even in utero, babies begin forming bodily memories, which is why, as soon as 24 hours after birth, they will show marked preference for the breastmilk of their biological mother over that of a genetic stranger. They will even show preference for music and sounds — like their mother's voice — they were accustomed to hearing while gestating. Newborns have no way to conceptualize that they are a distinct human subject, different from their mother. This is why postnatal care has been increasingly emphasizing the importance of the "fourth trimester" in forming positive life experiences for the baby (and parents) as the transition from one body to two is made.
All of this is to say that the separation of a newborn from its biological parents is a preverbal trauma, one that leaves its mark on the baby's nervous system just like any other ACE. This sort of trauma happens prior to the capacity for linguistic representation, but this does not mean babies do not remember. It is arguably why so many adoptees suffer from debilitating mental illness, are more susceptible to auto-immune disorders, and are more likely to attempt suicide than the non-adopted population. This lecture by psychiatrist Paul Sunderland provides an excellent overview of preverbal trauma and its impact on development and functioning in adoptees.
I’ve explored my own preverbal trauma as a potential source of some of the challenges I’ve faced in life. Despite an overwhelmingly positive childhood, it would be a lie to say I have not struggled with my mental health over the years. Having children really brought to the fore just how much being relinquished for adoption impacted me, and that is when I began searching for my biological relatives in earnest. Like I said, I found many of them and learned a lot about my genetic predispositions to certain conditions, including mental illness, and I even learned that my mom was volatile and stressed while she was pregnant with me, potentially a contributing factor to the high blood pressure and crippling anxiety I experience.
We all try to build a cohesive narrative to frame our lives, and I have been able to piece together so much more of who I am by learning about where I came from. Yet, I still do not know anything about those first six weeks.
Pictures or it didn’t happen!
Everyone likes to say this, in our image-obsessed culture. I take a ton of pictures of my kids and I think it’s to compensate for this gaping hole in my own pictorial legacy. I don’t have any pictures of me on the day of my birth. But my birthday most assuredly happened. Life most assuredly happened for me during those six weeks where I have no pictures, no stories, and no information about the people with whom I interacted.
Having my own children now, I realize just how much life happens in those first few weeks — the attachment formation, the mirroring, learning eye contact and gaze following, emotional regulation, sleep pattern formation, and bonding. What were all those things like for me? Did I bond with my foster family only to be traumatized again when I was abruptly taken from them? Or did they neglect and abuse me, the sad reality of so many fostering situations? Or was I basically in a modern-day orphanage? Was I scared? Was I fed enough? Did I scream out for attention like so many stressed babies, or did I simply collapse into silence out of fear I would be harmed if I was too much?
There is abundant compelling science indicating that babies remember things even if they cannot consciously recall them. An abused child, even at 4 weeks of age, will be impacted by that abuse. Yet, people who uncritically praise adoption tend to ignore these facts. They insist a baby is basically a blank slate, at least for the first few days? Weeks? A whole year? It depends on what agenda they have. Many well-meaning adoptive parents think as long as they shower their child with love, it will cancel out anything bad that happened in the first moments or even months of life. The child has no real memories or conscious mind — their life does not truly begin — until they are adopted. It’s the biggest lie in the industry.
If we really want to do adoption right — if that is even possible — we have to first stop insisting that there are no negative side effects to adoption. We need to admit that adoption is traumatic. We need to listen to adoptees when they tell us adoption hurts them. Not doing so is willful ignorance and it perpetuates the marginalization and harm adoptees experience.
I have a six-week void in my narrative construction of myself. Many adoptees I know have a far bigger gap. Some of them were so traumatized by being culturally uprooted, internationally transplanted, and psychologically abused that they have effectively fragmented themselves and suffer dissociations as a result of unconsciously trying to cope with the ACEs that mark their development.
If only I had pictures, or someone to share stories about me in those first six weeks, I could piece my story together even more and understand what happened to me, which would help me understand my behavioral schemas, drives, and non-conscious coping mechanisms. It would help me understand…me. Though I've spent a lifetime on this project, the frustration of knowing there are six weeks I will likely never account for and thus will always come up short is maddening. I'm a perfectionist and finish everything I start as perfectly as I can, but this is one task I doubt I will ever complete.
And it is even more frustrating to hear “it doesn’t matter” or “just be positive” or “you got THE BEST family though!” All that gaslighting only makes it worse because it reaffirms the fears I have — that so many adoptees have — that no one will ever understand or take it seriously. My pain is mine alone, save for the adoptee comrades I have who feel it too.
As my 40th birthday approaches, I pull out the first picture I have of myself, the day I was brought ‘home.’ I look at this picture every year and wonder if I will ever feel fully home with myself. Because when I look at this picture I see a person who has six weeks of memories and life experiences already accumulated, but no one to share those with. I see this child and I want to talk to her and ask, what happened to you? | https://medium.com/curious/the-6-week-void-in-my-identity-1b1eaf369a3b | ['Michele Merritt'] | 2020-09-24 17:54:18.291000+00:00 | ['Trauma', 'Mental Health', 'Self', 'Psychology', 'Adoption'] |
Cosine Similarity Matrix using broadcasting in Python | Learn how to write an (almost) one-line Python function that manually calculates the cosine similarity or correlation matrices used in many data science algorithms, using the broadcasting feature of the numpy library.
Photo by mostafa rezaee on Unsplash
Do you think we can say that a professional MotoGP rider and the kid in the picture have the same passion for motorsports, even if they will never meet and are different in all other aspects of their lives? If you think yes, then you have grasped the idea of cosine similarity and correlation.
Now suppose you work for a pay-TV channel and you have the results of a survey from two groups of subscribers. One of the analyses could be about the similarity of tastes between the two groups. For this type of analysis we are interested in selecting people who share similar behaviours, regardless of "how much time" they watch TV. This is well represented by the concept of cosine similarity, which allows us to consider as "close" those observations aligned along directions that interest us, regardless of how different the magnitudes of the measures are from each other.
So as an example, if "person A" watches 10 hours of sport and 4 hours of movies and "person B" watches 5 hours of sport and 2 hours of movies, we can see the two are (perfectly, in this case) aligned, given the fact that regardless of how many hours in total they watch TV, in proportion they share the same behaviours.
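As a quick numerical sketch of this idea (the viewing hours below are just the made-up figures from the example above, not real survey data), the two profiles have a cosine similarity of exactly 1 even though their Euclidean distance is large:
import numpy as np

person_a = np.array([10, 4])   # hours of sport, hours of movies
person_b = np.array([5, 2])

cos_sim = np.dot(person_a, person_b) / (np.linalg.norm(person_a) * np.linalg.norm(person_b))
euclidean = np.linalg.norm(person_a - person_b)

print(cos_sim)     # 1.0 -> identical viewing proportions
print(euclidean)   # ~5.39 -> still far apart in total hours
The cosine looks only at the direction of the two vectors and ignores their magnitude, which is exactly the behaviour we want for the "same tastes, different amounts of TV" question.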
By contrast, if the objective is to analyse those watching a similar number of hours, the Euclidean distance would have been more appropriate, as it evaluates distance in the way we normally think of it.
It's rather intuitive to see this from the chart below, comparing the two points A and B: the length of segment f=10 is the Euclidean distance, while the cosine of angle alpha = 0.9487 oscillates between 1 and -1, where 1 means same direction and same orientation, and -1 means same direction but opposite orientation.
Simple example to how cosine of alpha (0.94) show a good alignment between the two vectors (OA) and (OB)
If the orientation is not important in our analysis, the absolute value of the cosine would cancel this effect and treat +1 the same as -1.
In terms of formulas, cosine similarity is closely related to Pearson's correlation coefficient: cosine similarity is Pearson's correlation when the vectors are centered on their mean:
(image by author)
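Since the formula above is only shown as an image, here is a small numerical check of that relationship (a minimal sketch assuming only numpy): centering two vectors on their means and then taking their cosine similarity reproduces the Pearson correlation returned by np.corrcoef.
import numpy as np

a = np.array([2.0, 2.0, 3.0])
b = np.array([1.0, 5.0, 2.0])

a_c = a - a.mean()
b_c = b - b.mean()
pearson_by_hand = np.dot(a_c, b_c) / (np.linalg.norm(a_c) * np.linalg.norm(b_c))

print(pearson_by_hand)           # cosine similarity of the centered vectors
print(np.corrcoef(a, b)[0, 1])   # the same value straight from numpy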
Cosine Similarity Matrix:
The generalization of the cosine similarity concept, when we have many points in a data matrix A to be compared with themselves (a cosine similarity matrix of A vs. A) or with the points of a second data matrix B having the same number of dimensions (a cosine similarity matrix of A vs. B), is the same problem.
So, to make things different from usual, we want to calculate the Cosine Similarity Matrix of a group of points A vs. a second group of points B, both with the same number of variables (columns), like this:
(image by author)
Assuming the vectors to be compared are the rows of A and B, the cosine similarity matrix looks as follows, where each cell is the cosine of the angle between a vector of A (rows) and a vector of B (columns):
(image by author)
If you look at the color pattern you can see that the “a” vectors repeat themselves across the rows, while the “b” vectors repeat themselves across the columns.
To calculate this matrix in (almost) one line of code we need to find a way to use what we know of linear algebra for the numerator and the denominator, and then put it all together.
Cell Numerator:
If we keep the matrix A fixed, with shape (3, 3), we take the dot product with the transpose of B (B has shape (5, 3), so B.T has shape (3, 5)) and we get a (3, 5) result. In Python this is easy with:
num=np.dot(A,B.T)
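As a quick sanity check, using the same example matrices that appear later in the article, the shapes work out like this:
import numpy as np
A = np.array([[2, 2, 3], [1, 0, 4], [6, 9, 7]])                         # shape (3, 3)
B = np.array([[1, 5, 2], [6, 6, 4], [1, 10, 7], [5, 8, 2], [3, 0, 6]])  # shape (5, 3)
num = np.dot(A, B.T)
print(num.shape)   # (3, 5): one numerator for every pair of a row of A and a row of B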
Cell Denominator:
It’s a simple multiplication between two numbers, but first we have to calculate the lengths of the two vectors. Let’s find a way to do that in a few lines of Python using the NumPy broadcasting mechanism, which is a smart way to solve this problem.
To calculate the lengths of vectors in A (and B) we should do this:
1. square the elements of matrix A
2. sum the values by row
3. take the square root of the values from step 2
In the above case, where A has shape (3, 3) and B has shape (5, 3), the two lines below (remember that axis=1 means ‘by row’) return two one-dimensional arrays (not matrices!):
p1 = np.sqrt(np.sum(A**2, axis=1))   # array with 3 elements (it's not a matrix)
p2 = np.sqrt(np.sum(B**2, axis=1))   # array with 5 elements (it's not a matrix)
If we just multiply them together it doesn’t work, because ‘*’ operates element by element and, as you can see, the shapes are different.
Since ‘*’ is an element-wise operation, what we really want are two matrices: one where the vector p1 is vertical and copied sideways as many times as p2 has elements, and one where p2 is horizontal and copied downwards as many times as p1 has elements.
To do this with broadcasting we modify p1 so that it becomes fixed along the vertical dimension (a1, a2, a3) but “elastic” along a second dimension, and we do the same with p2 so that it becomes fixed horizontally and “elastic” along a second dimension.
(image by author)
To achieve this we use np.newaxis (an alias for None that adds a new axis when used inside an index expression):
p1 = np.sqrt(np.sum(A**2, axis=1))[:, np.newaxis]
p2 = np.sqrt(np.sum(B**2, axis=1))[np.newaxis, :]
p1 can be read as: keep the vector vertical (:) and add a column dimension; p2 can be read as: add a row dimension and keep the vector horizontal. In theory the operation on p2 is not strictly necessary, because p2 was already horizontal, and multiplying a two-dimensional p1 by a one-dimensional p2 would still produce a matrix (if the shapes are compatible, of course), but I prefer the explicit version because it is cleaner and more robust to changes.
If you look at p1 and p2 before and after, you will notice that they are no longer one-dimensional arrays: p1 now has shape (3, 1) and p2 has shape (1, 5).
If you now multiply them with p1*p2, broadcasting does its magic and the result is a 3x5 matrix, like the p1*p2 shown in grey in the picture above.
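You can see the broadcasting at work by printing the shapes before and after adding the new axis (assuming the same (3, 3) A and (5, 3) B used throughout the example):
print(np.sqrt(np.sum(A**2, axis=1)).shape)   # (3,)   one-dimensional array
print(p1.shape)                              # (3, 1) column vector after [:, np.newaxis]
print(p2.shape)                              # (1, 5) row vector after [np.newaxis, :]
print((p1 * p2).shape)                       # (3, 5) produced by broadcasting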
So we can now finalize the (almost) one-liner for our cosine similarity matrix, with an example complete with some data for A and B:
import numpy as np

A = np.array([[2, 2, 3], [1, 0, 4], [6, 9, 7]])
B = np.array([[1, 5, 2], [6, 6, 4], [1, 10, 7], [5, 8, 2], [3, 0, 6]])

def csm(A, B):
    num = np.dot(A, B.T)
    p1 = np.sqrt(np.sum(A**2, axis=1))[:, np.newaxis]
    p2 = np.sqrt(np.sum(B**2, axis=1))[np.newaxis, :]
    return num / (p1 * p2)

print(csm(A, B))
Correlation Matrix between A and B
If you want to modify the function so that it calculates the correlation matrix instead, the only difference is that you should subtract from the original matrices A and B their means by row, and here too you can rely on np.newaxis.
You first calculate the vector of the row means as you normally would, but remember that the result is again a one-dimensional (horizontal) vector, so you cannot simply proceed with the code below:
B - B.mean(axis=1)   # fails: shapes (5, 3) and (5,) do not broadcast
A - A.mean(axis=1)   # runs only because A happens to be square, but it subtracts the means along the wrong axis
We must make the vector of means of A compatible with the matrix A by turning it into a column vector that is (conceptually) copied across the width of A, and the same for B. For this we can again use broadcasting, “verticalizing” the vector (using ‘:’) and creating a new, “elastic” dimension for the columns.
B=B-B.mean(axis=1)[:,np.newaxis]
A=A-A.mean(axis=1)[:,np.newaxis]
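A quick sanity check (using the example A and B) confirms that every row now has zero mean:
print(np.allclose(A.mean(axis=1), 0), np.allclose(B.mean(axis=1), 0))   # True True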
(image by author)
Now we can modify our function to include a boolean flag: if it is True the function calculates the correlation matrix between A and B, and if it is False it calculates the cosine similarity matrix:
import numpy as np

A = np.array([[1, 2, 3], [5, 0, 4], [6, 9, 7]])
B = np.array([[4, 0, 9], [1, 5, 4], [2, 8, 6], [3, 2, 7], [5, 9, 4]])

def csm(A, B, corr):
    if corr:
        B = B - B.mean(axis=1)[:, np.newaxis]
        A = A - A.mean(axis=1)[:, np.newaxis]
    num = np.dot(A, B.T)
    p1 = np.sqrt(np.sum(A**2, axis=1))[:, np.newaxis]
    p2 = np.sqrt(np.sum(B**2, axis=1))[np.newaxis, :]
    return num / (p1 * p2)

print(csm(A, B, True))
Note that if you use this function to calculate the correlation matrix, the result is similar to what the NumPy function np.corrcoef(A, B) returns. The difference is that the NumPy function also calculates the correlation of A with A and of B with B, which may be redundant and forces you to cut out the parts you don’t need. For example, the correlation of A with B sits in the top-right submatrix, which you can slice out by knowing the shapes of A and B and working with indices.
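As a quick sketch of that slicing, using the A, B and csm defined just above:
full = np.corrcoef(A, B)                       # rows of A and B stacked as 8 variables -> shape (8, 8)
corr_AB = full[:A.shape[0], A.shape[0]:]       # top-right (3, 5) block: rows of A vs. rows of B
print(np.allclose(corr_AB, csm(A, B, True)))   # True: same values as our function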
Of course there are many methods to do the same thing described here including other libraries and functions but the np.newaxis is quite smart and in this example I hope I helped you in that … direction | https://towardsdatascience.com/cosine-similarity-matrix-using-broadcasting-in-python-2b1998ab3ff3 | ['Andrea Grianti'] | 2020-12-08 13:21:48.486000+00:00 | ['Machine Learning', 'Data Science', 'Python', 'Marketing', 'Analytics'] |
Remembering Steve Jobs | Remembering Steve Jobs
Eight years on, we take a look at the life of a man whose innovations changed the world
Known for charismatic presentations, perfectionism and his signature turtlenecks, Steve Jobs was a pioneer of the computing industry. From co-founding Apple Computers Inc. in 1976 through to his death in 2011, Jobs was one of the most polarising figures in the technology industry, whose ideas and ingenuity have impacted billions of people worldwide.
But his life was about so much more than technological innovations. In a world where children are force-fed from a young age the idea that the road to success runs through academia, Steve Jobs is an example to us all of how much more satisfying life is when you are doing what you love, as opposed to doing what you have been told you should do.
Born To Be A College Graduate?
Born in 1955, Steve hadn’t even left his mother’s womb before she made the decision that he would go to college. She wanted him to have a better future than the one that she could provide, so following his birth, he was immediately put up for adoption to a college-educated, Catholic couple.
But unfortunately for Steve, the couple passed on adopting him, explaining they wanted to adopt a girl instead. As a result, Steve was passed on to Paul and Clara Jobs, neither of whom were college-educated. It was only after they promised his biological mother that Steve would go to college that she agreed to sign the adoption papers.
Throughout Steve’s childhood, it was clear for all to see that he was an incredibly gifted individual, but his dislike of formal education was just as apparent. And while his parents used their savings to send their son to college and keep their promise to his biological mother, what they perhaps didn’t anticipate was him dropping out after just six months.
‘Fate, it seems, is not without a sense of irony.’ — Morpheus, The Matrix
Steve’s high school yearbook photo, taken before he discovered Turtlenecks. (Image credit Seth Poppel)
Giving a commencement speech at Stanford University in 2005, Jobs joked that the speech he was giving was the closest he ever came to college graduation. When discussing his time at college, he said:
‘I couldn’t see the value in it, I had no idea what I wanted to do with my life and no idea how college was going to help me figure it out, and here I was, spending all of the money that my parents had saved their entire life.’
Looking back, he said that dropping out of college was one of the best decisions that he ever made, as he was no longer burdened with studying things that didn’t interest him, which allowed him the freedom to study the things that did.
He recalls the opportunity he had to take a class in calligraphy, as it is something that he found fascinating. The information learned in that class manifested itself more than ten years later when he incorporated it into the design of the original Macintosh. Without that class he argues, the Mac would never have had the varied typefaces and fonts that were built into it.
The First Bite Of The Apple
Apple was famously co-founded in Steve’s parent's garage in 1976, a far cry from the $5 billion, 2,800,000 square foot building that currently serves as Apple HQ in Cupertino, California.
For the following nine years Apple was on the rise. Beginning with the Apple I, through to the release of the first Macintosh in 1984, the company had begun to establish itself as one of the premier computing companies in the world.
Steve Jobs public demonstration of the first Macintosh computer in 1984 (Image credit www.businessinsider.com)
But in 1985 things began to unravel. Then CEO John Sculley believed that Jobs was hurting the company, and the two had extremely opposing views on what direction the company should be going in. After failing to regain control of the company, Jobs resigned from his position at Apple, leaving behind the company which he co-founded aged twenty in his parent’s garage.
The Inbetween Years
When discussing the years that followed his departure from Apple, Jobs said that being fired was the best thing that could have happened to him at that point in his life. After leaving Apple he went on to found not one, but two successful companies. One of them went on to become the most successful animation studio in the world (a small company you may have heard of by the name of Pixar), while the other, a computing company called NeXT, was subsequently bought by Apple in 1996, bringing Jobs back to the company which he had co-founded twenty years previously. What happened in the following years can only be described as a complete revolution in personal computing technology.
Returning To The ‘Not-So-Big’ Apple
By the time he had regained his position as CEO of Apple in 1997, the company was a stone’s throw away from declaring bankruptcy. After what had been a very prosperous decade for the company, sales were starting to decline.
Microsoft had increased its market share by offering personal computers that were much more cost-effective than those offered by Apple. Jobs knew that continued competition between the two companies would spell the end of Apple. So, upon his return, he announced that Apple would be going into partnership with Microsoft.
In his address to the Macworld Expo in 1997, he made it clear that there was no reason that the two companies couldn’t both be successful. He said:
‘We have to let go of this notion that for Apple to win, Microsoft has to lose. We have to embrace a notion that for Apple to win, Apple has to do a really good job.’
As part of the deal, Apple received $150 million from Microsoft, and Microsoft Office would be made available on Mac, with the additional announcement that Internet Explorer would be the default web browser on the Mac going forward.
The rest, as they say, is history.
Starting with the iMac, Apple went on to develop a huge range of products that went far beyond the realm of computers, such as iTunes, the iPod, and the iPad, in addition to its MacBook range.
But you know as well as I do, that there is one product that is most associated with Apple. There is one product that has dominated the market since it was first released in 2007.
I’m talking of course, about the iPhone.
Steve Jobs unveiling the first generation iPhone at the Macworld Conference in 2007 (Image credit www.time.com)
Having sold over 2 billion units since it was released in 2007, Apple has recently launched the 11th iteration of the iPhone. When Steve Jobs first announced the iPhone back in 2007, he said that the company’s aim was to capture 1% of the global mobile phone market. It’s safe to say that they succeeded. As of the financial quarter ending December 2018, Apple had garnered over 50% of the global smartphone market.
Final Days
In 2003, Jobs was diagnosed with pancreatic cancer. Following his diagnosis, he delayed having surgery, preferring to opt for alternative medical treatment in his attempts to beat the disease. It wasn’t until 2004 that he finally bowed to pressure from doctors and underwent surgery, in which the tumour was (apparently) successfully removed.
In the years running up to his death, Jobs began to take more of a backseat at Apple. Having spent most of his time with the company taking the lead at Apple events and product unveilings, his retreat from the public eye fuelled speculation of his deteriorating health.
He took several medical leaves of absence from his duties at Apple in the years prior to his death. On August 24th 2011, Steve Jobs officially resigned from his position as CEO of Apple. In his letter to the board he stated:
‘I have always said if there ever came a day when I could no longer meet my duties and expectations as Apple’s CEO, I would be the first to let you know. Unfortunately, that day has come.’
He died six weeks later at his home, surrounded by his family, after enduring a relapse of his pancreatic cancer.
His Legacy Is Much More Powerful Than Any iPhone.
We live in a world that loves to tell you how to live your life.
Steve Jobs showed us that even the best-laid plans don’t always come to fruition, yet he is an example of why that isn’t necessarily a bad thing.
Had he stayed in college instead of pursuing what he was truly passionate about, would the Macintosh have ever been invented? Would you be reading this on your iPhone, iPad, or Macbook? Possibly not.
We can’t control most of what life throws at us. But what we can control is our response to it. Instead of wallowing in negativity after being fired from the company that he co-founded, he went on to found not one, but two companies. If he had never been fired from Apple in 1985, he may have never founded Pixar, and the world might never have known the delight that is the adventures of Woody, Buzz and the rest of the gang in Toy Story.
Had he never founded NeXT, he may never have developed the technology which Apple then went on to use in its core range of products.
Remember Steve Jobs. Not only because you have him to thank for the iPhone in your pocket. Remember him because he was a man that didn’t waste his time on Earth doing things he didn’t want to do, because he was too busy doing the things that he loved. Because isn’t that the kind of life we all want to lead?
“If you live every day like it was your last, one day you will most certainly be right." — Steve Jobs | https://medium.com/swlh/remembering-steve-jobs-edf2257bdab3 | ['Jon Peters'] | 2019-10-07 20:04:52.664000+00:00 | ['Innovation', 'Entrepreneurship', 'Culture', 'Technology', 'Apple'] |
An Investigation of the California Wildfire Crisis | After running the animation a couple of trends appear. Since the onset of 2017, there have been far more damaging fires across the state with coastal and forested areas hit hardest. Before 2018, fires were far more equally distributed in both size and frequency all over California. However, from 2018 onwards, there has been a rampant increase in deadly wildfires across Northern California compared to other regions in the state. 2020 alone has seen the most destructive fires in modern history with the August Complex Fire and LNU Lightning Complex of Northern California burning an estimated 1.4 million acres to date. The August Complex Fire (depicted as the large, bright yellow circle in the 2020 pane), was extinguished only last month in November after an unprecedented cost of $264.1 million dollars.
One striking oddity in the trend of wildfire growth is the plummet of wildfires in 2019. With the lowest levels since 2004, the destruction done in 2019 amounted to only 260,000 acres — a mere 6% of the damage done in 2020. However, experts vehemently believe this trend will not change any long-term patterns and was simply an irregularity due to unusually heavy precipitation.
From this animation, it is clear that the most destructive fires occur in more heavily-forested areas — centered around Northern California. However, can the same be said about the frequency of wildfire incidents?
Hover over each bar for year-specific information
In this bar chart, broken up by region (NorCal and SoCal), it is clear that while the number of incidents in Southern California is increasing at a slight linear rate, the number of incidents in Northern California appears to be growing exponentially year-on-year. Reasons for this stark division can be attributed to the presence of more vegetation in NorCal which acts as a veritable timebomb waiting to ignite during the dry season.
With the exception of 2019 — which was an anomalous year for wildfires — there continues to be an overarching trend of both increased frequency and severity of wildfires, especially in Northern California.
County-Level Analysis
The figure below is a choropleth map — a thematic map colored in proportion to a specific statistical variable. In this case, this choropleth has been partitioned to visualize the cumulative acres burned since 2003 on a county-by-county basis. As expected, Northern California trends towards more land destruction.
Hover over choropleth for information on specific counties
Heavily damaged areas, depicted in purple and black, are also among the most forested regions in California. The San Joaquin Valley, an inland area constituting much of the Central Valley sports a more Mediterranean climate and is far less forested. This, in turn, accounts for fewer acres burned compared to the more northerly Sacramento Valley.
Source: Drought Monitor NOAA Climate.gov
Prolonged droughts and the rapid drying of vegetation accounts for the increase in wildfires. However, variability in temperature and precipitation are often the real instigators. Years of drought in California have traditionally been followed by very wet weather leaving behind vegetation that turns into fuel for wildfires. Cyclical weather patterns along with strong, warm winds prime wildfires for destruction. As seen above, Northern California is the most victim to moderate-severe drought compared to the rest of the state. This dichotomy might be explained by Southern California’s access to the Colorado River Aqueduct, which supplies more than a billion gallons of water a day.
Hover over each bar for county-specific information
The bar graph shown above reveals that Lake County, Shasta County, and Trinity County suffer the brunt of wildfire damage across California. The reason these three counties lead the pack is that they all encompass national forests and are more heavily forested than any other region in the state.
Coincidentally, the Shasta-Trinity National Forest, the largest national forest in California, was also the site of the August Complex Fire, the most destructive wildfire in modern California history.
Across California, many minor incidents occur every day and are often contained without loss of property or life. Even with fewer wildfire incidents relative to other counties, Lake County is still responsible for more than 2 million acres worth of destruction over the last two decades.
To better understand the impact wildfires have on private property we drew from data provided by the insurance industry. The below choropleth map serves to highlight figures provided by Verisk Analytics concerning the percentage of California households at high to extreme risk for wildfires.
Hover over choropleth for risk-assessment of various counties
Counties in blue indicate less than 20% of households are threatened, whereas red counties indicate a larger percentage of households at severe risk for wildfires.
Alpine, Trinity, Tuolumne, and Mariposa counties had the highest concentration of severely at-risk households in Northern California. The reason for this disparity is because households in Northern counties are dispersed at the edge of forested areas and are often directly in the way of wildfires. In many instances, stray embers carried by the wind make their way to rain gutters, bursting into flames and engulfing hundreds of properties.
Investigating Major Wildfire Incidents
After investigating wildfire impact on a county-by-county level, we shifted to analyzing specific wildfire incidents. The bar chart below shows the top wildfire incidents in terms of acres burned.
Hover over each bar for incident-specific information
Among the wildfires listed, the top six all took place in Northern California between 2018 and 2020. Furthermore, the August Complex — the single-largest wildfire and the largest fire complex in recorded California history — is more than twice as large as any other recorded incident. Blazing across Mendocino, Humboldt, Trinity, Tehama, Glenn, Lake, and Colusa counties, the August Complex Fire has been dubbed a “gigafire” — a term never before needed in California history, denoting a blaze that burns at least a million acres.
From this subset of the Top 10 Wildfires in California history, the average duration that wildfires burned for was a whopping 59.5 days. Naturally, the question arises: What Makes Wildfires So Hard to Put Out? Wildfires behave in many ways like a combustion-powered hurricane: by channeling air and fuel upward, they can make forests spontaneously combust without actually coming into contact with flames. This combination of explosive growth and hellish conditions often renders fire-support teams from both the ground and air useless. In addition, since fires are most prevalent during the dry season, a lack of humidity leaves humans without the aid of Mother Nature.
Seasonal Disparity
For many Californian natives, fire season has always been marked by the end of Summer well into late Fall — making up the months of August, September, and October. To better understand the annual pattern of wildfires, we created a boxplot of Acres Burned across wildfire incidents from 2003–2020. By grouping wildfires by the month they started, we aimed to recognize and explain the seasonal disparity in damage.
Hover over each point for incident-specific information
Right from the start, it’s clear that wildfires dominate the autumn months. The peak of fire season spans from late July through September and is marked by radical wildfires destroying twice the number of acres compared to incidents in the spring and early summer.
Interestingly, there likely exists two distinct fire seasons in California. A study by University of California authors found that California cycles between what is called the Summer Fire Season, and the Santa Ana Fire Season.
Characterized by dry winds blowing towards the coast from the interior, the Santa Ana Fire Season occurs from October through April — striking more developed areas and inflicting more economic damage.
The Summer Fire Season, however, can take place anywhere in the state and often impacts remote/wild areas — as was the case with the August Complex Fire which engulfed the Mendocino, Six Rivers, and Shasta-Trinity national forests. Making up the rest of the calendar year, the Summer Fire Season is between June and September.
With the Santa Ana Fire Season inflicting more economic damage, the Summer Fire Season accounts for the most land destruction with millions of acres destroyed every year.
In order to prove that there exist two distinct fire seasons in California, we tried to replicate the results made by University of California researchers.
We decided to apply a form of unsupervised machine learning known as K-means clustering to see if the “clusters” of months we identify align with the months outlined in the University of California paper. The idea was to apply K-means clustering to key metrics that might reveal whether a set of months truly deserves to belong to its own distinct fire season. Following the findings made by University of California researchers, we wanted to investigate the metrics that are most representative of fires during the Summer Fire Season and the Santa Ana Fire Season. For example, the researchers observed that ‘Summer Fires’ tend to burn more slowly, while ‘Santa Ana Fires’ tend to burn along the coast. As such, we take into account the average fire duration by month as well as the percentage of fires located in coastal counties. Other metrics include the average acres burned and the total number of wildfire incidents on a month-by-month basis.
To ensure that the algorithm weights each metric with the same relative importance during the clustering process, we normalized each column between 0 and 1 using the MinMaxScaler function from Python’s scikit-learn library. After normalizing the metrics, we used the K-means algorithm to partition the data into clusters, as seen below.
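A minimal sketch of that clustering step might look like the code below. The column names and values are illustrative placeholders, not the original DataRes table, so treat it as a pattern rather than the exact analysis.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans

# Placeholder month-level table with the four metrics described above
# (hypothetical names and random values, used only to show the pipeline).
rng = np.random.default_rng(0)
monthly_metrics = pd.DataFrame({
    "month": np.arange(1, 13),
    "avg_acres_burned": rng.uniform(1_000, 100_000, 12),
    "avg_duration_days": rng.uniform(5, 60, 12),
    "total_incidents": rng.integers(10, 120, 12),
    "pct_coastal_fires": rng.uniform(0, 1, 12),
})

features = monthly_metrics.drop(columns="month")
scaled = MinMaxScaler().fit_transform(features)              # rescale every metric to [0, 1]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
monthly_metrics["cluster"] = labels                          # 0/1 "fire season" label per month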
There appear to be two distinct clusters based on this dataframe. Cluster A, identified by ‘1’ exists from June through September. Cluster B, identified by ‘0’ exists from October through April.
Cluster A is striking in that it has more wildfire incidents, a greater number of acres burned on average, and longer-lasting wildfires. These results share similarities with the ‘Summer Fire Season’ described by University of California researchers. In the same vein, fires in Cluster B tend to take place along coastal areas and burn fast — characteristic of the Santa Ana Fire Season.
Hover over each point for incident-specific information
To better visualize the disparity between fire seasons we applied a color scale to the boxplot from earlier. Months in orange constitute the Santa Ana Fire Season, whereas months in red represent the more destructive Summer Fire Season.
Human Activity
There are a variety of ways to gauge which incidents can be described as the worst wildfires in California history. Different metrics include size (acres burned), deadliness (lives lost), and destruction (infrastructure destroyed). In order to better understand the part humans play in instigating the worst wildfires in California history we look at the following subsets: the top 20 largest wildfires, the top 20 deadliest wildfires, and the top 20 most destructive wildfires. | https://ucladatares.medium.com/an-investigation-of-the-california-wildfire-crisis-7104b1cb4a69 | ['Ucla Datares'] | 2020-12-22 00:23:52.666000+00:00 | ['Python', 'California', 'Plotly', 'Wildfires', 'Datares'] |
5 Books Bill Gates Thinks You Should Read in 2021 | 5 Books Bill Gates Thinks You Should Read in 2021
A reading list that will inspire you to think differently.
Photo via Flickr
I’ve recently noticed that I have an increasing number of things in common with Bill Gates. Unfortunately, I’m not talking about having a billion dollars in my checking account. Instead, I’m referring to a genuine love of reading that inspires me to think differently about the world.
How do I continuously find new and exciting books to read? By paying attention to recommendations from passionate readers, such as Bill Gates, and then reading them as soon as I get an opportunity.
So below are several interesting books that Bill Gates has recommended. Each of them changed the way I see the world, and I’m confident they will do the same for you, too. | https://medium.com/curious/5-books-bill-gates-thinks-you-should-read-in-2021-19f926e9730c | ['Matt Lillywhite'] | 2020-12-28 13:15:53.716000+00:00 | ['Education', 'Books', 'Reading', 'Productivity', 'Self Improvement'] |
The Psychology of Airport Design | The Psychology of Airport Design
How airports are designed using traveller behaviour
Photo by chuttersnap on Unsplash
As a case study in environmental design, airports are fascinating. At their core, their function seems fairly simple: a holding space for travellers who are waiting for a flight. Yet they’re actually an important retail space for many companies and, although you may not notice it, they’re designed with this firmly in mind.
Airport designers think carefully about the journey that travellers make through an airport, from check in to security to gate. They then look to behavioural psychology, looking at how people move around spaces like airports. Combining these two elements allows airport designers to design a space around the traveller’s path which will entice them with retail and restaurant opportunities.
It’s summer, and many of us will be passing through an airport or two over the coming months. Next time you’re in an airport, you might notice some of these ways that airport design reflects psychology and human behaviour.
The stressful part is out of the way quickly
Photo by Moralis Tsai on Unsplash
Taking a flight can be a stressful experience. When you first get to the airport you’re faced with check in. You’re already worried that you’ll be hit with a mega charge if your bag is over the weight limit. You’re also quizzed with intense questions: did you pack your own bag?
After that, you’re directed through a series of queues to get through security. Even if there’s nothing suspicious whatsoever in your bag or on your person, airport security is enough to get your pulse racing — especially if the beeper goes off and you have to have a hands-on search.
Airport designers are well aware of this stress. And they also know that after that stress comes relaxation, the start of holiday mode. In terms of retail, this is the key time. All of the airport admin is done, and it’s time to grab a glass of pre-holiday prosecco and browse the duty-free shops.
That’s why changes in airports are generally focused on optimising that initial portion of the airport experience: streamlining security checks, or improving at home check in, for instance. Get the stress over and done with quickly, and prolong that period of pre-flight relaxation when passengers are more likely to spend money on retail shops and restaurants.
Pathways are built right through duty free
Duty free shops are a key area of income for airports. Because travellers experience that period of relaxation immediately following security, airport designers will usually have the duty free shop as the first thing that a traveller sees after security. It acts as a ‘re-composure’ space where the traveller can move from stressful process to relaxed retail.
Research has shown that if customers have to physically walk past items which are for sale, they’re 60% more likely to make a purchase. That’s why almost every duty free shop in an airport is configured in such a way that all passengers have to walk through it. It’s usually the gateway between security and the retail space of an airport. By exposing customers to products in this way, they’re able to maximise revenue.
Walkways mirror how we walk
Most of us are right handed, meaning that we’ll naturally use our right hand to pull our carry-on luggage. To improve our balance, we’ll therefore tend to walk in an anticlockwise direction. That means that when we’re walking through an airport, most of us are looking to the right far more than we’re looking to the left.
Airport designers use this behavioural knowledge to inform how they design routes through an airport. They mimic the way that we walk, designing walkways which curve from right to left. The majority of shops will then be placed on the right hand side, where they are more visible to people who are walking to the left.
Metres become minutes
Photo by Steven Hille on Unsplash
Airports can be quite big spaces, housing thousands of passengers. It can, therefore, take a while to get to your gate when the flight is ready.
To mitigate the stress of this, airport signs for gates used to give the metres between your current position and the gate. However, in recent years you may have noticed that those metres have become minutes — the time it takes to walk to the gate.
Research found that passengers understood minutes as a marker of distance much more quickly than they could understand the metres. This helps us to feel more at ease during our time in the airport, because we know exactly how much time we need to get to the gate. Therefore, we’re likely to spend longer in that retail and restaurant area of the airport, where we’re helping the airport to generate profit.
Keep it cool and dark
Photo by VanveenJF on Unsplash
Recently ‘smart glass’ has begun to be used in airport design. This smart glass can adjust itself based on the amount of sunlight exposure coming through it, preventing too much heat and sun glare entering the airport.
Dallas-Forth Worth International Airport ran a test with the smart glass in October 2018. They found that when the smart glass was installed customers were much more likely to stay longer in the airport’s restaurants, and to buy an extra drink or two. Sales of alcohol increased by a huge 80% during the test period, simply because it was cooler and darker in the restaurant.
References
Insights for this post were gathered using this report by Intervistas, titled: ‘Maximising Airport Retail Revenue’. | https://medium.com/swlh/the-psychology-of-airport-design-5858a5a2db25 | ['Tabitha Whiting'] | 2019-07-23 14:36:29.405000+00:00 | ['Travel', 'Design Thinking', 'Design', 'Psychology', 'Airports'] |
When Your Past Haunts Your Current Relationships | “The patient cannot remember the whole of what is repressed in him, and what he cannot remember may be precisely the essential part of it. He is obliged to repeat the repressed material as a contemporary experience instead of remembering it as something in the past.” ~Sigmund Freud
The old saying, “we are creatures of habit” rings true here, and especially when talking about why we repeat — on autopilot — the things that we instinctively know are shooting us in the foot.
Are we gluttons for punishment?
Well, Behavior Analytically speaking…no.
If we continue doing what inevitably doesn’t have our best interests at heart, then we’re sabotaging ourselves, as in compulsively…and repetitively.
What I am referring to here is called “repetition compulsion” which is a term coined by Sigmund Freud as he watched a young child throw a toy repeatedly and then pick it back up, only to throw it again. In true Freudian analysis, he proposed that the young child was missing his mother who had left the house earlier, and that the kid’s behavior was a combination of ridding himself of his absentee mother (by tossing the toy) and then bringing mom back (grabbing the toy), thus “fixing” the situation.
Freud aside, we are often guilty of a ‘repetition compulsion’ of sorts if we gravitate to the same show to binge over and over. Or, we may head to our favorite getaway and order the same thing from room service each time.
While this is usually seen as being in our Netflix comfort zone, or having an affinity for Cobb salad, there isn’t anything necessarily self-destructive in these “repetitions” — as long as these habits aren’t being used to self-sabotage or to avoid or numb other pain. For example, if you binge Netflix as an escape, or to numb yourself after every failed relationship, then you’re avoiding digging deeper and unpacking what may keep you locked into a habit of chasing a new relationship each time a problem surfaces in an existing one.
However, healthy repetitive behavior isn’t what Freud or other analysts are referring to regarding this phenomenon. More often than not, a repetition compulsion is a series of learned, habitual behaviors and behavior patterns that originate in childhood and negatively influence us throughout our adult lives unless (or until) we choose to conquer them and make healthy changes.
Because of their repetitive nature, most of us probably gravitate towards thinking that compulsive bad habits are part of intimate relationships. And, you would be correct in thinking this. Intimate relationships may be where a repetition compulsion has most of its strength and influence because of vulnerable emotions and emotional intimacy that are usually tied into it. | https://medium.com/hello-love/when-your-past-haunts-your-current-relationships-66adb4df634 | ['Annie Tanasugarn'] | 2020-12-02 16:56:36.890000+00:00 | ['Mental Health', 'Life Lessons', 'Love', 'Psychology', 'Life'] |
The Story of our New Medium Publication Writing Heals | This past week I had an almost non-stop bombardment of family stresses. I won’t go into all of it, but my man Bob’s sister Carol has rapidly progressing M.S. (Multiple Sclerosis). She is going downhill fast. She is my age, 57. It’s very hard to watch.
Bob’s mom, who has been her caretaker, called yesterday sounding weary and overwhelmed with grief.
“This is the hardest thing ever….to be a parent watching your child die slowly!”
sigh…
“Carol has not stopped eating. She weighs over 300 lbs now. She just doesn’t want to give up her sweets! She said, ‘Mom, I want to go out happy!” She knows she doesn’t have much time left and wants to eat what she wants. She has pretty much given up. Now she wants to enjoy herself and her life!”
I guess I can’t blame her.
She had a half written book she was getting ready to publish. It was a life ‘dream’ of hers. I planned to help her with formatting and editing but now that dream is over. She can no longer write or focus on anything. It’s sad to see the quickness of her deterioration — how fragile and short life really is.
Photo by Clemente Cardenas on Unsplash
I also have two friends (both also my age) who have cancer. It doesn’t look good for them either.
So… all this had me thinking about the urgency of life. Why we must try to use our time for life-enhancing things.
I believe this in my soul! | https://medium.com/writing-heals/the-story-of-our-new-medium-publication-grand-opening-today-ad842706363f | ['Michelle Monet'] | 2019-10-05 23:38:33.093000+00:00 | ['Mental Health', 'Healing', 'Writing Tips', 'Writing Life', 'Writing'] |
The Subtle Art of Writing Copy | The Subtle Art of Writing Copy
Good design is based on good copy. That’s why UX writing should definitely be the next thing in your skill set.
Should designers…
If you work in the design industry, you might have read at least one article starting with the following words: should designers [name the skill here]? This question returns to us like a boomerang, bringing a new thing each time. Should designers code? Should they know how to create breathtaking UI? Should they conduct research and client workshops? What about analytics? The list goes on and on.
Should designers write copy?
As a response to this phenomenon, we see that many fractions have emerged in the design community. From specialization fanatics to one-man-army believers. Recently, the newest skill on everyone’s mind is writing. UX writing, to be specific. So, should designers write?
FOLI — the fear of lorem ipsum
In 2019 everyone (sic!) knows that lorem ipsum is bad. If you use lorem ipsum, it is because you either believe that content is not your responsibility, or you are too busy (lazy) to come up with your own copy. In defense of those who still like it, though, Scott Kubie, Lead Content Strategist at Brain Traffic, admits he sometimes uses lorem ipsum to see the shape of the text and visualize the paragraphs. In any other case, working on real content is often crucial to the design. If you don’t consider the length of the CTA labels, headings or blog posts when designing the interface, your whole concept will most probably break the second it goes live.
Should I do it or should you?
This brings up the issue of responsibility. But to talk about responsibility, we need to acknowledge the problem first. Writing content is at the very bottom of both the product team’s and the client’s to-do lists. Designers assume that the client will create or adjust the content based on their designs; that there will be someone else who will do it better, so they leave some places blank or use “button label” instead of real text. At the same time, clients believe they will get a working product that is not only beautiful but also functional and ready for development. It is thus not surprising that when the end of the project is on the horizon, it becomes clear that we are missing text such as:
error messages and recovery flows
confirmation screens
user-visible metadata like page titles and search engine descriptions
transactional emails
in-app user assistance
support documentation
changelogs
feature descriptions and marketing copy.
As designers, we are responsible for delivering the product, and it definitely includes the copy, at least in a draft phase. Why? Because the copy sometimes has the power to alter the whole design and we need to be aware of that. Using a specific copy is a design decision. Our job is to guide the client on how to shape the content so that it agrees not only with the design itself but also with the content strategy that is best for the product. That said, we are not able to, and we shouldn’t produce the content without the client’s input. That is why cooperation is the key here.
Ok, but I can’t write
Well, that is simply not true. If you know how to design, you also know how to write. You might not be very good at it, sure, but that is where all the publications on UX writing come in handy. Polishing your skills in this area will help you in not only coming up with better copy, but also formulating and explaining your ideas to clients. And remember, unless you are a UX writer assigned specifically to create content for the project, the fact that you are responsible for it doesn’t mean you need to write it all by yourself. Browse through your client’s product descriptions, use common language patterns and don’t be afraid to ask for feedback. It is not the originality that is being assessed here. Sometimes being too creative puts us on the straight path to dark patterns or confirm-shaming, like in the examples below, where the copy that was supposed to be funny is actually shaming users into doing something they might not want to do.
Examples of confirm-shaming — not opting-in means that you accept the website insulting or shaming you.
Types of content
In general, we usually divide digital content into 3 categories:
Interface copy or microcopy — short text elements like labels for form fields, text on buttons, navigation labels, error messages, etc. The interface would break without them;
Product copy — not necessarily a direct part of the interface, but plays an important role in the functioning of the product. It focuses on supporting the reader, like e.g. the body of the onboarding email.
Marketing copy — connected with sales or promotion, often longer and focused on persuading the reader. Here you can be more creative.
Depending on the product, there can be many more categories to deal with. The most crucial one for designers is the first one — microcopy. However, clients sometimes need some guidance with other types of content, and for your design to work, it is best to address that at an early stage. If the blog posts are very long or difficult to understand, even the most beautiful UI won’t improve the user experience. And if the value proposition is not stated clearly enough, the bounce rate might be very high despite the new shiny information architecture.
This doesn’t mean you are responsible for the content that clients should produce themselves. But creating a draft can start a discussion, and discussion can lead to mutual understanding. After all, it is in your best interest as a designer for the product to work and perform best when it hits the light of the day. You can then create that Dribbble shot and attach the link to the real product with pride.
Design and copy are inseparable
DOs and DON’Ts
Below you can find some tips to get you started. A list of practices that you should avoid if you want to deliver a high-quality copy is longer than that. Think of it as a base on which you can build later on.
DOs
More and more companies create their own content style guides. They are connected with their brand identity, but also follow the basic principles of clear and appropriate style. Check the Shopify, Mailchimp, Buzzfeed or even Material Design content style guides for more inspiration and bring them into your own process. Don’t just copy them 1:1, though, as the context of your users will differ depending on whether you are designing a banking app or a social media platform. Try to use these style guides to create your very own.
Try to cover all the possible errors, but while doing that, consider if you really need it. Can this problem be solved by changing the flow, layout, colors? You may discover that the error message can be avoided by simply getting rid of the error itself.
When starting a project, always agree on who is responsible for the content. As I’ve mentioned before, it doesn’t mean this person needs to write it all, but they need to manage it, start the discussion and make sure that this issue is being addressed.
Establish the values that will guide you throughout the process. For starters, try being helpful and human in your copy. This means empathy in error messages or avoiding technical jargon. Not sure if the text is understandable? Try to test a sample in one of the online tools like e.g. Hemingway App. It takes just a few seconds and you get feedback right away.
Make it easier for the user to take in information. Use numerals instead of words for numbers, especially those higher than 9. Replace dates with “today,” “yesterday,” “tomorrow.” Make sure button labels always have action verbs.
DON’Ts
Don’t try to be too clever with the microcopy. It is not about creative writing but being simple and transparent so that users do not even notice your choice of words. You can work on some more exciting phrases when writing a marketing text for the landing page. Still, in most cases short beats good. Users don’t read word for word and if the text is not easy to scan, they might just not read it at all.
Which one of these messages are you most likely to read?
Don’t ignore the edge cases. If you don’t write the copy for every single error possible, there are two ways it can play out. One is that users will get the same error message each time, regardless of the problem. It is definitely not helpful and can get really annoying. Option two is that developers will write these texts for you, which often ends up being a very technical jargon. To avoid that, work closely with the engineers, learn about all the edge cases and address them with the right text.
Don’t forget about other languages. If your product has more than one language version, you need to consider that when designing. Otherwise, this small lovely button of yours will break when switching from English to German.
Avoid long blocks of text. It is easier to digest the information when it is divided into smaller chunks. Want to bring it to the next level? Add subheadings too. This way you can inform the users what they will find in the next couple of paragraphs.
Why it is worth it
Designers can’t be everything at once. But stepping out of your comfort zone has so many benefits that it is worth at least considering. And the art of writing needs to be cultivated — after all, it is one of the things that makes us human. If you want to practice writing outside of your work, start with something like the Day One app. See how it goes and work your way up from there.
I was once told that good design does not require words. You can agree or disagree with that sentence, but from my experience as a UX designer, I’ve learnt that good design is based on a good copy — one does not exist without the other. Poorly written text can be misleading and even the prettiest mockups might not make up for it. | https://medium.com/elpassion/the-subtle-art-of-writing-copy-3a566c367bf7 | ['Ewelina Skłodowska'] | 2019-09-16 12:10:49.349000+00:00 | ['Ux Writing', 'UX Design', 'Design', 'Productivity', 'UX'] |
How emotions work to create preference | Two main traits of the human brain work together when creating brand preference: energy conservation and emotions.
Whereas the brain’s need to create preference stems from its need to conserve energy / survival instinct (read more…), emotions are what help us create this preference.
The important thing here is that emotion is not the brain being lazy; it’s the brain’s way of evaluating and labeling a choice (and then being able to identify a preference).
How does this work? Let’s again look to Daniel Gilbert:
Gilbert says that great psychologists are, in the end, measured by how they finish the sentence “men differ from monkeys because they …”. And Gilbert’s claim is that they synthesize the future.
What does this mean, to “synthesize the future”?
When faced with a decision of a certain weight, we imagine ourselves “using” the product, or the product being in “use”. We do this by recalling previous experiences that we find relevant and that help us understand: images based on past experiences, collected because we find them relevant to the situation. The emotions connected to these previous images are then mixed together and create an end-state emotion that we attach to the choice at hand.
The brain works in such a way that emotion created by imagining things has the same effect as emotion from a real situation.
Some quotes to support this:
(larger image…)
(larger image…)
(larger image…) | https://medium.com/137-jokull/how-emotions-work-to-create-preference-8f27c92d6558 | ['Helge Tennø'] | 2017-01-21 05:08:52.876000+00:00 | ['Perspective', 'Advertising', 'Psychology', 'Marketing'] |
Plandemic : Debunked | SCIENCE
Plandemic : Debunked
Stop Sharing Propaganda and Misinformation
Altered still from Plandemic meme, 2020
Go on social media right now and you are destined to run in to someone sharing or promoting a video that has gone viral called PLANDEMIC. They will parrot the points from this video like good little puppets, without taking the time to research who is giving them the information, or the credibility of the information being presented. This video is pure sensationalism and filled with outrageous lies and mistruths. And the information being shared is being shared by someone who is a known charlatan. If you believe in and promote this video, you’ve simply fallen for the trick.
First, who is Dr. Judy Mikovits? Google her name. What do you find? I think the first thing a person should notice that sticks out like a giant red flag is her connection to the anti-vaccine movement. This person has become a hero to a movement that claims vaccines are dangerous, cause autism, and kill people, thusly making her the hero of a movement that has caused a resurgence in long conquered diseases like the measles. The next red flag a person should pay attention to is the word DISCREDITED which always appears next to her name. When a scientist is kicked out of the scientific community and then becomes a champion for conspiracy theories, some might say this is anything but a coincidence. But unlike conspiracy theorists, who connect invisible dots no one else can see, there are dots to be connected here that are as large as planets.
As if the description of the video, which mentions a global conspiracy headed by the Rockefellers with ties to Nazi Germany in World War II, isn’t enough to clue people into its insane agenda, Plandemic follows the straightforward and rudimentary template of a conspiracy theory propaganda film, much the same as others that have come before it like Zeitgeist or Loose Change (videos claiming 9/11 was an inside job). It shows people talking in a darkly lit room making claims they present just on their own authority without any evidence to back them up. It does not give any other perspective. The music is a droning and somewhat ominous keyboard that provides an eery tone for the footage, making it appear to the viewer like they are seeing something that is supposed to be secret. The viewer is intended to feel this way because this is simply cut and paste manipulation tactics 101.
The person who made this movie is not exactly credible either. Mikki Willis has made quite a few questionable “documentaries.” In one of them, he concludes that a filmmaker named Daniel Northcott contracted leukemia and died due to finding a cursed Mayan bone…
Luckily for you, I’ve watched this 30 minute video so you don’t have to. Allow me to walk you through the claims it makes, and why they are blatantly incorrect and dangerous:
Dr. Judy Mikovits
The video claims Judy Mikovits to be “the most accomplished scientist of her generation.” NOT TRUE. She is known to be a fraud who manipulated laboratory conditions to produce her intended results. She was fired from the lab she worked in over concerns of integrity. She has repeated dangerous claims linking autism to vaccines after initially trying to link them to Chronic Fatigue Syndrome. While her early career does include some good work researching HIV and AIDS, this good potential was quickly undermined by her pursuits working for a private lab with a biased interest in Chronic Fatigue Syndrome.
The video claims Dr. Judy Mikovits revolutionized the treatment of AIDS in 1991 with her doctoral thesis. She did publish a thesis and at least one other AIDS study such as this one, but there is no evidence to support that her research “revolutionized” the treatment of this disease.
In this video, Mikovits claims that Big Pharma was behind her being jailed and slandered, and that she was arrested with NO CHARGES and her home was searched WITHOUT A WARRANT. These claims are outlandish, and easily proven to be FALSE. Mikovits was charged with theft. There was a warrant issued out of Washoe County, Nevada. These charges were related to her taking samples and equipment from the lab she worked in without consent, and over concerns that she would destroy evidence. On top of that, she was only in jail for five days, where she makes it seem like she was in prison for years.
The Mikovits SCIENCE scandal
Plandemic sensationalizes Mikovits as a victim by making claims that her published article in the magazine SCIENCE was something that shook the scientific community and that a conspiracy worked to take down her work. This is FALSE. Her paper was found to be manipulated and peer review studies could not replicate her findings. She worked under conditions that were heavily biased to link XMRV to Chronic Fatigue Syndrome because the owner of the institute she worked for had a daughter suffering with the disease. She manipulated the lab to present false positives. Thusly, her paper was retracted from SCIENCE. Mikovits thusly hates Dr. Anthony Fauci because he is the one who ordered the review of her research.
Dr. Fauci and the AIDS Epidemic
Mikovits makes the horrific claim that Dr. Anthony Fauci stole her research and suppressed it, thus leading to millions of deaths from AIDS. She is essentially blaming one person for the entire AIDS epidemic, and trying to claim she had come up with a miracle cure. There is no evidence for any of this. This is what we should realize happens to be a gigantic straw man, making an enemy with all the credibility of the boogeyman out of someone who in all actuality is a scientific hero, as Fauci, although not without his faults, has done considerable good in this arena. Look at her motivation here. It is easily seen through. She seeks revenge. Ironically, however, she seems to feel no remorse for claiming she worked in the lab that weaponized the Ebola virus and killed over 11,000 people, even though this is also easily proven to be a lie, as Ebola was discovered in 1976, well before she became a “scientist.”
Anthony Fauci. Fair Use.
The Bayh-Dole Act
Mikovits claims that the Bayh-Dole Act allows researchers to gain patents for treatments they discover, and that this is a conflict of interest for the scientific community. This is way more complicated than Mikovits wants us to believe, and again, her reasons are personal. The Bayh-Dole Act allows non-profit organizations to retain patents on their inventions even when found through federally funded research. However, it also allows the federal government to take control of the invention if it is concluded that it is necessary for the public good. During her research into CFS, Mikovits had a 1.5 million dollar grant that she tried to take with her when she left the institute.
XMRV
Plandemic alleges that Mikovits discovered XMRV and that it is linked to plagues responsible for millions of deaths. FALSE. She did not discover it. This was found by Dr. Robert Silverman. Silverman linked it to prostate cancer, and worked with Mikovits on her study of CFS. He later retracted his own research, admitting that he made errors. There is no evidence to support Mikovits’ claims that this virus is the root cause of everything she wants it to be the cause of. This is sensationalism.
Coronavirus Claims
Plandemic and Mikovits go on at length about their real agenda, which is to make dangerous and misleading claims regarding the coronavirus and the current global pandemic. In a nutshell, here are the claims made:
- Covid-19 had to be made in a lab. FALSE.
- Covid-19 would take 800 years to occur naturally from SARS 1. FALSE. Viruses mutate so fast that we have already seen numerous examples occurring in real time. This claim is what the scientific community would likely refer to as BULL****.
- The government is purposefully faking the Covid-19 numbers. FALSE. It is important to note here that if you die from pneumonia caused by the flu, the cause of your death is still the flu. This is a conspiracy theory that acts under the presumption that somehow what must be true in the United States has to be true all over the world. This is impossible.
- Italy was hit harder because it utilized flu vaccines in 2019 containing a strain of H1N1 common in dogs. ARE YOU KIDDING ME???????
- Hydroxychloroquine is the best treatment for Covid-19. FALSE.
- People should not be sheltering in place, nor should they be wearing masks or gloves. GIVE ME A BREAK.
- The MeDiA is Fake News, and people making this Fake News should be put in jail. This is the final point that convinces me this video was funded by the Trump Administration. It is nothing but a HUGE distraction from the myriad ways Trump has failed the American people in a time of absolute crisis, and created a scenario which has caused over 70,000 people in America to die, and still counting. The video runs the gamut of popular Trump talking points, and points fingers at people Trump likes to point fingers at. Coincidence? See how easy it is to jump to conclusions?
Plandemic makes so many false claims, and at such a fast pace, that it’s almost impossible to keep up with them. And that is part of the manipulation technique. It bombards and overwhelms you with information, making you more susceptible to believing it, because it states so many horrific things with such authority that you think: how could it not be true? But you need to stop and think. Who are these people? Why should I believe them over the experts? A few of the people speaking in this video are never even given credentials. They are just strangers wearing scrubs. I might as well put on a set of scrubs and make a video myself. Would you believe me if I did?
(EDIT: These “doctors” have been identified. Dan Erickson and Artin Massihi are Urgent Care workers in California and are frequently on Fox News spreading false information. The other “doctor” is a chiropractor named Eric Nepute from St. Louis who is notable for telling people that tonic water is the cure for everything that ails you.)
This viral video showcases the inherent dangers of propaganda and misinformation campaigns. It shows how easily people are ready to believe things because they want to believe them rather than the truth. It shows how people will search for explanations outside the realm of rationality or possibility when their senses are flooded with real fear of mortality. It is important that we as a people do not fall for such instinctual defense mechanisms. It is important that we vet the information we are given and that we refuse to spread this false propaganda narrative. STOP SHARING THIS VIDEO. STOP PROMOTING FEAR. STOP BEING A WILLING PARTICIPANT IN MAKING THE PROBLEM WORSE. Stop it. Just stop. | https://medium.com/politically-speaking/plandemic-debunked-403a6e7d3ff7 | ['Jay Sizemore'] | 2020-06-15 21:12:11.773000+00:00 | ['Propaganda', 'Plandemic', 'Science', 'Judy Mikovits', 'Coronavirus'] |
Android Image Color Change With ColorMatrix | Binary the Color
We have the primary colors of red, green, and blue, and the secondary colors of magenta, cyan, and yellow. We could convert all colors to the binary colors, depending on the dominant color of the pixel.
To do that, we need to come up with a formula.
But before that, one important note:
If a calculated color value is greater than 255, it will be capped at 255. If a calculated color value is smaller than 0, it will be capped at 0.
The above note is very handy for our case. We want to ensure that all colors are either 255 or 0. We could have a formula like this:
NewColor = 255 * OriginalColor - 128 * 255 [Cap at 0 to 255]
Let’s test the value:
- Assuming original color is 0, the new value is -32640. But since it is capped at a minimum of 0, it is 0.
- Assuming original color is 255, the new value is 32385. But since it is capped at a maximum of 255, it is 255.
- Assuming original color is 127, the new value is -255, which is converted to 0
- Assuming original color is 128, the new value is 0
- Assuming original color is 129, the new value is 255
So, we have proven that any original color of 128 or less is converted to 0, and any original color of 129 or more is converted to 255.
With that, we could have our matrix as below:
[ 255, 0, 0, 0, -128*255,
0, 255, 0, 0, -128*255,
0, 0, 255, 0, -128*255,
0, 0, 0, 1, 0 ]
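The article doesn’t include the code that applies this matrix, but as a rough illustration, here is how it might be wired up in Kotlin using the standard android.graphics ColorMatrix and ColorMatrixColorFilter classes (the function names here are only for illustration):
import android.graphics.ColorMatrix
import android.graphics.ColorMatrixColorFilter
import android.widget.ImageView
// Build the "binary color" matrix from above: each channel is scaled by 255
// and shifted by -128 * 255, so values of 128 or less clamp to 0 and higher
// values clamp to 255.
fun binaryColorFilter(): ColorMatrixColorFilter {
    val matrix = ColorMatrix(floatArrayOf(
        255f, 0f, 0f, 0f, -128f * 255f,
        0f, 255f, 0f, 0f, -128f * 255f,
        0f, 0f, 255f, 0f, -128f * 255f,
        0f, 0f, 0f, 1f, 0f
    ))
    return ColorMatrixColorFilter(matrix)
}
// Apply the filter to an ImageView that displays the source bitmap.
fun applyBinaryColor(imageView: ImageView) {
    imageView.colorFilter = binaryColorFilter()
}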
You’ll realize that the decision to convert to either 0 or 255 is based on the coefficient of 128 that we set. If we make that a variable, we could then adjust how bright or dim a pixel needs to be before it flips to the binary color, as per the demo below. | https://medium.com/mobile-app-development-publication/android-image-color-change-with-colormatrix-e927d7fb6eb4 | [] | 2020-12-24 13:39:44.573000+00:00 | ['Android', 'Android App Development', 'Mobile App Development', 'Programming', 'Design'] |
Social Media: The Death of Real World Interaction? | The digital age has been transformed into one surrounding social media and networking. With over a billion monthly active users on sites like Facebook alone, it is hard to argue against social networking being something ubiquitous. These social sites act as gatekeepers for the harboring of online connections between users. These forms of online communication are also not relegated to specific age groups either as more than 73% of online adults today (18-65+) are on some sort of social site (Social Networking Fact Sheet). As more and more people continue to find ways to communicate in the digital world, new issues arise, however, that have previously never been faced. These issues span major sectors of our cultures and societies, from the physical to the psychological. While new technologies are ushering in new mediums and outlets for interaction, old ones are being soon forgotten. In a world where we can get a message across to millions of people with a click of a button, the most fundamental type of communication, human face-to- face interaction, is becoming less and less important. Social media can have catastrophic affects on humans as social creatures if used to replace rather than enhance, provoking false senses of connection, psychological changes to how people approach relationships, and negative emotional responses to these types of communications.
Social media is often becoming a replacement for building and establishing connections in the real world and there is something fundamentally wrong with this mentality. In a study conducted by the Pew Research Center, 54 percent of those surveyed said they text their friends at least once a day, while only 33% said they talk face-to-face with their friends on a consistent basis (Antisocial Networking). This tells us several things. Direct interaction is not being seen as the best way to communicate anymore, especially among teens, and people are not putting as much value as they once did on face to face interaction. Psychologist Sherry Turkle puts it brilliantly in describing what road we are going down by spending all of our time on online communication when saying, “We are sacrificing conversation for mere connection” (Connected, but Alone?). We are sacrificing the experiences and understanding of real world interactions that are necessary in our development for a mere connection that is established in social media, one that is superficial. These connections that are no more than surface deep are becoming sufficient replacements for face to face interaction among social media users because they are easier to establish, but have dire consequences for social development in the future. Ms Turkle also details this phenomenon very well in her talk when saying, “….So from social networks to sociable robots, we’re designing technologies that will give us the illusion of companionship without the demands of friendship” (Connected, but Alone?). It is undeniable that we as humans look for companionship throughout our lives. After all, we are social creatures; however, a text saying “I love you” is not the same thing as if someone were saying it directly to another person. It does not provoke the same level of emotional attachment, and this among other things is what is wrong with social media and why direct interaction is still so vital in our lives. For adolescents especially, the skill of maintaining real world interactions (and it can really be considered a skill with how our society is coming to approach this type of communication) is the “bedrock” of development. Real world interaction allows us to understand each other profoundly and allows us to get to know each other down to the most fundamental parts of who we are. Social media and social connections just don’t have the same level of profound connectedness. This is why the false sense of connection that comes as a byproduct of social media is so dangerous to who we are and who we end up becoming. We are in fact becoming more “connected” through social media in the very sense of the word, but this “connection” is one that we don’t want to replace our real life connections with. Social media can truly have harmful effects on us psychologically if we use the medium to replace rather than enhance and if we do not realize that the connections we are establishing through these mediums are not suffice for our social development.
With the emergence of online communication there has also been a difference in the way we approach technology when it comes to relationships and companionship. Psychologically, we have a mentality different than that of past generations because of this new technology. In Sherry Turkle’s Alone Together: Why We Expect More from Technology and Less from Each Other, Turkle clearly lays this idea out when stating:
As infants, we see the world in parts. There is the good-the things that feed and nourish us. There is the bad-the things that frustrate or deny us. As children mature, they come to see the world in more complex ways, realizing, for example, that beyond black and white, there are shades of gray. The same mother who feeds us may sometimes have no milk. Over time, we transform a collection of parts into a comprehension of wholes. With this integration, we learn to tolerate disappointment and ambiguity. And we learn that to sustain realistic relationships, one must accept others in their complexity. When we imagine a robot as true companion, there is no need to do any work. (Turkle 60)
Although the robots mentioned in this piece of Turkle’s writing refer to physical technology, the point very much applies to how we see things when dealing with digital technology. We have adopted this notion that online means of connection can be substitutes for those connections that are so vital in the real world, when in fact it is simply not true. As per the Pew Research study and countless more like it, people are substituting this new form of communication for its real world counterpart, so this is not a psychological adaptation that is being taken up by a select few. With real life conversations, we learn to deal with the shortcomings and complexities of others, and vice versa. Every real life conversation is like practice or a warm up towards the game of social fluidity, if you will. This can simply not happen with any robot or any digital connection. With a digital connection you have all the time and energy in the world to project yourself as the perfect version of who you would like to be. No one has this luxury in the real world and avoiding real world interaction altogether is simply impossible. Social media has brought forth a drastic change in how we treat relationships. This mental adaptation to how we treat this form of online technology is not a path we should be going down, and one that can ultimately spell trouble for future generations. The wrong message is being created by users of these networks who think that it is alright to replace rather than enhance, when enhancement is what these networks were originally intended for. We have led ourselves to believe that online interactions themselves can be companions because in a way we feel more comfortable in these spaces. Several studies on the matter, however, have produced opposite results in how we feel emotionally when we use social media.
Social media is affecting its users not only in how they act socially, but in how they feel socially when it comes to using the sites. An online social connection is supposed to evoke sensations of emotional satisfaction, as this type of communication is still social in nature and we as humans get satisfaction from social activities, according to advocates of these online social systems. What has been seen, however, is that the more people use sites like Facebook, Twitter, Whatsapp, etc., the more anxious and emotionally taxed they become. A study of roughly 300 people by the Salford Business School found that these social networks are exacerbating negative emotions. The surveyors found that “If you are predisposed to anxiety it seems that the pressures from technology act as a tipping point, making people feel more insecure and more overwhelmed. These findings suggest that some may need to re-establish control over the technology they use, rather than being controlled by it” (Anxiety UK). More than half of the respondents reported having negative emotions after using social networking sites (Anxiety UK). This corroborates the idea that social media cannot be used to replace the interactions which take place in the real world. It may seem that these digital interactions are satisfactory on the surface, but there is something within us, much deeper than we can come to realize, that no matter how hard we try to indulge ourselves in our digital communications we cannot escape the truth that these interactions are not enough. Younger generations especially are vulnerable to the vortex that is social media. For the first time in history, face to face interaction has dropped to third behind texting and IM/FB messaging in the so-called “iGeneration”, or those born from 1990-1999 (Rosen). As these younger generations are nurtured around technology and social media, it becomes increasingly difficult to get out of a digitally driven social life. With severe emotional implications in using social networks, the vast amounts of time spent on these sites should not be promoted, especially among adolescents. There are other ideas that exist, however, for the benefits of having a social life online.
Some argue that the use of social media is a beneficial tool, allowing us to become more connected than we ever were by allowing us to reach a much greater audience. Others argue that social media allows people to build social lives where it is hard to build them in the real world. While these arguments can certainly hold true, the fact of the matter is that social media does not replace real world interaction, and while it is of benefit to have connections with dozens of people at once, this tool often becomes a replacement for real world interaction. What has been seen is that social media simply does not produce the same levels of psychological “well-being” as real world interactions have, which is why “direct” interaction is still so important, as shown by the Public Library of Science’s study. “Because we also asked people to indicate how frequently they interacted with other people “directly” since the last time we text messaged them, we were able to test this idea. Specifically, we repeated each of the aforementioned analyses substituting “direct” social interaction for Facebook use. In contrast to Facebook use, “direct” social interaction did not predict changes in cognitive well-being, and predicted increases (not decreases) in affective well-being” (PLOS ONE). The study clearly illustrates how we may perceive social media and “direct” interaction to be on equal ground cognitively speaking. Emotionally, however, the very quality of our ability to be satisfied is diminished with the use of social media and lack of real world interaction, which in turn can have harmful effects on how we develop socially. It is possible to have a sort of balance between real and digital social connections, but these online connections HAVE to be used to enhance, not replace, which has unfortunately not been the case, as corroborated by the aforementioned studies.
Digital technology is evolving at an alarming rate. Face to face interactions have become the third method of communication behind text messaging and IM messaging in just a matter of a few years (Rosen). Billions of people around the world are flocking to social networking sites in hopes of creating online connections. The desire, accessibility, and interest in these digital connections have put the most fundamental type of communication, face to face interaction, in its shadow. It is almost disturbing that humans can abandon such a vital form of our social makeup without thinking twice. We want to have social interactions, but we don’t want to go through the trials and tribulations of real world interactions. It is these complexities in interaction, however, that help us to adapt to different social situations in the future and something that social media is not preparing us for. Social media can be greatly beneficial if used to enhance those relationships which we hold dear in the real world, but more often than not what is being seen is that these real world relationships are being substituted altogether by a digital experience, so these benefits end up having no merit. People would rather text message someone before talking to someone face to face, and that says something about who we have become as a society. We prefer to be interacting with a computer screen or mobile device than interact with each other directly and there is something vastly wrong with this way of thinking. In the words of psychologist Sherry Turkle, in today’s world we prefer to be “Alone together” (Turkle). As direct interaction becomes less prevalent, a false sense of connection, negative psychological adaptations to how we approach digital technology and negative emotional responses to online outlets brought on by social media are having devastating effects on who we become as social creatures. | https://medium.com/musings-of-a-writer/social-media-the-death-of-real-world-interaction-5e2f33cfd8ee | ['Marcos Suliveres'] | 2018-02-12 18:29:43.242000+00:00 | ['Social Media', 'Marketing', 'Tech', 'Internet', 'Psychology'] |
Use C# and a CNTK Neural Network To Predict House Prices In California | The file contains information on 17k housing blocks all over the state of California:
Column 1: The longitude of the housing block
Column 2: The latitude of the housing block
Column 3: The median age of all the houses in the block
Column 4: The total number of rooms in all houses in the block
Column 5: The total number of bedrooms in all houses in the block
Column 6: The total number of people living in all houses in the block
Column 7: The total number of households in all houses in the block
Column 8: The median income of all people living in all houses in the block
Column 9: The median house value for all houses in the block
We can use this data to train a deep neural network to predict the value of any house in and outside the state of California.
Here’s what the data looks like. This is a plot of all the housing blocks in the dataset color-coded by value:
You can sort of see the shape of California, with the highest house values found in the Los Angeles and San Francisco area.
Okay, let’s get started writing code.
Please open a console or Powershell window. You are going to create a new subfolder for this assignment and set up a blank console application:
$ dotnet new console -o HousePricePrediction
$ cd HousePricePrediction
Also make sure to copy the dataset file(s) into this folder because the code you’re going to type next will expect them here.
Now install the following packages
$ dotnet add package Microsoft.ML
$ dotnet add package CNTK.GPU
$ dotnet add package XPlot.Plotly
$ dotnet add package Fsharp.Core
Microsoft.ML is the Microsoft machine learning package. We will use it to load and process the data from the dataset.
The CNTK.GPU library is Microsoft’s Cognitive Toolkit that can train and run deep neural networks.
And Xplot.Plotly is an awesome plotting library based on Plotly. The library is designed for F# so we also need to pull in the Fsharp.Core library.
The CNTK.GPU package will train and run deep neural networks using your GPU. You’ll need an NVidia GPU and Cuda graphics drivers for this to work.
If you don’t have an NVidia GPU or suitable drivers, you can also opt to train and run the neural networks on your CPU. In that case please install the CNTK.CPUOnly package instead.
CNTK is a low-level tensor library for building, training, and running deep neural networks. The code to build deep neural network can get a bit verbose, so I’ve developed a wrapper called CNTKUtil that will help you write code faster.
Please download the CNTKUtil files and save them in a new CNTKUtil folder at the same level as your project folder.
Then make sure you’re in the console project folder and create a project reference like this:
$ dotnet add reference ..\CNTKUtil\CNTKUtil.csproj
Now you are ready to add classes. You’ll need a new class to hold all the information for a single housing block.
Edit the Program.cs file with Visual Studio Code and add the following code:
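The original listing isn’t reproduced here, so the snippet below is a sketch of what that class looks like, based on the description that follows (property names are illustrative; the column indices follow the CSV layout listed above):
/// <summary>
/// One record from the California housing dataset.
/// </summary>
public class HouseBlockData
{
    [LoadColumn(0)] public float Longitude { get; set; }
    [LoadColumn(1)] public float Latitude { get; set; }
    [LoadColumn(2)] public float HousingMedianAge { get; set; }
    [LoadColumn(3)] public float TotalRooms { get; set; }
    [LoadColumn(4)] public float TotalBedrooms { get; set; }
    [LoadColumn(5)] public float Population { get; set; }
    [LoadColumn(6)] public float Households { get; set; }
    [LoadColumn(7)] public float MedianIncome { get; set; }
    [LoadColumn(8)] public float MedianHouseValue { get; set; }

    // The 8 input features the neural network trains on
    public float[] GetFeatures() => new float[] {
        Longitude, Latitude, HousingMedianAge, TotalRooms,
        TotalBedrooms, Population, Households, MedianIncome };

    // The label: the median house value expressed in thousands of dollars
    // (assuming the CSV stores the raw dollar amount)
    public float GetLabel() => MedianHouseValue / 1000f;
}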
The HouseBlockData class holds all the data for one single housing block. Note how each field is tagged with a LoadColumn attribute that will tell the CSV data loading code which column to import data from.
We also have a GetFeatures method that returns the longitude, latitude, median age, total number of rooms, total number of bedrooms, total population, number of households, and median income level of a housing block.
And there’s a GetLabel method that return the median house value in thousands of dollars.
The features are the house attributes that we will use to train the neural network on, and the label is the output variable that we’re trying to predict. So here we’re training on every column in the dataset to predict the median house value.
Now we need to set up a custom TrainingEngine which is a helper class from the CNTKUtil library that will help us train and run a deep neural network:
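The listing for this class is also missing here, so the sketch below is based on the two overrides described next. The subclass name and the exact CNTKUtil signatures are assumptions:
class HousePriceEngine : TrainingEngine   // TrainingEngine comes from CNTKUtil
{
    // The input: a 1-dimensional tensor of 8 float values,
    // matching what HouseBlockData.GetFeatures() returns
    protected override CNTK.Variable CreateFeatureVariable() =>
        CNTK.Variable.InputVariable(
            CNTK.NDShape.CreateNDShape(new[] { 8 }), CNTK.DataType.Float);

    // The output: a single float value, matching HouseBlockData.GetLabel()
    protected override CNTK.Variable CreateLabelVariable() =>
        CNTK.Variable.InputVariable(
            CNTK.NDShape.CreateNDShape(new[] { 1 }), CNTK.DataType.Float);

    // CreateModel is added in the next step
}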
Note the CreateFeatureVariable override which tells CNTK that our neural network will use a 1-dimensional tensor of 8 float values as input. This shape matches the 8 values returned by the HouseBlockData.GetFeatures method.
And the CreateLabelVariable override tells CNTK that we want our neural network to output a single float value. This shape matches the single value returned by the HouseBlockData.GetLabel method.
We’re almost done with the training engine. Our final step is to design the neural network.
We will use the following neural network to predict house prices:
This is a deep neural network with an 8-node input layer, an 8-node hidden layer, and a single-node output layer. We’ll use the ReLU activation function everywhere.
Here’s how to build this neural network:
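The code isn’t shown in this version of the article, so here is a sketch. The Dense() helper comes from CNTKUtil; I’m reading the diagram’s “8-node input layer” as a first Dense(8) layer, so that widening each layer to 64 later lines up with the 4,801 trainable parameters quoted further down:
protected override CNTK.Function CreateModel(CNTK.Variable features)
{
    // Two 8-node dense layers with ReLU, then a single linear output node
    return features
        .Dense(8, CNTK.CNTKLib.ReLU)
        .Dense(8, CNTK.CNTKLib.ReLU)
        .Dense(1);
}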
The CreateModel override builds the neural network. Note how each call to Dense adds a dense layer with the ReLU activation function to the network. The final output layer consists of only a single node without activation.
With the training engine fully set up, we can now load the dataset in memory. We’re going to use an ML.NET data pipeline for the heavy lifting:
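A sketch of that pipeline (the dataset filename is an assumption; the rest is standard ML.NET):
var context = new MLContext();

// Load the CSV file (filename is illustrative)
var data = context.Data.LoadFromTextFile<HouseBlockData>(
    path: "california_housing.csv",
    hasHeader: true,
    separatorChar: ',');

// Split into an 80% training and a 20% testing partition
var partitions = context.Data.TrainTestSplit(data, testFraction: 0.2);

// Convert both partitions to enumerations of HouseBlockData instances
var training = context.Data.CreateEnumerable<HouseBlockData>(
    partitions.TrainSet, reuseRowObject: false).ToArray();
var testing = context.Data.CreateEnumerable<HouseBlockData>(
    partitions.TestSet, reuseRowObject: false).ToArray();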
This code calls the LoadFromTextFile method to load the CSV data in memory. Note the HouseBlockData type argument that tells the method which class to use to load the data.
We then use TrainTestSplit to split the data in a training partition containing 80% of the data and a testing partition containing 20% of the data.
Finally we call CreateEnumerable to convert the two partitions to an enumeration of HouseBlockData instances.
Now we’re ready to set up the training engine. Add the following code:
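Something along these lines; the property names below mirror the description rather than the exact CNTKUtil API, so treat them as placeholders:
var engine = new HousePriceEngine
{
    LossMetric = "MSE",      // mean square error for the training and testing loss (placeholder name)
    NumberOfEpochs = 50,
    BatchSize = 16,
    LearningRate = 0.001
};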
We’re instantiating a new training engine and configuring it to use the MSE metric (= Mean Square Error) to measure the training and testing loss. We’re going to train for 50 epochs with a batch size of 16 and a learning rate of 0.001.
Now let’s load the data from the ML.NET pipeline into the neural network:
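Using the GetFeatures and GetLabel helpers from earlier (and assuming System.Linq is in scope), the call looks roughly like this:
engine.SetData(
    training.Select(v => v.GetFeatures()).ToArray(),  // training features
    training.Select(v => v.GetLabel()).ToArray(),     // training labels
    testing.Select(v => v.GetFeatures()).ToArray(),   // testing features
    testing.Select(v => v.GetLabel()).ToArray());     // testing labels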
The SetData method loads data into the neural network and expects training features, training labels, testing features, and testing labels, in that order. Note how we’re using the GetFeatures and GetLabel methods we set up earlier.
And that’s it. The following code will start the training engine and train the neural network:
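Based on the description, this boils down to a single call on the engine:
// Run the training loop; the engine records the loss per epoch as it goes
engine.Train();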
After training completes, the complete training and testing curves will be stored in the training engine.
Let’s use XPlot to create a nice plot of the two curves so we can check for overfitting:
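A sketch of the plotting code, assuming the usual using directives for XPlot.Plotly, System, System.Linq and System.IO (TrainingCurve and TestingCurve are the per-epoch loss histories mentioned below):
var chart = Chart.Plot(
    new[]
    {
        new Graph.Scatter()
        {
            x = Enumerable.Range(0, engine.TrainingCurve.Count).ToArray(),
            y = engine.TrainingCurve.Select(v => Math.Sqrt(v)).ToArray(),
            name = "training"
        },
        new Graph.Scatter()
        {
            x = Enumerable.Range(0, engine.TestingCurve.Count).ToArray(),
            y = engine.TestingCurve.Select(v => Math.Sqrt(v)).ToArray(),
            name = "testing"
        }
    });
File.WriteAllText("chart.html", chart.GetHtml());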
This code creates a Plot with two Scatter graphs. The first one plots the TrainingCurve and the second one plots the TestingCurve.
Both curves are defined as the loss values per training epoch. And note the Sqrt method to convert the MSE loss to RMSE ( = Root Mean Square Error).
Finally we use File.WriteAllText to write the plot to disk as a HTML file.
We’re now ready to build the app, so this is a good moment to save your work ;)
Go to the CNTKUtil folder and type the following:
$ dotnet build -o bin/Debug/netcoreapp3.0 -p:Platform=x64
This will build the CNTKUtil project. Note how we’re specifying the x64 platform because the CNTK library requires a 64-bit build.
Now go to the HousePricePrediction folder and type:
$ dotnet build -o bin/Debug/netcoreapp3.0 -p:Platform=x64
This will build your app. Note how we’re again specifying the x64 platform.
Now run the app:
$ dotnet run
The app will create the neural network, load the dataset, train the network on the data, and create a plot of the training and testing loss for each epoch. The plot is written to disk in a new file called chart.html.
Here’s what it looks like:
The training and testing curves stay close together with the loss slowly dropping with each successive epoch. There is no hint of overfitting.
The final RMSE is 80.64 on training and 81.59 on testing. It means my app predictions are roughly $80k off. That’s not a bad start.
(disclaimer: note that RMSE is not expressed in dollars so this is only a rough approximation of the average prediction error)
Now I’ll expand the number of nodes in each neural network layer to 64. I will change the CreateModel method in the TrainingEngine class as follows:
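Presumably something like this, with two 64-node hidden layers; that configuration matches both the 4,801 parameter count (8·64+64, plus 64·64+64, plus 64+1) and the later reference to "two 64-node layers":
protected override CNTK.Function CreateModel(CNTK.Variable features)
{
    return features
        .Dense(64, CNTK.CNTKLib.ReLU)
        .Dense(64, CNTK.CNTKLib.ReLU)
        .Dense(1);
}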
The neural network now has 4,801 trainable parameters.
And here’s the result:
The training process is more unstable now, with some epochs reporting a large testing loss. But the network always corrects itself in subsequent epochs and there’s still no sign of overfitting.
The final RMSE is now 66.63 for training and 67.64 for testing. A nice improvement!
Now let’s add another layer. With the extra layer the neural network now has 8,961 trainable parameters:
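That is, a third 64-node hidden layer (576 + 4,160 + 4,160 + 65 = 8,961 parameters), sketched as:
protected override CNTK.Function CreateModel(CNTK.Variable features)
{
    return features
        .Dense(64, CNTK.CNTKLib.ReLU)
        .Dense(64, CNTK.CNTKLib.ReLU)
        .Dense(64, CNTK.CNTKLib.ReLU)
        .Dense(1);
}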
And here is the result:
The training curves look about the same, with a final RMSE of 65.01 for training and 65.55 for testing. Even though I doubled the number of trainable parameters in the network, the results hardly improved.
Let’s do one more experiment. I’ll remove the extra layer and increase the number of nodes in each layer to 128. The neural network now has 17,793 trainable parameters:
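Sketched as two 128-node hidden layers (1,152 + 16,512 + 129 = 17,793 parameters):
protected override CNTK.Function CreateModel(CNTK.Variable features)
{
    return features
        .Dense(128, CNTK.CNTKLib.ReLU)
        .Dense(128, CNTK.CNTKLib.ReLU)
        .Dense(1);
}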
And here are the results:
Again the curves look unchanged, with a final RMSE of 64.73 for training and 64.84 for testing.
You can see that adding more hidden layers or increasing the number of nodes per layer is not improving the final results. We’re not getting the loss below 64.
In machine learning we’re always looking for the most simple model that provides the most accurate predictions, because if we make the model too complex it will tend to overfit.
So it looks like for this dataset the optimal neural network that delivers the best tradeoff on accuracy and complexity is the second one we tried, with two 64-node layers.
So what do you think?
Are you ready to start writing C# machine learning apps with CNTK? | https://medium.com/machinelearningadvantage/use-c-and-a-cntk-neural-network-to-predict-house-prices-in-california-f776220916ba | ['Mark Farragher'] | 2019-11-19 14:58:49.623000+00:00 | ['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Csharp', 'Deep Learning'] |
For Those Who Try | For those who try to write something meaningful, something which will resonate with the reader, take heart. Prepare yourself. It’s another day, possibly one of frustration and sadness, because you just looked at your stats and realize you have fewer reads than the day before.
Perhaps you’re sitting in front of your computer at the moment, staring at white space, wondering why you can’t think of anything to write. Maybe you’re telling yourself it doesn’t matter because no one’s going to read what you write anyway.
For those of you experiencing this right now, it’s not over yet. Now is not the time to quit. Now is a momentary splash of worry and difficulty which will pass. Maybe it will come back tomorrow, perhaps not, but it shouldn’t be your focus. For those of you who try, now is the time to do just that.
Try again.
What we do is challenging. We all know it because we live it every day as we try to make our way forward, as we attempt to claw our way to the top. We struggle with setback after setback. And for all our hard work, we often achieve lackluster results. It’s just so damned frustrating most days, isn’t it? Up to this point, you’ve probably thought about moving on, searching for something else to nurture your creative spark, what, at least a thousand times?
We all have.
But the dream still lives inside of us. It lives in you, doesn’t it? We all have that burning desire that won’t go away, the constant urge which prompts us to sit down and write something.
Even if we know the chance of what we do going viral is maddeningly slim, we have to write. For those who try, it’s never been a conscious decision, a purposeful choice to become a writer. It’s not a choice. It’s a dream, a firm resolve, and an understanding of what and who we are. If you’ve not thought of yourself in this light before, remember this, hold on to this.
You’re a writer.
And you may be thinking based on the poor results you’re currently experiencing, you probably aren’t a good one, but that’s okay. We all start the same way; we all share the same pains, the same frustrations, the same heartbreaks. It’s how you deal with them that makes the difference. Remember, it’s not over yet.
Unless you decide it is.
For those who try, the difficulties and challenges experienced will never go away, and if we do it long enough, we eventually figure it out. We learn that we’re only as good as, or as bad as, the last thing we wrote. Even if the last thing we wrote was yesterday. And when we sit in front of that white space, we often remember all those dreams which prompted us to begin this writing journey in the first place. We remember and compare them to our current progress.
Most times, it saddens us. Sometimes it infuriates us; it makes us so angry we want to just check out and quit. But we don’t stop. Instead, we try again. Somehow an idea manages to bubble up from where we neither know nor care.
All we know is that it’s something we want to say. It’s something compelling us to speak, and so we say it. We tell the story the best way we can. We forget all the challenges which have knocked us about all these days, and we write.
Another day, another story, and without us realizing it, another life lesson learned.
For those who try, it’s essential to remember while the yield of our harvest may be poor most days, we must continue to plow the fields. We must continue to sow these fields with our writing. Though not backbreaking work, I’m sure to all those who try, it seems that way most days.
At the very least, it’s mind-numbing, isn’t it? Most days, it seems as if it’s mind-numbing, grueling tedium as we lay down word, after word, after word. Somedays it all becomes an endless stream, a blur of things written and things yet to be written. And yet, we continue to plow those fields, don’t we? So, if you take away nothing else from this piece, remember this.
It’s not over yet.
There’s still time for you to write something, time for you to tell us another story, give us another opportunity to think differently about something, provide a perspective only you can provide.
For those who try, it’s what we do.
Thank you so much for reading. You didn’t have to, but I’m certainly glad you did.
Let’s keep in touch: paul@pgbarnett.com
© P.G. Barnett, 2020. All Rights Reserved. | https://medium.com/the-top-shelf/for-those-who-try-153f1acff8f9 | ['P.G. Barnett'] | 2020-09-09 13:51:53.507000+00:00 | ['Self-awareness', 'Self', 'Writers On Medium', 'Awareness', 'Writing'] |
Why design systems fail, and how to make them work | For a short period of time I worked on a design system at WebNL, an agency specialised in web design, development and marketing based in the Netherlands. Our design system was aimed at improving the bridge between the design and development of the products we’re making.
In this blog I will explain how we did this, and why it didn’t work eventually. Hopefully this will prevent others from making the same mistakes we made, even though we’ve learned a lot from them.
The beginning of the journey
When I started working at WebNL, one of my first tasks was to look into the possibilities of improving the transition between design and development of web products. Traditionally this has been a process of developers ‘copying’ the mock-ups made by designers.
The designers did their work primarily in Sketch. They translated their vision of the product into static designs. The developers then wrote HTML, CSS, Javascript and PHP to convert these static designs into a working product.
One of the biggest ambitions inside the company was to find a way to make this process less time consuming as the work was basically done twice.
So the first step I took was to find out more about ways in which this process of ‘copying’ could be automated. I looked into automation in Sketch and found out there were plugins that used the Sketch API for this purpose. But the plugins I found lacked reliability and I wasn’t really interested in writing my own Sketch plugin.
I looked further and discovered that Sketch had recently opened up their file system format so their files could be used in other tools. Every property of every group and layer in the design was now easily accessable outside of Sketch and I quickly realised that I could use this to translate these properties into working products automatically.
Building the first prototype
After my discovery I quickly made a proof of concept. It was a very simple prototype that could turn a Sketch file into a working website. A Sketch file is basically a zip file consisting of images and json files. The prototype translated these json files into a Javascript array so it could read all the properties stored inside the Sketch files and use it to generate a standard website.
Our developers were using a centralized file in which SCSS variables were stored. These variables controlled visual aspects of elements like colors, typography, buttons, and form elements. I took those elements and build them into a library of Sketch symbols which could be edited by designers. Designers could then use these elements as a starting point for new projects.
When the visual appearance of these symbols had been changed, I could take the Sketch file and use it to create a new file with variables. Designers could now control these elements in the final product.
/**********************
Colors
**********************/

$brand-primary: rgb(229,0,68);
$brand-secondary: rgb(172,171,171);
$brand-tertiary: rgb(0,0,0);
$brand-lightest: rgb(248,249,250);
$brand-darkest: rgb(52,58,63);

$brand-success: rgb(85,184,144);
$brand-error: rgb(229,0,68);
$brand-warning: rgb(255,190,61);
$brand-info: rgb(23,162,184);

$text-color: black;
There were also some drawbacks. We could only translate properties of the design into code if they had been standardised. Designers could change properties like colors, font properties, borders and shadows, which were then translated into working code. But the layers and symbols they added would not be translated.
That didn’t seem like a problem. When designers would come up with new properties or elements, developers could just write new code to extend the existing code. I also started making more complicated elements like cards and menus in a standard way to make sure designers would not have to come up with new properties or elements as much as before.
A modular approach for the symbols in our design system
My first prototype got everyone at the company excited. The standardised way things worked had the potential to speed up the workflow of designers and developers alike. While designers could use the standardised elements as templates to make a jumpstart, developers would spent less time on getting things right.
We got permission to spend 100 hours as an investment for future projects. I used these hours to make more elements, and translate them into code. A frontend developer worked alongside me to build the same elements as HTML with SCSS properties.
When we were done, we started using the design system in production. The results were still moderate in the early phase, but showed a lot of potential.
Realising we were building a design system
Ironically, when we started to work on the system we didn’t know what a design system was. We had never heard of it before until our boss introduced the term design system as an existing thing, and as a way to give the project a noticeable name.
We named our project the WebNL Design System, and I started to look into other companies that used design systems.
During this time I read about Brad Frost, a pioneer in design systems. He talked a lot about them and he was even writing a book about it. From his book I learned about atomic design systems, a concept I implemented in our design system.
Atoms, molecules, organisms, templates and pages
I also read about how Airbnb was automating the design process. They used intelligent image recognition to analyse sketches made on paper and translate them into working prototypes immediately. I showed a video of their work inside my own company and that caused people to be even more excited about the potential of design systems.
Another example from Airbnb was react-to-sketch. Airbnb uses it to generate Sketch symbols from existing React components. They can use the react components as a single source of truth like this. For us that didn’t work because we started a lot of new projects where the Sketch designs were the source of truth. So instead we tried to generate code components from existing Sketch symbols.
This difference also exposed another difficulty we had compared to other companies. They usually had a single brand, providing a single service through a few digital products. But we were making products for a wide range of brands providing even more services. So our design system had to be more flexible.
Vox Media has an excellent example of a flexible design system that can be used across brands. To me this proves the feasibility of such a design system, even when it will still makes things hard when trying to automate the workflow between design and development.
Fixing bugs in production
After the first hype about our design system, things started to head south. We used the system extensively, but never without trouble.
We decided to use the system in short sprints where products were made within one week, because that was were we needed it the most. But on several occasions, especially in the beginning, we had to solve issues during the sprints.
Instead of spending time on production we had to debug the system and produce bugfixes. Sometimes the designers had just broken things while editing the Sketch file. During those first trials I worked on getting fixes into the system and making things more enduring so designers couldn’t accidentally break things.
And it worked, the system became better and more reliable. But the system still wasn’t meeting up to expectations.
Managing expectations
Beforehand we didn’t expect that having a design system that could automate things would have us spent less time on projects. The time we saved could be spent as extra time on our projects, we reasoned. But after a while, a product manager still mentioned that we weren’t spending less time.
So not everyone was expecting the same thing from our design system. But things were also not exactly as how we expected them to be. This was because there were still a lot of bugs, not related to the design system but related to the projects. So any time left at the end of the projects would be spent on solving bugs instead of nice features.
In a way this was not what anyone had expected to happen. But I didn’t see this as a problem. We just had to make the system more efficient so more time could be freed up, and less bugs would be produced.
Error handling
Yet this wasn’t were our problems ended. Even though the system had become more reliable, the designers were still making mistakes while building their Sketch files. These mistakes didn’t result in breaking the system anymore, because I had set up error messages that could be analysed by the developers.
My idea was that these messages would cause developers and designers to talk more about problems together so they would understand each other better. But while they were indeed talking more to each other, it didn’t help them understand each other. The designers still didn’t understand the design system.
Eventually I even heard some developers who weren’t directly working with the design system talk about how it didn’t work because designers weren’t using it right. I realized that I had to spent more time explaining the system to designers and co-creating with them.
Teaching designers about design systems
I had already spent a lot of time with developers. But I hadn’t spent much time explaining the design system to designers, assuming they would intuitively know how to use it. This was a mistake.
After that realisation I spent a lot of time teaching our designers how to use the design system. I found out that they had some understanding about components, but they just weren’t used to working with nested components, naming conventions, and working with layer and text styles.
This caused them to ignore some core Sketch principles that the design system relied upon. But moreover, they also weren’t used to working with design templates.
Before the design system was created they always started out with a blank page, using ideation to create new and innovative designs. They wanted each design to be unique and incomparable to another. Even though the design system had been built upon patterns used in their previous work, they wanted to deviate from that work.
This caused headaches for developers, because they now had to do more work instead of less to comply with the wishes of designers.
The end of our Design System
We did eventually reach a point where designers understood enough about Sketch principles and design systems so they could use it without much trouble.
But by the time we reached this point, an unexpected decision was made to completely overhaul our standard codebase. There would be no central file with SCSS variables anymore, making it harder to generate SCSS variables from our Sketch files. All of the existing code components were also put out of order, they would all have to be rebuild before we could automate them again.
At the same time, Invision launched their Design Systems Manager (DSM). This was a product which had become available in beta a short while after we had made the first prototype. DSM offered an API to translate designs into SCSS variables, like we had been doing ourselves before. Now it was out of beta and could be used in production.
Even better, it offered a Sketch plugin for designers which made it easier for them to work with the components and styles used in our design system. We also decided that it would be best to switch to their API for future use, as we had found out that Sketch was continuously updating their file format, making it time-consuming to maintain generating SCSS variables ourselves.
These events finally made us decide to pull the plug on the design system. We would have to rebuild the design system in a new way to make it automated again, and we just didn’t have the time at that moment. Instead we focused on smaller improvements with Invision DSM and our new codebase.
Takeaway
I still think design systems can do a lot of good, and at WebNL we are also still working on new design systems for clients. They are just more customized now and less automated. But there are some lessons we have learned that everyone should take in mind before creating their own design system.
Manage expectations. Don’t make yourself or other people think your design system will change the world by saving you time. Instead, focus on things that are really important, like designers and developers understanding each other.
Don’t do everything at once. At the start of your journey, it can be tempting to try and make a complete design system. This won’t work as you’ll have to explain and decide upon everything you make together with other designers and developers. Instead, try to take small steps over time.
Design for people. The biggest mistake I made is thinking that I could improve the connection between designers and developers by putting a system between them. It’s much better to actually get them in a room and have them making decisions together, even when this process takes a lot of time and effort.
I hope these lessons can help you avoid the mistakes we made during our first attempt at building a design system. Hopefully I will be able to share about our new design systems workflow in the near future. I’m also curious to know about how other people use design systems in their workflow. Leave a comment if you’d like to share your experience or if you have any questions. | https://uxdesign.cc/why-design-systems-fail-and-how-to-make-them-work-6f6d812e216d | ['Daniël De Wit'] | 2019-01-03 17:40:56.407000+00:00 | ['Development', 'Design Systems', 'Design', 'Sketch', 'UX'] |
Self Esteem and Expectations | Self Esteem and Expectations
Balancing on your pedestal
Photo by Timothy Dykes on Unsplash
Growing one’s self-esteem is like sorting the wheat from the chaff. A sieving system where beliefs and actions and habits are brought to light to be examined and queried. Does this empower me or disempower me?
Do I want this or not?
Perhaps it is maturity but lately, I’ve been more discerning. I’ve been questioning the very atmosphere of expectations — social and personal — and whether I need to obey or not.
I’m cherry-picking.
When I look at the expectations in my hands I realise they don’t belong to me — they disintegrate in my calloused palms as if nothing more than powdery ash. Like little delusions. Some of them mass delusions.
Even traditions are not making much sense.
All those expectations are like nails snagging my lace dress.
I have a long friendship with a man who is so audacious it's impressive. He does not bend to anyone. He stands on a lot of toes and puts many a nose out of joint. Other men object to his Alpha maleness but my friend doesn’t care for those definitions. He genuinely doesn’t care what people think of him. Impervious. He defines himself. He’s not anchored by social mores. He’s authentic and game. He does whatever he wants as if his life depends on it. Ten years ago it made my jaw drop. I had the epiphany — If he can do whatever he wants and get away with it. I can do whatever I want!
But knowledge and implementation are two different things. Repetition helps it integrate. It’s been a decade of chipping away, practicing using my voice, saying no, putting my well-being first, recognising the power-point where I drop the ball and rollover.
I jot notes in the margins of my experience like an actor learning a new script. How interesting I think; next time I’ll do it like this. I scribble — poise.
As your sense of loyalty to self increases you kind of stand up inside yourself. Less tempted to spend energy being sparkly at a party preferring instead to observe. Not interested in proving anything to anyone anymore but in self-improvement to see what more you can learn, experience, attempt and master. Noticing the point where you’d usually appease and instead - stand your ground.
Many expectations I now dismiss with a flick of the wrist.
I have other things to do.
I have a high price on my head.
To free ourselves of the expectations of others,
to give us back to ourselves there lies the great,
singular power of self-respect.
- Joan Didion
Yesterday a man barrelled towards me on a city street and instead of making way for him I continued cutting my own line — at the last moment he had to move his body so as not to bump me. Someone did an experiment about that once. How women move aside for men.
I’ve been limiting the number of times I say sorry. Reserving the word for when I mean it, instead of saying it as a means of deferment.
Self-respect is a discipline, a habit of mind that can never be faked
but can be developed, trained, coaxed forth.
- Joan Didion
Standing on the pedestal you make for yourself means you can see clearly. You’re braver. The little knocks and nicks and snide remarks don’t penetrate. It means shining your own light and not becoming desolate over failed attempts. It means not getting sucked into snake dens. It means you like yourself a lot; you make no apologies for who you are or how you live or what you choose. It means throwing out the labels and categories. You don’t accept criticisms from people who have no skin in the game. You can divine motives by feel and vibe and don’t sacrifice anything to unworthy causes.
You make your own decisions. You stand on the rock of yourself. You realise and it sinks in: if it's meant to be it's up to me. You care less for the opinions of others. You take yourself in hand. You stop worrying about being told off.
You start eating with your hands.
If you don’t care to be liked, they can’t touch you.
- Navel Ravi Kant He that respects himself is safe from others.
He wears a coat of mail that none can pierce.
- Henry W. Longfellow Don’t throw your power away like a slut.
- Anon
Thanks for reading,
Louise | https://medium.com/illumination/self-esteem-and-expectations-be6895aa708e | ['Louise Moulin'] | 2020-12-07 01:54:15.298000+00:00 | ['Self-awareness', 'Self Love', 'Motivation', 'Life', 'Self Improvement'] |
Jake Reisch ’15 makes headphones that improve seniors’ lives | As co-founder and CEO of Eversound, Jake Reisch ’15 leads a team that creates wireless headphones designed to improve quality of life for older adults and ease communication between residents of senior living communities and their caregivers. To date, the technology has been adopted by over 500 senior living communities. Eversound’s founders were named to the “Forbes 30 Under 30” list for consumer technology in 2018.
Eversound headphones can be used for group activities, communication with caregivers, music therapy, and visits with friends and family.
What does your business do, and what problem does it solve?
Eversound’s goal is to improve health outcomes and quality of life for seniors in elder care communities. Social isolation among seniors is linked to higher rates of mortality and greater health care costs. Eversound makes easy-to-use headphones that enhance focus and engagement for seniors with hearing loss or dementia. They can be used for group activities, communication with caregivers, music therapy, and visits with friends and family. We also provide member communities with a digital library of activities they can use to stimulate social interaction.
How did you get the idea for your business?
My co-founders and I had watched as our loved ones’ senses declined and they struggled to remain connected to the world around them. We wanted to create something that would help. Many of Eversound’s users have lost their spouses or their children, and some of them no longer have anyone visiting them. We see ourselves as advocates above all else.
“Social isolation among seniors is linked to higher rates of mortality and greater health care costs…. We wanted to create something that would help.”
Starting a business is a big risk, especially straight out of school. How did you decide to take the risk?
My last semester at Cornell, we talked to people in over 100 senior living communities about the idea for Eversound. Almost no one believed in us. John Alexander ’74, MBA ’76, a former Cornell Entrepreneur of the Year, was the only person who saw the potential. He gave us a few words of encouragement, our first investment and countless hours of mentorship. That really helped to get us off the ground.
How has your experience at Cornell impacted how you approach business?
Going through Cornell’s eLab accelerator helped to structure my thinking. It forced me into a customer-centric development approach and taught me how to address each problem we faced. Ken Rother, Tom Schryver, Zach Shulman, Brad Treat and Deb Streeter were all critical figures in my learning experience at Cornell.
To date, Eversound’s technology has been adopted by over 500 senior living communities.
What has been your proudest moment as an entrepreneur? Why?
I recently had a check-in with one of our most valuable partners, who has Eversound in use in over 50 assisted living communities. She told us about the impact we were having on the residents and the staff’s lives, and how amazing their Eversound account manager was to work with. It was indescribably rewarding to think back on where we started and to hear the impact we’re having now.
“Going through Cornell’s eLab accelerator helped to structure my thinking. It forced me into a customer-centric development approach and taught me how to address each problem we faced.”
Who or what inspires you?
Many of my mentors inspire me with their good will and authenticity. It takes a lot of dedication to accomplish what our investors and advisors have accomplished. Life is short, and I firmly believe that rallying people around important missions can make a difference.
If you had one piece of advice for someone just starting out, what would it be?
Force yourself into the habit of monthly reporting with a consistent and focused metric dashboard. After you take your first funding, update your investors monthly, without fail, on the good and the bad. It builds rapport, creates accountability and forces you to look soberly at the progress you’re making over time. | https://medium.com/cornell-university/jake-reisch-15-makes-headphones-that-improve-seniors-lives-b837a6d96161 | ['Cornell University'] | 2019-12-18 20:42:29.832000+00:00 | ['Cornell', 'Technology', 'Cornell University', 'Startup', 'Entrepreneurship'] |
Day One Python Engineer | __author__ = “Alex Varatharajah”
class SoftwareEngineer:
"""So you made it! Through all the applications, all the tests and all the interviews… Welcome to ONZO! I have been here just over a month now as a Software Engineer, and I'd like to do a short retrospective on what my first month has been like."""
def __init__ (self, day_one_python_engineer):
What better place to start than your first day? It is highly likely that you will be reading a “Day One Python Engineer” or an equivalent guide for your role on Confluence. You’ll be setting up your virtual environments, installing your IDE’s, setting up Git and cloning repositories, creating a branch, writing tests for a codebase you have no idea about, raising a PR…Googling what PR means… the point is it can all seem very overwhelming. That’s ok it is normal. If you manage to raise a PR on your first day kudos :+1: (don’t forget to install slack).
def my_first_few_days (self, stand_up):
I came to ONZO with a small amount of Python from my previous place, and a passion to learn the fundamentals of programming best practice. Having not worked at a company that follows agile principles before, I have found it a real breath of fresh air to come into a team with such a good philosophy around it. The first few times, Stand-Up felt a bit intimidating, as you are speaking in front of a whole new group of people, but once you realise that everyone is on the same page, ready to listen and willing to help, it becomes very natural… it also helps that everyone is so friendly and welcoming. My first few days were spent pair programming with various other engineers in the team. This was great because I got hands-on with our codebase while also having someone to discuss the structure with, and why we do certain things. I would thoroughly recommend doing this (though you may not have a choice); it is an easy way to share knowledge and to catch flaws in the code you are writing at the source, before it goes into production.
def my_first_ticket (self, sprint_before_you_can_walk):
Pretty soon after, I picked up my first ticket, which was a design for a postal-code-to-lat-long lookup. This involved creating a Confluence page to discuss ideas about how best to attack the problem. Once this was done, it was sent round the team for comment before organising a meeting to discuss the ideas and write stories to fulfil the epic. Initially, I thought “what have I got myself in for?”, but actually it has given me the best opportunity to get stuck into a small cog of the machine, start playing around with ideas and learn a lot in a short amount of time.
def day_to_day (self, pythonic_python):
Since my first ticket, I have been helping mostly with the Python side of projects. Improvements to algorithms made by our data science team need to be made available for client use. They are written in Python, so a big focus of mine is helping to get them into production. One of my personal interests (and objectives) is to make the Python side of data science as efficient as possible. At ONZO, I have been allowed to research new technologies and given the opportunity to remove inefficiencies in our Python codebase. A lot of my days have been spent refactoring code and writing functions and unit tests for those functions, while also making sure the functionality still passes the original unit tests.
def __del__ (self, pros_and_cons):
If you are working at ONZO, you’ll be working in a very relaxed environment. Everyone is very friendly and supportive. I feel like I have been here longer than I actually have, which can only be a good thing: it means I have been embedded quickly.
Pros:
Agile principles in action.
You are trusted; you are responsible and accountable for what you are doing. However (to quote Dumbledore), “help is always available at ONZO for those who ask for it”.
So much Python work, I’m never bored
Knowledge sharing sessions
Regular 1–2–1’s with managers and colleagues
Flexible working
Caffeine until your brain explodes
Dangerous amounts of Tunnock’s Caramel Wafers
Cons: | https://medium.com/onzo-tech/day-one-python-engineer-24214ef2f8d | [] | 2018-12-03 15:51:54.404000+00:00 | ['Python', 'Utilities', 'Energy', 'Software Engineering'] |
What an Ostrich Can Teach Us About Gut Health | When we consider the gut microbiome, we usually think of two things: gut-related diseases, such as irritable bowel syndrome (IBS), or probiotic supplements.
(At least, this is what I think about, as a microbiome scientist. Most people that I speak with are not microbiome scientists, and they just give me weird looks when I begin enthusiastically speaking about the gut microbiome.)
But the gut microbiome — the collection of trillions of bacteria, comprised of hundreds of different species, all living in an uneasy balance with each other inside our intestinal tract — isn’t just for humans. Dogs, cats, mice, cattle, and just about every other animal on the planet has their own gut microbiome.
Because we generally care more about curing irritable bowels in people, rather than in mice, most studies of the gut microbiome tend to focus on humans.
Recently, however, researchers in Sweden published a paper looking at the impact of the gut microbiome on juvenile mortality — in ostriches.
Here’s what they found.
Not Only the Good Ostriches Die Young
An ostrich can live for a very long time — if it makes it past childhood.
Most adult ostriches live for at least 40–45 years, with some making it as old as 75 years. This is quite long-lived; ducks and chickens will live 5–10 years and a goose lives 10–15 years, in comparison. (Of course, some other birds, such as parrots, can live even longer — up to 75 years in captivity.)
However, many ostriches don’t make it to even their first birthday. In one study of more than 2,500 chicks, more than three-quarters of them — 78% — didn’t even survive 90 days beyond hatching!
This is a concern for us, not just because it’s sad to know that most baby ostriches don’t make it, but because ostrich farming is a profitable industry. Ostriches have several advantages over raising beef cattle or other birds:
They adapt well and need little shade or protection from the elements;
Unlike chickens, ostriches are capable of aggressively defending themselves from predators;
The meat, eggs, skin (for leather), and feathers of ostriches are all sold for excellent prices;
Ostriches produce more meat for the amount of consumed resources (higher efficiency) than cattle.
If you’re a farmer, a herd of ostriches could be big bucks — if you can keep too many chicks from dying.
Experiments have looked at different ways of incubating and raising ostriches. A more intensive and nurturing system produced fewer dead birds, but it’s not a perfect solution.
So what kills baby ostriches? One possibility — it could be related to their gut microbiomes.
Is an Out-of-Whack Ostrich Gut Linked to Chick Death?
In the research paper, Videvall and her team used a method called 16S rRNA sequencing to look at the composition of different bacteria in the guts of baby ostriches at various time points.
16S rRNA sequencing is a bit like scanning barcodes at a grocery-store checkout line; it looks at a specific gene, called the 16S subunit, in order to identify different bacteria. Each family of bacteria has slight variations in its 16S gene that differentiates it from other types of bacteria. By using computers to match the 16S gene of all the bacteria from a sample to a reference, we can quickly determine which bacteria are present in a sample.
“You’re taking my poop… for WHAT?!?” Photo by Krzysztof Niewolny on Unsplash
The researchers used this 16S sequencing method to take “snapshots” of the gut microbiomes of baby ostriches as they grew — and when they died, from autopsies of the dead birds. They then compared the microbiomes of the surviving ostriches with those of the chicks that died during the first 90 days of life.
What did they find?
Overall, individual birds who passed away had drastically reduced microbe diversity — that is, there were way fewer species of bacteria in their guts. If the healthy birds had an Amazon rainforest of different species thriving in their guts, the sick birds had a cornfield — far fewer different organisms.
Additionally, some species of bacteria seemed to be more prevalent in sick birds, while other species were more present in the healthy birds. It’s not just how many bacteria were present, but the right ones, instead of the wrong ones, seemed to also play a role.
One interesting conclusion that the authors found was not from the birds directly — but from their environment. How did the bacteria that seemed connected with early death get into the birds?
Sequencing of the environment showed that they didn’t come from the food, water, or soil where the birds were raised. Instead, it seemed like small numbers of these “bad bacteria” were in the baby birds from the beginning.
In the healthy birds, other species crowded out these bad bacteria so that they couldn’t take over. But in the sick birds, the lack of diversity let the bad bacteria proliferate, taking over that gut environment.
There’s Always More to Study
Of course, this study isn’t putting the nail in the coffin for the question of why so many baby ostriches die. There are still a bunch of outstanding questions, including:
What mechanism makes some of these bacteria bad, and what’s different in the good bacteria?
Can we reduce the loss of diversity in sick ostrich chicks?
Is this causative? In other words, we see low gut diversity in the sick birds, but is that what’s responsible for their deaths?
When and how would we intervene to restore a more diverse gut microbiome with healthy bacterial species?
These questions aren’t just relevant to ostriches — we have the same questions with many of the bacteria that we see in human gut microbiomes. We’re working on answers, but they’re not present yet.
Interestingly, many of the “bad bacteria” seen in the low-diversity guts of sick ostrich chicks are closely related to bacteria that are found in humans — and that are associated with negative outcomes. Perhaps if we can better figure out how to understand and improve the human gut, we can do the same for ostriches! | https://medium.com/a-microbiome-scientist-at-large/what-an-ostrich-can-teach-us-about-gut-health-cb56e71ede90 | ['Sam Westreich'] | 2020-12-14 12:12:31.358000+00:00 | ['Biology', 'Environment', 'Science', 'Farming', 'Microbiome'] |
You shouldn’t cheat on your partner and this is why | by: E.B. Johnson
Our relationships form a cornerstone of our happiness, but when they become corrupted, the waters get muddied. Life is complex, and it’s hard to stay centered and focused on one another at all times. We drift and our affections and our attentions drift too. Things go wrong and we start to wonder if the grass wouldn’t be greener in some other pasture.
No matter what’s going on in your relationship, infidelity is never acceptable. When we commit to our partners, we promise to do the right thing when it comes to their emotions and our needs. That’s not to say that you’re doomed to spend forever with someone who is no longer right for you. But it does mean you have to do the hard work when difficult temptations or difficulties come along.
Commitment is important in all stages.
Many dream of building a life with someone, but they don’t always consider what setbacks or challenges can come with that. It’s not all picket fences and butterflies. Relationships are hard work, and that hard work doesn’t always pay off. Sometimes we run up against divides that push us away from one another. In those instances, we can become tempted to cheat. This is never the answer, though. Commitment remains important even when things are bad.
When you commit to be in a relationship with someone, you commit to do right by them — even when things are falling apart. You are allowed to change your mind. You are allowed to want out of your relationship, and you’re allowed to fall in love with someone else. Things change. People change.
What you don’t have the right to do is harm your partner or lie to them. Our lives are the sum of the decisions we make. Committing to someone is making a promise to them, and among those promises is telling the truth. If you’re tempted to cheat on your partner, it’s time to open up and figure out what you really want from them and your relationship in general. In order to do this, though, you’re going to have to dig deep and be brutally honest with yourself and your partner too.
Why you shouldn’t cheat on your partner.
Relationships go through ups and downs, and sometimes they fail. No matter how hard things get, however, we don’t have a right to cheat on our partners. When we commit to someone we make a promise to do the right thing. Cheating only creates bigger issues, more stress, and an array of complex emotions and patterns which can be hard to heal.
Creating bigger issues
We all experience hardships in our relationships, but infidelity never makes those challenges easier. Perhaps your relationship isn’t broken, you’re just experiencing a momentary lapse or pressure point that’s making it hard to connect. By engaging in infidelity, you create a bigger problem — one which you may not be able to come back from. Cheating is complex, and it involves deep-rooted emotions. If you want to come back from problems, you can’t run to another person…you have to run to your partner.
Inflicting unfair injury
No matter what angle you view it from, cheating is wrong, and it inflicts serious pain on the other side of your partnership. When your partner is a good person, then the act of cheating creates unfair injury which is unnecessary. The hurt of cheating runs so much deeper than simply ending something you were both invested in. It also teaches the other person toxic lessons, which follow them throughout their remaining relationships. Betrayal is nothing to take lightly and its wounds last a lifetime.
Corrupted reputation
Affairs are never an event that remains between two people. Rightly, our reputations become corrupted when word gets out about our inability to stay faithful to the people that we’ve committed to. Word will get around and some people closest to you will begin to see you in a different light. Little-by-little, this can impact the way they see you in their lives, and they way in which you’re able to interact with your community at large.
Degraded social circles
Do you think that your affair will only touch you and your partner? Don’t console yourself with this thought. Not only will you potentially lose your spouse or loved one through your decision to cheat, you will potentially lose friends and family in response to your actions. Never underestimate the loyalty that the people we love will feel to a wronged partner. And no matter what they decide to do, you will have to accept it as a result of your actions.
Emotional dysfunction
When we cheat, we don’t just hurt the other person in deep and irreparable ways. We also cause a lot of damage to ourselves emotionally and cultivate feelings of guilt and shame, which change our personalities and our relationships with others. On top of that, we create even greater stress for ourselves, which causes more mistakes in other parts of our lives, as well as physical erosion, which impacts our quality of life.
Cultivating toxic patterns
Cheating, more often than not, is a part of a toxic cycle of self-destruction which undermines our long-term happiness time-and-time again. Cheaters tend to cheat in every relationship they’re in, whether that infidelity is emotional or physical. It becomes a toxic pattern which pulls people in and then pushes them away before reaching true vulnerability. It’s also a way to constantly chase “greener pastures” rather than putting in the work it takes to last.
Handling your urge to cheat the right way.
Are you struggling with an urge to cheat? Has someone new come into your life, or have things changed drastically between you and your partner? You have to process these challenging emotions the right way, and that happens by figuring out underlying issues and opening up communication channels the right way.
1. Figure out the underlying issue
The urge to cheat isn’t necessarily something that happens overnight. Generally, it results from long-standing issues that have been ignored or otherwise swept under the rug. For example, you and your partner could be dealing with a long-term conflict that’s caused you both to shut down and shut one another out. Over time, this coldness compounds and presses you both to look outward for the comfort you can’t find within the relationship.
Instead of embracing your urge to cheat as the natural “next step” in a failing relationship, take a step back and question what the underlying issues really are. Where is this new desire coming from? What is in you that is seeking someone who isn’t your partner?
Avoid blaming it all on the other person. Relationships don’t (usually) fail because of a single person’s actions. We both make the decision to stop communication. We make the decision to put our partners last and everything else in our lives first. Don’t analyze your partner or act on your urges until you get clear on what you’re not getting. Then figure out how that’s feeding your need to cheat.
2. Think before you react
Temptation is a powerful thing. One moment you are happily engaged in your life, and then the next moment you’re presented with something you didn’t even realize you were lacking. For some, this temptation is gambling or engaging in other risky or addictive behaviors. To others, though, that temptation can come in the form of a person who offers something you perceive your partner not to have.
You have to think before you react to and act on this temptation. While your brain might be telling you that this is something you will never encounter again, that just isn’t true when you break it down. Is this person really offering you anything you couldn’t find at home with the right work and communication?
If they can — then why are you settled in a relationship that isn’t giving you what you need? As humans, we claim to be so much better than the animals we rule over, but we ourselves are animals who often struggle to control our base impulses. Rise above your animal nature and think things through. To cheat will only detonate the good in your life. You need to move forward (in any direction) with maturity and good faith.
3. Open up communication channels
Like it or not, communication is a fundamental part of facing up to and resolving your urge to cheat on your partner. You have to communicate with your inner self and get aligned with what you want, both emotionally and morally. You also have to communicate with your partner once your truths have been reached, and get their perspective if you want to repair things or move forward in a different way.
Spend some time with your inner self. Make it a regular habit and spend that time getting reacquainted with your needs and your future designs. We all deserve to be happy in relationships and lives which are aligned to our authentic selves.
Sometimes, our relationships change and no longer fit the person that we’re becoming. If that’s the case, you have to sit your partner down and be honest and candid with them. Find a safe space when you can both be secure and share what’s going on inside your head. Then, you can come together to find solutions and make mutual decisions on what comes next.
4. Get some perspective
Our intimate relationships are intense, and they take up a lot of our time and our focus. When we’ve spent a long time with the same person, it’s easy to get tunnel vision and lose sight of the bigger picture. You need to get some perspective if you’re dealing with ideas of infidelity. From time-to-time, this can help shift us back into line with our partner. Or, it can reveal some more critical realities for us to embrace.
Once you’ve opened up to your partner and taken some time to figure out what your ultimate relationship needs are, you need to take a step back and get some perspective. Extra-marital affairs and outside relationships are exciting. They give us that butterfly feeling and they get our blood racing again.
That’s tempting, especially if you’ve been settled down with the same person for a long time. You have to question your reality on it, though. Are you chasing something you genuinely need, or are you excited about the prospect of a new adventure in territory you’ve never visited before? The time you’ve put in with someone is important. The fantasy presented by an affair is also important to acknowledge. Brace yourself in reality and get some perspective.
5. Do right by your commitments
Like it or not, the commitment we make to our partners applies even to the challenging parts and ending of our relationships. To commit to someone isn’t just to say that you won’t cheat on them. It’s also making a promise to be truthful to them, even when your truth hurts them. That’s what it is to do right by someone. But you can only do this when you look to the future and the bigger victories (and losses) at stake.
Have you decided that you can’t resist your urges? Have you decided that you need something different, or something better? That’s fine. Do right by your commitments and tell your partner that it’s time to call it a day on your partnership. Communicate that you’ve changed and what you want from your relationships has changed too.
You don’t need to give them any gritty details, you just need to ensure that you aren’t betraying their trust. The pain that comes from infidelity is so much greater than the pain that comes from a relationship that’s come to a close. If either of you ever want to be civil to one another again — if you want a genuine chance of healing — then you have to do right by one another and cut the cord if that’s the only thing left to do.
Putting it all together…
No matter how strong your urge to cheat might be, it’s never the right answer for a crumbling relationship. You have the right to walk away from something which isn’t working, but you also have a responsibility to be honest and faithful to your partner. Are you sitting on the fence with a difficult decision to make? You need to handle your urge to cheat the right way.
Figure out the underlying issues behind your urge to cheat and then figure out whether they are worth repairing with your partner. Think before you react. Is this temptation worth losing all the time and effort you’ve put into your relationship? Is it worth losing your friends and your happiness? These are all things we have to consider. Sit down with your partner and open up. Be compassionately honest with them and let them know where you stand. Perhaps the two of you can work things out, you’ll never know until you talk and see where you both stand. Then, you can get a more realistic perspective on where you’re both at and make the decisions which are authentically aligned to your happiness and your commitment to one another. | https://medium.com/lady-vivra/you-shouldnt-cheat-on-your-partner-a48e980f768e | ['E.B. Johnson'] | 2020-10-27 07:07:05.291000+00:00 | ['Self', 'Nonfiction', 'Relationships', 'Psychology', 'Dating'] |
Wandering in the Pandemic Wilderness | As a clinician in an outpatient mental health practice, I have been searching for the right analogy to what this time has felt like for my patients … and for me. As with many traumas, there is that initial shock and denial. As Kübler-Ross wisely observed, next will often come anger and bargaining perhaps settling into a sense of depression and finally acceptance.
Photo by Blake Cheek on Unsplash
But unlike a specific traumatic event that may have a beginning-middle-end, we don’t yet have a sense of the scope and duration of this pandemic. It feels more like a series of waves that continually crash upon us, a tsunami at first, then a series of other waves, some deceptively small, others overwhelming.
You can feel like you have gotten a good breath and then relief. Then at other times, we may feel that we are flailing about in the water unable to feel the mushy ground underneath or to have something stable on which to hold. It is hard to swim or to even know which direction we should go.
Yet also as a person of faith who works in the “Bible Belt”, the image that I keep returning to is the Exodus from Egypt and that time of wandering in the wilderness for the children of Israel. There were lessons there for the people … and maybe for us too.
The people complained a lot.
Over and over and over they complained. The people complained to God; they complained to Moses their leader and to Aaron their priest. And especially early in that journey, those complaints took a form that was not unlike grief or mourning. Even though what they had left may have been oppressive and difficult, the people longed to be back to what they knew, what was stable, what was “normal”.
Photo by Aaron Burden on Unsplash
We too complain about what we miss and what may be lost. We miss the communities where we sat/worked with others. Perhaps it is congregational singing, passing the peace, hugging each other. Maybe it is the restaurant and the sounds of clanging dishes, the variety of smells of food around us. It may be children playing together that are now socially distanced from the parks and playgrounds.
Our grief is appropriate because we have experienced loss. We should honor our mourning … but not allow it to stop us putting one foot in front of the other.
The people learned to eat manna.
Out of the complaints of the people on this long journey, God provided manna. This food was their daily sustenance. They would gather enough for the day and no more. On the day before the day of rest (Sabbath), they could gather enough for two days. If they took more than they needed, the food would spoil. This continually reminded the people to gather only what they needed for that day.
And when they were sick of manna, the people complained again. God sent an abundance of quail but not without making the point that God was frustrated with the people for not being satisfied with what they had been given.
For many of us, we too may have to learn to have enough. We may have to look around at what we have for the day, to recognize that there is enough and to be content there.
Photo by Austin Kehmeier on Unsplash
And although God is not happy with their complaints, God still responds. As a parent, I am reminded of the times when I have had to acknowledge that my children needed what they were asking for … even if I had initially said “no” or failed to give it.
We remember that the God with which we are presented in Exodus is a God who seems to have lots of feelings about the people, sometimes loving and gracious, sometimes frustrated and vindictive. Regardless, this is a God who remains in relationship with the people with whom God is covenanted, committed to, through all the ups and downs of that journey. This is the sort of steadfastness that one needs in a companion on this wandering path.
We are tempted to build a golden calf.
During one long stretch when their leader was absent, the people pressured their priest to build an idol. The people wanted something solid and tangible, not this God who said “I am that I am”. With the accumulated jewelry and metal from the people, they melted down their desire in order to form a golden calf.
There is a strong desire in all of us for predictability and control. We look to our leaders and experts for this. But we should be careful not to make an idol of them.
Photo by Philipp Knape on Unsplash
My work as a clinician reminds me that when we are anxious and fearful, angry, and in pain, we will try nearly anything to find relief. This is a normal response. The desire to have life feel predictable again or to feel that someone somewhere has control or an answer helps us feel safe.
But there is danger in the easy answer. Someone offering a quick solution that appears tangible and “real” could be an idol of our own making.
Life in the wilderness is hard. And when we want it to be over, we can find ourselves holding on to someone or something that is not our answer.
In many ways, this “building the golden calf” is a sort of bargaining, a trying to gain control one last time before acknowledging again our sadness at what we have lost and taking our steps toward an uncertain future.
Photo by Christopher Sardegna on Unsplash
In the wilderness, we walk with God, day by day, step by step.
We accept where we are. We eat what we have. We camp for the night. We move on the next day. This is the cycle of wandering in the wilderness … and perhaps what is best during this pandemic.
We may not necessarily know where we are going. Our vision is limited to where we presently are. We try to worry less about the future by grounding ourselves in what is present. This is not the same as walking blindly, but accepting that we can only know this step … then the next.
We will not be returning “home” anytime soon … if ever. There is grief to acknowledge in that. Perhaps this new place has lessons to teach us. Maybe there are promises there that we cannot quite fathom yet. But for now, we’ll pack lightly, walk one step at a time, continue to follow the signs that God has given us, and try to get used to the taste of manna. | https://medium.com/caring-for-souls/wandering-in-the-pandemic-wilderness-6fda613b7ac4 | ['Jason B. Hobbs Lcsw'] | 2020-05-23 22:23:05.994000+00:00 | ['Spirituality', 'Covid 19', 'Mental Health', 'Coronavirus', 'Religion'] |
Considerations When Measuring Chatbot Success | Considerations When Measuring Chatbot Success
And What Principles You Should Implement…
Introduction
Performance measures are important to organizations wanting to track their investment in a conversational interface…
But standards & metrics differ by industry, and obviously by company within each industry. Due to the nascent nature of the technology, companies are also eager to learn from one another.
Some overestimate the importance and impact of their chatbot, while others heavily discount the significant impact their conversational interface is having…
Industry Type Matters
Call Deflection
Obviously chatbots are implemented across a vast array of industries. These industries use different parameters: the ones they deem crucial to measuring success in their environment.
Microsoft Power Virtual Agents have Analytics Built In
The banking and financial sectors use chatbots to perform existing tasks faster. An important driver is lower call volume, and the savings that come from call deflection.
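To make the call-deflection driver concrete, a back-of-the-envelope calculation might look like the sketch below. Every number here is an illustrative assumption, not an industry benchmark.
# Illustrative call-deflection savings estimate; all figures are assumptions.
monthly_calls = 100_000        # calls the contact centre would otherwise receive
deflection_rate = 0.18         # share of calls the chatbot fully resolves
cost_per_agent_call = 5.00     # average cost of a human-handled call
cost_per_bot_session = 0.40    # average cost of a chatbot session

deflected = monthly_calls * deflection_rate
monthly_savings = deflected * (cost_per_agent_call - cost_per_bot_session)
print(f"Deflected calls per month: {deflected:,.0f}")
print(f"Estimated monthly savings: {monthly_savings:,.2f}")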
Quality Conversations
The most common, and probably most important, chatbot performance metrics are conversation length and structure. In most cases conversation transcripts are reviewed and manually classified so that points of improvement can be noted.
Organisations are aiming for shorter conversations and simplified dialog structures. A conversation or specific dialog always has a happy path, which developers hope the user will find and stick to.
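As a rough sketch of what such a review could look like in code, assuming you can export transcripts with a turn count, a goal flag and a fallback count (the format below is an assumption, and many platforms report these numbers for you):
# Hedged sketch: conversation-length and happy-path metrics from transcripts.
transcripts = [
    {"turns": 6, "reached_goal": True, "fallbacks": 0},
    {"turns": 14, "reached_goal": False, "fallbacks": 3},
    {"turns": 8, "reached_goal": True, "fallbacks": 1},
]

avg_turns = sum(t["turns"] for t in transcripts) / len(transcripts)
happy_path_rate = sum(t["reached_goal"] for t in transcripts) / len(transcripts)
fallbacks_per_turn = sum(t["fallbacks"] for t in transcripts) / sum(t["turns"] for t in transcripts)

print(f"Average conversation length: {avg_turns:.1f} turns")
print(f"Happy-path completion rate: {happy_path_rate:.0%}")
print(f"Fallbacks per turn: {fallbacks_per_turn:.2%}")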
Digression in a Chatbot Conversation
A rudimentary approach would be to have a repair path, or a few: paths which intend to bring the conversation back to the happy path from points of digression, hence ‘repairing’ it.
This approach might lead to a situation called fallback proliferation. | https://cobusgreyling.medium.com/considerations-when-measuring-chatbot-success-93aaaac0cb86 | ['Cobus Greyling'] | 2020-05-21 15:55:40.499000+00:00 | ['Chatbots', 'NLP', 'Artificial Intelligence', 'Design', 'Conversational UI'] |
ReElivate — Creating Better Social Virtual Experiences | ReElivate — Creating Better Social Virtual Experiences
A marketplace connecting experience providers and companies to deliver unique, memorable, and virtual experiences
The Problem
The coronavirus pandemic has made companies pretty reliant on Zoom and virtual communication. While these virtual communication services have been life savers during this crazy time, they have not been able to replace in person social interaction. Virtual happy hours and coffee chats are redundant and people are looking for better ways to socialize virtually.
What The Company Does
ReElivate is a platform that connects companies with experience providers to help them create better virtual experiences. ReElivate is a marketplace to support companies in a coronavirus world, so companies can better engage their customers, teams, and clients. Events are centered around six categories including cooking, tasting, entertainment, crafts, care, and games. The platform also includes a concierge service if companies want higher levels of account management and assistance planning the experiences.
The Market
The company is serving a market that is smaller than traditional event management but focused on B2B. Some competitors include Airbnb and Kapow, but ReElivate believes the customers it is targeting are underserved by both competitors.
Business Model
ReElivate is a traditional marketplace that charges hosts a commission on the experiences that are booked.
Traction
ReElivate was founded in September and is working with more than 50 companies for their pilot, and with more than 20 local companies as hosts of experiences including Improv Asylum. The company has started to book experiences for November and will continue to add hosts and companies throughout the month. The self service marketplace will launch by the end of the year.
Founding Team Background
The founding team has over 30 years of experience in technology startups. Jon Conelias and Jason McCarthy were both executives at The Grommet. Conelias has spent the past 15 years as a CFO and operator of marketplace companies focused on both B2B and B2C channels, with multiple successful exits. McCarthy has been in marketplace operations for eight years and founded The Grommet Wholesale business.
What They Need Help With
The company is looking for any hosts to provide experiences — the more interesting the better. The company is also looking to inform companies of its services to help connect them with the right experiences. Connect with the ReElivate team.
Subscribe To The Buzz To Get More Startups In Your Inbox | https://medium.com/the-startup-buzz/reelivate-creating-better-social-virtual-experiences-a8ef73a186dc | ['Bram Berkowitz'] | 2020-12-22 20:02:47.666000+00:00 | ['Marketplaces', 'Venture Capital', 'Startup', 'Coronavirus', 'Social'] |
Scaling Malcolm Gladwell | FIVE IDEAS…
Developer credential management
Enabling least-privilege for infrastructure developers
In certain development environments, a “least-privilege” framework is optimal. This means that the developers working on a project are given only the information necessary to carry out their task without providing access to broader (potentially-sensitive) materials. There’s a need for a credential management solution that grants ephemeral access to infrastructure resources (think GCP or AWS) in a secure and compliant way. Crucially, existing solutions like Sailpoint don’t support infrastructure resources — compliance in that respect is vital for businesses at scale.
I’d like to see a credential management system that solves this problem, making it as easy to share access to infrastructure as it is to give “Comment” or “Edit” access on a Google Doc.
— Astasia Myers, Enterprise Investor at Redpoint Ventures
Personalized podcast ads
Voice synthesis technology to scale the soothing tones of Malcolm Gladwell
Historically, podcast advertisements have worked through direct response: advertisers pay a flat-fee per episode based on the audience size (usually $10–30 per thousand listeners) and provide hosts with copy. Hosts record themselves reading that script and then place it somewhere in the episode’s static mp3 file.
There are a few problems with this. Because the ad is hard-coded into that static file, it’s impossible to personalize the messaging for listeners in different demographics and geographies. Data collection is tricky (downloads are inaccurately counted as listens), back catalogs are difficult to monetize (you’d have to alter that static file), and programmatic advertising is impossible. Hosts might want to confine an advertiser to a certain number of downloads, for example.
Canned advertisements do exist and can be inserted dynamically. But host-read ads remain the gold standard. Given that fact, how can we combine the dynamism of programmatic ads with the intimacy of host-read ones?
Voice synthesis technology may be the solution. A new startup would help existing podcasters create “voiceprints” based on existing content. Then, within the system, advertisers could bid to target anonymized individuals based on their demographics but divorced from the podcast they were listening to. This would look a lot like Google and Facebook’s ad platforms. Whenever an advertiser won a bid, the system would create a synthetic version of a host-read commercial, stitching it into the episode, and delivering it to the chosen user segment.
The result could be a gamechanger, helping podcasts close the monetization gap.
— Elaine Zelby, Investor at SignalFire
Staking creators
Discovering creators first and sharing in their success
The overabundance of digital content has led consumer seeking out individual creators to serve as curators and taste-makers. As with many discovery-driven activities, there’s a sense of pride in finding a creator (or brand) that others haven’t. Social capital can be earned by demonstrating one’s ability to identify these personalities first.
Right now, though, there’s no great way to showcase and validate this ability. As an early-fan, you want to be able to visibly signal your support and maybe even benefit from it. Combine this desire with a creator’s need for capital, and you can imagine a kind of fan-creator investing relationship in which fans “stake” creators and then capture some of the upside in the event they blow up. A platform that enabled this behavior, giving the next Charli D’Amelio the cash to go full-time, would be intriguing.
— Jerry Lu, Investor at Advancit Capital
Essential oils for everyday life
Curated therapeutic oils for modern ailments
Tylenol isn’t always the answer, especially for millennials who are more apt to reach for custom supplement packs than a couple of shots of bourbon to treat a variety of maladies.
One solution? Essential oils. I imagine a beautifully-branded collection of high-quality products designed to do everything from turning your shower into an Equinox (eucalyptus, of course) to relieving tension headaches from staring at a screen all day (peppermint).
It’s time for essential oils to have a glow-up. A great DTC play would be to make them feel cool, curated, and quality-controlled, removing the need to ask your weird aunt who works for an essential oil MLM scheme or step foot in the overwhelming supplement aisles of a Whole Foods.
— Willa Townsend, Director of Business Development at Banza
Twitter podcast app
Prove it can work, then sell
This is a little different than the usual RFS. Mostly, because I think it’s a business made to exit, likely in a short timeframe.
Maybe they don’t know it yet, but Twitter could be the most powerful podcasting app on the planet. The company’s social interest graph is perfectly and uniquely positioned to solve personalized discovery of podcasting. Even Spotify, Google, and Apple don’t have access to the same kind of information.
So here’s the play: create a podcast app with 10x better discovery, leveraging Twitter’s API. Add easy social sharing so that recommended podcasters and episodes can be shared on the platform. If properly implemented, big podcasters would be excited by their ability to reach large audiences through a distribution platform that hasn’t been tapped. That could lead to a breakout trajectory that would cause Twitter to take notice.
There’s a risk, of course. Whenever Twitter saw this was working, they might turn off API access and clone it. But I think there’s a genuine chance it might get taken off the table for a nice sum.
— Alex Carter, Co-founder of the 1st social podcast app on iOS
Have something to say about these ideas? Are you working on something similar? Vote for your favorite idea and share thoughts by hitting the button, below.
Vote for an idea
👉 Get free startup ideas from leading VCs by joining RFS 100 | https://medium.com/swlh/scaling-malcolm-gladwell-21ff00e563fc | ['Mario Gabriele'] | 2020-12-03 00:55:14.175000+00:00 | ['Innovation', 'Startup', 'Entrepreneurship', 'Startup Ideas', 'Venture Capital'] |
Starting Conversations about Customer Privacy and AI | Starting Conversations about Customer Privacy and AI
A guide for UX professionals
By Derek DeBellis, Penny Marsh Collisson, Angelo Liao, and Mar Gines Marin
I love when AI makes a recommendation that accounts for me, my goals, and my context. I love when AI automates part of my workflow by recognizing where I’m going and cutting out some of the work required to get there. I also love when companies respect my privacy. I’m not alone. I’ve heard this countless times in user interviews: people want personalized AI-driven experiences that cater to their specific needs while also respecting their privacy.
When we operate with shared values and communicate about how to put those into practice, researchers and product teams can help to deliver both personalization and privacy to our customers. At Microsoft, we’ve been compiling privacy practices that we think every UX professional should know and understand. The below list isn’t exhaustive, but we’ve found that the ideas it contains help UX professionals exploring AI and privacy. We also include questions you can ask your product and data engineers to kickstart a conversation about AI privacy and design.
Collecting the right data
A lot of the beauty of advanced statistical approaches resides in the ability to handle rich, multidimensional sources of data. The more features a dataset has, however, the more effort is required to make sure that no one can be identified. Take care, also, to collect and use the data in a manner that aligns with your company’s values and your customers’ desires and values.
Conversation starters
• What features would be contained within this data set?
• How important are these features for the model’s performance?
• Do we have a justification for needing that piece of information?
• Does having that information increase the odds we compromise someone’s anonymity?
• Are we (and our partner teams) selecting and using data in a manner that our customers have both comprehended and agreed to under their own volition?
• Are we collecting and using the data in a way that reflects our customers’ values?
Exploring the shape of data without exploring the content or individuals
AI systems don’t need to know much about individuals to make useful predictions for them. Current approaches allow data to be aggregated, featurized, and encoded to anonymity without detracting from the ability to do computations on it. The important patterns can be retained after adding noise to the data. This noise makes it extremely difficult to trace it back to content or individuals. There are also techniques that make sure queries and models return statistical or aggregate results, not raw or individuating results.
Words can be represented as vectors of numbers. These sets appear meaningless to us, but they often contain patterns valuable to the AI system.
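As a toy illustration of the “return aggregates, add noise” idea, the sketch below adds Laplace-style noise to a count so that no single individual’s presence can be confidently inferred from the result. This is a simplified teaching example, not a description of how any Microsoft product implements it; real systems use carefully calibrated mechanisms and privacy budgets.
import random

def noisy_count(records, predicate, scale=2.0):
    """Count matching records, then add noise before releasing the result."""
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two exponentials gives Laplace-distributed noise.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

users = [{"uses_dark_mode": True}, {"uses_dark_mode": False}, {"uses_dark_mode": True}]
print(noisy_count(users, lambda r: r["uses_dark_mode"]))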
Conversation starters
• If I run a query on this data, is it possible that the results will be associated with a small subset of individuals?
• Do we have a way to make sure our queries return statistical, aggregate, and de-identified results?
• Is it possible to determine whose data was in this initial training set?
• How are we anonymizing and encoding data to ensure privacy?
Handling customer data
Modern technology allows us to address many concerns about how, when, how long, and where the data is being handled. For example, a customer’s information doesn’t always need to travel to the cloud for AI to work. Advances have made it possible to get sophisticated models onto a customer’s device without taking up all the device’s memory or processing power. Once on the device, AI can function offline, without needing to constantly connect to the cloud. There are, in addition, many ways to maintain privacy within the cloud.
Conversation starters
• If we want personalized models, how do we build, store, and update them?
• Are we housing our AI models in the cloud or the device? Why?
• How do we update our general models?
• Who, if anyone, can look at the data? When? How? What data exactly?
• How long is the data being stored?
• Where is the data being stored?
Providing customers with transparency and control
Ultimately, you’re asking these questions so you can give customers what they want, which our research shows is transparency and control. You want people to have the information they need to decide whether they want to use the AI-driven features. Make sure you’re presenting this information in an easily understandable way. And if customers decide they don’t want to use AI-powered features, they should have the controls to make the necessary adjustments.
Conversation starters
• Do we have answers to the questions users are asking?
• Do customers have the information they need to determine if using our AI is worthwhile?
• Do customers have the controls necessary to manage their experience? If so, are these controls nuanced enough? Are they too nuanced?
The real UX work begins after you sift through these questions
We hope that these questions help open conversations with the people on your team building AI-driven experiences. This communication reinforces a shared objective and leads to an understanding of how you can help protect user privacy. That knowledge empowers us, in turn, to help our customers navigate privacy in AI-driven products and communicate these intricacies in ways that are better, simpler, and clearer.
Authors
Angelo Liao is a program manager working on AI in PowerPoint.
Mar Gines Marin is a program manager working on AI in Excel.
Penny Collisson is a user research manager working on AI in Office.
Derek DeBellis is a data scientist and user researcher working on AI in Office.
With special thanks to Simo Ferraro, Zhang Li, Curtis Anderson, Josh Lovejoy, Ilke Kaya, Ben Noah, Bogdan Popp, and Robert Rounthwaite. | https://medium.com/microsoft-design/starting-conversations-about-customer-privacy-and-ai-41de0352dedc | ['Derek Debellis'] | 2019-12-05 18:21:21.111000+00:00 | ['User Experience', 'Artificial Intelligence', 'Microsoft', 'Research And Insight', 'Design'] |
From model inception to deployment | From model inception to deployment
Machine Learning model training & scalable deployment with Flask, Nginx & Gunicorn wrapped in a Docker Container
We all have been in this position after we are done building a model :p
At some point, we have all struggled to deploy our trained Machine Learning model, and a lot of questions start popping into our minds. What is the best way to deploy an ML model? How do I serve the model’s predictions? Which server should I use? Should I use Flask or Django for creating the REST API? What about shipping it inside Docker? Don’t worry, I’ve got you covered with all of it!! :)
In this tutorial, we will learn how to train and deploy a machine learning model in production, with more focus on deployment because that is where we data scientists usually get stuck.
Also, we will be using two Docker containers, one for the Flask app and another for the Nginx web server, shipped together with docker-compose. If you are new to Docker or containerization, I would suggest reading this article.
High-Level Architecture
High level design of large scale Machine Learning model deployment
Setting up
Here is the GitHub link for this project
This is the folder structure that we will follow for this project
Let’s break this piece into three parts:
— Training a Machine Learning model using python & scikit-learn
— Creating a REST API using flask and gunicorn
— Deploying the Machine Learning model in production using Nginx & shipping the whole package in a Docker container
Model Training
To keep things simple and comprehensive, we will use the iris dataset to train an SVM classifier.
iris_svm_train.py
Here, we are training a Support Vector Machine with a linear kernel, which gives a pretty decent accuracy of 97%. Feel free to play around with the training part: try Random Forest or XGBoost and perform hyper-parameter optimization to beat that accuracy.
Make sure you execute ‘iris_svm_train.py’, because it saves the trained model inside the ‘model’ folder (refer to the GitHub repo).
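If the embedded gist is not visible where you are reading this, a minimal sketch of such a training script looks roughly like the following. The repo version may differ in details such as file names and evaluation, so treat this as a stand-in rather than the exact file.
# iris_svm_train.py (sketch) -- train a linear SVM on iris and persist it.
import pickle

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42)

clf = SVC(kernel='linear')
clf.fit(X_train, y_train)
print('Test accuracy:', clf.score(X_test, y_test))

# Save the trained model so the Flask app can load it later.
# The file name 'model/iris_svm.pkl' is an assumption.
with open('model/iris_svm.pkl', 'wb') as f:
    pickle.dump(clf, f)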
Building a REST API
Creating a flask app is very easy. No kidding!
All you need to know is how a request from the client (user) is sent to the server, how the server sends back the response, and a little bit about the GET and POST methods. Below, we load our saved model and process the new data (request) from the user in order to send predictions (response) back to the user.
app.py
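For reference, here is a hedged sketch of what such an app.py can look like. The exact code lives in the GitHub repo; the model path and JSON handling below are assumptions.
# app.py (sketch) -- load the saved model and serve predictions over HTTP.
import pickle

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

with open('model/iris_svm.pkl', 'rb') as f:   # path is an assumption
    model = pickle.load(f)


@app.route('/')
def home():
    return 'Hoilaaaaaaaaa!'


@app.route('/predict', methods=['POST'])
def predict():
    # Expects a JSON list of records, e.g. [{"sepal_length": 6.3, ...}]
    records = pd.DataFrame(request.get_json())
    predictions = model.predict(records.values)
    return jsonify(predictions.tolist())


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)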
We will use gunicorn to serve our flask API. If you are on windows, you can use waitress (pure-Python WSGI server) as an alternative to gunicorn.
Execute the command gunicorn -w 1 -b :8000 app:app and hit http://localhost:8000 in your browser to ensure your Flask app is up and running. If you get the message ‘Hoilaaaaaaaaa!’, then you are good to go!!
If you want to test the predict (POST) method, use the curl command below or use Postman: curl --header "Content-Type: application/json" --request POST --data '[{"sepal_length":6.3,"sepal_width":2.3,"petal_length":4.4,"petal_width":1.3}]' http://localhost:8000/predict
Deploying the ML model in production
Finally, fun part begins :)
We will use Nginx web server as a reverse proxy for Gunicorn, meaning users will hit Nginx from the browser and it will forward the request to your application. Nginx sits in front of Gunicorn which serves your flask app.
More information on why Nginx is required when we have gunicorn: link
nginx.conf
Wrapping everything inside Docker Container
Congratulations, you have made it to the last part.
Now, we will create two docker files, one for API & one for Nginx. We will also create a docker-compose file which will contain information about our two docker containers. You have to install docker and docker-compose for this to work. Let’s ship our scalable ML app and make it portable & production ready.
Docker file for API (keep it in api folder)
We have created a Dockerfile for the API, which needs to be saved inside the ‘api’ folder along with the other files, including requirements.txt (which lists the Python packages required for your app).
Docker file for Nginx(keep it in nginx folder with nginx.conf file)
docker-compose.yml
docker-compose.yml is the master file which binds everything together. As you can see, it contains two services, one for the API and one for the server (Nginx). Now all you need is a single command to run your ML app:
cd <project/parent directory>
docker-compose up
Output of above command
Cheers! Your dockerized scalable Machine Learning app is up and running, accepting requests on port 8080 and ready to serve your model’s predictions.
Open a new terminal and call the predict method using curl (the same command as before, but pointed at port 8080) or use Postman
Predictions from your deployed ML model
Thank you for making it till here, comment below if you face any challenges in running the project or have any feedback. Happy Learning!! | https://medium.com/datadriveninvestor/from-model-inception-to-deployment-adce1f5ed9d6 | ['Akshay Arora'] | 2018-11-28 10:04:23.359000+00:00 | ['Machine Learning', 'Python', 'Artificial Intelligence', 'Deep Learning', 'Docker'] |
Startup Metrics | Startup Metrics
“In God we trust; all others must bring data”
This post lists my favourite articles on startup metrics.
Learn what to measure from rock stars such as David Skok, Andrew Chen and Dave McClure.
Happy reading!
METRICS OVERVIEW:
Key concepts explained
THE CONVERSION FUNNEL:
Describing users flows as funnels
CUSTOMER ACQUISITION & RETENTION:
Capture and maintain user loyalty
SOME NUMBERS:
Metrics in the real world
Happy reading,
— Livio (@LivMKk)
Thank you for reading & recommending ❤
P.S.: If you care about measuring the right metrics, you should read about Financial Planning for SaaS startups
URL.02.11 | https://medium.com/startup-info/startup-metrics-155db194b3a9 | ['Livio Marcheschi'] | 2017-07-16 16:59:58.999000+00:00 | ['Metrics', 'Digital Marketing', 'Startup', 'Entrepreneurship', 'Growth Hacking'] |
COVID-19: Impact on Housing Security Across the U.S. | COVID-19: Impact on Housing Security Across the U.S.
Housing is essential, but not guaranteed. This has never been more obvious than since the start of the COVID-19 lockdowns, which cut Americans off from their jobs, and thus their incomes. Without income, paying for routine and necessary bills such as food and housing can become a struggle. Housing insecurity is certainly nothing new in America, but for the first time, we have week-by-week data on how it has impacted households across the country.
Starting in April, the U.S. Census Bureau began a new project, the Household Pulse Survey, with the goal of determining the social and economic impacts of COVID-19 on the American populace. Phase one lasted from April 23rd to July 21st, and this analysis examines those 12 weeks (the calendar-savvy will notice that this is in fact 13 weeks, but that will be discussed below).
The Household Pulse Survey phase one results are available as Public Use Files (PUF), where each row is a response. However, due to privacy reasons, the PUF does not include location indicators, which was desired for this analysis. Instead, we used the summarized data which was slightly edited due to nested headers. The file we used is available here.
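The original spreadsheet uses nested (two-row) headers; if you would rather flatten them in pandas instead of editing the file, a sketch like the one below works. The file name, sheet name and header rows here are assumptions.
import pandas as pd

raw = pd.read_excel('phase-one-household-pulse-survey-tool.xlsx',
                    sheet_name='Data', header=[0, 1])

# Join the two header levels into single, readable column names.
raw.columns = [' '.join(str(part) for part in col).strip()
               for col in raw.columns.to_flat_index()]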
For this, we also worked in Google Colab for easier code sharing across the team. First we imported the necessary packages.
from google.colab import drive
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import plotly
from sklearn import preprocessing
from urllib.request import urlopen
import json

# This will prompt for authorization.
drive.mount('/content/drive')
Then we imported the data:
Household = '/content/drive/My Drive/Data/Housing/Household Pulse Survey/phase-one-household-pulse-survey-tool overall.xlsx'

Phase1 = pd.read_excel(Household, sheet_name='Data')
The data has three different levels of location, nationwide, state level, and the top 15 largest metro areas. It was important to separate these out, as we wanted to make comparisons within these location groups, not between these location groups. We grabbed only the rows we wanted into three different datasets:
State = ['Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California', 'Colorado',
         'Connecticut', 'Delaware', 'District of Columbia', 'Florida', 'Georgia',
         'Hawaii', 'Idaho', 'Illinois', 'Indiana', 'Iowa', 'Kansas', 'Kentucky',
         'Louisiana', 'Maine', 'Maryland', 'Massachusetts', 'Michigan', 'Minnesota',
         'Mississippi', 'Missouri', 'Montana', 'Nebraska', 'Nevada', 'New Hampshire',
         'New Jersey', 'New Mexico', 'New York', 'North Carolina', 'North Dakota',
         'Ohio', 'Oklahoma', 'Oregon', 'Pennsylvania', 'Rhode Island', 'South Carolina',
         'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Vermont', 'Virginia',
         'Washington', 'West Virginia', 'Wisconsin', 'Wyoming']

US = ['United States']

Metros = ['Atlanta-Sandy Springs-Alpharetta, GA Metro Area',
          'Boston-Cambridge-Newton, MA-NH Metro Area',
          'Chicago-Naperville-Elgin, IL-IN-WI Metro Area',
          'Dallas-Fort Worth-Arlington, TX Metro Area',
          'Detroit-Warren-Dearborn, MI Metro Area',
          'Houston-The Woodlands-Sugar Land, TX Metro Area',
          'Los Angeles-Long Beach-Anaheim, CA Metro Area',
          'Miami-Fort Lauderdale-Pompano Beach, FL Metro Area',
          'New York-Newark-Jersey City, NY-NJ-PA Metro Area',
          'Philadelphia-Camden-Wilmington, PA-NJ-DE-MD Metro Area',
          'Phoenix-Mesa-Chandler, AZ Metro Area',
          'Riverside-San Bernardino-Ontario, CA Metro Area',
          'San Francisco-Oakland-Berkeley, CA Metro Area',
          'Seattle-Tacoma-Bellevue, WA Metro Area',
          'Washington-Arlington-Alexandria, DC-VA-MD-WV Metro Area']

StatesP1 = Phase1[Phase1['Geography (State or Metropolitan Area)'].isin(State)]
USP1 = Phase1[Phase1['Geography (State or Metropolitan Area)'].isin(US)]
MetroP1 = Phase1[Phase1['Geography (State or Metropolitan Area)'].isin(Metros)]
It soon became obvious that 50 states was a large number to handle in visualization, so we added another level to the state data: Divisions. The U.S. Census Bureau describes the US at several location levels; one of the most familiar is Regions: Midwest, Northeast, South, and West. There are also Divisions, which split the regions up even further. Figure 1 below shows the breakdown of Regions into Divisions.
Figure 1. Regions and Divisions of the United States
We used a data dictionary to add that to the data. I’m including this here so maybe no-one else has to write this code again.
Divisions = {'Alabama': 'East South Central',
             'Alaska': 'Pacific',
             'Arizona': 'Mountain',
             'Arkansas': 'West South Central',
             'California': 'Pacific',
             'Colorado': 'Mountain',
             'Connecticut': 'New England',
             'Delaware': 'South Atlantic',
             'District of Columbia': 'South Atlantic',
             'Florida': 'South Atlantic',
             'Georgia': 'South Atlantic',
             'Hawaii': 'Pacific',
             'Idaho': 'Mountain',
             'Illinois': 'East North Central',
             'Indiana': 'East North Central',
             'Iowa': 'West North Central',
             'Kansas': 'West North Central',
             'Kentucky': 'East South Central',
             'Louisiana': 'West South Central',
             'Maine': 'New England',
             'Maryland': 'South Atlantic',
             'Massachusetts': 'New England',
             'Michigan': 'East North Central',
             'Minnesota': 'West North Central',
             'Mississippi': 'East South Central',
             'Missouri': 'West North Central',
             'Montana': 'Mountain',
             'Nebraska': 'West North Central',
             'Nevada': 'Mountain',
             'New Hampshire': 'New England',
             'New Jersey': 'Middle Atlantic',
             'New Mexico': 'Mountain',
             'New York': 'Middle Atlantic',
             'North Carolina': 'South Atlantic',
             'North Dakota': 'West North Central',
             'Ohio': 'East North Central',
             'Oklahoma': 'West South Central',
             'Oregon': 'Pacific',
             'Pennsylvania': 'Middle Atlantic',
             'Rhode Island': 'New England',
             'South Carolina': 'South Atlantic',
             'South Dakota': 'West North Central',
             'Tennessee': 'East South Central',
             'Texas': 'West South Central',
             'Utah': 'Mountain',
             'Vermont': 'New England',
             'Virginia': 'South Atlantic',
             'Washington': 'Pacific',
             'West Virginia': 'South Atlantic',
             'Wisconsin': 'East North Central',
             'Wyoming': 'Mountain'}

StatesP1["State"] = StatesP1["Geography (State or Metropolitan Area)"].astype('category')
StatesP1['Division'] = StatesP1['State'].map(Divisions)
We needed to do some exploratory data analysis to determine the quality of the data and any adjustments that would need to be made.
sns.displot(StatesP1, x="Housing Insecurity Percent", element="step", col="Division", col_wrap=3)

g = sns.boxplot(x="Division", y="Housing Insecurity Percent",
                #hue="Selected Horizontal Dimension",
                data=StatesP1, palette="Set3")
g.set(xlabel='Division', ylabel='Housing Insecurity (%)')
g.set_xticklabels(g.get_xticklabels(),rotation=45,ha="right")
Figure 2. Histogram of Housing Insecurity Percent from the Household Pulse Survey by Census division from April 2020 — July 2020
Figure 3. Boxplot of Housing Insecurity Percent from the Household Pulse Survey by Census division from April 2020 — July 2020
Overall, we were very pleased with the distribution of the data: the histograms show it is relatively normal, and in the boxplots we see only one true outlier. For the purposes of this analysis, we kept that outlier, as it captured an important trend.
We also wanted to get a first look at the actual data: how has housing security changed over the 12-week period across the US?
g = sns.relplot(kind='line', data=StatesP1, y='Housing Insecurity Percent', x='Week Number')

g = sns.relplot(kind='line', col='Division', col_wrap=5,
                col_order=['Pacific', 'West North Central', 'East North Central',
                           'Middle Atlantic', 'New England', 'Mountain',
                           'West South Central', 'East South Central', 'South Atlantic'],
                data=StatesP1, y='Housing Insecurity Percent', x='Week Number')
Here’s how we upgraded our marketing analytics | I hate interrupting my analysis workflow by tabbing between different applications and interfaces. It’s irritating, decreases your productivity and just makes things harder to understand. Therefore, I could empathize when one of our marketing people came up to me and expressed their need for an online marketing dashboard. In their vision, this dashboard would unite all our most important online marketing indicators and help them immensely by removing the need to go back and forth between the analytics views of different platforms.
But online marketing data is isolated, lives in silos and the individual platforms don’t make it easy to integrate them with one-another. Luckily, most of them offer API services, so we rolled our sleeves up and built a basic data pipeline, which resides entirely in the cloud and feeds our Tableau dashboard.
The data
As far as social media platforms go, Starschema mostly uses Facebook and, to a much lesser extent, Instagram and Twitter. Our leads are generated through our website, the traffic of which we measure with Google Analytics, which we also use for our standalone, Wordpress-based blogs. It would have been nice to get the traffic data from Medium as well, but this platform doesn’t offer an API for that unfortunately, so it’s not currently in our scope.
The pipeline
If you’re only interested in the visuals and not how the data got there, just skip this section.
We are not a small firm anymore, so whatever we created needed to be as enterprise-ready as possible, not least because we wanted to showcase this and use it as a proof of concept for other projects. We also wanted something with low cost and little maintenance, since we want to be able to deploy this for smaller firms that might not necessarily have tech personnel on board.
Thus, we opted for the Google Cloud Platform, mostly because their generous free tier ended up completely covering our requirements. The idea is to have scheduled Python scripts download the data through the APIs, flatten it and load it into our Marketing Data Warehouse, which we set up in BigQuery for the sake of simplicity. In a more mature environment, we would put a frontend onto App Engine to drive the Scheduler and the Functions, but in our case we skipped this and manage everything through the GCP console.
The very simple pipeline architecture we set up for this project
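To make the idea concrete, here is a minimal sketch of what one of those scheduled functions could look like. This is not our production code: the API endpoint, field names and BigQuery table are placeholder assumptions, and scheduling and error handling are left out.

import requests
import pandas as pd
from google.cloud import bigquery

def load_marketing_data(request):
    # Pull raw metrics from a (placeholder) marketing API endpoint
    response = requests.get("https://api.example-marketing-platform.com/v1/metrics",
                            params={"date_preset": "yesterday"})
    records = response.json()["data"]

    # Flatten the nested JSON into a tabular structure
    df = pd.json_normalize(records)

    # Append the rows to the BigQuery marketing data warehouse (placeholder table id)
    client = bigquery.Client()
    table_id = "my-project.marketing_dwh.daily_metrics"
    job = client.load_table_from_dataframe(df, table_id)
    job.result()
    return f"Loaded {len(df)} rows into {table_id}"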
The dashboard
How To Get More Personal With Your Users As A Tech Company: The “Tailored Web Design” Concept
Reducing the number of users who unsubscribe?
Therefore decreasing lost revenue? Yes please.
——
Yesterday’s article was about traditional marketing lessons and how they should be mere guidelines.
For those who haven’t read it, here’s the main idea roughly speaking:
Traditional marketing rules tell us that the homepage should “convert” — that doesn’t apply all the time to SaaS products. When I’m coming back to your website to unsubscribe, is there anything that will remind me of the added value? Or is it just the story I’ve heard at the beginning which I’ve definitely forgotten?
This morning I had this idea which still needs to be explored but I’ll list it here.
We don’t have tailored ways to tackle both new visitors and recurring visitors for our websites.
And we concluded yesterday that it matters. Especially if by “recurring visitor” we mean a paying user who’s looking to churn; that’s revenue that we lose. How about this: the website looks one way the first time it’s visited and then changes to something else from the second visit onwards.
We can adjust this slightly. Maybe it looks the same the first 2 or 3 times — call that the “pitch phase”. Only then it changes into phase 2. Maybe that phase 2 kicks in only for people who have converted.
With cookies, that can happen very easily. It does already, to an extent, on a lot of websites. My only concession would be having a pop-up or maybe a badge at the top saying something along the lines of “First time here? Click me.”, adjusted to the brand’s voice.
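As a rough illustration of how little machinery this needs, here is a hedged sketch in Python using Flask (the framework and the template names are my own assumptions; any web stack or even client-side JavaScript would work): the server counts visits with a cookie and switches templates once the visitor is past the “pitch phase”.

from flask import Flask, render_template, request, make_response

app = Flask(__name__)

@app.route("/")
def homepage():
    # Read how many times this browser has seen the homepage
    visits = int(request.cookies.get("visit_count", 0)) + 1

    # First few visits get the pitch; returning visitors get the value-reminder version
    template = "pitch_homepage.html" if visits <= 3 else "returning_homepage.html"

    response = make_response(render_template(template))
    response.set_cookie("visit_count", str(visits), max_age=60 * 60 * 24 * 365)
    return response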
Why would you do that?
Very roughly speaking, user churn is likely to be fixed with two major directions:
Adding new core features to your product
Reminding the users about the product’s value delivery
We’re looking closely at the second bullet point. I’m thinking this concept could work for SaaS companies/startups because once their users converted, they need to be reminded about the value delivery in a different manner.
Objection: Yes Daniel, but that’s why the homepage is a pitch all the time! Because if they land there, they’re reminded of the core features! Why would I want to change that?
What I’m saying is that you have the opportunity to speak to your converting users in a different manner. Your homepage/landing page right now is “mixed up”: it presents the same thing to both new and existing users.
How about talking differently to your existing (paying) users?
You don’t need to explain to them “the idea in a couple of words” — you have the chance to tell them a bit more, since they know already some things. Of course, maybe you’ll want to re-explain the simple version of the idea, but you can use different language.
Because they converted, you know their habits/language/way of communication. Through 2019’s technology, we can implement what I’m proposing without immense amounts of effort.
Objection 2: But my users will become annoyed if they always have to click “no” if I put something like that up.
Later I’ll tackle that with Basecamp’s example.
Different things are shown based on who I am? Credits: Undraw.co
Where did this idea come from?
If you’ve ever used Google/Facebook ads, you probably know where this comes from. For those who don’t, the brief explanation is that it traditionally works like this:
1. You create a couple of audiences that you think are relevant to what you’re doing.
2. You create something relevant and valuable to them. But this is not the “sign up to get this ebook” kind of bullshit. It could be something of actual help where you “lose money” on the ad.
3. You A/B test these audiences until you see what works better.
4. You “retarget” these audiences with the thing you sell. Maybe it’s now that you ask them to download your eBook, if you want to play the long game, or maybe you’re going straight for the sale.
In practical terms that means this. Let’s say you’re selling a product that helps people clean/maintain their watch.
1. You create these audiences. Audience 1 people have liked the Rolex page, are 30 to 45 and work in this city. Audience 2 have liked a watch influencer’s page and are 40 to 55. You end up with 30 audiences.
2. You create a video that’s basically a YouTube tutorial about cleaning your watch. That works wonders in a Facebook/Instagram feed since it’s “masked”: it’s harder to tell whether it’s from a page you liked or an actual ad (as opposed to “BUY NOW”, which is definitely an ad). People don’t get buyer’s resistance, as you’re not selling something to them.
3. These 30 audiences are A/B tested and then the top 5 audiences are picked (i.e. those 5 audiences that watched the most of the video and/or engaged with a comment, a like, etc.).
4. You now run an ad that actually sells your kit/product to those who have watched more than 60% of the video (i.e. “converted” in terms of video watch time).
This is the idea in short. Now, when it comes to how a SaaS company/startup presents itself, there’s no difference between what’s shown to a new user and one that’s more engaged with the product. It’s like comparing what I’ve just described above to a newspaper ad —the same thing is shown to everyone, regardless of their interaction level.
I’m proposing changing what’s shown to people based on a simple delimitation: whether they converted or not.
Credits: Undraw.co and Ch Daniel
This idea, taken even further
Now that I’ve given this context from the advertising world, I can go even further. What if within your product you’ve got multiple audiences?
Say Trello. Trello can be used by people who are:
1. Into project management and working with their teams
2. People who use it for their life, as they are organised.
And within category 1, we’ve got the startup kind of team that’s just starting out and the more professional company; we can go on and on with naming audiences.
How about a point along the onboarding process where these users place themselves into an audience pigeonhole?
And then based on which audience they are, they’ll have different versions of the homepage website (not the app!), should they ever go there? If not the homepage, then whatever page they have to go through before cancelling the payment.
And that’s not to hold them as hostages (that’s another thing I believe in) — rather something that’s there to remind them about the value delivery either before they unsubscribe or when they happen to visit that page.
Is this happening already?
Since I’ve mentioned Trello, their homepage takes you to their app, if you’re logged in, or to their landing page if you’re not. Different behaviours based on the conversion level of the user.
What if this website talked to me differently since I’ve already signed up? And also different to those who have paid for premium? Credits: Trello.com
Basecamp shows this pop-up if you’re logged in.
Tensorflow vs PyTorch for Text Classification using GRU
Preprocessing
The dataset contains some columns that are not important for this problem and they were dropped. This is how the data frame looks like.
We apply some preprocessing to make the modeling easier: contractions are expanded, and punctuation, non-alphanumeric characters, and stop words are removed using regex.
import re
from nltk.corpus import stopwords

def decontract(sentence):
    sentence = re.sub(r"n\'t", " not", sentence)
    sentence = re.sub(r"\'re", " are", sentence)
    sentence = re.sub(r"\'s", " is", sentence)
    sentence = re.sub(r"\'d", " would", sentence)
    sentence = re.sub(r"\'ll", " will", sentence)
    sentence = re.sub(r"\'t", " not", sentence)
    sentence = re.sub(r"\'ve", " have", sentence)
    sentence = re.sub(r"\'m", " am", sentence)
    return sentence

def cleanPunc(sentence):
    cleaned = re.sub(r'[?|!|\'|"|#]', r'', sentence)
    cleaned = re.sub(r'[.|,|)|(|\|/]', r' ', cleaned)
    cleaned = cleaned.strip()
    cleaned = cleaned.replace("\n", " ")
    return cleaned

def keepAlpha(sentence):
    alpha_sent = ""
    for word in sentence.split():
        alpha_word = re.sub('[^a-z A-Z]+', '', word)
        alpha_sent += alpha_word
        alpha_sent += " "
    alpha_sent = alpha_sent.strip()
    return alpha_sent

# Assumed definition (not shown in the original): compile the NLTK stop-word list into one regex
re_stop_words = re.compile(r"\b(" + "|".join(stopwords.words('english')) + r")\b\s*")

def removeStopWords(sentence):
    global re_stop_words
    return re_stop_words.sub("", sentence)

# removes characters repeated three or more times in a row
data['Text'] = data['Text'].apply(lambda x: re.sub(r'(\w)(\1{2,})', r'\1', x))
Now the text is cleaner, and we can transform the data into a form that is interpretable to the neural networks. The form we are going to use here is word embedding, which is one of the most common techniques for NLP.
Word embedding consists of mapping the words in the form of numerical keys resembling the Bag of Words approach. The vectors created by Word Embedding preserve similarities of words, so words that regularly occur nearby in the text will also be in close proximity in vector space. There are two advantages to this approach: dimensionality reduction (it is a more efficient representation) and contextual similarity (it is a more expressive representation).
There are a few ways of applying this method, but the one we use here is the Embedding Layer, which is used on the front end of a neural network and is fit in a supervised way using the backpropagation. To do that, it is necessary to vectorize and pad the text, so all the sentences will be uniform.
The dataset is hefty (almost 600,000 rows), and a portion of the reviews have a high token count: the fourth quartile ranges from 51 to 2,030 tokens. This adds unnecessary padding to the vast majority of observations and, consequently, is computationally expensive. Thus, I remove the rows with more than 60 tokens and sample 50,000 observations, because a bigger sample crashes the kernel.
data['token_size'] = data['Text'].apply(lambda x: len(x.split(' ')))
data = data.loc[data['token_size'] < 60]

data = data.sample(n=50000)
Then we build a vocabulary based on the sample to build the Embedding Layer.
# Construct a vocabulary
class ConstructVocab():
def __init__(self, sentences):
self.sentences = sentences
self.word2idx = {}
self.idx2word = {}
self.vocab = set()
self.create_index()
def create_index(self):
for sent in self.sentences:
self.vocab.update(sent.split(' '))
#sort vocabulary
self.vocab = sorted(self.vocab)
#add a padding token with index 0
self.word2idx['<pad>'] = 0
#word to index mapping
for index, word in enumerate(self.vocab):
self.word2idx[word] = index + 1 # 0 is the pad
#index to word mapping
for word, index in self.word2idx.items():
self.idx2word[index] = word

inputs = ConstructVocab(data['Text'].values.tolist())
Vectorize the text
input_tensor = [[inputs.word2idx[s] for s in es.split(' ')] for es in data['Text']]
Add padding
def max_length(tensor):
    return max(len(t) for t in tensor)

max_length_input = max_length(input_tensor)

def pad_sequences(x, max_len):
    padded = np.zeros((max_len), dtype=np.int64)
    if len(x) > max_len:
        padded[:] = x[:max_len]
    else:
        padded[:len(x)] = x
    return padded

input_tensor = [pad_sequences(x, max_length_input) for x in input_tensor]
Binarize the target
from sklearn import preprocessing

rates = list(set(data.Score.unique()))
num_rates = len(rates)
mlb = preprocessing.MultiLabelBinarizer()
data_labels = [set(rat) & set(rates) for rat in data[['Score']].values]
bin_rates = mlb.fit_transform(data_labels)
target_tensor = np.array(bin_rates.tolist())
Finally, we split the data into training, validation, and test sets.
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(input_tensor, target_tensor, test_size=0.2, random_state=1000)

X_val, X_test, y_val, y_test = train_test_split(X_val, y_val, test_size=0.5, random_state=1000)
GRU — Gated Recurrent Unit
A gated recurrent unit (GRU) is a type of recurrent neural network (RNN), a class of artificial neural networks in which connections between nodes form a sequence, allowing temporal dynamic behavior over a time series.
The GRU is like a long short-term memory (LSTM) with forget gate but has fewer parameters than LSTM, as it lacks an output gate. GRU’s performance on certain tasks of polyphonic music modeling, speech signal modeling, and natural language processing was found to be similar to that of LSTM. GRUs have been shown to exhibit even better performance on certain smaller and less frequent datasets.
The model we are going to implement is composed of an Embedding Layer, a Dropout layer to decrease the overfitting, a GRU layer, and the output layer as represented in the following diagram.
Neural Network architecture
On Kaggle we have GPUs available, and they are more efficient than CPUs when it comes to matrix multiplication and convolution, so we are going to use them here. There are some parameters common to both frameworks, and we define them below.
embedding_dim = 256
units = 1024
vocab_inp_size = len(inputs.word2idx)
target_size = len(target_tensor[0])
Tensorflow
In newer versions of Tensorflow, a deprecated method breaks Keras’s GPU detection, so a small patch is needed to use the GPU in the backend.
import tensorflow as tf
import keras.backend.tensorflow_backend as tfback
from keras import backend as K
def _get_available_gpus():
"""Get a list of available gpu devices (formatted as strings).
# Returns a list of available GPU devices.
"""
#global _LOCAL_DEVICES

if tfback._LOCAL_DEVICES is None:
devices = tf.config.list_logical_devices()
tfback._LOCAL_DEVICES = [x.name for x in devices]
return [x for x in tfback._LOCAL_DEVICES if 'device:gpu' in x.lower()]
tfback._get_available_gpus = _get_available_gpus

K.tensorflow_backend._get_available_gpus()
Here is the function for the model creation:
from keras.layers import Dense, Embedding, Dropout, GRU
from keras.models import Sequential
from keras import layers

def create_model():
model = Sequential()
model.add(Embedding(vocab_inp_size, embedding_dim, input_length=max_length_input))
model.add(Dropout(0.5))
model.add(GRU(units))
model.add(layers.Dense(5, activation='sigmoid'))
model.compile(loss='binary_crossentropy',optimizer='adam', metrics=['accuracy'])
return model
We also implement a callback function, so we can know the time spent in each epoch of the training.
class timecallback(tf.keras.callbacks.Callback):
def __init__(self):
self.times = []
# use this value as a reference to calculate cumulative time taken
self.timetaken = time.process_time()
def on_epoch_end(self,epoch,logs = {}):
self.times.append((epoch,time.process_time() -self.timetaken))
Now we can train the neural network in batches.
timetaken = timecallback()
history = model.fit(pd.DataFrame(X_train), y_train,
epochs=10,
verbose=True,
validation_data=(pd.DataFrame(X_val), y_val),
batch_size=64,
callbacks = [timetaken])
We train for 10 epochs, and the network already starts to overfit. The model reaches ~89% accuracy on the test set and takes ~74s/epoch during the training phase. The accuracy seems high, but when we take a closer look at the confusion matrix, we notice that the model struggles with the medium rates (between 2 and 4). The model falsely classifies 2 as 1 and 4 as 5, producing a high percentage of false positives.
Confusion matrix of the Tensorflow model
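The article does not show the code behind this figure, but here is one way such a confusion matrix could be produced, assuming the trained Keras model and the padded test split defined above:

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix

# Predicted class per review vs. the true class recovered from the one-hot labels
y_pred = np.argmax(model.predict(pd.DataFrame(X_test)), axis=1)
y_true = np.argmax(y_test, axis=1)

cm = confusion_matrix(y_true, y_pred)
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted rating')
plt.ylabel('True rating')
plt.show()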
PyTorch
PyTorch is not as straightforward: a deeper preparation of the data must be implemented before transforming it into tensors.
# Use the Dataset class to represent the dataset object
class MyData(Dataset):
def __init__(self, X, y):
self.data = X
self.target = y
self.length = [np.sum(1 - np.equal(x,0)) for x in X]
def __getitem__(self, index):
x = self.data[index]
y = self.target[index]
x_len = self.length[index]
return x, y, x_len
def __len__(self):
return len(self.data)
We create the MyData class and then wrap it with DataLoader for two reasons: organization, and avoiding compatibility issues in the future.
import torch
from torch.autograd import Variable
from torch.utils.data import Dataset, DataLoader

TRAIN_BUFFER_SIZE = len(X_train)
VAL_BUFFER_SIZE = len(X_val)
TEST_BUFFER_SIZE = len(X_test)
BATCH_SIZE = 64
TRAIN_N_BATCH = TRAIN_BUFFER_SIZE // BATCH_SIZE
VAL_N_BATCH = VAL_BUFFER_SIZE // BATCH_SIZE
TEST_N_BATCH = TEST_BUFFER_SIZE // BATCH_SIZE

train_dataset = MyData(X_train, y_train)
val_dataset = MyData(X_val, y_val)
test_dataset = MyData(X_test, y_test)
train_dataset = DataLoader(train_dataset, batch_size = BATCH_SIZE,
drop_last=True, shuffle=True)
val_dataset = DataLoader(val_dataset, batch_size = BATCH_SIZE,
drop_last=True, shuffle=True)
test_dataset = DataLoader(test_dataset, batch_size = BATCH_SIZE,
drop_last=True, shuffle=True)
Pytorch differs mainly from Tensorflow because it is a lower-level framework, which has upsides and drawbacks. The organizational schema gives the user more freedom to write custom layers and look under the hood of numerical optimization tasks. On the other hand, the price is verbosity, and everything must be implemented from scratch. Here we implement the same model as before.
import torch.nn as nn
class RateGRU(nn.Module):
def __init__(self, vocab_size, embedding_dim, hidden_units, batch_sz, output_size):
super(RateGRU, self).__init__()
self.batch = batch_sz
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.hidden_units = hidden_units
self.output_size = output_size
#layers
self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)
self.dropout = nn.Dropout(p=0.5)
self.gru = nn.GRU(self.embedding_dim, self.hidden_units)
self.fc = nn.Linear(self.hidden_units, self.output_size)
def initialize_hidden_state(self, device):
return torch.zeros((1, self.batch, self.hidden_units)).to(device)
def forward(self, x, lens, device):
x = self.embedding(x)
self.hidden = self.initialize_hidden_state(device)
output, self.hidden = self.gru(x, self.hidden)
out = output[-1, :, :]
out = self.dropout(out)
out = self.fc(out)
return out, self.hidden
After the model is implemented, we use the GPU in case it is available and write the loss function alongside the accuracy function to check the model performance.
use_cuda = True if torch.cuda.is_available() else False
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = RateGRU(vocab_inp_size, embedding_dim, units, BATCH_SIZE, target_size)
model.to(device)
#loss criterion and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
def loss_function(y, prediction):
target = torch.max(y, 1)[1]
loss = criterion(prediction, target)
return loss
def accuracy(target, logit):
target = torch.max(target, 1)[1]
corrects = (torch.max(logit, 1)[1].data == target).sum()
accuracy = 100. * corrects / len(logit)
return accuracy
Finally we are all set to train the model.
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
total_loss = 0
train_accuracy, val_accuracy = 0, 0
for (batch, (inp, targ, lens)) in enumerate(train_dataset):
loss = 0
predictions, _ = model(inp.permute(1, 0).to(device), lens, device)
loss += loss_function(targ.to(device), predictions)
batch_loss = (loss / int(targ.shape[1]))
total_loss += batch_loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
batch_accuracy = accuracy(targ.to(device), predictions)
train_accuracy += batch_accuracy
We also train for 10 epochs here, and the overfitting problem we faced before repeats itself. The accuracy is ~71%, but in terms of speed PyTorch wins by far with ~17s/epoch. The accuracy here is considerably lower, but this is misleading, because the confusion matrix is similar to the Tensorflow model’s and suffers from the same pitfalls.
Confusion matrix of the RateGRU
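The evaluation code is not shown in the article either; a minimal sketch of how the test accuracy behind this matrix could be computed, reusing the model, accuracy function and DataLoaders defined above, is:

model.eval()
test_accuracy = 0

with torch.no_grad():
    for (batch, (inp, targ, lens)) in enumerate(test_dataset):
        predictions, _ = model(inp.permute(1, 0).to(device), lens, device)
        # accuracy() returns a 0-dim tensor, so pull out the Python float
        test_accuracy += accuracy(targ.to(device), predictions).item()

print(f"Test accuracy: {test_accuracy / TEST_N_BATCH:.2f}%")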
Conclusion
Tensorflow and PyTorch are both excellent choices. As far as training speed is concerned, PyTorch outperforms Keras, but in terms of accuracy the latter wins.
I particularly find Tensorflow more intuitive and concise, not to mention the wide access to tutorials and reusable code. However, I am biased, because I have had more contact with Tensorflow so far. PyTorch is more flexible, encourages a deeper understanding of deep learning concepts, and counts on extensive community support with active development, especially among researchers.
How to Make Decisions as a Team (When That Team Keeps Growing)
Reframing decisions.
At their core, the decisions we make every day, both at home and at work are nothing more than bets. As much as we like to think we make decisions based on all the available information, this is rarely the case. Without all of the information, we are essentially betting on the outcome of whatever we decide based on the limited information we have.
We don’t like to admit this because we all want to think that if a decision is being made, especially in the world of business, the decision-maker is sure it will be right. In a startup, this is magnified as often you will be doing something disruptive, new or different, meaning that the answers will rarely be laid out in front of you. After all, you will never be 100% certain of a future that does not exist yet.
Over the last year in my current role, I have been part of a project to design and build an entirely new product that will be taking the company in a new direction. This has been an exciting project with lots of moving parts. With a team that was growing quickly around us, it was vital that, as the product team, we were able to communicate what we called our “comfortable uncertainty” to the wider organization.
Early on, it becomes easy for conversations to end up descending into “Do we even know x is going to work?” which was absolutely the wrong way to be thinking about delivering a new product, especially when trying to do something disruptive. This question is unhelpful in several ways but ultimately at its core lies the question “Are we 100% certain?” which as we know, is just not possible.
In a world where we are thinking in bets, the question we can ask ourselves is “Have we done enough to be confident in this decision”. By asking this, we can move the conversation away from a binary position and enable ourselves to have a more constructive discussion, where we can understand the actions and decisions we have made to get to where we are. This, however, has required a pretty significant shift in the fundamental culture of the organization, moving to a world where everyone understands and embraces the fact that the decisions we make as a team are moving us in a direction where we cannot possibly know all the answers. This becomes even more difficult as the team grows and new hires start to become more specialized coming from larger, more mature organizations where this way of thinking often goes against conventional wisdom.
Moving to comfortable uncertainty
In our experience, the first step in adopting this new way of thinking about decision making in your growing team is to train everyone to be comfortable in this uncertainty. This should absolutely be a trait you look for in new hires and something that should be preached internally. This is, however, easier said than done. Merely stating that the team now has to be more comfortable without knowing the answers will be seen by skeptics as passive management or poor planning. We have found it crucial to be able to give tangible evidence, a north star metric or a stick in the ground far in the future that people can look at. “We don’t know all the answers now, but we know this is where we are heading”.
Many people will always feel a certain way when they hear the word bet in this context. It implies that we, as an organization, are going to leave our decisions to chance or take wild stabs in the dark. This simply isn’t the case. Thinking in bets is all about how you can frame decision making; it is not “I bet this will work.” Like any bet, we must ask “How confident am I of my decision, and what is my threshold based on its importance?”. Larger decisions will, of course, require us to have more confidence, whereas there will be smaller, less destructive decisions that we can make quickly with less confidence. Framing it in this way reduces the need to find a definitive solution and instead forces us to allocate it the correct amount of effort relative to its importance. This not only allows us the ability to make more decisions faster but also helps us prioritize the most critical issues and make sure we are focusing on the right things. | https://medium.com/swlh/how-to-make-decisions-as-a-team-when-that-team-keeps-growing-a0636fe4a63 | ['Jamie Carr'] | 2019-12-13 18:01:01.437000+00:00 | ['Work', 'Leadership', 'Startup Lessons', 'Productivity', 'Startup'] |
Latest Social Media Marketing Trends in 2020
Digital Marketing
Embrace these Two Game Changing Social Media & Digital Marketing Trends to Take Your Customer Engagement to a Whole Different Level
With around 5 billion people using the internet and the number of active social media users touching a 4 billion mark, it is needless to say that social media has undoubtedly become an unbeatable platform for marketing. Companies from almost every industry are embracing this fact and are channelizing their efforts on levelling up their social media marketing practices and engagement with the customers.
Social media is a world in itself and is evolving faster than any other platform when it comes to using it for the marketing activities. In order to stay on top of their game, companies must embrace trends in social media as they come.
The following two most recent trends and best practices are driving customer engagement and helping companies get traction like never before:
1. Content is king, Context is Kingdom and Storytelling is The Royal Guard
Creating attractive and engaging content is no doubt one of the key factors of success when it comes to social media marketing. However, what is more important is the context in which this content is used. The same post cannot always be used on every social media channel a company relies on to drive its marketing activity. Different content should be curated for different social channels, targeting the different audiences on those channels. For example, Instagram is a great platform to drive customer engagement and awareness for a particular product. Viewers on Instagram are looking for entertainment and engagement, and they do not want to be pushed by an advertisement to make a purchase right away. Creating pushy ads for Instagram, for example, might backfire on a company’s marketing campaign. To make sure that the context and the content both resonate with the viewers seamlessly, social media updates or advertisements alone are not enough. A royal guard is needed. I consider Storytelling to be this royal guard. Storytelling allows a company to tell the story behind its products and services in a given context, allowing the content to breathe and flow seamlessly among consumers.
“Just as a king is no one without his kingdom, his people and his guards, Content is nothing without the Context and the Storytelling.”
Social media apps have understood the concept and importance of storytelling even before the companies looking to market their products on these apps have done so. Instagram, Facebook, Snapchat all are examples of the platforms which have integrated a story feature that allows anyone to upload stories, create engaging posts and polls, ask questions or start a discussion. It is a classic way of community building and the companies which can leverage this story feature would stay ahead of their competition all the way.
Marks and Spencer is a great example of a business that uses Instagram’s story feature very efficiently to drive customer engagement. Proof of Marks and Spencer’s successful use of the story feature is the fact that the following M&S enjoys is more an organic play than a paid-for one. This is because they produce great content specifically for their audience on Instagram and give this content a beautiful, breathable context with the help of visually engaging and attractive storytelling.
Heineken is yet another example of a brand that has got the basics right and mastered the social media marketing game on Instagram. Heineken used its sponsorship of the UEFA Champions league to not just increase sales penetration among loyal customers but also attract new digital customers.
Image captured by Vijeshwar Datt
The brand discovered that many of its consumers watching the UEFA Champions League, which it sponsors, were doing so through digital devices only, meaning they wouldn’t see activations taking place in so-called traditional media. It was found that 8 out of 10 people were following the game on Social media channels. Heineken leveraged this fact to their advantage by starting a UEFA campaign on Instagram using the story feature. This practice not only helped them connect with their existing consumers but also helped them attract new consumers who are die-hard fans of the game in general.
2. Augmented Reality is The New Cool in Marketing
Augmented reality allows brands to create one of a kind, immersive experiences which drive connection and brand-building opportunities. With the help of AR, brands are able to provide virtual tours, hold virtual events and enable customers to try their products virtually without leaving the comfort of their homes.
Nike has always been very innovative and open when it comes to embracing new marketing practices. They have incorporated AR in their marketing practices and customer engagement excellently. In July 2019, they integrated an AR feature in their app in the USA market which allows the customers to scan their feet and get the correct shoe size the first time, thus taking the guesswork out of buying shoes online. This feature is a great addition to Nike app and for the customers to get the correct shoe size and see how a particular shoe would look like on them.
Image Captured by Laura Chouette
Companies such as LVMH and Estee Lauder in the fashion industry could benefit from integrating AR in their marketing campaigns thus providing the buyers a unique experience of trying out clothes, accessories and beauty products in a virtual setting and providing them with a unique shopping experience.
EA Sports in the gaming industry is also leveraging augmented reality to engage with gamers and provide them with an unforgettable experience. They are using AR to give gamers an unmatched gaming experience and, in doing so, creating buzz around the brand. Electronic Arts CEO Andrew Wilson even said, back in 2017, that AR is a more interesting experience for gamers than VR.
To make the most out of social and online media, companies must embrace the new trends as they come and should be able to take risks in these campaigns.
Trading Dashboard with Yfinance & Python.
Beginner level coding with advanced techniques.
Table of Contents:
Pull Data with Yfinance Api
Set the Short and Long windows (SMA)
Generate trading signals
Plot Entry/Exit points
Backtest
Analyze Portfolio metrics
Serve Dashboard
Introduction
To begin, let’s first understand the goal of this article, which is to provide the average retail investor with a quick and easy way to pull live data, use that data to highlight key indicators and create a nice clean readable table before investing in a particular company(s).
This process will help you take emotion out of the equation and give you enough information to make informed decisions.
Substitute any stock ticker you would like at the bottom of the code block:
# Import libraries and dependencies
import numpy as np
import pandas as pd
import hvplot.pandas
from pathlib import Path
import yfinance as yf

# Cloudflare
net = yf.Ticker("net")
net

# Set the timeframe you are interested in viewing.
net_historical = net.history(start="2018-01-2", end="2020-12-11", interval="1d")

# Create a new DataFrame called signals, keeping only the 'Date' & 'Close' columns.
signals_df = net_historical.drop(columns=['Open', 'High', 'Low', 'Volume', 'Dividends', 'Stock Splits'])
Moving Averages:
Next, we want to create columns for the short and long windows, also known as the simple moving averages. In this case, we will be using the 50-day and the 100-day averages.
In the code below we will need to set the trading signals as 0 or 1. This will tell python at which points we should Buy or Sell a position.
Keep in mind that when the SMA50 crosses above the SMA100 (acting here as a resistance level), this is a bullish breakout signal.
# Set the short window and long windows
short_window = 50
long_window = 100

# Generate the short and long moving averages (50 and 100 days, respectively)
signals_df['SMA50'] = signals_df['Close'].rolling(window=short_window).mean()
signals_df['SMA100'] = signals_df['Close'].rolling(window=long_window).mean()
signals_df['Signal'] = 0.0 # Generate the trading signal 0 or 1,
# where 0 is when the SMA50 is under the SMA100, and
# where 1 is when the SMA50 is higher (or crosses over) the SMA100
signals_df['Signal'][short_window:] = np.where(
signals_df['SMA50'][short_window:] > signals_df['SMA100'][short_window:], 1.0, 0.0
) # Calculate the points in time at which a position should be taken, 1 or -1
signals_df['Entry/Exit'] = signals_df['Signal'].diff()

# Print the DataFrame
signals_df.tail(10)
The third step towards building our dashboard is creating a chart with green and red signal markers for Entry / Exit indicators.
Plotting the Moving Averages with HvPlot:
# Visualize exit position relative to close price
exit = signals_df[signals_df['Entry/Exit'] == -1.0]['Close'].hvplot.scatter(
color='red',
legend=False,
ylabel='Price in $',
width=1000,
height=400
)

# Visualize entry position relative to close price
entry = signals_df[signals_df['Entry/Exit'] == 1.0]['Close'].hvplot.scatter(
color='green',
legend=False,
ylabel='Price in $',
width=1000,
height=400
)

# Visualize close price for the investment
security_close = signals_df[['Close']].hvplot(
line_color='lightgray',
ylabel='Price in $',
width=1000,
height=400
)

# Visualize moving averages
moving_avgs = signals_df[['SMA50', 'SMA100']].hvplot(
ylabel='Price in $',
width=1000,
height=400
)

# Overlay plots
entry_exit_plot = security_close * moving_avgs * entry * exit
entry_exit_plot.opts(xaxis=None)
Next, we will set an initial investment stake of capital and set the number of shares. For this example, let’s say we want to buy 500 shares of Cloudflare.
# Set initial capital
initial_capital = float(100000)

# Set the share size
share_size = 500

# Take a 500 share position where the dual moving average crossover is 1 (SMA50 is greater than SMA100)
signals_df['Position'] = share_size * signals_df['Signal']

# Find the points in time where a 500 share position is bought or sold
signals_df['Entry/Exit Position'] = signals_df['Position'].diff()

# Multiply the share price by the entry/exit positions and take the cumulative sum
signals_df['Portfolio Holdings'] = signals_df['Close'] * signals_df['Entry/Exit Position'].cumsum()

# Subtract the cumulative cost of the positions from the initial capital to get the liquid cash in the portfolio
signals_df['Portfolio Cash'] = initial_capital - (signals_df['Close'] * signals_df['Entry/Exit Position']).cumsum()

# Get the total portfolio value by adding the cash amount to the portfolio holdings (or investments)
signals_df['Portfolio Total'] = signals_df['Portfolio Cash'] + signals_df['Portfolio Holdings']

# Calculate the portfolio daily returns
signals_df['Portfolio Daily Returns'] = signals_df['Portfolio Total'].pct_change()

# Calculate the cumulative returns
signals_df['Portfolio Cumulative Returns'] = (1 + signals_df['Portfolio Daily Returns']).cumprod() - 1

# Print the DataFrame
signals_df.tail(10)
Visualize the Exit positions relative to our portfolio:
# Visualize exit position relative to total portfolio value
exit = signals_df[signals_df['Entry/Exit'] == -1.0]['Portfolio Total'].hvplot.scatter(
color='red',
legend=False,
ylabel='Total Portfolio Value',
width=1000,
height=400
)

# Visualize entry position relative to total portfolio value
entry = signals_df[signals_df['Entry/Exit'] == 1.0]['Portfolio Total'].hvplot.scatter(
color='green',
legend=False,
ylabel='Total Portfolio Value',
width=1000,
height=400
)

# Visualize total portfolio value for the investment
total_portfolio_value = signals_df[['Portfolio Total']].hvplot(
line_color='lightgray',
ylabel='Total Portfolio Value',
width=1000,
height=400
)

# Overlay plots
portfolio_entry_exit_plot = total_portfolio_value * entry * exit
portfolio_entry_exit_plot.opts(xaxis=None)
# Prepare DataFrame for metrics
metrics = [
'Annual Return',
'Cumulative Returns',
'Annual Volatility',
'Sharpe Ratio',
'Sortino Ratio']

columns = ['Backtest']

# Initialize the DataFrame with index set to evaluation metrics and column as `Backtest` (just like PyFolio)
portfolio_evaluation_df = pd.DataFrame(index=metrics, columns=columns)
Perform Backtest:
In this section we will look at five (🖐🏼) indicators.
1. Cumulative return: the return on the investment in total.
2. Annual return: the return on investment received that year.
3. Annual volatility: daily volatility times the square root of 252 trading days.
4. Sharpe ratio: measures the performance of an investment compared to a risk-free asset, after adjusting for its risk.
5. Sortino ratio: differentiates harmful volatility from total overall volatility by using the asset's standard deviation of negative portfolio returns (downside deviation) instead of the total standard deviation of portfolio returns.
# Calculate cumulative return
portfolio_evaluation_df.loc['Cumulative Returns'] = signals_df['Portfolio Cumulative Returns'][-1]

# Calculate annualized return
portfolio_evaluation_df.loc['Annual Return'] = (
    signals_df['Portfolio Daily Returns'].mean() * 252
)

# Calculate annual volatility
portfolio_evaluation_df.loc['Annual Volatility'] = (
    signals_df['Portfolio Daily Returns'].std() * np.sqrt(252)
)

# Calculate Sharpe Ratio
portfolio_evaluation_df.loc['Sharpe Ratio'] = (
    signals_df['Portfolio Daily Returns'].mean() * 252) / (
    signals_df['Portfolio Daily Returns'].std() * np.sqrt(252)
)

# Calculate Downside Return
sortino_ratio_df = signals_df[['Portfolio Daily Returns']].copy()
sortino_ratio_df.loc[:, 'Downside Returns'] = 0

target = 0
mask = sortino_ratio_df['Portfolio Daily Returns'] < target
sortino_ratio_df.loc[mask, 'Downside Returns'] = sortino_ratio_df['Portfolio Daily Returns']**2
portfolio_evaluation_df

# Calculate Sortino Ratio
down_stdev = np.sqrt(sortino_ratio_df['Downside Returns'].mean()) * np.sqrt(252)
expected_return = sortino_ratio_df['Portfolio Daily Returns'].mean() * 252
sortino_ratio = expected_return / down_stdev

portfolio_evaluation_df.loc['Sortino Ratio'] = sortino_ratio
portfolio_evaluation_df.head()
# Initialize trade evaluation DataFrame with columns.
trade_evaluation_df = pd.DataFrame(
columns=[
'Stock',
'Entry Date',
'Exit Date',
'Shares',
'Entry Share Price',
'Exit Share Price',
'Entry Portfolio Holding',
'Exit Portfolio Holding',
'Profit/Loss']
)
Loop through DataFrame, if the ‘Entry / Exit’ trade is 1, set Entry trade metrics.
If `Entry/Exit` is -1, set exit trade metrics and calculate profit.
Append the record to the trade evaluation DataFrame.
# Initialize iterative variables
entry_date = ''
exit_date = ''
entry_portfolio_holding = 0
exit_portfolio_holding = 0
share_size = 0
entry_share_price = 0
exit_share_price = 0
for index, row in signals_df.iterrows():
if row['Entry/Exit'] == 1:
entry_date = index
entry_portfolio_holding = abs(row['Portfolio Holdings'])
share_size = row['Entry/Exit Position']
entry_share_price = row['Close']

elif row['Entry/Exit'] == -1:
exit_date = index
exit_portfolio_holding = abs(row['Close'] * row['Entry/Exit Position'])
exit_share_price = row['Close']
profit_loss = entry_portfolio_holding - exit_portfolio_holding
trade_evaluation_df = trade_evaluation_df.append(
{
'Stock': 'NET',
'Entry Date': entry_date,
'Exit Date': exit_date,
'Shares': share_size,
'Entry Share Price': entry_share_price,
'Exit Share Price': exit_share_price,
'Entry Portfolio Holding': entry_portfolio_holding,
'Exit Portfolio Holding': exit_portfolio_holding,
'Profit/Loss': profit_loss
},
ignore_index=True)
PLOT RESULTS:
price_df = signals_df[['Close', 'SMA50', 'SMA100']]
price_chart = price_df.hvplot.line()
price_chart.opts(title='Cloudflare', xaxis=None)
Final Step: Print Dashboard
portfolio_evaluation_df.reset_index(inplace=True)
portfolio_evaluation_table = portfolio_evaluation_df.hvplot.table()
portfolio_evaluation_table
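To go one step further and actually serve everything as a single dashboard rather than separate notebook cells, one option (an assumption on my part, since the article stops at the table) is to compose the hvPlot objects with Panel, the library hvPlot is built on:

import panel as pn

pn.extension()

# Arrange the signal chart, the portfolio chart and the metrics table into tabs
dashboard = pn.Tabs(
    ("Price & Signals", entry_exit_plot),
    ("Portfolio Value", portfolio_entry_exit_plot),
    ("Metrics", portfolio_evaluation_table),
)

# In a notebook this displays inline; `panel serve` turns the script into a web app
dashboard.servable()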
Thanks for reading!
If you found this article useful, feel welcome to download my personal codes on GitHub. You can also email me directly at scottandersen23@gmail.com and find me on LinkedIn. Interested in learning more about data analytics, data science and machine learning applications? Follow me on Medium.
R.I.P. Dangerfields: The oldest comedy club in the world. (1969–2020)
Photo ©Copyright 2020 Jason Chatfield
It may have been a bit of a shithole, but it was my shithole.
No Respect, I tell ya.
October 14, 2020
When I moved to New York 6 years ago, I had a notebook with 7 years worth of jokes in it that I’d been performing in Australia. None of them worked in New York. I flushed my notebook down the toilet at the Ludlow Hotel, blocked the toilet and fled as the water rose and flooded the bathroom. A worthy death for a such tired, dreadful material.
Over the following 3 years, I went out every night of the week, did 3–4 spots each night and worked up a new hour of material (ok, 35 minutes of actually decent material, 25 minutes of B and C-grade material.)
I got a manager, an agent, started booking casinos, clubs and doing TV commercials and shows. It was a lot of work, and it didn’t once feel like it. I loved every minute of going out there and building an act.
Auditioning to get ‘passed’ at clubs was nerve-wracking, but I managed to get my foot in the door at a few ground-level places to cut my teeth at some late-night spots. (Getting passed is getting approved to be put on their regular roster of comics. You send in your avails to the booker each week and they give you times/shows that you’ll be on that week. The hardest club to get passed at is The Comedy Cellar; the best club in New York.)
The first club I got ‘passed’ at was called LOL. It wasn’t so much a comedy club as a converted sex dungeon in Times Square with a cheap vinyl banner that said ‘LOL STANDUP COMEDY’ on it. It had two separate rooms inside running concurrent shows every night, filled with people from the mid-west who, 15 minutes earlier, had been told they were about to see Chris Rock, Louis CK and Tina Fey (not a stand-up comic).
As you can imagine, by the time my schlubby face got up on stage, they had realised how badly they’d been screwed and every night there were people asking for their money back. One time the booker got punched in the face by an angry punter.
I would perform there 2–3 nights a week, sometimes 3 or 4 shows in a night, 10–15 minutes apiece. Sometimes I’d be hosting, other times I’d close out the show. We’d do shows every night. In 100 degrees or in the middle of a blizzard. Working that club taught me to deal with hostile audiences and how to digest uncooked hotdogs. Before long, new management came in and I was turfed out the door along with a swag of other comics who had been working there since it opened.
It was at that point a booker I was working with at Broadway Comedy Club put me up to audition for Dangerfields. He’d been producing outside shows for both clubs and threw me up with a few other comics for consideration. I passed.
Within a month I was performing there 2–3 times a week, and booking road gigs at Casinos through their management company. It was my new home club. | https://medium.com/sketchesbychatfield/r-i-p-dangerfields-the-oldest-comedy-club-in-the-world-1969-2020-8c627eff0f08 | ['Jason Chatfield'] | 2020-10-14 21:15:49.292000+00:00 | ['Comedy', 'Humor', 'New York', 'Dangerfields', 'Writing'] |
The Grand Master — Short Story.
Photo by Jeswin Thomas from Pexels
“For my entire life, I have moved along a path that was set for me. It was as if I was being thought through rather than actually producing these thoughts. After many years of reflection and guilt, I have decided to explain it to you. For those listening to this speech, it will come as quite a surprise that this is the truth. I have achieved a great many things in all of my years. I have contributed much to the progress of science and the understanding of the cosmos. All of the awards I received in the past decades haven’t phased my resolve, but with the honor bestowed upon me by the Nobel committee, I cannot continue this sham any longer. I stand here in the beautiful Stockholm Concert Hall, thanking you for my award in physics, but I must speak the truth. I’ve been a pawn in the game of the Grand Master, and now I reveal it to the world.”
Around me and before me sat some of the most renowned scientists in the world. All were adorned with the most expensive tuxedos, the most ornate gowns. The Swedish Royal Family sat at my left and whispered to one another. Queen Silvia’s nine pronged tiara glistened in the stage lights. The stare laid upon me by King Gustaf penetrated my very soul. I could see it in his eyes. I mustn’t speak the truth. He knew as well as me how the chess game worked. I likely would not be able to finish my speech, but I don’t expect they were prepared for this. I continued.
“Where to begin? Ah yes, on a quiet spring day in 1975, I was sitting with friends around a campfire on a weekend getaway from MIT. We discussed many topics in our escape from the realities of collegiate life. There were no professors to demand changes to our theses, no modifications of our arguments. We were free to think without the guide rails of the intellectual enforcers. At least, that is what I thought. If I were to know what would happen in those trees, I wouldn’t have gone,” I paused, eying the room for feedback. Everyone seemed slightly uncomfortable. “I realized that my friends had already joined the game, and I was the new initiate. The promise of prestige and power to a young man was hard to pass on. It was in this dark forest that I became enslaved to the Grand Master.”
The room became even more uncomfortable, as they didn’t know if I was going to make the final revelation or was simply being comedic and sarcastic. That was my usual style of communication. I quite enjoyed making the room squirm. I could see it as they whispered amongst themselves and stiffened in their seats. It was time to say it.
“At first, I didn’t understand what was happening, was I hallucinating? Was I drugged? As I grew older, I understood what happened that day. I have spoken with many of you about such a subject — I won’t name any names. Many of us, if not all, were initiated into the grand chess game. Each is assigned a role on the board, this board being quite more complex than what one would traditionally think of as chess, and as long as one plays their rank, they can continue, take part in the winnings, and live a decadent life. We all are connected to the stream of the Grand Master, and we do as we’re told. But not today. Not anymore!”
King Gustaf stood up, “Oh, what a laugh you are making, Dr. Wilson,” he said in his thick Swedish accent.
“You highness, it is quite a laugh, but please, let me continue with my speech.”
The King didn’t know how to react, but I could see the guards at the back of the auditorium being radioed. I was sure to be hastened off of the stage in a moment, but I had to continue.
“For those of you watching the live stream, know this — everything that I have discovered was given to me from the stream of thought of the Grand Master. I was a willing servant for most of my life, but now I must reveal the truth behind it. Those on this stage will likely call me mad, delusional, or some variation of the words. Science was not built by a series of geniuses, but by intellectual slaves, and I am tired of being a vessel from which ideas emerge.”
A security guard from across the room began to make his way to the stage. I only had moments left.
“Who is the Grand Master, you may ask? I’ve spent most of my life trying to find that answer, but I’ve realized that I won’t be able to. This is why I’ve made my stand here, on this prestigious stage. A madman cannot make it to this stage, save for John Nash. He knew what I know now. He couldn’t live with himself, just as I cannot any longer. Those watching must achieve what I could not in my life. You must find the Grand Master. You must!”
The security guard was now at my back, and I was escorted off of the stage. I didn’t know what would happen from this moment on, but I expected I would be placed in an institution for the remainder of my days. Luckily, I didn’t have many remaining. I did this for all of those who had to live with this secret. I did this for John, who played the grand chess game and was broken by it. It breaks so many. He breaks so many. It’s now up to the next generation to transcend the game.
An Idiot With a Plan Can Beat a Genius With Hope
Is it just me or does it seem like the quiet ones that you least expected to succeed in high school are the ones now living the lives we all dreamt of heading into adulthood?
I mean, I guess I heard the idea of people "peaking" early — I just didn't have any true understanding of what that meant when I was younger.
Listen, I'm not trying to prop myself or anyone up in writing this — it's just something that's been on my mind.
I wasn't a part of the "popular" kids growing up. I had my friends and we were always kind of just doing our thing. Then — into adulthood, I moved away from home, "peeked" at what others from my graduating high school class were up to, kept my head down, and got to work.
Now, I'm making a very healthy living doing what I love to do and helping build a community of other badasses achieve the same for themselves.
It got me thinking about all the other weird kids I grew up with and how I notice a lot of them are doing really rad things in life. Of course, I'm able to see some of my own confirmation bias here — it isn't a perfect generalization; however, I do think there is a trend.
I don't think I'm anything special.
Of course, I know that me being me is special — just like you being you is special.
What I mean is I don't have any special skills, knowledge or education that allowed me to accomplish what I have and I know there are millions of other people who fall into that camp as well.
In this piece, I'm going to go over the plan and share the strategy I have followed over the years that have helped me build, scale, and grow my online business so you can too. | https://medium.com/the-ascent/an-idiot-with-a-plan-can-beat-a-genius-with-hope-bf60faa4b3bd | ['Jon Brosio'] | 2020-10-17 13:03:08.498000+00:00 | ['Blogging', 'Motivation', 'Entrepreneurship', 'Life', 'Self Improvement'] |
A Concise Guide to Remember More of What You Read | 1. Start What You Can Finish
Before you pick up a book, use what I’d like to call the “Three-Pronged Questionnaire”:
What do I want to learn or read? They can be categories such as fiction/non-fiction, self-help, politics, science, relationships, cooking, etc.
Why am I reading this?
What do I hope to get out of this book?
To help you with that, going through the table of contents, book summary, and reviews gives you a wonderful sense of what that book is all about.
The list of questions above serves to ensure you’re reading a book that will pique your interest in the long run. If you’re likely to enjoy the purpose of the book, you’ll make an effort to understand the context of what the author has written.
As one of my teachers used to put it, “If you study to remember, you’ll forget. If you study to understand, you’ll remember.” — Which do you remember better: the content in your history textbook or the logic behind why 5+6 = 11?
2. Annotate (The Messy Way)
Scribbling notes on books is not something new, but the way you're making notes on them makes a difference.
Merely underlining or highlighting an excerpt or an idea in the book isn’t effective in imprinting those words in your memory. Instead, I’d highlight a specific sentence or paragraph of ideas, draw a curly bracket beside it, and rephrase it in my own words.
Doing this not only summarises the key points the author is trying to convey, but it also deepens your understanding. It’s the same way of telling someone what you’re trying to remember except you’re doing it for yourself.
Putting something in your own words helps you retrieve that information later on.
Don’t believe me? Try explaining the process of evaporation and revisit the concept a day or two later.
Put tabs on the first page
Sometimes, I’d also annotate striking ideas on the first page of the book. It’s usually filled with a title in the middle of a blank page, so there are tons of spaces left for me to write.
That first page is where I'll write a short subtitle along with the page number of a concept that appealed to me. Whenever I want to refresh my memory on some grand ideas or lessons listed in the book, I just need to turn to the noted page.
3. Create Your Encyclopedia of Book Summaries
Much of my advice here takes a great deal of work on your part, but summarising each chapter and its takeaways helps in remembering what you read.
I use Notion to compile all book summaries I’ve written. You may not be carrying your pile of books all the time so having an app like this allows you to retrieve information wherever you go.
Here are some useful functions that helped me organise my notes better:
Collapsable drop lists — useful for parking a chunk of the information under the main header of a chapter
Underline, bold, and italicise functions to emphasise various key concepts
Colour tags to differentiate the categories of books you've read
Embed web links, images, or videos — could be book reviews or summaries by others online that you find useful
Another bonus tip I’d like to share is a compilation of my favourite websites to visit for concise book summaries: | https://medium.com/the-innovation/a-concise-guide-to-remember-what-you-read-16d651f64132 | ['Charlene Annabel'] | 2020-12-22 11:05:43.406000+00:00 | ['Books', 'Reading', 'Productivity', 'Productivity Hacks', 'Self Improvement'] |
Post-modern rock-pooling | This piece was originally published in Mediaview, Geology Today — a publication of the Geological Society of London and the Geologists’ Association.
Stygobites — Niphargus aquilex (Image: Chris Proctor).
Gazing into nature's aquarium, a replica of life in the distant ocean, rock pools show us a glimpse of the marine world of crabs, shrimps and all manner of crustaceans jostling for life in their aquatic domain. But is the coast as far as they venture? The holidaying shores of the seaside may hold the classic rock-pool, but a similar crustacean abundance exists unknown beneath our dry, clad feet. Deep within our inland geology, a rich biodiversity of crustaceans is only just beginning to be unearthed, living squeezed into the tiny nooks and crannies carved into the subterranean landscape.
The creatures of these unseen depths are known as stygobites, and after so long buried beneath ground have become like the ghosts of their more marine counterparts. Their bodies have become wraith-like; sightless and an eerie white, while they have sprouted further‐reaching limbs and antennae for fumbling around in the rocky crevices. These are not for catching in seaside fishing nets and buckets, but inhabit a unique ecological and geological niche, after a dual ancestry arising from both freshwater and marine animals. A suitably unique home is formed by deep underground hydro-geology, as groundwater erodes extensive submerged channels that permeate the land. They are found throughout these clandestine networks; from sparse, thin rock fissures to the deep aquifers in chalk, limestone and other rock strata, and even in the infinitesimal liquid spaces between the gravel grains of riverbeds.
Stygobites — Niphargus glenniei (Image: Andy Lewington)
Although also recorded in cave pools, it is thought that stygobites are native to the isolated channels of phreatic water (ground water below the water table) deep in rock beds, and only by flooding and heavy rains are they brought into the fringes of our world, as they are flushed out into cave and river systems. Despite their apparent isolation, this aquatic subterranean habitat has enabled stygobites to become relatively widespread throughout the subsurface, and they are wider ranging than the related troglobites (which are terrestrial, as opposed to the aquatic stygobites). This is likely due to the dynamics of water associated with flooding, allowing stygobites to disperse and spread in range. If you want to know which rocks beneath your feet may hold this secret life, research has shown that stygobites appear to favour fissured, carbonate strata, which may be due to such rock providing the most fitting basis for their habitat.
Stygobites are not just fascinating largely unknown creatures in our landscape, but have wider implications, from revealing more about biogeochemical processes deep in continental geology, to acting as indicators for the condition of subsurface waters and our increasing impact on them through aquifer drainage as a water resource. Even with their natural secrecy and our predominant ignorance of them, it seems that even these remote creatures can’t escape the global anthropogenic changes to the Earth. It is thought that the stygobites’ adaptations for stable aquatic environments (such as long life‐cycles and slower egg development) may not withstand a modernity of farmed aquifers, where water levels must follow the rhythm of mankind’s insatiably thirsty lifestyle. Stygobites attempting to survive such altered environments may migrate or simply decline; becoming dormant under such stressors. This unfortunate trend may however also provide a hidden benefit; as such effects may be utilizable as biomarkers of pollution or climate change.
Knowledge of our geological past can also be gleaned from these unassuming creatures, such as our past climate through the distribution of the stygobite species. As past glaciers froze vast areas of the land surface, the stygobites’ ecosystem was deprived of nutrients and water; starving their population and leaving gaps in their distribution that still remain today, although recent research also suggests the survival of groups of stygobites from previously glaciated areas in other parts of the world, such as Canada and Ireland. Studies in everything from the micro‐structure of groundwater channels and aquifers, to large, extensive geological changes, and even to our past climate, can be advanced through a better understanding of these hidden rock-poolers.
(Readers can find additional information in: Lamoreux, J., Journal of Cave and Karst Studies, 2004, v.66, pp.18–19; and, Roberston, A.L., et al., 2009. The distribution and diversity of stygobites in Great Britain: an analysis to inform groundwater management. Quarterly Journal of Engineering Geology and Hydrogeology, v.42, pp.359–368.) | https://medium.com/swlh/post-modern-rock-pooling-2f6b59eb65e8 | ['Georgia Melodie Hole'] | 2020-05-18 21:30:56.849000+00:00 | ['Creative Writing', 'Geology', 'Science', 'Wildlife', 'Writing'] |
Visualising COVID19 | Visualising COVID19
Analysis of coronavirus from an Epidemic to Pandemic
Photo by Markus Spiske on Unsplash
Coronavirus was first identified in the Wuhan region of China by December 2019 and by March 11, 2020, the World Health Organization (WHO) categorised the COVID-19 outbreak as a pandemic. A lot has happened in the months in between with major outbreaks in Iran, India, the United States, South Korea, Italy and many more countries.
We know that COVID-19 spreads through respiratory droplets, such as through coughing, sneezing, or speaking. But this is an approach to visualise how quickly did the virus spread across the globe and, how did it take the form of a massive pandemic from an outbreak in China!
This is an attempt to visualise COVID-19 data from the first several weeks of the outbreak to see at what point this virus became a global pandemic, and finally to visualise its numbers across some severely hit nations.
The data used for visualisations have been collected from the publicly available data repository created by Johns Hopkins University's Center for Systems Science and Engineering. Firstly, we use data till 17th March 2020, covering the first several weeks of the outbreak, to see at what point this virus became a global pandemic.
A. Importing the Dataset and required Libraries
Loading the readr, ggplot2 and dplyr packages in R. Reading the data for confirmed cases from datasets/confirmed_cases_worldwide.csv using the read_csv function and assigning it to the variable confirmed_cases_worldwide.
B. First Glance at the Data by Plotting the Confirmed Cases Throughout the World
The data above shows the cumulative confirmed cases of COVID-19 worldwide by date. Just reading numbers in a table makes it hard to get a sense of the scale and growth of the outbreak. Hence, drawing a line plot to visualise the confirmed cases worldwide.
Using confirmed_cases_worldwide, drawing a ggplot with aesthetics cum_cases (y-axis) versus date (x-axis) and ensuring it is a line plot by adding line geometry. Setting the y-axis label to “Cumulative Confirmed Cases”
C. Comparing China to the Rest of the World
The y-axis in that plot indicated a very steep rise, with the total number of confirmed cases around the world reaching approximately 200,000 by 17th March 2020.
Beyond that, some other things can also be concluded: there is an odd jump in mid-February, then the rate of new cases slows down for a while, then speeds up again in March. Early on in the outbreak, the COVID-19 cases were primarily centred in China. Hence, plotting confirmed COVID-19 cases in China and the rest of the world separately to see if it gives us any insight.
Reading in the dataset for confirmed cases in China and the rest of the world from datasets/confirmed_cases_china_vs_world.csv, assigning to confirmed_cases_china_vs_world. Using glimpse() to explore the structure of confirmed_cases_china_vs_world. Drawing a ggplot of confirmed_cases_china_vs_world, and assigning it to plt_cum_confirmed_cases_china_vs_world. Adding a line layer. Adding aesthetics within this layer: date on the x-axis, cum_cases on the y-axis, and then grouping and coloring the lines by is_china.
D. Annotation
We can observe that the two lines have very different shapes. In February, the majority of cases were in China. That changed in March when it really became a global outbreak: around March 14, the total number of cases outside China overtook the cases inside China. This was days after the WHO declared a pandemic.
There were a couple of other landmark events that happened during the outbreak. For example, the huge jump in the China line on February 13th, 2020, wasn’t just a bad day regarding the outbreak; China changed the way it reported figures on that day (CT scans were accepted as evidence for COVID-19, rather than only lab tests).
By annotating events like this, we can better interpret changes in the plot, hence modifying the plt_cum_confirmed_cases_china_vs_world as follows:
E. Adding a Trend Line to Chinese Cases
To get a measure of how fast the number of cases in China grew, we need to add a trend line to the plot of Chinese cases. A good starting point was to see if the cases grew faster or slower than linearly.
We can see there is a clear surge of cases around February 13, 2020, with the reporting change in China. However, a couple of days later the growth of cases in China slows down, and to describe COVID-19's growth in China after February 15, 2020, we added this trend line.
Filtering rows of confirmed_cases_china_vs_world for observations of China where the date is greater than or equal to “2020–02–15”, and assigning it to china_after_feb15. Using china_after_feb15, drawing a line plot of cum_cases versus date. Adding a smooth trend line, calculated by using the linear regression method, without the standard error ribbon.
F. Adding a Trend Line to Rest of the World Cases
From the plot above, the growth rate in China is slower than linear, which indicates that China had at least somewhat contained the virus in late February and early March. Now, we similarly compare the growth of cases across the globe.
Filtering rows of confirmed_cases_china_vs_world for observations of Not China, and assigning them to not_china. Using not_china, drawing a line plot of cum_cases versus date, and assigning it to plt_not_china_trend_lin. Adding a smooth trend line, calculated by using the linear regression method, without the standard error ribbon.
G. Adding a Logarithmic Scale to the Trend for Rest of the World
From the plot above, we can see a straight line does not fit well at all, and the rest-of-the-world cases grew much faster than linearly. Hence, we try adding a logarithmic scale to the y-axis to check if the rise is exponential.
Modifying the plot, plt_not_china_trend_lin, to use a logarithmic scale on the y-axis.
H. Countries outside of China which have been hardest hit by COVID19
With the logarithmic scale, we get a much closer fit to the data. From a data science point of view, a good fit is great news. But unfortunately, from a public health point of view, it meant that cases of COVID-19 in the rest of the world grew at an exponential rate, which is quite evident today.
Not all countries are being affected by COVID-19 equally, and it would be helpful to know where in the world the problems were the greatest. Hence, to find the countries outside of China with the most confirmed cases in our dataset, data on confirmed cases by country was imported. Chinese data has been excluded to focus on the rest of the world.
Looking at the output of glimpse() to see the structure of confirmed_cases_by_country. Using confirmed_cases_by_country, we group by country, summarise to calculate total_cases as the maximum value of cum_cases, and get the top seven rows by total_cases.
I. Plotting the Hardest Hit Countries as of Mid-March 2020
Even though the outbreak was first identified in China, there is only one country from East Asia (South Korea) in the above table. Four of the listed countries (France, Germany, Italy, and Spain) are in Europe and share borders. To get more context, we can plot these countries’ confirmed cases over time.
Reading in the dataset for confirmed cases in China and the rest of the world from datasets/confirmed_cases_top7_outside_china.csv, and assigning it to confirmed_cases_top7_outside_china and Using glimpse() to explore the structure of confirmed_cases_top7_outside_china. Using confirmed_cases_top7_outside_china, drawing a line plot of cum_cases versus date, grouped and colored by country and setting the y-axis label to “Cumulative Confirmed Cases”.
J. Plotting the Hardest Hit Countries as of Today
Now, in order to analyse the hardest-hit countries as of today, we have to import fresh data updated to today, i.e. 28th June 2020; hence we use the dev version of the coronavirus library from GitHub, which is updated on a daily basis.
Conclusion
From the above analysis, we can conclude the timing of the shift of the virus from being an epidemic in Wuhan, China to becoming a worldwide pandemic. We can also observe a significant increase in the rise of cases in China after mid-February, due to improvements in testing and to CT scans being accepted as evidence of the coronavirus. Also, using a regression trend we can clearly see the exponential rise of the cases across the world, and at last we can visualise the countries which are severely hit by the virus in today's time. This has been an effective way to study the growth in the number of cases across the world, and especially in China, through visualisations in R using the readr, ggplot2 and dplyr libraries.
Looking at India specifically, it currently ranks 4th in the number of confirmed cases, and the number of cases is increasing day by day. Even though the government is being negligent and removing the lockdown, one must keep in mind that the virus has not been eradicated, and should maintain social distancing norms while giving utmost priority to one's hygiene. | https://medium.com/analytics-vidhya/visualising-covid19-d3577ebee496 | ['Kartikay Laddha'] | 2020-07-04 16:43:36.588000+00:00 | ['Data Science', 'Business Analysis', 'Coronavirus', 'Visualization', 'Covid 19'] |
Why your Kubernetes configuration strategy is broken… | ...and here’s how to fix it
At kapitan.dev we believe the current way to manage Kubernetes configurations is broken. Actually, it probably goes even deeper than that, and you will see soon why.
I have a strange way to look at Kubernetes: for me Kubernetes is something that allows me to define, package and distribute complex applications with a set of configuration files. Meaning, I can define every aspect of the deployment of a complex multi-components application with Kubernetes resources: the services that make the application, configuration files, network policies, services, load balancers, RBAC, monitoring, alerting, auth.
Everything can be captured, packaged and distributed using declarative Kubernetes resource definitions.
And yet, when we use tools like helm and kustomize, we tend to focus on one specific component at the time, effectively losing the big picture of what Kubernetes really is all about.
To draw a parallel with “old” tech, we are ~6 years into the age of Kubernetes, which completely disrupted the way we deploy our services, and yet we are still at the “rpm” or “deb” stage. To be fair, at least helm gets closer to a “yum” or “apt-get”, but doesn’t go much further than that.
Let me give you some examples of where these traditional approaches fall short.
Example 1: Adding a new component to your infrastructure
Imagine that you just created a new component/service and you want to deploy it to your infrastructure. Fine, get your helm/kustomize configuration and deploy it, right? Pronto! Presto!
But here is the “expectation vs reality” moment: Life has a way to get to you, and for the new component to work, you also need to:
Add an env variable to another service
Add a new route to your ingress
Add a new annotation to all other services you have
Create a new DB username/password for the new service to use
Create a new network policy associated with the service
Add a CD step to deploy the new component
Please take a second to let it sink in, and answer these questions:
How many steps, tools and pull requests will you have to deal with in order to fulfil a “business as usual” operation?
Could anyone in your company/team fulfil this request?
If the answer is not “1, 1, 1” and “yes” please keep reading.
Example 2: Enabling a feature flag
This other example is nothing different from the previous one, just something that is expected to happen more often.
Let's imagine you have worked for weeks across teams to define a new behaviour for your application, and it's behind a feature flag: feature flags, to be precise.
Because you have fully embraced microservices like a boss, you need to do the following to enable the new behaviour, which we should call “holiday sales reporting”
set the FLAG_HOLIDAY_SALES_REPORTING=true on the frontend component
add the --enable-json-output flag on the backend component
point the /report route to a new service
Same questions as before really, how easy is it for you to achieve this with your current setup? How much coordination is needed?
Now a bonus question: what if, after the holidays, you need to turn off this flag? How many steps will you have to go through? And what if, in the meantime, --enable-json-output has also become required by another feature?
And what if you have only added this feature to a couple of environments? How do you document that the feature is enabled?
Solving it with Kapitan
When you use kapitan, you don't just capture the configuration of one single component, but rather the full configuration of everything that is needed to run the whole application: Kubernetes resources, Terraform files, documentation, scripts, secrets.
The typical setup uses a “target file” to capture “at least” everything that you would normally put in one namespace, but you can easily track resources that need to be created in other namespaces (i.e. istio-system)
A typical target file (i.e. targets/production.yml) would look like this:
classes:
- common
- profile.production
- location.europe
- release.production

- component.frontend
- component.backend
- component.mysql

parameters:
  description: "Production environment"
Head over to our repository https://github.com/kapicorp/kapitan-reference for a more complete example i.e. Weaveworks “sock shop”
A class (i.e. component.frontend) points to a file on the local disk, so you would expect to find a file inventory/classes/component/frontend.yml to capture the configuration needed for the frontend component, which would look like this:
parameters:
  components:
    frontend:
      image: company/frontend:${release}
      port:
        http:
          service_port: 80

  ingresses:
    global:
      paths:
        - backend:
            serviceName: frontend
            servicePort: 80
          path: /web/*
Head over to our repository https://github.com/kapicorp/kapitan-reference for a more complete example.
Solving Example 1: Adding a new component to your infrastructure
With kapitan adding a new component consists on creating a new class to define that component: inventory/classes/components/report.yml
parameters:
  component:
    users:
      # define new users
      report:
        username: report
        password: ?{gkms:${target_name}/report||randomstr|base64}

    # definition of the component itself
    report:
      image: company/report:${release}
      env:
        MYSQL_USERNAME: ${users:report:username}
        MYSQL_PASSWORD:
          secretKeyRef:
            key: mysql_password

      # Create a new Secret resource
      secrets:
        secret:
          data:
            mysql_password:
              value: ${users:report:password}

      # Create new network policy
      network_policies:
        default:
          ingress:
            - from:
                - podSelector:
                    matchLabels:
                      role: frontend
              ports:
                - protocol: TCP
                  port: 80

      # <add more report component configurations here>

    # Add a new env variable in the frontend component
    frontend:
      env:
        REPORT_SERVICE: http://report:80

  # Add a new ingress
  ingresses:
    global:
      paths:
        - backend:
            serviceName: report
            servicePort: 80
          path: /report/*

  # Add a new annotation to all components in this target
  generator:
    manifest:
      default_config:
        annotations:
          company.com/report: active
Notice how this one file captures everything you need to do to configure the new report service in your setup.
When you add the component.report class to a target file (or to another class file, i.e. application.website), Kapitan will take care of configuring everything that is needed for the component to work in one go.
As you might have guessed, when you remove it, all that extra configuration goes away.
Secrets are automatically generated for you (and, in the example, encrypted using Google KMS).
If you have a Kapitan integration with your CD software (i.e. the yet-to-be-released spinnaker integration), your CD pipelines will also be modified to include the new component
Solving Example 2: Enabling a feature flag
The solution here is pretty similar in principle to the Example 1: you create a class and you add it to your target.
Then name that class: features/holiday_report.yml
parameters:
  frontend:
    env:
      FLAG_HOLIDAY_SALES_REPORTING: 'true'

  backend:
    args:
      - --enable-json-output

  # Add a new path to the ingress
  ingresses:
    global:
      paths:
        - backend:
            serviceName: report
            servicePort: 80
          path: /holiday_report/*
Now when you want to enable/disable this feature, all you need to do is to add/remove the class from the target file:
classes:
- common
- profile.production
- location.europe
- release.production

- component.frontend
- component.backend
- component.report
- component.mysql

- features.holiday_report

parameters:
  description: "Production environment"
Notice how it becomes much cleaner and easier to understand which features are enabled where.
If you have docs that are automatically generated by Kapitan, you could add information related to the holiday_report feature only to the affected targets.
Final words
I hope you have enjoyed this article, and you have been able to understand what Kapitan can offer. If you want to learn more, please check out our blog and our website https://kapitan.dev | https://medium.com/kapitan-blog/why-your-kubernetes-configuration-strategy-is-broken-c54ff3fdf9c3 | ['Alessandro De Maria'] | 2020-12-28 06:59:06.843000+00:00 | ['Kustomize', 'Helm', 'Microservices', 'Kubernetes', 'DevOps'] |
Personalised Christmas Cards With Unique Promo Codes | Every year, with the appearance of autumn, retailers feel the gust of approaching Christmas fever. It’s the last call to schedule a winning strategy.
According to research data, people are willing to spend more than ever on Christmas gifts. Needless to say, it’s hard work for brands chasing consumers with special offers at this busy time. In this post, we’re going to show you how these holiday promotions can be neatly wrapped into personalized Merry Christmas cards. But first, why are the cards so important in your strategy?
What’s under the tree
CreditDonkey asked Americans what gifts they’re about to buy this Christmas. For every ten consumers, seven plan to buy a gift card. It may seem like an easy way out of time-consuming searches for matching gifts, however; it’s undoubtedly true that there’s more under the hood. Data shows us that it’s not only a convenient choice for buyers but also a present desired by a great majority of receivers. 82% of survey subjects claim that gift cards are what they’d like to find beneath their Christmas tree.
Gift cards aren’t the only way you can use a promotion toolkit to drive sales; what’s even more important is a spending mood that consumers can catch and keep with the holiday spirit.
Around Christmas time, customers are more willing to spend their money. Not only on gifts but also on their own needs and habits. This means they’re more susceptible to incentives which can be pushed out in front of them in special Christmas offers.
What if you could wrap all the incentives for your employees, business partners or end customers into personalized Christmas cards? Or use gift cards to endow them with incentives not only as receivers but also as buyers?
Let’s look at three examples of personalized “Merry Christmas” cards in an email message made using Voucherify. Each of them includes a unique, trackable incentive like a discount coupon, a gift card with predefined credits or a referral code. Unique codes are the key to seeing how your Christmas promotions perform and turn insights into action for future events.
What’s most important here is that we’ve designed the cards in such a way as to allow for endowing a receiver (our customer), giving us a chance for immediate new acquisitions. These messages make universal Christmas cards a powerful weapon for sales.
Voucherify is a promotion management system that develops, manages, and distributes coupon, referral, and loyalty solutions for businesses of every shape and size, worldwide. If you’re interested in having a consultative talk to help you decide how you should implement your promotions, let us know at sales@voucherify.io — We’re always happy to help!
Christmas cards with individual customer codes for tracking
Example 1: Unique gift card for your employees and a discount for their friends.
The codes for both the unique gift card and the discount for friends can be copied or scanned (QR code) from the email. Coupon history is stored in the Voucherify dashboard. You can see order details and customer data attached to every redemption that has ever occurred.
Example 2: A gift card for your partners and a referral code for referred companies. This is how B2B companies can implement our strategy.
Example 3: Exclusive discounts for your loyal customers and their friends to use in brick-and-mortar stores.
Summary
Every holiday opens up new opportunities for your growth. Besides well-targeted offers, keep in mind how you’re going to track your performance. Extreme traffic is not only about sales itself but also about learning about your audience. The more data you gather, the more insights for the future you will get. | https://medium.com/voucherify/personalised-christmas-cards-with-unique-promo-codes-c74a5a162238 | ['Jagoda Dworniczak'] | 2018-10-08 12:16:38.409000+00:00 | ['Christmas', 'Sales', 'Marketing', 'Ecommerce', 'Startup'] |
Rechunker: The missing link for chunked array analytics | by Ryan Abernathey and Tom Augspurger
TLDR: this post describes a new python library called rechunker, which performs efficient on-disk rechunking of chunked array storage formats. Rechunker allows you to write code like this:
from rechunker import rechunk
target_chunks = (100, 10, 1)
max_mem = "2GB"
plan = rechunk(source_array, target_chunks, max_mem,
               "target_store.zarr",
               "temp_store.zarr")
plan.execute()
…and have the operation parallelized over any number of Dask workers.
Motivation
Chunked arrays are a key part of the modern scientific software stack in fields such as geospatial analytics and bioinformatics. Chunked arrays take a large multidimensional array dataset, such as an image captured over many timesteps, and split it up into many “chunks” — smaller arrays which can comfortably fit in memory. These chunks can form the basis of parallel algorithms that can make data science workflows go a lot faster.
Example of a chunked array, as represented by Dask.
Chunked arrays are implemented in both parallel computing frameworks — such as Dask and NumpyWren — and as an on-disk storage format. Some storage formats that support chunked arrays include HDF5, TileDB, Zarr, and Cloud Optimized Geotiff. When these chunked array storage formats are paired with the above computing frameworks, excellent scaling performance can be achieved.
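As a small illustration of that pairing (the file name and array sizes here are made up), a Zarr store declares its chunk layout on disk, and Dask can then operate on those same chunks in parallel:

import dask.array as da
import zarr

# an on-disk array split into 1000 x 1000 chunks
z = zarr.open("example.zarr", mode="w", shape=(10000, 10000),
              chunks=(1000, 1000), dtype="f8")

# wrap the store as a chunked Dask array; each Zarr chunk becomes one Dask chunk
x = da.from_zarr("example.zarr")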
However, chunked array workflows can fail hard when the chunks are not aligned with the desired analysis method. A great example can be found in this post from a user on the Pangeo forum:
Geospatial satellite data is often produced as a global map once per day, creating a natural chunk structure (e.g. one file per day). But what happens if you want to do a timeseries analysis at each point in space? This analysis can't be parallelized over chunks. Many array-based workflows get stuck on similar problems.
One existing solution is to use Dask’s rechunk function to create a new chunk structure lazily, on the fly, in memory. This works great for some problems. For others, particularly those involving a full rechunk (every source chunk goes into every target chunk), Dask’s algorithm can run out of memory, or produce an unmanageably large number of tasks. (More details can be found in the post linked above.)
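To illustrate that in-memory approach, here is a minimal sketch using Dask's own rechunk method; the array shape and chunk sizes are invented purely for illustration:

import dask.array as da

# a year of daily global grids, stored with one time step per chunk
source = da.random.random((365, 721, 1440), chunks=(1, 721, 1440))

# lazily re-express the array as full timeseries over small spatial blocks
rechunked = source.rechunk((365, 10, 10))

# the shuffle between workers happens in memory, only at compute time
rechunked.mean().compute()

A full rechunk like this one sends a piece of every source chunk to every target chunk, which is exactly the pattern that can exhaust memory or produce a huge task graph.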
To address this problem, we created a new package that aims to solve this specific problem in an optimal way: rechunker.
The Rechunker Algorithm
Rechunker takes an input chunked array (or group of arrays) stored in a persistent storage device (such as a filesystem or a cloud storage bucket) and writes out an array (or group of arrays) with the same data, but different chunking scheme, to a new location. Along the way, it may create a temporary, intermediate copy of the array in persistent storage. The reliance on persistent storage is a key difference between Rechunker and Dask’s rechunk function.
Figuring out the most efficient way to do this was a fun computer science problem to solve. Via our Discourse forum, many people contributed to the discussion and shared different ideas they had implemented in the past. We identified a couple of key requirements for Rechunker’s algorithm:
Respect memory limits. Rechunker's algorithm guarantees that worker processes will not exceed a user-specified memory threshold.
Minimize the number of required tasks. Specifically, for N source chunks and M target chunks, the number of tasks is always less than N + M.
Be embarrassingly parallel. The task graph should be as simple as possible, to make it easy to execute using different task scheduling frameworks. This also means avoiding write locks, which are complex to manage, and inter-worker communication.
These considerations led to the creation of an algorithm we call Push-Pull-Consolidated. | https://medium.com/pangeo/rechunker-the-missing-link-for-chunked-array-analytics-5b2359e9dc11 | ['Ryan Abernathey'] | 2020-07-21 12:01:02.033000+00:00 | ['Python', 'Data Science', 'Distributed Systems', 'Geospatial', 'Big Data'] |
The stars above inspire thoughts of perfection. | The stars above inspire thoughts of perfection. We look up to see constancy, harmony, eternity and above all serenity. Just remember that mud and stardust are ultimately the same thing.
From Shakespeare’s Merchant of Venice:
“Sit, Jessica. Look, how the floor of heaven
Is thick inlaid with patines of bright gold:
There’s not the smallest orb which thou behold’st
But in his motion like an angel sings,
Still quiring to the young-eyed cherubins, -
Such harmony is in immortal souls;
But whilst this muddy vesture of decay
Doth grossly close it in, we cannot hear it.”
From Marcus Aurelius’s Meditations: | https://stevengambardella.medium.com/the-stars-above-inspire-thoughts-of-perfection-9cb72a9a51b8 | ['Steven Gambardella'] | 2020-12-04 18:48:58.336000+00:00 | ['Self', 'Philosophy', 'Books', 'Psychology', 'Culture'] |
Tools to build a prototype web app in one month without writing code | Tools to build a prototype web app in one month without writing code
Prototyping stack (for non-developers)
For non-technical people who have an idea for a digital product, I’ve noticed two reasons that keep them from pursuing: 1) anxiety towards sacrificing some aspect of their life to make the time, and 2) convincing themselves that they need a technical cofounder and/or funding to build something tangible. This essay aims to dismiss reason #2, by clarifying some free (or price-of-several-coffees) tools I’ve used to build a functional web application version-1; my prototyping stack. One key advantage of building a prototype is that it drastically improves how seriously you’re taken by peers in co-working spaces, potential cofounders, prospective mentors/angel investors, and target users. m = more information available,
v = vlad’s (my own) personal experiences
Organize
Although I was skeptical how a “visual idea board” would be different than just writing ideas into a notebook, I was surprised to find that it had a side-effect of helping jumpstart momentum for doing work [v.1].
The first idea board was used purely for brainstorming along several categories which I guessed at being necessary for the idea to become tangible — the foundational product idea was found in the resulting conversations [v.2].
This is also a great time to apply the Jobs To Be Done framework as a way to understand how someone's life can be improved; as a consequence, this frame of thinking helps surface ideas for product features that would impact those improvements [m.1]. The point here isn't to get stuck in brainstorming-paralysis, but rather to settle on a hypothesis for one "job to be done", and then continue through the rest of this prototyping stack.
First idea board (using the free tool Trello) was decommissioned after figuring out what to build
Design
The amount of incredible (and freely accessible) designs out there make the motto, “you don’t have to re-invent the wheel” valid. Since you’re still in the brainstorming phase — and probably don’t have a background in human factors — it’s a good idea to borrow inspiration from best-practice user interfaces [v.3].
At this point, I was primarily spending time bookmarking designs which looked like they could be used to fulfill my "jobs to be done" hypothesis.
Dribbble showcases professional-level design choices for any kind of user interface
Mock-Up (& Iterate)
Have you ever played around with Microsoft Paint? Well, there are tools which are just as simple to use, except they let you create a “fake app” that you can click around to navigate to various screens.
At this point, I would take the inspirations from the “Design” phase, and build them in this mock-up tool [v.4].
These tools also export nicely into a mobile app, so this is a great time to hand your phone to prospective users and silently observe how they use your interface to gather feedback for design changes. The more times you can do this — build mockup, demo to target user, observe their hesitations with usability, re-build mockup — the more professional your user experience will feel.
The non-free (low cost) tool Proto lets you create multiple screens to experiment with user-interfaces
Functional App
What if you could use your Microsoft Paint skills to clone an app like Uber or AirBnB over a weekend [m.2]? That’s the main appeal to a new class of app building tools, which feel like the next evolution of tools used in the “mock-up” phase.
Although there is a steep learning curve [v.5], showing up to a cofounder speed-dating event / prospective investor coffee chat becomes a 10x better experience when you have the first version of a functional app.
The free tool Bubble.is is probably my favorite tool discovered in 2019, for building fully-functional web (and mobile) apps where writing code is not a necessity for producing the first-version prototype
Data (honorable mention)
Despite poor design choices, user experience, app load times, bugs, etc, one way to immediately stand out with a prototype is through the data that you are now able to collect — not just typical user info, but rather behavioral data [v.6] which helps describe a trend starting to happen in your market; this is a conversation every angel investor wants to have (and wants to be the first to have). | https://medium.com/swlh/tools-to-build-a-prototype-web-app-in-one-month-without-writing-code-dcda1afda5dd | ['Vlad Shulman'] | 2019-06-08 16:01:42.533000+00:00 | ['Startup', 'Product Management', 'Design'] |
The Startup Failure Curve: 7 Important Stats to Know | Photo by Quino Al on Unsplash
Have you ever heard the statistic that 90 percent of businesses fail within the first year? Maybe you heard that it was in the first 5 years, or that it’s actually 80 percent of businesses, but chances are you heard a number like this at some point in your life, without much direct evidence to back it up.
It’s certainly true that the majority of new businesses do fail — only a minority ever find success — but the stats aren’t nearly as dramatic as some would have you believe. Instead, failure tends to unfold over a curve, and understanding that curve could help your business from falling victim to the most common pitfalls.
The Startup Failure Curve
So what are the “real” statistics for business failure? It’s a complicated question, because definitions of “failure” might vary, and to be certain, there are many different types of businesses, each with different survival rates.
Still, there are some critical facts we can use to better understand what the failure curve really looks like.
1. 66 percent of businesses with employees survive at least 2 years. According to the most recent report from the SBA, with data from the Bureau of Labor Statistics, about two-thirds of all businesses with employees last at least two years. Those aren’t bad odds compared to the “90 percent” statistic that persists.
2. About half of businesses survive at least 5 years. The same study found that the same group of businesses tended to last at least 5 years at a rate of around 50 percent.
3. The economy does not directly affect the failure curve. These data come from a span of more than a decade, stretching back into the 1990s. The curve was not significantly affected by times of economic prosperity or by recessions, making rates of success and failure even more consistent.
4. Failure rates are similar across industries. Have you ever heard someone say that restaurants and bars are especially risky business investments, since they have a higher rate of failure than other businesses? The data suggest this isn’t true. The food service and hotel industry has a similar failure curve as the manufacturing, construction, and retail trade industries. The differences are negligible at nearly every point on the curve.
5. 25 percent of businesses fail the first year. As you might expect, the failure curve is steeper at the beginning, with 25 percent of small businesses failing within the first year, according to data compiled by Statistic Brain. This is likely due to the learning curve associated with business ownership; the longer you remain in business, the more you learn, and the more resilient you are to problems that could otherwise shake your foundation. It’s a period that naturally weeds out the weakest candidates as well.
6. Reasons for failure vary. According to the same data, a whopping 46 percent of all company failures were attributable to “incompetence,” a blanket term that can refer to emotional pricing, failure to pay taxes, a lack of planning, no financing knowledge, and/or no experience in record keeping. Another 30 percent of company failures were attributable to unbalanced experience, or a lack of management experience.
7. 75 percent of venture capital-backed startups fail. Of course, for VC-backed startups, the picture isn’t as pretty; according to one report, about 75 percent of all VC-backed startups ultimately fail. This could be due to a number of reasons, including the highly competitive nature of VC competitions and the volatility of tech startups that emerge on the scene.
When Failure Is a Good Thing
If you’re reading these statistics, and you’re still worried about your business being classified as a “failure,” keep in mind that failure can actually be a good thing. For starters, many businesses that fail in the first year didn’t have the potential for long-term success; early failure actually spares them significant expenses, and frees up their entrepreneurs to pursue more valuable opportunities.
On top of that, going through the process of starting a business and watching it fall apart can teach you valuable lessons, which you can apply to future opportunities; failed entrepreneurs who get back on the horse have a higher likelihood of success the second time around.
So what should you take away from all this? First, if you’ve thought about becoming an entrepreneur, but have been intimidated by the thought of becoming part of an overwhelming majority of failed entrepreneurs, reconsider your position; that majority isn’t nearly as strong as you might have previously believed. Every entrepreneur faces failure in some form, but it doesn’t always lead to the failure of the entire business.
Second, if you can make it past that trying first year, you can probably keep your business successful for years to come.
And finally, even if your business does fail, it isn’t the end of the world; you’ll have new knowledge and new experiences you can use to fuel your next venture. | https://jaysondemers.medium.com/the-startup-failure-curve-7-important-stats-to-know-f5a3fc617e43 | ['Jayson Demers'] | 2020-11-09 23:45:59.624000+00:00 | ['Entrepreneur', 'Startup Life', 'Startup', 'Failure', 'Entrepreneurship'] |
Google Kubernetes Engine (GKE) announcements from Cloud Next 2018 | There was an almost overwhelming number of announcements at Cloud Next this year, so I want to focus on the technologies I care most about, Kubernetes and GKE!
GKE On-Prem
This is an important evolution of GKE for people who want the flexibility and power of Kubernetes in their own datacentre, but don’t want to invest in whole teams to manage the entire stack. Joe Beda discussed having multiple layers of Ops teams in his talk at KubeCon 2017.
The GKE console will also provide unified management for your clusters across GCP and on-prem, super cool!
GKE On-Prem cluster (moscone)
Service Mesh - https://sebiwi.github.io/comics/service-mesh/
Service Mesh
Service mesh is thrown around in buzz-wordy evangelism these days, but as projects such as Istio mature, the benefits for security, observability, and traffic management are starting to make people take notice. Istio v1.0 was announced, showing the product has reached a point of API stabilisation that will lead to much greater adoption.
A Managed Istio (alpha) product was also announced that will remove even more complexity for GKE users.
Cloud Services Platform family
GKE Serverless add-on
If you already use GKE and want to provide a Serverless platform to your developers, this add-on looks ideal. Google also provided a form for requesting early access.
This could be useful if you want to develop on a Serverless stack that’s more portable than services like Cloud Functions or AWS Lambda. In the future, if many developers adopt a common Serverless framework (like Knative), your Serverless components could be less coupled to a specific vendor.
Knative
This one is more for the Serverless platform developers out there. Knative is a suite of building blocks for creating modern, container based, Serverless applications. Google teamed up with Pivotal, IBM, RedHat and SAP to develop this open source framework that was then used to build the GKE Serverless add-on.
Knative helps with three main use cases for Serverless:
Serving
Deploying and serving Serverless applications and functions.
Build
On-cluster container builds.
Eventing
Loosely coupled eventing system compatible with CloudEvents.
Expect more of your favourite Serverless platforms and projects in the ecosystem to announce support for running on top of Knative/Kubernetes in the future, if they haven’t already. | https://medium.com/weareservian/google-kubernetes-engine-gke-announcements-from-cloud-next-2018-7a9409872643 | ['Dylan Graham'] | 2019-07-08 04:20:20.148000+00:00 | ['Gke', 'Google Cloud Platform', 'Serverless', 'Cloud Computing', 'Kubernetes'] |
Design Patterns Saga: The Beginning | Factory Design Pattern
What are your associations with the word factory? Someplace where workers manufacture goods. This is exactly that. The Factory Pattern is a creational pattern, whose purpose is to create objects. Just like a factory in the real world. In this pattern, the object creation happens in the factories, without exposing the creation logic to the client. Imagine that your software implements a sushi bar and you want to create sushi to serve at your bar. There are many different types of sushi, but let's start with the California and Dragon rolls that were mentioned in the polymorphism example.
A lot of the sushi rolls we've become familiar with are a Western take on Japanese Maki sushi. Therefore, to implement the sushi bar software, you need a Maki interface and concrete California and Dragon classes implementing Maki. Let's say that all you need to create a maki roll is to add fish, fillings, and the topping. Can you smell polymorphism? Kidding, this is an overriding example we mentioned earlier.
Now, you need a createRoll(RollType rollType) method. The method will first declare the maki variable, which will refer to the maki object to be created. Think of it like a maki base, which is traditionally made with a sheet of nori wrapped around a layer of rice. California Roll, for example, is an inside-out sushi roll with a layer of rice on the outside and a sheet of nori on the inside. The roll type parameter will determine which maki roll is actually instantiated.
When you have a base for your roll, you can call methods to add fish, fillings, and toppings. These methods do not care what type of roll is created. All they want is a maki base to operate on.
The method looks like this.
Your brilliant chief adds more and more roll types to the menu. Spicy Tuna Roll, Spider Roll. In this example, the list of conditionals grows and grows as new roll types are added.
Notice that what we do with the roll after creating it doesn't change. Each roll needs to contain fish, fillings, and toppings. This is all getting pretty complicated. To decouple maki instantiation from the client, you can delegate the responsibility of maki base creation, based on the provided roll type, to a Sushi Factory.
In general, a factory object is an instance of such a class, which has a method to create product objects (maki base, in our case). Now, this Sushi Factory can be used by the Sushi Bar service. In other words, the sushi bar is now a client of the sushi factory.
Here is a UML diagram of the Sushi Bar service you just implemented.
Let's see what you have gained here. The Sushi Bar service and its createRoll(RollType rollType) method may not be the only client of Sushi Factory. Other clients, such as Sushi Delivery and Sushi Takeaway, may use Sushi Factories to create maki as well. Since all of the actual maki creation happens in the Sushi Factory, you can simply add new roll types to your factory or change the way the rolls are instantiated, without modifying the client's code.
That’s all! Now you know your onions 🤓. And I know that I feel like ordering sushi delivery. This post made me hungry. Bon appetit! | https://medium.com/swlh/design-patterns-saga-the-beginning-17ea936472cc | ['Gene Zeiniss'] | 2020-07-05 05:05:33.133000+00:00 | ['Backend', 'Design Patterns', 'Factory Pattern', 'Polymorphism', 'Java'] |
I Knew Happiness Once | I Knew Happiness Once
Sky Collection quote prompt №24
Photo by Denise Jones on Unsplash
Looking back to where I have been,
Trying to figure out what went wrong,
Where did you go, and why do you elude me?
I knew happiness once.
I need closure from our relationship,
But you refuse to go there,
Does growing apart just happen?
I knew happiness once.
That job of my dreams -
It was like living a nightmare,
When did I become a visitor and not family?
I knew happiness once.
The friends who disappeared,
Without even a goodbye,
Why did they not miss me as I missed them?
I knew happiness once.
I clung to you madly,
And yet you still left me,
Alone, afraid, and paralyzed.
I knew happiness once,
I spent hours analyzing the past,
Revisiting the times you filled my heart,
But you had left all of my memories.
I knew happiness once.
But I have left it behind.
Fickle and fleeting is no longer for me,
I built a strong foundation instead,
Ready to face any disaster.
I know happiness now,
It grows inside of me
And depends on no one else. | https://medium.com/sky-collection/i-knew-happiness-once-35912099f691 | ['Kim Mckinney'] | 2020-12-11 20:46:22.589000+00:00 | ['Self-awareness', 'Mental Health', 'Happiness', 'Relationships', 'Poetry'] |
Collections in Python | Array
NumPy is a popular library for working with scientific and engineering data. Here, we highlight the array manipulation capabilities offered by NumPy.
A NumPy array is an N-dimensional grid of homogenous values. It can be used to store a single value (scalar), coordinates of a point in N-dimensional space (vector), a 2D matrix containing the linear transformations of a vector (matrix), or even N-dimensional matrices (not tensors though).
>>> import numpy as np

>>> a_vector = np.array([1, 2, 3])
>>> print('vector shape:', a_vector.shape)
vector shape: (3,)

>>> a_matrix = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
>>> print('matrix shape:', a_matrix.shape)
matrix shape: (3, 3)
Now, let us look at some of the frequently used array operations.
Query/Filter/Mask
>>> numbers = np.array([1,2,3,4,5,6,7,8,9,10])

# mask
>>> mask = numbers & 1 == 0
>>> mask
array([False, True, False, True, False, True, False, True, False, True])

# filter out the odds
>>> numbers[mask]
array([ 2, 4, 6, 8, 10])

# zero out the odds and retain the shape
>>> numbers * mask
array([ 0, 2, 0, 4, 0, 6, 0, 8, 0, 10])
Reshape
Reshaping simply rearranges the existing items in an array into a new shape.
# Reshape a row vector to a column vector
>>> row = np.array([1,2,3])
>>> np.reshape(row, (3,1))
array([[1],
       [2],
       [3]])
>>> np.reshape(row, (-1,1))
array([[1],
       [2],
       [3]])
Transform
All the power of NumPy comes from its ability to efficiently transform large arrays of data for scientific and engineering computations. This is really a vast topic and we will only touch upon a few key transformations here.
# Vector
>>> row = np.array([1, 2, 3])

# Scale
>>> row*2
array([2, 4, 6])

# Shift
>>> row + np.array([5,5,5])
array([6, 7, 8])

# Rotate by 90 degrees counterclockwise around the z-axis
>>> row = np.array([1, 2, 3])
>>> rotation = np.array([[0, -1, 0],[1, 0, 0],[0, 0, 1]])
>>> np.dot(rotation, row)
array([-2, 1, 3])

# Transpose
>>> rows = np.array([[1,2,3],[2,3,4]])
>>> rows.T
array([[1, 2],
       [2, 3],
       [3, 4]])
Sort
Sorting is a bit tricky. The sort function in NumPy does not behave the same way as sorting a Python list.
# Sorting vectors on x-coordinate
>>> rows = np.array([[2, 1, 3],[1, 2, 3]])

# naive sort
>>> np.sort(rows, axis=0)
array([[1, 1, 3],
       [2, 2, 3]])
# output does not contain the same vectors at all!

# Correct method:
# Obtain the sorted indices for first column (x)
# and then use those indices to sort all the columns
>>> ind = np.argsort(rows[:,0],axis=0).reshape(-1,1)
>>> ind = np.repeat(ind, rows.shape[-1],axis=-1)
>>> ind
array([[1, 1, 1],
       [0, 0, 0]])
>>> np.take_along_axis(rows,ind,axis=0)
array([[1, 2, 3],
[2, 1, 3]]) | https://medium.com/swlh/collections-in-python-d8954b006bb7 | ['Rajaram Gurumurthi'] | 2020-10-26 22:12:38.511000+00:00 | ['Machine Learning', 'Python', 'Data Science', 'Programming', 'Java'] |
How I Created a Course on Lane Detection and Lane Keeping | A while ago I was searching the web because I wanted to learn how lane-keeping systems work. I knew that these systems use a camera to detect the lane boundaries and then some control algorithms to keep the vehicle centered within the lane. But I wanted to understand this in more detail, and ideally implement a simple version of lane detection and lane-keeping myself.
I love Massive Open Online Courses on platforms like Coursera and Udacity, so naturally, I started looking there first. Udacity offers the famous “Self-Driving Cars Nanodegree”, but I didn’t want to spend thousands of euros. On Coursera, you can find the “Self-Driving Cars Specialization” by the University of Toronto, and since you can audit it for free I tried it out. I learned how a vehicle can follow a path using a method called Pure Pursuit. And even better, I saw that the course offered an exercise where you would implement Pure Pursuit and try it out in the Carla Simulator. Neat! However, the course did not cover lane detection.
So I continued searching for lane detection tutorials. I found a lot of stuff online that explained how to detect which pixels in the image are the lane markers. But I also wanted to know how to go from detecting lane marker pixels to generating a path on the road that a Pure Pursuit controller can follow. How do you go from pixels to (x,y) coordinates in meters? I did not find any easy-to-understand tutorials for that. Eventually, I found some older papers mentioning "Inverse Perspective Mapping", which was kind of what I was looking for. But the mathematical formulas seemed complicated and I found no derivations. I understood that the idea is to use the assumption that the road is flat to invert the projection equation of the pinhole camera model. In the end, I derived the equations I needed myself. Naturally, I wanted to implement and test them. What I wanted to do consisted of three steps:
South Park reference. Image from Wikipedia.
1. Implement a method to detect lane boundary pixels.
2. Convert the lane boundary pixels to a list of road coordinates (x,y), measured in meters, and fit a polynomial y(x) for the left and the right lane boundary (a minimal sketch of this part follows below).
3. Feed the polynomials into Pure Pursuit to control a vehicle.
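To make the second step concrete, here is a rough sketch of its final stage. The function names, and the assumption that the boundary points have already been projected from pixels onto the road plane, are mine rather than the course's:
import numpy as np

def fit_lane_polynomials(left_xy, right_xy, degree=3):
    # left_xy, right_xy: arrays of shape (N, 2) holding road-frame (x, y) points in meters
    left_coeffs = np.polyfit(left_xy[:, 0], left_xy[:, 1], degree)
    right_coeffs = np.polyfit(right_xy[:, 0], right_xy[:, 1], degree)
    return left_coeffs, right_coeffs

def centerline(left_coeffs, right_coeffs, x):
    # the path a Pure Pursuit controller can follow: halfway between the two boundaries
    return 0.5 * (np.polyval(left_coeffs, x) + np.polyval(right_coeffs, x))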
Since I had learned about the Carla Simulator on Coursera I decided to apply this pipeline there. I had found the equations for step 2 and had watched the Coursera videos on Pure Pursuit, so I knew how to implement step 3. I only needed to pick a method for step 1.
Lane detection is a computer vision/perception problem and you have probably heard that deep learning methods are dominating that field. You might also know that it is common for deep learning researchers and practitioners to publish data sets as "challenges" and to compare the performance of their neural nets on leaderboards. For lane detection, one important data set is the TuSimple Lane Detection Challenge, and I scanned some research papers that focused on it. One paper stood out to me because of its elegant approach and excellent exposition: End-to-end Lane Detection through Differentiable Least-Squares Fitting. In their paper, they also described a baseline model, which is extremely simple but still performs quite well. Good enough for me!
So finally I had found resources to implement my own lane-detection and lane-keeping system. Since it had taken me so long to gather all this information, I decided to create my own online course: "Algorithms for Automated Driving". This course should guide the reader in implementing lane-detection and lane-keeping for themselves. I published the course in the form of an online book today, and here you can see a screenshot of the landing page (you can also just visit the course by clicking here) | https://medium.com/swlh/how-i-created-a-course-on-lane-detection-and-lane-keeping-a78598914cfa | ['Mario Theers'] | 2020-11-26 14:05:31.325000+00:00 | ['Python', 'Self Driving Cars', 'Education', 'Jupyter Notebook', 'Deep Learning'] |
5 Reasons to Create Your Own Medium Publication (And 3 Reasons You Shouldn’t) | 1. Control The Visibility and Distribution of Your Own Content
For new writers on Medium, there is a bit of a paradox when it comes to writing for publications. When you don't have a large following and haven't written for major Medium publications, it can be very difficult to get approved as a writer for any Medium publication. Conversely, once you are published in a few major publications, seemingly everyone wants your work.
For writers who want to control the circulation of their own content, creating a Medium publication can be a great option. I'm a big believer in the idea that you should never let others stop you from pursuing your goals. So I started several Medium publications to better showcase my articles. While this doesn't magically grant you followers, it does allow you to pick and choose which stories you would like to feature in your publication. Even if your article is selected for a major publication, it will most likely be pushed off the publication's "featured article" section fairly quickly.
2. Gain Access to More Detailed Analytics
A second benefit of creating your own publication is the increased access to data analytics pertaining to your articles. Medium only gives writers a relatively small amount of data on their articles (number of views, reads, claps, fans, and some traffic sources). So any increased insight into your content's data analytics is extremely valuable.
Below are screenshots of the enhanced “views” and “visitors” data that Medium publication owners have:
Views: The total number of views your publication has received on all posts and pages.
Medium Publication Views
Visitors: The average number of unique daily visitors who have visited your publication. Each visitor is counted once per day, even if they view multiple pages or the same page multiple times.
Medium Publication Visitors
3. Utilize Features Only Available in Medium Publications
When you create your own publication, there are several useful features that you gain access to. The two features I find the most useful are the “homepage promos” tabs and the “letters” function.
Homepage promotions enable you to add custom blocks to your publication that link your readers to a post, a feature page, or even an external link (outside of Medium). Below is an example from one of my publications, Black Edge Consulting: | https://medium.com/blogging-guide/5-reasons-to-create-your-own-medium-publication-and-3-reasons-you-shouldnt-8dddf72b5247 | ['Casey Botticello'] | 2020-07-10 03:00:46.880000+00:00 | ['Social Media', 'Medium', 'Journalism', 'Ideas', 'Writing'] |
Best Resources for Deep Learning | Best Resources for Deep Learning
Deep Learning Educational Resources
Deep learning is a machine learning method that uses neural networks for prediction tasks. Deep learning methods can be used for a variety of tasks including object detection, synthetic data generation, user recommendation, and much more. In this post, I will walk through some of the best resources for getting started with deep learning.
Let’s get started!
Online Resources
There are several online resources that are great for getting started with deep learning.
Sentdex
Sentdex is a YouTube channel, run by Harrison Kinsley, that has several tutorials on how to implement machine learning algorithms in Python. While the channel contains many great tutorials on other machine learning algorithms like support vector machines, linear regression, tree-based models, and k-nearest neighbors, the tutorials on deep learning are a great place to start if you want to get your hands dirty. The playlist Machine Learning with Python has a great 14-part series on learning to implement various neural networks such as simple multi-layer dense networks, recurrent neural networks, long short-term memory networks (LSTMs), and convolutional neural networks. The series also goes over TensorFlow basics, preprocessing, training & testing, and installing the GPU version of TensorFlow. The channel also has a playlist called Neural Networks from Scratch, which has tutorials on how to build neural networks starting from their fundamental components. This is a great place to learn about how neural networks work under the hood.
DataCamp
DataCamp is a subscription-based platform that is great for those starting out in data science and machine learning. It has many great courses for learning how to implement neural networks. Specifically, I recommend the Introduction to Deep Learning course. This course gives you hands-on, practical knowledge of how to use deep learning with Keras through DataCamp's interactive learning platform. This means that in between videos, you apply what you've learned by writing and running real code. It goes over basic concepts such as forward propagation, activation functions, neural network layers, and learned representations. It also goes into detail on neural network optimization with backpropagation, applying neural networks to regression and classification, and how to further fine-tune neural network models. After learning the basics, I recommend the course Advanced Deep Learning with Keras. This goes into detail discussing the Keras API and how to build neural networks using its functional building blocks. It also covers some advanced concepts around categorical embeddings, shared layers, and merged layers in neural networks.
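To give a flavor of what these introductory courses have you build, here is a minimal Keras classifier of the kind the lessons walk through. The layer sizes and the random placeholder data are my own, not taken from any course:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# placeholder data: 1000 samples with 20 features each, and binary labels
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# a small fully connected (dense) network for binary classification
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)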
Andrew Ng’s Deep Learning Lectures
The resources I listed above heavily focus on implementation and practical application. For a more theoretical treatment of neural networks, I recommend Andrew Ng's lectures on deep learning. The lectures cover many of the fundamentals, including gradient descent and the calculus behind it, vectorization, activation functions, backpropagation, model parameters and hyper-parameters, and much more. I highly recommend these lectures if you're interested in the math and theory behind neural networks.
Books
Hands on Machine Learning with Scikit-Learn & Tensorflow, by Aurelien Geron
If you learn more effectively using books, this book is a great place to start learning how to implement neural networks. It covers many machine learning topics, including the fundamentals of neural networks: how to build simple multi-layer dense neural networks, convolutional neural networks, and recurrent neural networks.
Deep Learning, by Ian Goodfellow
This book covers much of the theory and math behind a variety of neural network architectures. The book covers the prerequisite math concepts behind neural networks, the math behind many modern neural networks, and even outlines the work being done in deep learning research.
Conclusions
In this post, we discussed several resources that are useful for getting started with deep learning. First, we discussed the Sentdex YouTube channel, which covers many practical examples of how to build neural networks in python for classification and regression tasks. This is a great place to start if the theory and math of neural networks intimidate you but you’d still like to get started building neural network models. We also went over DataCamp which provides a great interactive learning platform where you solve coding exercises in between videos. Once you’re comfortable with implementing the code for deep learning algorithms, Andrew Ng’s course is great for deepening your knowledge of the theory and math behind deep learning. If you’re better suited to learning from books, Hands on Machine Learning contains many great chapters discussing how to implement neural networks in python. If you’re interested in learning the theory from a book, Ian Goodfellow’s Deep Learning is a great resource. I hope you found this post useful/interesting. Thank you for reading! | https://towardsdatascience.com/best-resources-for-deep-learning-f4c774356734 | ['Sadrach Pierre'] | 2020-09-07 02:50:09.934000+00:00 | ['Data Science', 'Python', 'Artificial Intelligence', 'Education', 'Deep Learning'] |
Use Python to Upload Your First Dataset on Kaggle— Taiwan Housing Project (1/2) | Step 1: Collect Data from the Open Data Platform
On the website of the Ministry of the Interior, we can download real estate transaction records filtered by region and timeframe.
Step 2: Observe What We Collected
What can we see from the dataset below?
The first row of data is actually the English translation of the column names
Some missing values (NaN)
In the column transaction year month and day (交易年月日), the year information follows the civil calendar rather than the Gregorian calendar
Text contents are in Chinese
Original Dataset
Step 3: Preprocess Data
According to our observation, we will start to preprocess the dataset step by step.
Drop the First Row
As mentioned, the first row is the record of column names in English, so drop it.
df = df.drop(0, axis=0)
Rename Column Names
To make the dataset easier to use, we translate all the columns in English.
COL_NAME = ['district', 'transaction_type', 'address', 'land_shift_area', 'urban_land_use', 'non_urban_use', 'non_urban_use_code', 'transaction_date', 'transaction_number', 'shift_level', 'total_levels', 'building_state', 'main_use', 'main_building_material', 'complete_year', 'building_shift_total_area', 'num_room', 'num_hall', 'num_toilet', 'num_partition', 'management_org', 'total_ntd', 'unit_ntd', 'carpark_category', 'carpark_shift_area', 'carpark_ntd', 'note', 'serial_no']
df.columns = COL_NAME
Drop Useless Columns
Some columns have a high proportion of missing values (above 90% of the data) or carry no useful information, so we drop them.
DROPED_COLUMNS = ['non_urban_use', 'non_urban_use_code', 'note', 'serial_no']
df = df.drop(DROPED_COLUMNS, axis=1)
Transform Data Types
By checking df.info(), we can see which columns should be cast to more reasonable data types.
df['land_shift_area'] = df['land_shift_area'].astype(float)
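If several columns need casting, passing a dict to astype keeps it in one call. The column choices below are just illustrative ones from the renamed schema, not the author's code:
df = df.astype({'building_shift_total_area': float, 'total_ntd': float, 'unit_ntd': float})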
Deal with Missing Values
Each column calls for a different way of dealing with missing values. Let's look at the simplest example here: fill the missing values of total_levels with 0, because a land-only or parking-lot-only transaction involves no building. For other columns, we might reference additional information, such as related columns, to infer the missing values. The details can be found in my GitLab.
df['total_levels'] = df['total_levels'].fillna(0) # land and car park transaction have no shifting levels
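Before deciding on a strategy for each column, a quick tally of missing values helps prioritize the work. This check is my own addition, not part of the original notebook:
df.isnull().sum().sort_values(ascending=False).head(10)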
Generate Additional Features
Column transaction_number "土地1建物0車位0" includes information telling us how many lands, buildings, and car parks are involved in a transaction. By using regular expressions, we can extract this key information as new features.
import re

df['number_of_land'] = df['transaction_number'].apply(lambda x: int(re.findall(r'土地\d+', x)[0][2:]))
df['number_of_building'] = df['transaction_number'].apply(lambda x: int(re.findall(r'建物\d+', x)[0][2:]))
df['number_of_carpark'] = df['transaction_number'].apply(lambda x: int(re.findall(r'車位\d+', x)[0][2:]))
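The same extraction can also be done without apply by using pandas' vectorized string methods; this alternative is my own sketch, not the author's code:
df['number_of_land'] = df['transaction_number'].str.extract(r'土地(\d+)', expand=False).astype(int)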
Another column, transaction_date, helps us extract the transaction year. After adding 1911 to the civil-calendar year, we express the dates in the more standard Gregorian form.
df['transaction_year'] = df['transaction_date'].apply(lambda x: str(1911 + int(x[:-4]))) # year should be a categorical value
Do Translation
To make this dataset more widely usable, we translate the text content into English. Since there is a limit on API calls with the translation package, we do not translate the text row by row; instead, we translate the unique words in each column first and then map them back onto the original dataset.
# 0. fields to translate
COL_TO_TRANSLATE = ['transaction_type', 'urban_land_use', 'main_use', 'main_building_material', 'carpark_category']

# 1. find unique words and do translation
from translate import Translator

dic_translation = {}
translator = Translator(from_lang="zh-TW", to_lang="english")
for col in COL_TO_TRANSLATE:
    for word in pd.unique(df[col]).tolist():
        dic_translation[word] = translator.translate(word)

# 2. conduct replacement
for col in COL_TO_TRANSLATE:
    df[col] = df[col].map(dic_translation)
Next, I will show how I uploaded the preprocessed data to Kaggle and documented more information about it. Happy Journey on Data Science! : )
For all detailed content in this section, please check Gitlab: https://gitlab.com/chrissmart/taiwan-housing-price-prediction/-/blob/master/src/data_collection.ipynb | https://medium.com/python-in-plain-english/use-python-to-upload-your-first-dataset-on-kaggle-taiwan-housing-project-1-2-41bf611a43c5 | ['Peiyuan Chien'] | 2020-06-21 20:48:37.459000+00:00 | ['Data Science', 'Python', 'Programming', 'Housing', 'Kaggle'] |
How to Break into Data Science | When your data needs to get dressed up, Tableau is a fool-proof style service. It offers a sleek, drag-and-drop interface for data analytics with native integration to pull data from CSVs, JSON files, Google Sheets, SQL databases, and that back corner of the dryer where you’ve inevitably forgotten a sock.
Data is automatically separated into dimensions (qualitative) and measures (quantitative) — and presumed to be ready for chart-making. Of course, if there are still a few data cleaning steps to be undertaken, Tableau can handle the dirty laundry as well. For example, it supports re-formatting data types and pivoting data from wide to tall format.
When ready to make a chart, simply ctrl+click features of interest and an option from the “Show me” box of defaults. This simplicity of interaction enables even the most design-impaired data scientist to easily marshal data into a presentable format. Tableau will put your data into a suit and tie and send it to the boardroom.
Follow these tips to go from “good” to “great” in your data visualization abilities.
Gain inspiration from master chart-makers
Throughout my time as a business analyst at a Big Four firm, these three blogs were my go-tos for how to create a great looking, functional Tableau dashboard.
Keep these 4 guidelines in mind
#1 — Sheets are the artist’s canvas and dashboards are the gallery wall. Sheets are for creating the artwork (ahem, charts), which you will then position onto a dashboard (using a tiled layout with containers — more on this in a second) along with any formatting elements.
#2 — To save yourself time, set Default Properties for dimensions and measures. This will provide a unified approach to color, number of decimal points, sort order, etc. and prevent you from having to fiddle with these settings each time you go to use a given field.
#3 — Along those lines, make use of the overarching Format Workbook and Format Dashboard options instead of one-off formatting tweaks.
#4 — Avoid putting floating objects into your dashboards. Dragging charts around becomes a headache once you have more than two or three to work with. You can make your legends floating objects, but otherwise stay away from this “long-cut.”
Instead, use the tiled layout, which forces objects to snap into place and automatically resizes if you change the size dimensions of your dashboard. Much faster and simpler in the long run.
Get started with your first dashboard
In summary, the Tableau platform is easier than finger paints to use, so if you're ready to get started, Tableau Public is the free version that will allow you to create publicly accessible visualizations — like this one I put together after web scraping some info on questionable exempted developments from the Washington, DC Office of Zoning — and share them to the cloud.
Getting ready to present financials to the C-suite. Photo by Lisa Fotios on Pexels.
After investigating data from your local community, another good sample project is pulling your checking account data and pretending you’re presenting it to a CEO for analysis.
Read more about the difference between a data scientist and a data analyst:
Now if you not-so-secretly love data viz and need to find more time to devote to putting your models into production (🙋♀️), let’s move on to…
🦁 Learning DevOps
Your machine learning model is only as good as its predictions and classifications on data in the real world setting. Give your model a fighting chance by gaining at least a basic understanding of DevOps — the field responsible for integrating development and IT.
Reframe your thinking about what data science is or isn’t
In this brilliant article, hero of deep learning Andrej Karpathy argues that machine learning models are the new hotness in software — instead of following if-then rules, data is their codebase.
Get a sense for how this works in enterprise
This clever novel fictionalizes The DevOps Handbook and is surprisingly readable. (Not free — but if you buy a copy, give it to your coworker and hope they become super passionate about productionizing your models).
Introduce your machine learning model to the wild
Check out this article about how to use Streamlit for both deployment and data exploration. I’d be remiss if I didn’t also mention Docker and Kubernetes as enterprise-level tools for productionization. | https://towardsdatascience.com/new-data-science-f4eeee38d8f6 | ['Nicole Janeway Bills'] | 2020-11-14 22:35:43.066000+00:00 | ['Machine Learning', 'Data Science', 'Python', 'Artificial Intelligence', 'Programming'] |
Objectivity vs. Subjectivity: An Incongruity That Isn’t Really | Photo by Alex wong on Unsplash
Nearly two years ago I started wearing glasses. At some point since, I developed the strong impression that I had forgotten to take my glasses off after going to bed at night or lying down for a nap. They had become a part of me to such an extent that, like a phantom limb, I sensed I was still wearing them even though they weren't there. I could even perceive the faint outline of their rims through my closed eyelids. If I happen to pull the blanket up over my face so that a fold touches the ridge of my nose just so, I become positively convinced I'm still wearing them and have to run a hand over my face to confirm I've taken them off.
I’m sure I am not the only person who regularly has experiences such as this. The feeling that something is still being worn or that something is touching our skin when it objectively isn’t can be mildly disturbing. Unless one is intentionally seeking out experiences that cause mismatches between perception and reality, whether by taking drugs or via other means, even minor experiences like this can trigger some reflection about our actual grasp on reality.
That subjective experiences don't always accurately describe our environment isn't exactly news. Indeed, subjectivity's public stock has been steadily declining for well over a century, while its sibling rival, objectivity, has seen an unprecedented surge in credibility. Our collective lack of faith in subjectivity has grown in spite of the fact that when it comes to our own feelings we continue to inevitably overrate their importance.
Objectivity’s worth has reached almost self-evident proportions in some circles. To be sure, human frailties like confirmation bias and blind spots created by feelings such as love or disgust do in fact make a certain degree of self-awareness critical to any effort to define reality with precision. We don’t want our doctor’s judgment to be too clouded by empathy when she’s making a diagnosis or evaluating our best course of treatment. Nor do we want our judges making rulings from the bench that are heavily colored by personal beliefs or a desire for revenge. But the fact remains, no conscious creature can possibly obtain anything like a truly objective point of view.
Objectivity’s appeal, the philosopher Thomas Nagel wrote in his famous essay What Is It Like to Be a Bat?, is that it moves us “toward a more accurate view of the real nature of things. This is accomplished,” Nagel concluded, “by reducing our dependence on individual or species-specific points of view toward the object of investigation. We describe it not in terms of the impressions it makes on our senses, but in terms of its more general effects and of properties detectable by means other than the human senses.”
To put it another way, objectivity isn’t a kind of transcendent view from nowhere. It’s actually a universal view from anywhere. A water molecule will ultimately appear the same from the point of view of either a hypothetical silicon-based life form or an actual carbon-based one. Likewise, it will remain unchanged from the vantage point of a species with one eye, two eyes, a compound eye, or no eyes whatsoever. In every case, it will consist of two hydrogen atoms and one oxygen atom because that’s what a water molecule is. All that matters is that the species analyzing it has developed the capacity to detect it.
But the purpose of Nagel’s essay was neither to praise nor bury objectivity. His point was that the one thing we can never be truly objective about is our own experience. Beyond a certain level of complexity, it’s like something to be whoever we are. Consciousness means that even if who we happen to be is Spock or Data, our self-assessments will still have the quality of being subjective. There is no point of view from which our own experience can be truly understood for what it is. Nagel wrote:
It is difficult to understand what could be meant by the objective character of an experience, apart from the particular point of view from which its subject apprehends it. After all, what would be left of what it was like to be a bat if one removed the viewpoint of the bat?
Fortunately, the “problem” consciousness poses for objectivity is only really a problem if you’re wedded to the idea that individual consciousness can be reduced to an objective essence (self or soul) in the first place. That we actually have such an essence is far from certain. In fact, there have been people making very good arguments that we probably don’t for over two millennia now.
In his excellent book, Why Buddhism is True, Robert Wright describes in some detail the many things modern science, particularly psychology, has confirmed the Buddha got right, or at least probably did. Wright spends some time on what he describes as the Buddha’s “Seminal Not-Self Sermon,” commonly translated as Discourses on the Not-Self. In this sermon the Buddha, according to Wright’s overview, asks his disciples which of what Buddhists refer to as the five aggregates “qualify as self”: form (or the physical body); sensation (feelings); perception; mental formation; or consciousness. He asked ‘is it just the physical body (form)?’ ‘Is it just our feelings?’ And so on.
“If form were self,” the Buddha says, “then form would not lead to affliction, and it should obtain regarding form: ‘May my form be thus, may my form not be thus.” In other words, because our body does cause us suffering, it is clearly not under our control. Therefore, the body can’t be self. The Buddha then applies this same test of control to the remaining four aggregates to show they too could not possibly be self. It turns out that none of these, including consciousness, can truly be described as a self because all of them are beyond our control.
Though the Buddha never explicitly ruled out the possibility of a self, and recognized the practical role self-identity plays for individuals in other suttas, so far as I’m aware no one over the past twenty five or so centuries since his sermon has been able to offer a response to his queries regarding where exactly something like a self or essence can be found. It appears there is no one at the helm steering our individual ships through life’s rough waters. This doesn’t mean we are completely rudderless, but the idea that there is a central self running the whole show is so far completely unsupportable.
The American psychologist William James didn’t stop with the five aggregates. He turned outward in his challenge to the concept of self, asking us to clearly define where the boundary between the individual and the family lies. If that line exists at all, it is extremely fuzzy. Wright quotes James to lend a little extra contemporary support to the Buddha’s 2500-year-old point.
‘Between what a man calls me and what he simply calls mine the line is difficult to draw.’ In that sense, he [James] observed, ‘our immediate family is a part of ourselves. Our father and mother, our wife and babes, are bone of our bone and flesh of our flesh. When they die, a part of our very selves is gone.’
I would go even further than James. Consider the role friends and other contacts we make over the course of our lifetimes play in making us who we are today. Many of these contributions to our identity we aren’t even conscious of. Yet at the same time the number of people we honestly couldn’t imagine being the same without certainly extends well beyond our immediate family.
Wright sums the situation up as follows when describing the related Buddhist concept of emptiness:
In other words: nothing possesses inherent existence; nothing contains all the ingredients of ongoing existence within itself; nothing is self-sufficient. Hence the idea of emptiness: all things are empty of inherent, independent existence.
With the self no longer in the picture, there is no subject for us to contend with. The perceiver becomes a collection of characteristics molded by a combination of biology, personal experience and culture, none of which alone qualifies as the individual subjective viewer. What is it that is being influenced by all these feelings? By adopting a supposedly objective point of view in order to eliminate all the feelings that cloud our judgment, who is the subject we are discarding in order to obtain this more accurate view of the world? In recognizing there is no self, the objective/subjective dichotomy suddenly becomes not so much two sides of the same coin as a false choice created by a faulty dualistic premise.
One of the ten images developed by the psychiatrist Hermann Rorschach to help doctors effectively evaluate how their patients visually experience the world.
Perhaps the best demonstration of the fluidity of the boundary between subjects and objects is the famous, if widely misunderstood, Rorschach Test. The ten inkblots used in the test are not random smears of ink like many people think, but carefully crafted images created by the psychiatrist Hermann Rorschach.
Rorschach had been fascinated his entire life with how people see the world. In addition to his psychiatric training, he was the son of an artist with a considerable artistic talent of his own. This made him well suited for research into human perception; an area that had been largely overlooked by his more famous contemporaries, Freud and Jung.
Rorschach's inkblots are not the visual equivalent of free association. As Damion Searls puts it in his book, The Inkblots: Hermann Rorschach, His Iconic Test, and the Power of Seeing, "The image itself constrains how you see it — as on rails — but without taking away all your freedom: different people see differently, and the differences are revealing."
Put another way, a Rorschach inkblot rests on the boundary between something that's really there and multiple, if constrained, ways of viewing it. It's hardly as fixed as a water molecule or the law of gravity, but it's far from an entirely relativistic image either. In this regard, it's an excellent metaphor for the complex patterns of relationships that make up both societies and ecosystems. According to Searls, Rorschach's insight was that "perception included much more [than the physical mechanics of seeing or other sensations], all the way to interpreting what was perceived."
In his recent book on Buddhism, Robert Wright also draws attention to the fact that perception and interpretation cannot be treated as separate actions. To make this case he quotes the psychologist Robert Zajonc:
There are probably very few perceptions and cognitions in everyday life that do not have a significant affective component, that aren’t hot, or in the very least tepid. And perhaps all perceptions contain some affect. We do not just see ‘a house’: we see ‘a handsome house,’ ‘an ugly house,’ or ‘a pretentious house.’ We do not just read an article on attitude change, on cognitive dissonance, or on herbicides. We read an ‘exciting’ article on attitude change, an ‘important’ article on cognitive dissonance, or a ‘trivial’ article on herbicides.
The point here isn’t that what we call objective reality doesn’t exist. Rather it’s that any species with the capacity to unveil truth can’t possibly be objective about their own experiences. There are no objective scientists or philosophers out there. There is no objective people out there period. We all have feelings about our existence that color every decision we make, no matter how rational we think we’re being. Furthermore, we all have the impression there’s an inner objective self or essence guiding the whole show, but there isn’t.
As was stated earlier, what makes something objectively true isn’t that it has been dispassionately observed, but that every single possible subjective observer can’t help but ultimately reach the same conclusion about its nature given the proper intellectual and technological tools to make the necessary examination. No matter how anyone feels about a water molecule, or through what physiological lens or mechanical device it is viewed, it will still be two hydrogen atoms and one oxygen. The same can’t be said about the relationships we form with each other or with our environment. It’s only by realizing we are enmeshed in the world rather than separate “objective” outside observers that we can truly hope to make any real progress in our understanding. | https://craig-axford.medium.com/objectivity-vs-subjectivity-an-incongruity-that-isnt-really-5c29ffe93c81 | ['Craig Axford'] | 2018-12-16 22:30:00.090000+00:00 | ['Philosophy', 'Spirituality', 'Consciousness', 'Psychology', 'Science'] |
How My School Will Stay Open in COVID-Crippled Spain | A torrent of tragedy — that’s our dear year 2020 so far.
Assuming that we survive the perils of existence — those from nature and, even more so, those from our own misguided steps — and humanity’s distant offspring look back on the trials and tribulations of their bumbling ancestors, the narrative of 2020 would be ripe for future historians.
Perhaps even one of those defining years like 1066 and the Battle of Hastings or 1492 and the New World. Maybe 2020 will be called the Loss of Eden, defined by the singular and symbolic face mask that has essentially divided our life-intake system from the life-giving system.
And yet, when those future historians read the narrative of 2020, hidden amongst the fear and confusion and crises and death and sorrow, they will uncover brilliant gems of humanity at its best.
The selflessness and sacrifice of the world’s doctors and nurses are the crown jewel of 2020. And rightfully so.
However, there’s a small school perched on a hill overlooking city and port on the Mediterranean island of Mallorca, and future historians looking for proof of humanity’s best would do well to pay attention.
This school — my school — overcame odds and immense pressure during the first wave of COVID. Many teachers knew little more than Gmail and the basics of a Google Doc. In the end, we kept quality online education rolling, just as paying parents had expected at the beginning of the academic year in September 2019.
And this is our strategy to once again do the impossible: stay open for the 2020/2021 academic year in COVID-crippled Spain. | https://medium.com/age-of-awareness/how-my-school-will-stay-open-in-covid-crippled-spain-18140507d6bc | ['Drew Sparkman'] | 2020-09-09 08:27:26.889000+00:00 | ['Education', 'Productivity', 'Travel', 'Coronavirus', 'Covid 19'] |
I Tried Being Nicer to Myself for a Day | I don’t know why I don’t do this more often.
Photo by Septian simon on Unsplash
Wow, this was a challenge. Is it really sad that this took a conscious effort to say nice things to myself? Apparently, I’m kind of mean on a regular basis. When I look in the mirror, I focus on the “flaws” that I see. If my pants are fitting tighter than usual or I’m having a breakout or the circles under my eyes look darker than usual, I’ll sigh and think to myself, I wish I looked better.
I even have a hard time accepting praise from other people. I’ll brush off compliments with an unnecessary explanation, but I’m the first to internalize any criticism.
I became aware of how detrimental my negative self-talk can be when a friend told me it made me less attractive and not so fun to be around. She said that nobody else notices the “flaws” that I speak of and if I can’t take a compliment it makes people not want to give them to me anymore.
When people feel good about themselves, it’s infectious. When they don’t, it’s a repellant. There’s a reason Eeyore, the pessimistic donkey in Winnie the Pooh, is always off by himself. Pity parties are best enjoyed solo.
I don’t want to be like Eeyore. I don’t want to push away my friends with my sad-sack attitude about myself. So, I tried to be nicer to myself. I focused on only giving myself compliments with no critiques for a full day. This is how it went.
Instead of starting my morning run with thoughts of how much I hope it will help me lose weight, because I hate my thighs, I thought about how great I felt that I woke up early to do it. I felt lucky to live right next to such a nice park that I can run around. I enjoyed every step because I was happy to be able to do it. I ended the run feeling more energized because I focused on positive thoughts instead of negative ones.
I usually pick myself apart in the mirror before I take a shower, but today I skipped the mirror and instead just tried to focus on the parts of my body that I like while I was washing up.
I washed my hair thinking that I love my curly hair. As I scrubbed my stomach, I complimented my small waist. While washing my face, I remembered my cystic acne from when I was in college and was so happy that my skin was relatively smooth now. I got out of the shower feeling good about myself.
Getting dressed is always a problem for me, because if something doesn’t fit perfectly or like I think it should to be flattering, I can spiral into thinking I’m just so hideous.
In order to bypass that altogether, I just picked my favorite spring dress that always fits and makes me feel beautiful. Easy. No struggles on what to wear. My closet should be filled with clothes like that.
I looked in the mirror, told myself I looked pretty great and I believed it. If you only tell yourself to see the positives, it can change your entire mindset.
When I got in my car, I commended my driving skills, because I’ve only gotten in one accident since I was 16 and that wasn’t even my fault. I even snuck a peek at myself in the rear view mirror and thought, Ok! I look cute today! Even just the thought made me smile.
I sang along to a song on my playlist and thought, I don’t sound too bad. If I took voice lessons, I might just be unstoppable. Not all compliments have to be grounded in reality, but it felt good to think it. It even made me laugh out loud. Laughing at yourself is a pretty amazing joy.
When I was in class that evening, I got a paper back with a 98% score. Normal me would have been hung up on that 2% and probably would have gone up to the teacher at the end of class to ask about it. But on this day, I didn’t. I thought I was a pretty kick-ass student and I truly didn’t care about that 2%.
By the time I got home, I can honestly say I felt lighter than I usually do. Showering myself with compliments all day boosted me up. It put me in a great mood all day. I think that mood showed to others as well.
I had a couple people tell me that I looked happy, which made me feel even better. I also found myself taking compliments in stride. I believed them, which is key. I smiled, said ‘thank you’ and kept moving.
I’m going to make a conscious effort to do this more often. Obviously, I still had moments when little self-critiques tried to creep back in my mind, but I never let them stay around too long. Negativity is draining, especially if it’s consuming your thoughts. I need to be nicer to myself and I know that I can be.
It takes work, but the work is worth my well-being. If you find yourself stuck in a cycle of negative self-talk, I suggest this challenge. Try to only compliment yourself for a full day. I’m sure you will notice a change. Be kind to yourself. You deserve it. | https://maclinlyons.medium.com/i-tried-being-nicer-to-myself-for-a-day-81c66e078b61 | ['Maclin Lyons'] | 2019-04-26 20:35:00.243000+00:00 | ['Mental Health', 'Self-awareness', 'Self', 'Self Love', 'Life Lessons'] |
Four Lessons I Learned From Reading Origin by Dan Brown | Four Lessons I Learned From Reading Origin by Dan Brown
#3 is vital for Novelists
Screenshot by the Author
Over the weekend, I was feeling a bit low on morale, so I decided that rather than lie in bed all weekend, I would read a book and have a genuine reason to lie in bed all weekend.
I decided to read a book by the famous author Dan Brown, because it had been sitting on my bookshelf for a while.
The first chapter, or rather the front matter of the book, was enough to convince me to keep reading.
Dan Brown is a visual writer, an astounding one at that. His words created bold images in my head, and his choice of adjectives was spot on.
The first thought I had when I dropped the book was that I had to read it all over again — I can count on my right hand the number of books that have had that effect on me.
Before the enthusiasm of talking about my new favorite author consumes this article, here are four lessons I learned from reading Dan Brown’s book Origin.
1. Research can be made intriguing
“All art, architecture, locations, science, and religious organizations in this novel are real.”
This paragraph piqued my interest. There is a stupendous amount of research that went into writing Origin. The names of artists and artworks featured in this book alone are enough to make one’s head swirl, but the way Dan weaved facts with fiction was seamless and intriguing.
In this interview, Dan Brown says it took him two years to research Origin in Spain. Amazing!
I kept alternating between googling the landmarks (which he painted to produce the sweet music only a skilled wordsmith can craft) and reading, but alas, the strong pull of the book won, and I was content to read through and note down the locations for later viewing.
2. Character Development is vital
A quote that comes to mind is:
“No one is ever the villain of their own story.” ― Cassandra Clare
There is something I find more appealing about reading books compared to watching their respective movie adaptations. The fact that I get to peek into a character's head and see their motives, rather than just watching their actions, brings me closer to every single character in the book, be it the good guy or the bad guy.
Origin was written in third person omniscient POV, and every relevant character was featured in a chapter, especially when they had a confrontation with the main character. What's more, every single character had a backstory.
Hence, no matter what actions they took with or against the protagonist, it was justifiable.
For me, it made them realistic. And the fact that the characters didn't see what we did (we were always a step ahead of each character because we could see into the heads of the other characters involved) gave us more empathy for their diverse causes.
This was most notable in Luis Avila, who was on an assassination mission to serve his God by killing the atheist, Edmond Kirsch, but still waited long enough to save a waitress from being disrespected.
3. A single strong plot is enough to pull a book
The book was geared toward answering the ultimate question of human existence, "Where do we come from, and where are we going?"
This question became the central plot.
I was deep into chapter 17 when I realized that Dan hadn’t told me anything about what I was reading, but I was still eager to turn the next page.
This is because each chapter was progressive. The secret was that each chapter reminded us of the problem at hand and stoked our curiosity to discover the secret of our origin.
I still felt the same enthusiasm in Chapter 58 that I did in chapter 17.
All chapters were directed toward answering the question posed in the opening chapters: "Would we ever have the answer to the question Edmond claimed to have discovered the answer to?"
I didn't think we were ever going to get the answer; I thought the catastrophe the religious leaders feared would materialize. And I imagined how the author, being a writer, would be able to come up with a substantial answer to the amazing scientific discovery.
I legit came out of the book to ponder the author: how could he know so much!
The arguments were so sound and solid that the author made me question everything I knew about religion from a logical point of view.
4. Readers love mystery
I didn’t realize how much I loved mystery until I read this book.
Here again is my point about character backstory: to have everything you assumed about a character change due to new information is the kind of epiphany I want to see in books!
For a quick summary of the lessons mentioned in this post, here are the points:
1. Research can be made intriguing
2. Character development is vital
3. A single plot is enough to pull a book
4. Readers love mystery
In all, Dan is a master of his craft, and I can’t express how much I love his mode of writing. My next goal is to read every single one of his books before the end of this year! | https://medium.com/books-are-our-superpower/four-lessons-i-learned-from-reading-origin-by-dan-brown-94669b784d79 | ['Deborah Oyegue'] | 2020-09-10 13:16:01.417000+00:00 | ['Writing', 'Books', 'Reading', 'Fiction', 'Books And Authors'] |
Disaster Recovery on Kubernetes | Using VMWare’s Velero to backup and restore, perform disaster recovery, as well as migrate Kubernetes resources.
Photo by Markus Spiske on Unsplash
Although Kubernetes (and especially managed Kubernetes services such as GKE, EKS, and AKS) provide out-of-the-box reliability and resiliency with self-healing and horizontal scaling capabilities, production systems still require disaster recovery solutions to protect against human error (e.g. accidentally deleting a namespace or secret) and infrastructure failures outside of Kubernetes (e.g. persistent volumes). While more companies are embracing multi-region solutions, that is a complicated and potentially expensive approach if all you need is simple backup and restore. In this post, we'll look at using Velero to back up and restore Kubernetes resources, as well as demonstrating its use as a disaster recovery or migration tool.
Are Backups Still Needed?
A key point that is often lost when running services in high availability (HA) mode is that HA (and thus replication) is not the same as having backups. HA protects against zonal failures, but it will not protect against data corruption or accidental removals. It is very easy to mix up contexts or namespaces and accidentally delete or update the wrong Kubernetes resources. This may be a Custom Resource Definition (CRD), a secret, or a namespace. Some may argue that with IaC tools like Terraform and external solutions to manage some of these Kubernetes resources (e.g. Vault for secrets, ChartMuseum for Helm charts), backups become unnecessary. Still, if you are running a StatefulSet in your cluster (e.g. an ELK stack for logging or self-hosted Postgres to install plugins not supported on RDS or Cloud SQL), backups are needed to recover from persistent volume failures.
Velero
Velero (formerly known as Ark) is an open-source tool from Heptio (acquired by VMWare) to back up and restore Kubernetes cluster resources and persistent volumes. Velero runs inside the Kubernetes cluster and integrates with various storage providers (e.g. AWS S3, GCP Storage, Minio) as well as restic to take snapshots either on-demand or on a schedule.
Installation
Velero can be installed via Helm or via the CLI tool. In general, it seems like the CLI gets the latest updates and the Helm chart lags behind slightly with compatible Docker images. However, with each release, the Velero team does a great job updating the documentation to patch CRDs and the new Velero container image, so upgrading the Helm chart to the latest isn’t a huge concern.
Configuration
Once you have the server installed, you can configure Velero via CLI or by modifying values.yaml for the Helm chart. The key configuration steps are installing the plugins for the storage provider and defining the Storage Location as well as the Volume Snapshot Location:
configuration:
  provider: aws
  backupStorageLocation:
    name: aws
    bucket: <aws-bucket-name>
    prefix: velero
    config:
      kmsKeyId: <my-kms-key>
      region: <aws-region>
  volumeSnapshotLocation:
    name: aws
    config:
      region: ${region}
  logLevel: debug
(Note: There is an issue with CRDs in the latest Helm chart that causes the configured backup storage and volume snapshot locations to not be set as defaults. If you decide to name the storage and snapshot locations, add --storage-location <name> --volume-snapshot-location <name> to the following Velero commands.)
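If you prefer the CLI route over Helm, the equivalent setup looks roughly like the following. The bucket, region, KMS key, plugin version, and credentials file are placeholders for your own values, not something prescribed here:
$ velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.1.0 \
    --bucket <aws-bucket-name> \
    --prefix velero \
    --backup-location-config region=<aws-region>,kmsKeyId=<my-kms-key> \
    --snapshot-location-config region=<aws-region> \
    --secret-file ./credentials-velero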
Creating a Backup
To create a backup, simply apply the backup command to a namespace or select by labels:
$ velero backup create nginx-backup --include-namespaces nginx-example
$ velero backup create postgres-backup --selector release=postgres
When the backup command is issued, Velero runs through the following steps:
1. Call the Kubernetes API to create the Backup CRD.
2. The Velero BackupController validates the request.
3. Once the request is validated, it queries the Kubernetes resources, takes snapshots of the disks to back up, and creates a tarball.
4. Finally, it initiates the upload of the backup objects to the configured storage service.
Image Credit: OpenShift
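To check on a backup while it runs (or afterwards), the Velero CLI can describe it and pull the server-side logs. The backup name here is just the example from above:
$ velero backup describe nginx-backup
$ velero backup logs nginx-backup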
Restoring Data
To list the available backups, first run:
$ velero backup get
Now you can restore from backup by issuing:
$ velero restore create RESTORE_NAME \
--from-backup BACKUP_NAME
Velero also supports restoring objects into a different namespace if you do not wish to overwrite the existing resources (append --namespace-mappings old-ns-1:new-ns-1 to the above command). This is useful if you are experiencing an outage and want to keep the broken resources around to diagnose later while immediately restoring the service.
Velero can change the storage class of persistent volumes during restores. This may be a good way to migrate workloads from HDD to SSD storage, or to a smaller disk if you over-provisioned the persistent volume (see the documentation for the configuration).
Finally, you can also selectively restore sub-components of the backup. Inspect the backup tarball by running:
$ velero backup download <backup-name>
From the tarball, you can choose a manifest for a specific resource and individually issue kubectl apply -f . This is useful if you took a snapshot of the entire namespace rather than filtering by labels.
Scheduled Backups
Instead of only creating backups on-demand, you can also configure scheduled backups for critical components:
Via CLI:
$ velero schedule create mysql --schedule="0 2 * * *" --include-namespaces mysql
Via Helm values:
schedules:
  mysql:
    schedule: "0 2 * * *"
    template:
      labelSelector:
        matchLabels:
          app: mysql
      snapshotVolumes: true
      ttl: 720h
Notice the ttl configuration, which specifies the time to expire scheduled backups. If you are using a Cloud Storage provider, you can leverage lifecycle policies or control that via Velero as shown above to reduce storage costs.
Other Uses
Besides simply taking backups, Velero can be used as a disaster recovery solution by combining schedules and read-only backup storage locations. Configure Velero to create a daily schedule:
$ velero schedule create <SCHEDULE NAME> --schedule "0 7 * * *"
If you need to recreate resources due to human error or infrastructure outage, change the backup location to be read-only to prevent new backup objects from being created:
$ kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadOnly"}}'
Restore from backup in another location:
$ velero restore create --from-backup <SCHEDULE NAME>-<TIMESTAMP>
And finally, revert backup to be writable again:
$ kubectl patch backupstoragelocation <STORAGE LOCATION NAME> \
--namespace velero \
--type merge \
--patch '{"spec":{"accessMode":"ReadWrite"}}'
This process works to migrate clusters to a different region (if the provider supports it) or to capture the last working version prior to a Kubernetes upgrade. Finally, even if Velero does not natively support migration of persistent volumes across clouds, you can configure restic to make backups at the filesystem level and migrate data for a hybrid-cloud backup solution.
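As a sketch of how the restic integration is wired up (assuming restic was enabled at install time, e.g. via --use-restic), you opt a pod's volumes into file-level backups with an annotation, and subsequent backups of that namespace will pick them up. The names below are placeholders:
$ kubectl -n <namespace> annotate pod/<pod-name> \
    backup.velero.io/backup-volumes=<volume-name-1>,<volume-name-2>
$ velero backup create <backup-name> --include-namespaces <namespace>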
Other Solutions
While Velero is very easy to use and configure, it may not fit your specific use case (e.g. cross-cloud backup). As mentioned above, Velero integrates with other solutions such as restic or OpenEBS, but if you are looking for alternatives, the following list provides both open-source and enterprise options: | https://medium.com/dev-genius/disaster-recovery-on-kubernetes-98c5c78382bb | ['Yitaek Hwang'] | 2020-09-24 07:10:26.796000+00:00 | ['Backup', 'Software Engineering', 'Disaster Recovery', 'Programming', 'Kubernetes'] |
1 Critical Skill Successful People Often Lose Over Time | What happens when you only think
James’s experience reminded me of my time in France. I don’t know if you’re aware, but people who live in France speak French. I do not. Naturally, there was a bit of a communication gap.
I would walk into the office and hear a coworker say:
“Blah, blah, blah, blah, Todd?”
I would smile and reply with the only word I knew at the time:
“Oui!”
What did I say yes to? I didn’t know. Nobody ever returned later in the day expecting me to help beat a person to death with crusty baguettes, though, so I counted that as a win.
Still, I’m a communicator. I wanted to understand French, not just fake it.
My plan was to think my way to learning French. Each day, I sat in the cafeteria reading the French newspaper. I chewed on smelly cheese and thought: “Okay, they just used the word lancé, and I definitely saw that word in another story about a software company releasing a new product. It must mean launch.”
This process continued for months.
My employer moved me to France to work, but most days I just eavesdropped. One morning my coworker Margaux came in late and furious. That was the day I learned how to curse at trains. Another day I sat in on a meeting about the company’s media bank with four people. They spoke French the entire hour. I caught snippets here and there about mot-clés after the conversation had already moved 10 minutes ahead.
I thought and thought and thought and thought. The French language consumed me as much as any hobby ever had.
Finally, two weeks before I left the country, a reckoning.
A colleague opened the door for our meeting, looked me right in the eyes, and said:
“Est-ce que cette chambre vous convient?”
For the first time, the words instantly translated.
“Is this room okay?”
A chorus of invisible angels sang my praises. I understood! All the reading and thinking and listening quietly must have paid off. Euphoria had to be pouring out of my eyeballs. I’d never been so proud.
That lasted for about 1.2 seconds.
I opened my mouth to reply. I couldn’t. Why couldn’t I reply?
Because I still couldn’t speak French.
Ashamed I couldn’t answer more thoroughly, I just nodded and muttered my old standby: “oui.” She gave me that look you give sad puppy dogs, and said: “Faut-il parler en anglais?”
I nodded miserably.
Yes, we should speak in English. | https://medium.com/personal-growth/1-critical-skill-successful-people-often-lose-over-time-edb5fb20fe9c | ['Todd Brison'] | 2020-06-23 16:13:12.051000+00:00 | ['Motivation', 'Personal Development', 'Entrepreneurship', 'Success', 'Inspiration'] |
Why user testing with real copy is like the ultimate bacon sarnie | First written for and published on naturalinteraction.com
Imagine you've spent months designing and building a shiny new website. It looks amazing and you're sure it's going to raise online profits and convert every visitor on first visit. You tested the design concept and even ran click testing on the Information Architecture — what could go wrong?
Words.
Getting the content right, getting that microcopy right, using simple, on brand, audience appropriate language. These things are all important when it comes to delivering a great user experience. And yet, those words are often the last thing on the list and rarely form part of the testing process.
Wherever possible, you should always user test with real copy. If you don’t have the final copy ready, at least use a draft or as close to final as you can.
Say no to a dry sarnie
Lorem Ipsum is a well-known form of dummy text, used to fill spaces on designs and wireframes before the polish. It dates back to the 1500s, when it was first used by typesetters and printers to show how a page layout might look.
It's still a useful tool but, well, it's a bit dry. You need to add some sauce (aka real words) so that when you're usability testing a product or design with representative users, you really find out whether they fully understand the point of it all, and whether they find it easy to use from both a visual and a reading point of view.
The gold standard (aka the sauce)
Having the content written and ready to go at the point you start user testing your designs is pretty rare. Being able to test the design in conjunction with its content with real users is absolutely the gold standard. Why? Because your users may well pick up things you’ve missed.
This is especially key if you’re working on the user interface for something like an app or a piece of software. Call to action text and tooltips need to work hand in hand with the visual appearance of the product to enable a fast onboarding process and of course, smooth user experience.
In this podcast episode, Chris Myhill from Just UX Design talks about a project he worked on for a large supermarket chain where it was actually the copy, not the design, that caused the usability issues, which surprised the whole team. Sector jargon and complex wording were confusing users.
By providing more context, simplifying the content and pairing it with icons, he was able to really improve the overall product.
Realistic best practices
Important areas such as call-to-action buttons and instructions should be as clear as possible, because these are the places where customers are most likely to struggle or get confused. If you’re not able to test your product or website with real, polished copy, at least ensure the following are as close to final as possible:
Instructions
Tooltips
Microcopy
Call to action
Sign up process
Do what you can — something is better than nothing
If you’re working with a client, push for real copy to be supplied at the time of your initial brief and if that’s not possible, at least write something relevant in place of lorem ipsum to give your users a more holistic sense of the end product.
In conclusion, and if I was pushed to make an analogy — and let’s face it, I’ve been building up to it all the way through — I would say that user testing without real copy is like eating a bacon sarnie without brown sauce. It’s better than nothing at all but add that sauce and you’ve got yourself something truly great. | https://uxdesign.cc/why-user-testing-with-real-copy-is-like-the-ultimate-bacon-sarnie-2886ba0b8d3d | ['Alex Ryder'] | 2019-09-17 22:28:06.498000+00:00 | ['Copywriting', 'Marketing', 'Startup', 'Content Strategy', 'Ux Writing'] |
Saying That You Feel Ugly and Calling Yourself Ugly are Two Extremely Different Things | Photo by Florian Pérennès on Unsplash
So it is almost 4 AM here in Florida, and per usual I am awake till the crack of dawn obsessing over anything and everything. Recently, I did a shoot for a small reality show which was based around a blind date and going through each other’s phones. For months I waited anxiously to see the footage, and tonight I was finally able to watch it in which I was laughing positively and pinching my belly fat, crying inside at the same time. Looking at the set-up, it appeared to be the ugly duckling and the swan paired together, but deep down that is just a feeling and I know it is not a fact. That is why I prefer to say that I feel ugly rather than sealing the horrid perception of myself by means of two words: “I am”.
A few days ago, I wrote a post on here about men and their struggles with eating disorders in much of a positive and empowering tone. However, the very next day I was wearing a tank-top, feeling very self-conscious about breathing in public because of the way my perceived belly would protrude. That night, I took my medicine, got the munchies, and binge ate relentlessly and regretfully all at once. The next day, my belly and man boobs were in the back of my mind because I was not even all that hungry. However, night time came along and when it was time to shower, I got naked and saw my body, feeling desperate, sad, and uncomfortable in my own skin. Earlier in the day I had no cup size and a flat belly, only to feel like a balloon at shower hour. These feelings are symptoms of a common disorder known as Body Dysmorphic Disorder.
“A woman in tattoos and lingerie is wrapped in a white robe on a hotel bed” by Stas Svechnikov on Unsplash
Now, whenever I make mention of my own personal struggles with BDD, most of the time I am shrugged off because people think I am looking for validation. Additionally, when the subject comes up, it usually comes off as if I actually believe I am ugly, even though it is merely a feeling that comes and goes like a fair-weather side boy. For starters, anyone who talks about their mental illnesses should be treated as nothing less than brave for opening up about an issue that is still shoved under the rug by society. More to the point, I do not believe these things about myself but merely feel them so intensely that the illusion in my mind makes it feel so real.
Regarding tonight, seeing myself on taped blind date triggered insecurities that normally do not have any ounce of power over me. I didn’t see who I normally think I am whenever there is no mirror around for me and I’m free from my ego. Absolutely not. All I noticed was my hunched back, puffy cheeks, overly feminine qualities (which I’m unapologetic about but still insecure), and an image pale in comparison to the perfectly crafted wallflower across from me. I shed a tear and smiled all at once because of how grateful I am to be self aware, but the pain still exists.
Photo by Jairo Alzate on Unsplash
All in all, I absolutely refuse to let my mind’s distortion inhibit me from moving forward in my life. It is still nearly impossible for me to love a body picture enough to post it online, and when I do, it’s usually removed within a day or two because of my insecurities. In addition, whenever I take pictures I am usually hiding part of my face, giving a kiss on the cheek, or doing anything to avoid smiling, because I feel my face is puffy when I smile and it keeps me loathing myself. Ultimately, my insecurities come with a wisdom that makes the pain all worth the while. Knowing that language is extremely powerful can help us understand that feelings are not facts, so feeling a certain way does not equate to being it as well.
Overall, these are the things I see when I look at myself, but they are not the bricks that build the house of my identity. Seeing the clip made me feel ugly but there is no ugly bone in my body. It took me forever to get to this point, but ever since I replaced “I am” with “I feel” before using the words ugly or fat, the pain of Body Dysmorphic Disorder became fifty percent less extreme than it initially was. Tweaking my language relieves the pain so much more than the four medications I take for these disorders. And the best part about it is that this wisdom doesn’t have a copay. | https://astoldbynaomi.medium.com/talking-about-feeling-ugly-and-calling-yourself-ugly-are-two-different-things-cb9befb46d1 | ['Naomi Eden'] | 2018-08-15 19:11:24.854000+00:00 | ['Self Improvement', 'Body Image', 'Writing', 'Mental Health', 'Life'] |
AI Movies Recommendation System Based on K-Means Clustering Algorithm | AI Movies Recommendation System Based on K-Means Clustering Algorithm
Overview of Article
In this article, we’ll build an artificial intelligence movies recommendation system using the k-means clustering algorithm. We’ll recommend movies to users that are most relevant to them based on their previous history. We’ll only import data where users have rated movies 4+, as we want to recommend only those movies which users like most. Throughout this article, we use the Python programming language with its associated libraries, i.e. NumPy, Pandas, Matplotlib and Scikit-Learn. Moreover, we assume that the reader has familiarity with Python and the aforementioned libraries.
Introduction to AI Movies Recommendation System
In this busy life, people don’t have time to search for their desired item; they want it brought to them with as little effort as possible. So, recommendation systems have become an important tool for helping people make the right choice for the thing they want and for growing our product. Since data is increasing day by day, with such large databases it has become difficult to find the items most relevant to our interests, because often we can’t find an item of interest with just a title, and sometimes it is even harder. A recommendation system helps us surface the most relevant items for each individual from our database.
In this article, we’ll build a movies recommendation system. A movies recommendation system has become an essential part of any movies website, because an individual can’t tell which movies will interest them from just a title or genre. Someone may like action movies, but they will not like every action movie. To handle this problem, many authors have proposed a better way: recommend a movie to user 1 from the watch list or favorite movies of another user 2 whose movie history is most similar to user 1’s. That is, if the taste of two people is the same, then each of them will like the other’s favorite food. Many tech giants have been using such recommendation systems in their applications, like YouTube, Netflix, etc.
In this task, machine learning (ML) models have helped us a lot to build such recommendation systems based on users’ previous watch history. ML models learn from users’ watch history and categorize them into groups which contain users of the same taste. Different types of ML models have been used, like clustering algorithms, deep learning models, etc.
K-Means Clustering Algorithm
K-Means is an unsupervised machine learning algorithm which can be used to categorize data into different groups. In this article we’ll use this algorithm to categorize users based on their 4+ ratings on movies. I’ll not describe the background mathematics of this algorithm, but I’ll give a little intuition for it. If you want to understand the mathematical background of this algorithm, I suggest searching for it on Google; many authors have written articles on its mathematical background. Since the complete mathematics behind this algorithm is handled by the Scikit-Learn library, we will only understand and implement it.
Note: The plots of data in this section are randomly generated and serve only to build intuition for the k-means algorithm.
Figure 1 — Scatter Plot Before K-Means Clustering
Suppose that we have 2-dimensional data in the form (x₁, x₂), plotted in Figure (1). Next we want to divide this data into groups. If we take a look at the data, we can observe that it can be divided into three groups. In this plot, which is only designed for intuition, anyone can see that the data splits into three groups. But sometimes we have very complex and big data, or data with 3 or 4 dimensions, or more generally 100 or 1000 dimensions or even more. Then it is not possible for a human to categorize such data, and we can’t even plot such high-dimensional data. Also, sometimes we don’t know the optimal number of clusters we should have for our data. So, we use clustering algorithms which can work with such big data, even with thousands of dimensions, and there are methods which can be used to find the optimal number of clusters.
Figure 2 — Scatter Plot After K-Means Clustering
In Figure (2), a demonstration of k-means clustering is shown. The data of Figure (1) has been categorized into three groups, each presented in Figure (2) with a unique color.
A question may arise: how does k-means actually categorize the data?
To categorize data into groups which contain the same type of items, the k-means algorithm follows 6 steps. Figure (3) presents the steps which the k-means algorithm follows to categorize data.
Figure 3 — Graphical Abstract of K-Means Algorithm
Figure (3) describes the following steps of the k-means algorithm.
1. First, we select the number of clusters K which we want for our dataset. Later, the elbow method will be explained for selecting the optimal number of clusters.
2. Then, we select K random points called centroids, which are not necessarily taken from our dataset. To avoid the random initialization trap, which can leave us stuck with bad clusters, we’ll use k-means++ to initialize the K centroids; it is provided by Scikit-Learn’s k-means implementation.
3. The k-means algorithm assigns each data point to its closest centroid, which finally gives us K clusters.
4. Each centroid is re-centered to the position which is now actually the centroid (mean) of its own cluster and becomes the new centroid.
5. The algorithm resets all clusters and again assigns each data point to its new closest centroid.
6. If the new clusters are the same as the previous clusters OR the total number of iterations has been reached, it stops and gives us the final clusters of our dataset. Otherwise, it moves again to step 4.
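To make steps 3–5 concrete, here is a minimal NumPy sketch of a single k-means iteration (the assignment and update steps). The function and variable names are only illustrative; in the rest of the article we rely on Scikit-Learn’s implementation, which repeats exactly this loop until the clusters stop changing.
import numpy as np

def kmeans_iteration(X, centroids):
    # Assignment step: label every point with the index of its closest centroid
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    # Update step: move each centroid to the mean of the points assigned to it
    new_centroids = np.array([X[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
                              for k in range(len(centroids))])
    return labels, new_centroids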
Elbow Method
The elbow method is a good way to find the optimal number of clusters. For this, we need to compute the within-cluster sum of squares (WCSS). WCSS is the sum of the squared distances of each point from its cluster’s centroid, and its mathematical formula is the following:
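In LaTeX notation (a reconstruction based on the symbol definitions given just below):
WCSS = \sum_{i=1}^{K} \sum_{j=1}^{N_i} \lVert P_{i,j} - C_i \rVert^2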
Where K is the total number of clusters, Nᵢ is the size of the i-th cluster (i.e. the number of data points in the i-th cluster), Cᵢ is the centroid of the i-th cluster and Pᵢ,ⱼ is the j-th data point of the i-th cluster.
So, what will we do with WCSS?
WCSS tells us how far the data points are from their centroids. As we increase the number of clusters, WCSS becomes smaller, and after some value of K it decreases only slowly; we stop there and choose that as the optimal number of clusters. I suggest searching for the elbow method on Google and looking at more clear examples of it. Here is a figure for the intuition of the elbow method.
Figure 4 — Elbow Method Plot
A demonstration of the elbow method is shown in Figure (4). We can observe that when the number of clusters K moves from 1 to 5, the WCSS value decreases rapidly from approximately 2500 to 400. But from cluster number 6 onward it decreases slowly. So, here we can judge that 5 clusters are a good choice for our dataset. Further, as we can see, the curve looks like an elbow; the joint of the elbow is the optimal number of clusters, which in this case is 5. Later we’ll see that we don’t always get such a smooth curve, so in this work I describe another way to observe changes in WCSS and find the optimal number of clusters.
Methodology Used in this Article
In this article, we’ll build a clustering-based algorithm that categorizes users into groups of the same interest using the k-means algorithm. We will use data where users have rated movies 4+, on the supposition that if a user rates a movie 4+ then he/she probably likes it. We have downloaded the database The Movies Dataset from Kaggle.com, which is a MovieLens dataset. In the following sections, we describe the whole project: Importing Dataset -> Data Engineering -> Building K-Means Clustering Model -> Analyzing Optimal Number of Clusters -> Training Model and Predicting -> Fixing Clusters -> Saving Training -> Finally, Making Recommendations for Users. The complete project can be downloaded from my GitHub repository AI Movies Recommendation System Based on K-means Clustering Algorithm. A Jupyter notebook of this article is also provided in the repository; you can download it and play with it.
URL: https://github.com/asdkazmi/AI-Movies-Recommendation-System-K-Means-Clustering
URL: https://www.kaggle.com/rounakbanik/the-movies-dataset?select=ratings.csv
Now let’s start coding:
Importing All Required Libraries
import pandas as pd
print('Pandas version: ', pd.__version__)
import numpy as np
print('NumPy version: ', np.__version__)
import matplotlib
print('Matplotlib version: ', matplotlib.__version__)
from matplotlib import pyplot as plt
import sklearn
print('Scikit-Learn version: ', sklearn.__version__)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
import pickle
print('Pickle version: ', pickle.format_version)
import sys
print('Sys version: ', sys.version[0:5])
from sys import exc_info
import ast
Out:
Pandas version: 0.25.1
NumPy version: 1.16.5
Matplotlib version: 3.1.1
Scikit-Learn version: 0.21.3
Pickle version: 4.0
Sys version: 3.7.4
Data Engineering
This section is divided into two subsections. First, we will import the data and reduce it to a sub-DataFrame, so that we can focus more on our model and look at what kind of movies users have rated and what kind of recommendations they get based on that. Second, we’ll perform feature engineering so that we have the data in a form which is valid for a machine learning algorithm.
Preparing Data for Model
We have downloaded the MovieLens dataset from Kaggle.com. First we’ll import the ratings dataset, because we want users’ ratings on movies, and then we’ll filter the data to keep only 4+ ratings.
ratings = pd.read_csv('./Prepairing Data/From Data/ratings.csv', usecols = ['userId', 'movieId','rating'])
print('Shape of ratings dataset is: ', ratings.shape, '\n')
print('Max values in dataset are\n', ratings.max(), '\n')
print('Min values in dataset are\n', ratings.min(), '\n')
Out:
Shape of ratings dataset is: (26024289, 3)
Max values in dataset are
userId 270896.0
movieId 176275.0
rating 5.0
dtype: float64
Min values in dataset are
userId 1.0
movieId 1.0
rating 0.5
dtype: float64
Next we’ll filter this dataset for only 4+ ratings
# Filtering data for only 4+ ratings
ratings = ratings[ratings['rating'] >= 4.0]
print('Shape of ratings dataset is: ', ratings.shape, '\n')
print('Max values in dataset are\n', ratings.max(), '\n')
print('Min values in dataset are\n', ratings.min(), '\n')
Out:
Shape of ratings dataset is: (12981742, 3)
Max values in dataset are
userId 270896.0
movieId 176271.0
rating 5.0
dtype: float64
Min values in dataset are
userId 1.0
movieId 1.0
rating 4.0
dtype: float64
So, now the minimum rating given by users is 4.0, and the dataset has been reduced from 2.6e⁷ rows to 1.2e⁷, which is less than half of the original dataset. But the dataset is still large and we want to reduce it more.
For the purposes of this article, I want to work with a small dataset. So, we will take a subset of this dataset with only the first 200 movies. Later, when we reduce it further to the first 100 users, we may end up with fewer than 200 movies actually rated by those users, and we want to work with around 100 movies.
movies_list = np.unique(ratings['movieId'])[:200]
ratings = ratings.loc[ratings['movieId'].isin(movies_list)]
print('Shape of ratings dataset is: ', ratings.shape, '\n')
print('Max values in dataset are\n', ratings.max(), '\n')
print('Min values in dataset are\n', ratings.min(), '\n')
Out:
Shape of ratings dataset is: (776269, 3)
Max values in dataset are
userId 270896.0
movieId 201.0
rating 5.0
dtype: float64
Min values in dataset are
userId 1.0
movieId 1.0
rating 4.0
dtype: float64
The dataset is still large, so we take another subset of the ratings by keeping not all users but only some of them, i.e. the first 100 users.
users_list = np.unique(ratings['userId'])[:100]
ratings = ratings.loc[ratings['userId'].isin(users_list)]
print('Shape of ratings dataset is: ', ratings.shape, '\n')
print('Max values in dataset are\n', ratings.max(), '\n')
print('Min values in dataset are\n', ratings.min(), '\n')
print('Total Users: ', np.unique(ratings['userId']).shape[0])
print('Total Movies which are rated by 100 users: ', np.unique(ratings['movieId']).shape[0])
Out:
Shape of ratings dataset is: (447, 3)
Max values in dataset are
userId 157.0
movieId 198.0
rating 5.0
dtype: float64
Min values in dataset are
userId 1.0
movieId 1.0
rating 4.0
dtype: float64
Total Users: 100
Total Movies which are rated by 100 users: 83
And finally, it’s done. We have a dataset of shape (447, 3) which includes 4+ ratings of 83 movies by 100 users. We started with 200 movies, but when we extracted the data for only the first 100 users, it turned out that 117 of those movies were not rated by the first 100 users.
Now we are no longer concerned with the ratings column; we have supposed that each movie rated 4+ by a user is of interest to him/her. So, if a movie is of interest to user 1, then that movie will also be of interest to another user 2 of the same taste. We can drop this column, as every remaining movie is a favorite of the user who rated it.
users_fav_movies = ratings.loc[:, ['userId', 'movieId']]
Since we sorted the DataFrame by columns, the index may not be in proper order, so we want to reset it.
users_fav_movies = users_fav_movies.reset_index(drop = True)
And finally, here is our final DataFrame of the first 100 users’ favorite movies from the list of the first 200 movies. The DataFrame below is printed transposed.
users_fav_movies.T
Now, let’s save this DataFrame to a CSV file locally, so that we can use it later.
users_fav_movies.to_csv('./Prepairing Data/From Data/filtered_ratings.csv')
Feature Engineering
In this section, we will create the sparse matrix which we’ll use in k-means. For this, let’s define a function which returns a movies list for each user from the dataset.
def moviesListForUsers(users, users_data):
# users = a list of users IDs
# users_data = a dataframe of users favourite movies or users watched movies
users_movies_list = []
for user in users:
users_movies_list.append(str(list(users_data[users_data['userId'] == user]['movieId'])).split('[')[1].split(']')[0])
return users_movies_list
The method moviesListForUsers returns a list containing one string per user with that user’s favorite movies. Later we will use CountVectorizer to extract features from these strings containing lists of movie IDs.
Note: The method moviesListForUsers returns the list in the same order as the users list. So, to avoid any mix-up, we will keep the users list in a fixed sorted order.
For the method defined above, we need a list of users and the users_data DataFrame. We already have users_data as our DataFrame. Now, let’s prepare the users list.
users = np.unique(users_fav_movies['userId'])
print(users.shape)
Out:
(100,)
Now, let’s prepare the list of movies for each user.
users_movies_list = moviesListForUsers(users, users_fav_movies)
print('Movies list for', len(users_movies_list), ' users')
print('A list of first 10 users favourite movies:\n', users_movies_list[:10])
Out:
Movies list for 100 users
A list of first 10 users favourite movies:
['147', '64, 79', '1, 47', '1, 150', '150, 165', '34', '1, 16, 17, 29, 34, 47, 50, 82, 97, 123, 125, 150, 162, 175, 176, 194', '6', '32, 50, 111, 198', '81']
Above is the list of the first 10 users’ favorite movies. The first string contains the first user’s favorite movie IDs, the second the second user’s, and so on. It looks like the 7th user’s list of favorite movies is larger than the others.
Now, we’ll prepare a sparse matrix for each user against each movie.
If user has watched movie then 1, else 0
Let us first define a function for sparse matrix
def prepSparseMatrix(list_of_str):
# list_of_str = A list, which contain strings of users favourite movies separate by comma ",".
# It will return us sparse matrix and feature names on which sparse matrix is defined
# i.e. name of movies in the same order as the column of sparse matrix
cv = CountVectorizer(token_pattern = r'[^\,\ ]+', lowercase = False)
sparseMatrix = cv.fit_transform(list_of_str)
return sparseMatrix.toarray(), cv.get_feature_names()
Now, let prepare the sparse matrix
sparseMatrix, feature_names = prepSparseMatrix(users_movies_list)
Now let’s put it into a DataFrame for a clearer presentation. The columns will represent the movies and the index will represent the user IDs.
df_sparseMatrix = pd.DataFrame(sparseMatrix, index = users, columns = feature_names)
df_sparseMatrix
Now, let’s make sure that the matrix we defined above is exactly what we want. We’ll check it for some users.
Let’s take a look at a few users’ favorite movies lists.
first_6_users_SM = users_fav_movies[users_fav_movies['userId'].isin(users[:6])].sort_values('userId')
first_6_users_SM.T
Now, let’s check that the users with the above IDs have the value 1 in the columns of their favorite movies and 0 otherwise. Remember that in the sparse matrix DataFrame df_sparseMatrix the indexes are user IDs.
df_sparseMatrix.loc[np.unique(first_6_users_SM['userId']), list(map(str, np.unique(first_6_users_SM['movieId'])))]
We can observe from the above two DataFrames that our sparse matrix is correct and has its values in the proper places. Now that we are done with data engineering, let’s create our machine learning clustering model with the k-means algorithm.
Clustering Model
To cluster the data, first of all we need to find the optimal number of clusters. For this purpose, we will define a class for the elbow method which contains two methods: one for running the k-means algorithm for different numbers of clusters and the other for showing the plot.
class elbowMethod():
def __init__(self, sparseMatrix):
self.sparseMatrix = sparseMatrix
self.wcss = list()
self.differences = list()
def run(self, init, upto, max_iterations = 300):
for i in range(init, upto + 1):
kmeans = KMeans(n_clusters=i, init = 'k-means++', max_iter = max_iterations, n_init = 10, random_state = 0)
            kmeans.fit(self.sparseMatrix)  # fit on the matrix stored on the object, not the global variable
self.wcss.append(kmeans.inertia_)
self.differences = list()
for i in range(len(self.wcss)-1):
self.differences.append(self.wcss[i] - self.wcss[i+1])
def showPlot(self, boundary = 500, upto_cluster = None):
if upto_cluster is None:
WCSS = self.wcss
DIFF = self.differences
else:
WCSS = self.wcss[:upto_cluster]
DIFF = self.differences[:upto_cluster - 1]
plt.figure(figsize=(15, 6))
plt.subplot(121).set_title('Elbow Method Graph')
plt.plot(range(1, len(WCSS) + 1), WCSS)
plt.grid(b = True)
plt.subplot(122).set_title('Differences in Each Two Consective Clusters')
len_differences = len(DIFF)
X_differences = range(1, len_differences + 1)
plt.plot(X_differences, DIFF)
plt.plot(X_differences, np.ones(len_differences)*boundary, 'r')
plt.plot(X_differences, np.ones(len_differences)*(-boundary), 'r')
plt.grid()
plt.show()
Why did we write the elbow method as a class?
Because we don’t know in advance where we will find the elbow, i.e. the optimal number of clusters, we write it as a class so that the WCSS values are kept in an attribute of the object and we don’t lose them. For example, we may first run the elbow method for cluster numbers 1–10 and, after plotting, find that we haven’t reached the elbow joint yet and need to run it for more. Next time we can run the same instance from 11–20, and so on, until we find the elbow joint. This saves us from running everything again from 1–20, and we don’t lose the data from previous runs.
You may observe that in the showPlot method above I have drawn two plots. Here I’m using another strategy for when we can’t observe a clear elbow: the difference between each two consecutive WCSS values, with a boundary we can set for a clearer view of how WCSS is changing. That is, when the changes in the WCSS value stay inside our chosen boundary, we will say that we have found the elbow, after which the changes are small. See the plots below.
Now let’s first analyze clusters 1–10 with a boundary of 10, i.e. when the changes in the WCSS value remain inside the boundary, we’ll say that we have found the elbow after which the change is small.
Remember that the DataFrame df_sparseMatrix was only for the presentation of sparseMatrix. For the algorithm, we always use the matrix sparseMatrix itself.
Let’s first create an instance of the elbow method on our sparseMatrix.
elbow_method = elbowMethod(sparseMatrix)
Now, we will first run it for 1–10 clusters, i.e. k-means will run for k=1, then for k=2, and so on up to k=10.
elbow_method.run(1, 10)
elbow_method.showPlot(boundary = 10)
Since, we don’t have any clear elbow yet and also we don’t have differences inside the boundary. Now let run it for clusters 11–20
elbow_method.run(11, 30)
elbow_method.showPlot(boundary = 10)
What happened?
We still don’t have a clear elbow, but the differences do fall inside the boundary. If we look at the differences graph, we observe that after cluster 14 the differences are almost all inside the boundary. So, we will run k-means with 15 clusters, because the 14th difference is the difference between k=14 and k=15. We are now done analyzing the optimal number of clusters k; let’s move on to fitting the model and making predictions.
Fitting Data on Model
Now let’s create the k-means model with the chosen number of clusters and run it to make predictions.
kmeans = KMeans(n_clusters=15, init = 'k-means++', max_iter = 300, n_init = 10, random_state = 0)
clusters = kmeans.fit_predict(sparseMatrix)
Now, let’s create a DataFrame where we can see each user’s cluster number.
users_cluster = pd.DataFrame(np.concatenate((users.reshape(-1,1), clusters.reshape(-1,1)), axis = 1), columns = ['userId', 'Cluster'])
users_cluster.T
Now we’ll define a function which creates a list of DataFrames, where each DataFrame contains the movieId and the count for that movie (count: the number of users who have that movie in their favorites list). The movies with higher counts will be of more interest to the other users in the cluster who have not watched them yet.
For Example, we’ll create a list as following
[dataframe_for_Cluster_1, dataframe_for_Cluster_2, ..., dataframe_for_Cluster_3]
Where each DataFrame will be of following format
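As a rough illustration (using the counts that appear later in the article for one of the clusters), such a DataFrame looks like this:
movieId   Count
1         19
150        8
...      ...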
where the Count column represents the total number of users in the cluster who have that particular movie in their favorites. We will sort the movies by their count in order to prioritize the movies which are seen most and liked most by the users in the cluster.
Now we want to create a list of all the users’ movies in each cluster. For this, we’ll first define a method which builds the movies DataFrame of each cluster.
def clustersMovies(users_cluster, users_data):
clusters = list(users_cluster['Cluster'])
each_cluster_movies = list()
for i in range(len(np.unique(clusters))):
users_list = list(users_cluster[users_cluster['Cluster'] == i]['userId'])
users_movies_list = list()
for user in users_list:
users_movies_list.extend(list(users_data[users_data['userId'] == user]['movieId']))
users_movies_counts = list()
users_movies_counts.extend([[movie, users_movies_list.count(movie)] for movie in np.unique(users_movies_list)])
each_cluster_movies.append(pd.DataFrame(users_movies_counts, columns=['movieId', 'Count']).sort_values(by = ['Count'], ascending = False).reset_index(drop=True))
    return each_cluster_movies

cluster_movies = clustersMovies(users_cluster, users_fav_movies)
Now, let’s take a look at one of the DataFrames in cluster_movies.
cluster_movies[1].T
We have 30 movies in cluster 1, where the movie with ID 1 is a favorite of 19 users and has top priority, followed by the movie with ID 150, which is a favorite of 8 users.
Now, let’s see how many users we have in each cluster.
for i in range(15):
len_users = users_cluster[users_cluster['Cluster'] == i].shape[0]
print('Users in Cluster ' + str(i) + ' -> ', len_users)
Out:
Users in Cluster 0 -> 35
Users in Cluster 1 -> 19
Users in Cluster 2 -> 1
Users in Cluster 3 -> 5
Users in Cluster 4 -> 8
Users in Cluster 5 -> 1
Users in Cluster 6 -> 12
Users in Cluster 7 -> 2
Users in Cluster 8 -> 1
Users in Cluster 9 -> 1
Users in Cluster 10 -> 1
Users in Cluster 11 -> 11
Users in Cluster 12 -> 1
Users in Cluster 13 -> 1
Users in Cluster 14 -> 1
As we can see, there are some clusters which contain only 1, 2 or 5 users. We don’t want such small clusters, where we can’t recommend enough movies to users. A user who is alone in a cluster will not get any movie recommendations, and even a user in a cluster of size 2 will not get enough recommendations. So, we have to fix such small clusters.
Fixing Small Clusters
There are many clusters which include only a few users. We don’t want any user to be alone in a cluster; let’s say we want at least 6 users in each cluster. So we have to move the users of the small clusters into the larger cluster which contains the movies most relevant to each of them.
First of all we’ll write a function to get user favorite movies list
def getMoviesOfUser(user_id, users_data):
return list(users_data[users_data['userId'] == user_id]['movieId'])
Now, we’ll define a function for fixing clusters
def fixClusters(clusters_movies_dataframes, users_cluster_dataframe, users_data, smallest_cluster_size = 11):
    # clusters_movies_dataframes: a list containing the movies DataFrame of each cluster
    # users_cluster_dataframe: a DataFrame which contains user IDs and their cluster numbers
    # smallest_cluster_size: the minimum number of users a cluster must have to be kept; smaller clusters are removed and their users reassigned
each_cluster_movies = clusters_movies_dataframes.copy()
users_cluster = users_cluster_dataframe.copy()
# Let convert dataframe in each_cluster_movies to list with containing only movies IDs
each_cluster_movies_list = [list(df['movieId']) for df in each_cluster_movies]
    # First we will prepare a list which contains the lists of users in each cluster -> [[Cluster 0 Users], [Cluster 1 Users], ... ,[Cluster N Users]]
usersInClusters = list()
total_clusters = len(each_cluster_movies)
for i in range(total_clusters):
usersInClusters.append(list(users_cluster[users_cluster['Cluster'] == i]['userId']))
uncategorizedUsers = list()
i = 0
# Now we will remove small clusters and put their users into another list named "uncategorizedUsers"
    # Also, when we remove a cluster, we have to shift back the cluster numbers of the users whose clusters come after the deleted cluster
    # E.g. if we have deleted cluster 4, then there will be users whose clusters are 5,6,7,...,N. So, we'll shift those users' cluster numbers back to 4,5,6,...,N-1.
for j in range(total_clusters):
if len(usersInClusters[i]) < smallest_cluster_size:
uncategorizedUsers.extend(usersInClusters[i])
usersInClusters.pop(i)
each_cluster_movies.pop(i)
each_cluster_movies_list.pop(i)
users_cluster.loc[users_cluster['Cluster'] > i, 'Cluster'] -= 1
i -= 1
i += 1
for user in uncategorizedUsers:
elemProbability = list()
user_movies = getMoviesOfUser(user, users_data)
if len(user_movies) == 0:
print(user)
user_missed_movies = list()
for movies_list in each_cluster_movies_list:
count = 0
missed_movies = list()
for movie in user_movies:
if movie in movies_list:
count += 1
else:
missed_movies.append(movie)
elemProbability.append(count / len(user_movies))
user_missed_movies.append(missed_movies)
user_new_cluster = np.array(elemProbability).argmax()
users_cluster.loc[users_cluster['userId'] == user, 'Cluster'] = user_new_cluster
if len(user_missed_movies[user_new_cluster]) > 0:
each_cluster_movies[user_new_cluster] = each_cluster_movies[user_new_cluster].append([{'movieId': new_movie, 'Count': 1} for new_movie in user_missed_movies[user_new_cluster]], ignore_index = True)
return each_cluster_movies, users_cluster
Now, run it.
movies_df_fixed, clusters_fixed = fixClusters(cluster_movies, users_cluster, users_fav_movies, smallest_cluster_size = 6)
To observe the changes made by fixing the clusters, let’s first take a look at the data we had before and then at the data after fixing.
First we’ll print those clusters which contain at most 5 users.
j = 0
for i in range(15):
len_users = users_cluster[users_cluster['Cluster'] == i].shape[0]
if len_users < 6:
print('Users in Cluster ' + str(i) + ' -> ', len_users)
j += 1
print('Total Cluster which we want to remove -> ', j)
Out:
Users in Cluster 2 -> 1
Users in Cluster 3 -> 5
Users in Cluster 5 -> 1
Users in Cluster 7 -> 2
Users in Cluster 8 -> 1
Users in Cluster 9 -> 1
Users in Cluster 10 -> 1
Users in Cluster 12 -> 1
Users in Cluster 13 -> 1
Users in Cluster 14 -> 1
Total Cluster which we want to remove -> 10
Now look at the users cluster data frame
print('Length of total clusters before fixing is -> ', len(cluster_movies))
print('Max value in users_cluster dataframe column Cluster is -> ', users_cluster['Cluster'].max())
print('And dataframe is following')
users_cluster.T
Out:
Length of total clusters before fixing is -> 15
Max value in users_cluster dataframe column Cluster is -> 14
And dataframe is following
So, we want the max value in the Cluster column to be 4 (starting from index 0), since we’ll remove the 10 smallest clusters and have 5 remaining clusters.
Now, let’s see what happened after fixing the data.
We want all 10 small clusters removed, and the users_cluster DataFrame shouldn’t contain any user whose cluster number is invalid.
print('Length of total clusters after fixing is -> ', len(movies_df_fixed))
print('Max value in users_cluster dataframe column Cluster is -> ', clusters_fixed['Cluster'].max())
print('And fixed dataframe is following')
clusters_fixed.T
Out:
Length of total clusters after fixing is -> 5
Max value in users_cluster dataframe column Cluster is -> 4
And fixed dataframe is following
Now let’s see what happened when the 10 clusters were deleted and how the cluster numbers were adjusted for the users who were already in large clusters.
Let’s take a look at the users of the 11th cluster. The 11th cluster already contained enough users, i.e. 11 users, and we did not want to delete it; but now we only have 5 clusters and the max value of the Cluster column is 4, so what actually happened to cluster 11? There were 7 small clusters before cluster no. 11 which were removed, so the value 11 should now be brought back to 4.
print('Users cluster dataFrame for cluster 11 before fixing:')
users_cluster[users_cluster['Cluster'] == 11].T
Out:
Users cluster dataFrame for cluster 11 before fixing:
Now let’s look at cluster 4 after fixing.
print('Users cluster dataFrame for cluster 4 after fixing which should be same as 11th cluster before fixing:')
clusters_fixed[clusters_fixed['Cluster'] == 4].T
Out:
Users cluster dataFrame for cluster 4 after fixing which should be same as 11th cluster before fixing:
Both DataFrames contain the same user IDs, so we didn’t disturb any cluster, and similarly we did the same with the list of movies DataFrames for each cluster.
Now let’s take a look at the list of movies DataFrames.
print('Size of movies dataframe after fixing -> ', len(movies_df_fixed))
Out:
Size of movies dataframe after fixing -> 5
Now, let’s look at the sizes of the clusters.
for i in range(len(movies_df_fixed)):
len_users = clusters_fixed[clusters_fixed['Cluster'] == i].shape[0]
print('Users in Cluster ' + str(i) + ' -> ', len_users)
Out:
Users in Cluster 0 -> 45
Users in Cluster 1 -> 21
Users in Cluster 2 -> 8
Users in Cluster 3 -> 15
Users in Cluster 4 -> 11
Each cluster now contains enough users that we can make recommendations to its members. Let’s take a look at the size of each cluster’s movies list.
for i in range(len(movies_df_fixed)):
print('Total movies in Cluster ' + str(i) + ' -> ', movies_df_fixed[i].shape[0])
Out:
Total movies in Cluster 0 -> 64
Total movies in Cluster 1 -> 39
Total movies in Cluster 2 -> 15
Total movies in Cluster 3 -> 50
Total movies in Cluster 4 -> 25
We are now done with training the k-means machine learning model, predicting the cluster of each user and fixing some issues. Finally, we need to store this training so that we can use it later. For this, we will use the Pickle library to save and load the trained data. We have already imported Pickle; now we will use it.
Let me first design a class to save and load the trained data. We will design general save/load methods as well as methods for saving/loading the particular files directly.
class saveLoadFiles:
def save(self, filename, data):
try:
file = open('datasets/' + filename + '.pkl', 'wb')
pickle.dump(data, file)
except:
err = 'Error: {0}, {1}'.format(exc_info()[0], exc_info()[1])
print(err)
file.close()
return [False, err]
else:
file.close()
return [True]
def load(self, filename):
try:
file = open('datasets/' + filename + '.pkl', 'rb')
except:
err = 'Error: {0}, {1}'.format(exc_info()[0], exc_info()[1])
print(err)
file.close()
return [False, err]
else:
data = pickle.load(file)
file.close()
return data
def loadClusterMoviesDataset(self):
return self.load('clusters_movies_dataset')
def saveClusterMoviesDataset(self, data):
return self.save('clusters_movies_dataset', data)
def loadUsersClusters(self):
return self.load('users_clusters')
def saveUsersClusters(self, data):
return self.save('users_clusters', data)
In the above class, exc_info is imported from the sys library for error handling and error messages.
We will use the saveClusterMoviesDataset/loadClusterMoviesDataset methods to save/load the list of clusters’ movies DataFrames and the saveUsersClusters/loadUsersClusters methods to save/load the users’ clusters DataFrame. Now, let’s try it. We will run the methods and print their responses to check whether any error occurs. If they return True, it means our files have been saved successfully in the proper place.
saveLoadFile = saveLoadFiles()
print(saveLoadFile.saveClusterMoviesDataset(movies_df_fixed))
print(saveLoadFile.saveUsersClusters(clusters_fixed))
Out:
[True]
[True]
The response is True for both save methods, so our trained data has been saved and we can use it later. Let’s check whether we can load it.
load_movies_list, load_users_clusters = saveLoadFile.loadClusterMoviesDataset(), saveLoadFile.loadUsersClusters()
print('Type of Loading list of Movies dataframes of 5 Clusters: ', type(load_movies_list), ' and Length is: ', len(load_movies_list))
print('Type of Loading 100 Users clusters Data: ', type(load_users_clusters), ' and Shape is: ', load_users_clusters.shape)
Out:
Type of Loading list of Movies dataframes of 5 Clusters: <class 'list'> and Length is: 5
Type of Loading 100 Users clusters Data: <class 'pandas.core.frame.DataFrame'> and Shape is: (100, 2)
We have successfully saved and loaded our data using the pickle library.
We worked with a very small dataset here, but movies recommendation systems often work with very large datasets, like the one we had initially, where there are enough movies in each cluster to make recommendations.
Now, we need to design functions for making recommendations to users.
Recommendations for Users
Now here we’ll create an object for recommending most favorite movies in the cluster to the user which user has not added to favorite earlier. And also when any user has added another movie in his favorite list, then we have to update clusters movies datasets also.
class userRequestedFor:
def __init__(self, user_id, users_data):
self.users_data = users_data.copy()
self.user_id = user_id
# Find User Cluster
users_cluster = saveLoadFiles().loadUsersClusters()
self.user_cluster = int(users_cluster[users_cluster['userId'] == self.user_id]['Cluster'])
# Load User Cluster Movies Dataframe
self.movies_list = saveLoadFiles().loadClusterMoviesDataset()
self.cluster_movies = self.movies_list[self.user_cluster] # dataframe
self.cluster_movies_list = list(self.cluster_movies['movieId']) # list
def updatedFavouriteMoviesList(self, new_movie_Id):
if new_movie_Id in self.cluster_movies_list:
self.cluster_movies.loc[self.cluster_movies['movieId'] == new_movie_Id, 'Count'] += 1
else:
self.cluster_movies = self.cluster_movies.append([{'movieId':new_movie_Id, 'Count': 1}], ignore_index=True)
self.cluster_movies.sort_values(by = ['Count'], ascending = False, inplace= True)
self.movies_list[self.user_cluster] = self.cluster_movies
saveLoadFiles().saveClusterMoviesDataset(self.movies_list)
def recommendMostFavouriteMovies(self):
try:
user_movies = getMoviesOfUser(self.user_id, self.users_data)
cluster_movies_list = self.cluster_movies_list.copy()
for user_movie in user_movies:
if user_movie in cluster_movies_list:
cluster_movies_list.remove(user_movie)
return [True, cluster_movies_list]
except KeyError:
err = "User history does not exist"
print(err)
return [False, err]
except:
err = 'Error: {0}, {1}'.format(exc_info()[0], exc_info()[1])
print(err)
return [False, err]
Now let’s try making recommendations and handling a favorites-list update request. For this, we’ll first import the movies metadata, so that we have not only IDs but also movie details like title, genre, etc.
movies_metadata = pd.read_csv(
'./Prepairing Data/From Data/movies_metadata.csv',
usecols = ['id', 'genres', 'original_title'])
movies_metadata = movies_metadata.loc[
movies_metadata['id'].isin(list(map(str, np.unique(users_fav_movies['movieId']))))].reset_index(drop=True)
print('Let take a look at movie metadata for all those movies which we were had in our dataset')
movies_metadata
Out:
Let take a look at movie metadata for all those movies which we were had in our dataset
Here is the list of movies which the user with ID 12 has added to their favorites.
user12Movies = getMoviesOfUser(12, users_fav_movies)
for movie in user12Movies:
title = list(movies_metadata.loc[movies_metadata['id'] == str(movie)]['original_title'])
if title != []:
print('Movie title: ', title, ', Genres: [', end = '')
genres = ast.literal_eval(movies_metadata.loc[movies_metadata['id'] == str(movie)]['genres'].values[0].split('[')[1].split(']')[0])
for genre in genres:
print(genre['name'], ', ', end = '')
print(end = '\b\b]')
print('')
Out:
Movie title: ['Dancer in the Dark'] , Genres: [Drama , Crime , Music , ]
Movie title: ['The Dark'] , Genres: [Horror , Thriller , Mystery , ]
Movie title: ['Miami Vice'] , Genres: [Action , Adventure , Crime , Thriller , ]
Movie title: ['Tron'] , Genres: [Science Fiction , Action , Adventure , ]
Movie title: ['The Lord of the Rings'] , Genres: [Fantasy , Drama , Animation , Adventure , ]
Movie title: ['48 Hrs.'] , Genres: [Thriller , Action , Comedy , Crime , Drama , ]
Movie title: ['Edward Scissorhands'] , Genres: [Fantasy , Drama , Romance , ]
Movie title: ['Le Grand Bleu'] , Genres: [Adventure , Drama , Romance , ]
Movie title: ['Saw'] , Genres: [Horror , Mystery , Crime , ]
Movie title: ["Le fabuleux destin d'Amélie Poulain"] , Genres: [Comedy , Romance , ]
And finally these are the top 10 recommended movies for that user
user12Recommendations = userRequestedFor(12, users_fav_movies).recommendMostFavouriteMovies()[1]
for movie in user12Recommendations[:15]:
title = list(movies_metadata.loc[movies_metadata['id'] == str(movie)]['original_title'])
if title != []:
print('Movie title: ', title, ', Genres: [', end = '')
genres = ast.literal_eval(movies_metadata.loc[movies_metadata['id'] == str(movie)]['genres'].values[0].split('[')[1].split(']')[0])
for genre in genres:
print(genre['name'], ', ', end = '')
print(']', end = '')
print()
Out:
Movie title: ['Trois couleurs : Rouge'] , Genres: [Drama , Mystery , Romance , ]
Movie title: ["Ocean's Eleven"] , Genres: [Thriller , Crime , ]
Movie title: ['Judgment Night'] , Genres: [Action , Thriller , Crime , ]
Movie title: ['Scarface'] , Genres: [Action , Crime , Drama , Thriller , ]
Movie title: ['Back to the Future Part II'] , Genres: [Adventure , Comedy , Family , Science Fiction , ]
Movie title: ["Ocean's Twelve"] , Genres: [Thriller , Crime , ]
Movie title: ['To Be or Not to Be'] , Genres: [Comedy , War , ]
Movie title: ['Back to the Future Part III'] , Genres: [Adventure , Comedy , Family , Science Fiction , ]
Movie title: ['A Clockwork Orange'] , Genres: [Science Fiction , Drama , ]
Movie title: ['Minority Report'] , Genres: [Action , Thriller , Science Fiction , Mystery , ]
And finally, we have successfully recommended movies to a user based on his/her interests, using the most favorited movies of similar users.
You’re Done
Thanks for reading this article. If you want the whole project as deployment-ready code, please visit my GitHub repository AI Movies Recommendation System Based on K-means Clustering Algorithm and download it to work with; it is completely free for everyone.
Thank You | https://asdkazmi.medium.com/ai-movies-recommendation-system-with-clustering-based-k-means-algorithm-f04467e02fcd | ['Syed Muhammad Asad'] | 2020-08-19 11:21:08.647000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Recommendation System', 'Python', 'K Means Clustering'] |
Smile, You’re on Camera: The Future of Emotional Advertising | Smile, You’re on Camera: The Future of Emotional Advertising
For those worried about “Big Brother,” you should probably stop reading now. There is a new technology on the market that takes behavioral tracking to a whole new level.
Born from MIT’s Media Lab, Affectiva allows advertisers to record and analyze human emotional responses based on subtle, involuntary facial cues. The insights generated by this software completely surpass other creative testing methods by providing a treasure trove of accurate, objective and on-demand data.
Affectiva requires no $40K eye-tracking goggles or other extraneous technology, just your own computer. By tapping into any webcam’s existing functions, Affectiva can scan faces for subtle micro-shifts. The slightest uptick of an eyebrow or twitch of the mouth could indicate an emotional response to content. In real time, Affectiva catalogues facial movements and displays results almost immediately after completion of the video.
My Affdex data after watching Budweiser’s “Puppy Love” ad. It gets me every time!
Not only does Affectiva track key emotions like happiness, sadness and anger, it also has the potential to detect cultural nuances. Its software is now able to catch the “politeness smile,” an expression prevalent in Southeast Asia and India but rare in the Americas, Africa and Europe. As its database of faces expands and Affectiva’s “emotional AI” continues to grow in complexity, advertisers will be able to predict and decipher unique emotional responses to their work across cultures, genders and national borders.
This technology also solves the age-old dilemma of advertisers and psychologists alike. While tests and surveys can attempt to gauge emotional responses to ads before, during and after exposure, their results are often subjective and not generalizable to the natural environments where consumers would actually watch them. However, Affectiva’s technology can be used on any device, anywhere in the world. When using this technology, the only cue that hints you’re in a study is the light that comes on next to your webcam.
You might be wondering, are brands watching me right now? Do they have a databank of videos of me crying to TD Bank’s #TDThanksYou ads? The answer is no, do not worry. Affectiva’s services are explicitly opt-in and require consent from the end user. We reached out to Affectiva to see how exactly they recruit subjects and will update this when we receive a response. In the meantime, you can try Affectiva out for yourself here.
So, this is cool and all, but how can we use it to optimize ads? The information generated by Affectiva can be used to amplify key emotional moments, help place a call to action, or just prove that an ad is objectively awesome. The output of Affdex Discovery, Affectiva’s ad analysis software, clearly maps out the levels of surprise, smile, concentration, dislike, valence, attention, and expressiveness throughout a video. It also segments the data by age-group and gender, allowing marketers to see reactions specific to their target demographic. Best of all, performance on Affdex tests can accurately predict sales growth. Nice.
Affectiva is truly disruptive, even beyond advertising. Its technology has been used to build an app that helps people with Autism get real-time feedback on social interactions and in a video game that adapts to the player’s emotions. For brands, Affectiva represents a way to avoid the nemesis of emotional advertising, neutrality. Unilever, Kellogg’s, Mars and CBS are already on the bandwagon…who’s next? | https://medium.com/comms-planning/smile-youre-on-camera-the-future-of-emotional-advertising-a179cd8366ed | ['Ali Goldsmith'] | 2017-10-09 16:34:29.852000+00:00 | ['Marketing', 'Psychology', 'Emotions', 'Digital Marketing', 'Advertising'] |
Web scraping with Python & BeautifulSoup | The web contains lots of data. The ability to extract the information you need from it is, with no doubt, a useful one, even necessary. Of course, there are still lots of datasets already available for you to download, on places like Kaggle, but in many cases, you won’t find the exact data that you need for your particular problem. However, chances are you’ll find what you need somewhere on the web and you’ll need to extract it from there.
Web scraping is the process of doing this: extracting data from web pages. In this article, we’ll see how to do web scraping in Python. There are several libraries you can use for this task. Among these, here we will use Beautiful Soup 4. This library takes care of extracting data from an HTML document, not downloading it. For downloading web pages, we need to use another library: requests.
So, we’ll need 2 packages:
requests — for downloading the HTML code from a given URL
beautiful soup — for extracting data from that HTML string
Installing the libraries
Now, let’s start by installing the required packages. Open a terminal window and type:
python -m pip install requests beautifulsoup4
…or, if you’re using a conda environment:
conda install requests beautifulsoup4
Now, try to run the following:
import requests
from bs4 import BeautifulSoup
If you don’t get any error, then the packages are installed successfully.
Using requests & beautiful soup to extract data
From the requests package we will use the get() function to download a web page from a given URL:
requests.get(url, params=None, **kwargs)
Where the parameters are:
url — url of the desired web page
— url of the desired web page params — a optional dictionary, list of tuples or bytes to send in the query string
— a optional dictionary, list of tuples or bytes to send in the query string **kwargs — optional arguments that request takes
This function returns an object of type requests.Response . Among this object’s attributes and methods, we are most interested in the .content attribute, which holds the raw HTML of the target web page (as bytes; the .text attribute gives the same content decoded as a string).
Example:
html_string = requests.get("http://www.example.com").content
After we got the HTML of the target web page, we have to use the BeautifulSoup() constructor to parse it, and get an BeautifulSoup object that we can use to navigate the document tree and extract the data that we need.
soup = BeautifulSoup(markup_string, parser)
Where:
markup_string — the string of our web page
— the string of our web page parser — a string consisting of the name of the parser to be used; here we will use python’s default parser: “html.parser”
Note that we named the first parameter as “markup_string” instead of “html_string” because BeautifulSoup can be used with other markup languages as well, not just HTML, but we need to specify an appropriate parser; e.g. we can parse XML by passing “xml” as parser.
A BeautifulSoup object has several methods and attributes that we can use to navigate within the parsed document and extract data from it.
The most used method is .find_all() :
soup.find_all(name, attrs, recursive, string, limit, **kwargs)
name — name of the tag; e.g. “a”, “div”, “img”
— name of the tag; e.g. “a”, “div”, “img” attrs — a dictionary with the tag’s attributes; e.g. {“class”: “nav”, “href”: “#menuitem”}
— a dictionary with the tag’s attributes; e.g. recursive — boolean; if false only direct children are considered, if true (default) all children are examined in the search
— boolean; if false only direct children are considered, if true (default) all children are examined in the search string — used to search for strings in the element’s content
— used to search for strings in the element’s content limit — limit the search to only this number of found elements
Example:
soup.find_all("a", attrs={"class": "nav", "data-foo": "value"})
The line above returns a list with all “a” elements that also have the specified attributes.
HTML attributes whose names cannot be confused with this method’s parameters or Python’s keywords can be passed directly as keyword arguments, without the need to put them inside the attrs dictionary. The HTML class attribute can also be used like this, but because class is a Python keyword, write class_=”…” instead of class=”…” .
Example:
soup.find_all("a", class_="nav")
Because this method is the most used one, it has a shortcut: calling the BeautifulSoup object directly has the same effect as calling the .find_all() method.
Example:
soup("a", class_="nav")
The .find() method is like .find_all() , but it stops the search after it founds the first element; element which will be returned. It is roughly equivalent to .find_all(..., limit=1) , but instead of returning a list, it returns a single element.
The .contents attribute of a BeautifulSoup object is a list with all its children elements. If the current element does not contain nested HTML elements, then .contents[0] will be just the text inside it. So after we got the element that contains the data we need using the .find_all() or .find() methods, all we need to do to get the data inside it is to access .contents[0] .
Example:
soup = BeautifulSoup('''
<div>
<span class="rating">5</span>
<span class="views">100</span>
</div>
''', "html.parser") views = soup.find("span", class_="views").contents[0]
What if we need a piece of data that is not inside the element, but as the value of an attribute? We can access an element’s attribute value as follows:
soup['attr_name']
Example:
soup = BeautifulSoup('''
<div>
    <img src="./img1.png">
</div>
''', "html.parser")

img_source = soup.find("img")['src']
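A small variation of the same idea, using only the methods described above, collects the src attribute of every image on a page (the variable name is just illustrative):
img_sources = [img['src'] for img in soup.find_all("img")]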
Web scraping example: get the top 10 Linux distros
Now, let's see a simple web scraping example using the concepts above. We will extract a list of the top 10 most popular Linux distros from the DistroWatch website. DistroWatch (https://distrowatch.com/) is a website featuring news about Linux distros and open-source software that runs on Linux. On the right-hand side of the page there is a ranking of the most popular distros, and from this ranking we will extract the first 10.
Firstly, we will download the web page and construct a BeautifulSoup object from it:
import requests
from bs4 import BeautifulSoup
soup = BeautifulSoup(
requests.get("https://distrowatch.com/").content,
"html.parser")
Then, we need to find out how to identify the data we want inside the HTML code. For that, we will use Chrome's developer tools. Right-click somewhere on the web page and then click on "Inspect", or press "Ctrl+Shift+I", to open Chrome's developer tools. It should look like this:
Then, if you click on the little arrow in the top-left corner of the developer tools and then click on some element of the web page, you should see in the dev tools window the piece of HTML associated with that element. After that, you can use the information shown in the dev tools window to tell Beautiful Soup where to find that element.
In our example, we can see that the ranking is structured as an HTML table and that each distro name is inside a td element with the class "phr2". Inside that td element is a link containing the text we want to extract (the distro's name). That's what we will do in the next few lines of code:
top_ten_distros = []
distro_tds = soup("td", class_="phr2", limit=10)
for td in distro_tds:
    top_ten_distros.append(td.find("a").contents[0])
And this is what we got: a Python list with the names of the ten most popular distros in DistroWatch's ranking.
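Printing the list is a quick way to check the result in the console (the exact names depend on the ranking at the time you run the script):
for position, distro in enumerate(top_ten_distros, start=1):
    print(position, distro)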
Data pipelines on Spark and Kubernetes | Data pipelines on Spark and Kubernetes
Considerations for using Apache Spark and Kubernetes to process data
If you're running data pipelines and workflows to move data from various locations into a data lake, that usually means the team needs to process huge amounts of data. To do this in a scalable and cost-effective way, and to handle complex computation steps across large amounts of data, Kubernetes is a great choice for scheduling Spark jobs compared to YARN.
Apache Spark is a framework that can quickly perform processing tasks on very large data sets, and Kubernetes is a portable, extensible, open-source platform for managing and orchestrating the execution of containerized workloads and services across a cluster of multiple machines.
From an architectural perspective, when you submit a Spark application you interact directly with Kubernetes, specifically the API server, which schedules the driver pod, i.e. the container running the Spark driver. The Spark driver and the Kubernetes cluster then talk to each other to request and launch Spark executors. This can happen statically, or dynamically if you enable dynamic allocation.
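As a rough sketch of what that submission can look like from PySpark (the API server address, image name and namespace below are placeholders, not values taken from this article):
from pyspark.sql import SparkSession

# Point Spark at the Kubernetes API server and tell it which container image to run.
spark = (
    SparkSession.builder
    .master("k8s://https://<api-server-host>:6443")  # placeholder API server address
    .appName("pipeline-job")
    .config("spark.kubernetes.container.image", "registry.example.com/team/spark-app:1.0")  # placeholder image
    .config("spark.kubernetes.namespace", "data-pipelines")  # placeholder namespace
    .config("spark.executor.instances", "4")
    .getOrCreate()
)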
Dependency management
When the team uses Kubernetes, each Spark app has its own Docker image. This means the team can have full isolation and full control of the environment: they can set their Spark version, Python version and dependencies per application. These containers package the code required to execute the workload, but also all the dependencies needed to run that code, removing the hassle of maintaining a common set of dependencies for all workloads running on a common infrastructure.
Dynamic autoscaling
Another capability of this setup is that the team can run Spark applications with dynamic allocation enabled together with autoscaling on the cluster. This also leads to better resource management: the scheduler takes care of picking which nodes to deploy the workloads on, and in the cloud, scaling a cluster up or down is quick and easy because it is just a matter of adding or removing VMs, something the managed Kubernetes offerings provide helpers for. In practice this is a major cost saver.
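A minimal sketch of the relevant settings, assuming Spark 3.x on Kubernetes (shuffle tracking stands in for the external shuffle service, which is not available there; the executor bounds are illustrative, not recommendations):
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")  # replaces the external shuffle service on K8s
    .config("spark.dynamicAllocation.minExecutors", "1")
    .config("spark.dynamicAllocation.maxExecutors", "20")
)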
Deployment
In today's hybrid-cloud world, enterprises want to avoid lock-in. Running Spark on Kubernetes means building once and deploying anywhere, which makes a cloud-agnostic approach scalable.
Metrics and Security
For metrics, the team can export everything to a time-series database. This makes it possible to overlay Spark stage boundaries on top of the resource-usage metrics.
Kubernetes has a role-based access control model and built-in secrets management. There are also many open-source projects the team can leverage to make managing security easier, such as HashiCorp Vault.
Finally, running Spark on Kubernetes will save the team time. Data scientists', data engineers' and data architects' time is valuable, and this setup will bring more productivity to those people and departments, which could lead to savings.
LOL — Issue 26 | I reproduced a poem that I’d written An Ode To Cockroaches on an Instagram post where she’d written a photo poem Ode To Spiders
She messaged me and shared this poem with me, saying that she went batshit crazy after she read it. This poem is so confrontational and intense, with the roaches reference, that it will make you shudder and awe.
How to Solve Conflict Productively at Work | How to Solve Conflict Productively at Work
Seven strategies to turn tensions into fuel for growth.
“Conflict is the beginning of consciousness.” — M. Esther Harding
Tensions are a source of personal and organizational growth. Conflict is neither good nor bad in itself, but if managed poorly, it can erode culture and collaboration.
Conflict keeps teams and organizations alive. It’s the tension that challenges people to adapt, learn, innovate, and grow.
Unfortunately, most organizations and leaders see conflict as a bad thing. They have an idealized version of collaboration. And expect people always to get along and agree on everything.
Positive dissent is vital to maximize opportunities and uncover new ones. Cognitive dissonance makes teams smarter, as research shows. Innovation feeds off diverse perspectives, skills, and experiences.
A practical approach to addressing conflict is adhering to the following ethos:
Friction creates energy, and energy drives creativity.
You can try to avoid conflict, but you cannot escape conflict. Tensions are unavoidable.
Arguing is an excellent thing if you and your team can do it in a healthy way. Great leaders confront, rather than avoid conflict. They turn tensions into fuel for growth.
The balance between appreciation and challenge
“Peace is not the absence of conflict, but the ability to cope with it.” — Mahatma Gandhi
Silence is the enemy of collaboration.
85% of people have failed to raise an issue with their boss even when staying silent would harm the organization.
How can your team grow if you don’t share your genuine opinions? How can your company innovate if your colleagues keep their best ideas to themselves?
Managing diverse perspectives, tensions, and disagreements is not easy. But silence causes more harm in the long run.
Practice Radical Candor instead. Find the sweet spot between caring about your colleagues and challenging them.
Caring too much about your team can be harmful. People need to be challenged also. Your role as a leader is to help people grow. Just appreciating the good could be as detrimental as not caring at all.
Radical Candor means saying what you really think while also caring about the person.
As Kim Scott explains in Radical Candor, how we manage conflict can be broken down on a grid. One axis is Challenge Directly, and the other is Care Personally.
Most people fail to provide Radical Candor when they fall into one of the following quadrants.
Obnoxious Aggression happens when you challenge someone but don’t care. It’s praise that’s not sincere or criticism that isn’t delivered kindly. Aggression fuels defensiveness. Feedback feels like torture, not a gift.
Ruinous Empathy happens when you want to be nice and don’t challenge people. You provide unspecific praise or sugarcoat the feedback. Ruinous Empathy feeds ignorance. It doesn’t provide people with clear insights to improve their game.
Manipulative Insincerity happens when you neither care about people nor challenge them. You praise others without being specific or sincere, or criticize them without being kind. Manipulative Insincerity seeds mistrust. It encourages backstabbing, passive-aggressiveness, and toxic behaviors.
Radical Candor is the sweet spot. You help people grow in a positive, caring way. It means pushing others beyond their comfort zone without being disrespectful. You solve tensions in a healthy way.
Seven strategies to turn conflict into fuel
“Never have a battle of wits with an unarmed person.” — Mark Twain
1. Turn arguing into a natural practice
Don’t avoid conflict, face it head-on. The sooner you address tensions, the easier you’ll solve them. Also, conflict becomes more personal once it escalates.
Teamwork is a contact sport. Friction creates energy that can propel your team forward. Train your people to build a practice of arguing in a healthy way.
2. Build a culture of trust and respect
Radical Candor is not about saying everything that comes to mind. It’s about being helpful but also respectful.
Start by setting clear ground rules for dissent. Don’t assume respect means the same for everyone. Include your team in creating the framework. Make it clear and public.
No name-calling or personal attacks. There is no winner but the team.
Be ready to show some vulnerability. Leaders expect people to challenge each other but have a hard time accepting criticism themselves.
Be patient. It takes time to find balance, especially if your team usually plays too nice.
3. Address real tensions
Most conflict is built on miscommunication and misunderstanding. People assume things or let emotions filter their judgment.
Practice separating real tensions from perceived ones. Which problems are real and objective? And which ones are we creating ourselves?
Also, avoid anticipation. Most teams worry about what might happen in the future. Don’t get stuck on future tensions. They might happen or not.
Focus on solving real, present tensions.
4. Focus on the task, not on the person.
Conflict becomes a war when we make it personal. Most people can’t separate the ideas from the person.
Encourage a spirit of curiosity. Focus the debate on the idea, task, or project. Avoid making it personal, and avoid taking it personally. Keep the discussion about facts, logic, and events, not people.
Train your team to separate their identities from their points of view. If someone doesn't like their ideas, it doesn't mean they are being attacked.
Give people the benefit of the doubt. Remember Wikipedia's rule: "Assume good faith."
5. Encourage diversity of thinking
Cognitive biases come in many forms and shapes. They blind our perspective and make us feel overconfident. We want to repeat our past successes.
Diversity of thought requires more than hiring diverse talent. Encourage people to speak up. Take turns so loud people don't influence or silence quiet voices. Challenge people's ideas and assumptions. Invite them to challenge yours as well.
6. Be intellectually humble
Usually, people get stuck trying to be right. Their discussions are no longer about finding the best solution; they just want to win an argument.
Intellectual humility turns people into better leaders, as I wrote here. They don’t let their ego blind their judgment. And feel okay with saying “I was wrong.”
Reward people for making progress, not for being right.
Solving tensions is not about wining an argument but finding the best answer to a problem. It takes wisdom to integrate opposing views.
7. Hit me with your best shot
Start by asking your team to criticize you. Ask for feedback. Prove you can take criticism before you start dishing it out. Embrace your discomfort so people will embrace theirs.
Kim Scott suggests not letting anyone off the hook. If they don’t say much, push back. It will take time for people to feel comfortable criticizing you. Pay close attention to silence.
Soliciting feedback is an ongoing practice. Start small. Use casual meetings to ignite the conversation. Try kicking off a meeting by asking, “What’s not working?” and “What’s working?”
Most people were taught to stay silent when they don't have anything nice to say. Managing conflict is not about being harsh, either; it's about feeling comfortable addressing tensions. You can only solve a problem that is made public.
Tensions are fuel for growth. There's no perfect way to avoid conflict. But avoidance only makes things worse.
The key to solving tensions is addressing them head-on. Model the behavior. Prove you can take criticism yourself before you encourage others to be radically candid.
Recipe for Success. A new system that creates clean gas… | Recipe for Success
Green Heat is helping Ugandan households fire their kitchens with environmentally friendly gas heat made from natural waste.
Mama Justice doesn’t let her age slow her down. At 70 years old, she runs a small pig farm in Buwambo, Wakiso District in Uganda. And until recently, she gathered firewood every day to cook each meal for her four grandchildren. Mama Justice is a fantastic cook — easily whipping up meals of banana-like matoke, and cassava and groundnut paste with fish after her many years of practice. And she takes pride in her kitchen, even though the smoke from the firewood often made it difficult for Mama Justice and her grandchildren to breathe in her small home.
Mama Justice in her home in Buwambo, Wakiso District (Uganda).
After 70 years of hauling firewood, Mama Justice was beginning to struggle with the physical burden. A friend suggested that Green Heat, a social enterprise based in Kampala, could help. The Green Heat team recommended that she could use an anaerobic digester instead of firewood. Anaerobic digesters use biodegradable waste, such as plant leaves and livestock manure, to create clean-burning fuel. Microorganisms and bacteria break down the waste in a dark, oxygen-starved environment, until the mixture has fermented and renewable gas is produced.
Mama Justice’s kitchen filling with smoke from her wood burning stove.
Vianney Tumwesige, the managing director of Green Heat, and his team informed Justice that the digester could pipe clean gas directly into her kitchen — eliminating her reliance on firewood and ridding her home of dangerous smoke. The innovative Green Heat digester would also help her conserve water, saving her time-consuming and backbreaking trips to the well multiple times a day.
“Green Heat’s digester recycles water back into the system,” explains Tumwesige. “It’s better for the environment and less work for the farmer to maintain.”
Mama Justice had never used waste from her pigs as a source of energy before, and she had her doubts. As the cook responsible for feeding such a large family, she needed a reliable source of fuel. And she was concerned that cooking with a different kind of fuel would change the taste of her food. Justice wanted to know that her beloved recipes wouldn’t change when her fuel source did. Tumwesige assured her that her food would remain delicious.
It took some patience. First, Mama Justice had to invest in a few more pigs to provide enough manure to power the system. Then the digester needed several months to begin breaking down the waste and start filling the fuel tanks. In all, it took about 3 months and 2,000 liters of waste to get the system up and running at full capacity.
Then came the true test. Mama Justice set up at her new gas stove, with its shimmering blue flame, and set to work peeling onions, adding tomatoes and making her “famous green sauce.” It took over an hour to prepare, but when she was finished, she took a large spoon and tried the sauce — cooked for the first time with clean gas. The food tasted delicious, as always. Best of all, she didn’t have itchy, watery eyes like she usually did after an hour spent laboring over a hot, wood-burning stove!
Mama Justice was so excited she invited the whole Green Heat team to stay for lunch.
Mama Justice and her family gather for a delicious meal.
All her life, Mama Justice had endured problems from the smoke in her kitchen — coughing, wheezing, and enduring eye pain. The Green Heat anaerobic digester not only brought energy into her home using the waste she was already producing and eliminated her need to gather wood; it provided a healthier environment for her to cook for her family.
And as she teaches her recipes to the next generation, the only tears they will be crying will be from peeling onions — not from smoke.
The Hattaway team co-created this story with Green Heat and Securing Water for Food, an organization working with entrepreneurs and scientists around the world to help farmers grow more food with less water. To learn more about Green Heat, visit them on Facebook.
The RSI² Leading Indicator. Detecting Trend Exhaustion Early in Trading. | The Relative Strength Index
We all know about the Relative Strength Index (RSI) and how to use it. It is without a doubt the most famous momentum indicator out there, and this is to be expected as it has many strengths, especially in ranging markets. It is also bounded between 0 and 100, which makes it easier to interpret. Also, the fact that it is famous contributes to its potential.
This is because the more traders and portfolio managers look at the RSI, the more people will react to its signals, and this in turn can move market prices. Of course, we cannot prove this idea, but it is intuitive, as one of the premises of Technical Analysis is that it is self-fulfilling.
J. Welles Wilder came up with this indicator in 1978 as a momentum proxy with an optimal lookback period of 14 periods. It is bounded between 0 and 100 with 30 and 70 as the agreed-upon oversold and overbought zones respectively. The RSI can be used through 4 known techniques:
Oversold/Overbought zones as indicators of short-term corrections.
Divergence from prices as an indication of trend exhaustion.
Drawing graphical lines on the indicator to find reaction levels.
Crossing the 50 neutrality level as a sign of a changing momentum.
The RSI is calculated in a rather simple way. We first take one-period price differences, meaning we subtract the previous closing price from the current one. Then we calculate the smoothed average of the positive differences and divide it by the smoothed average of the negative differences. This ratio gives us the Relative Strength, which is then used in the RSI formula to be transformed into a measure between 0 and 100.
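As a sketch in Python with pandas (this follows the standard Wilder-style smoothing rather than any code from this article, and the "close" series name is an assumption):
import pandas as pd

def wilder_rsi(close: pd.Series, lookback: int = 14) -> pd.Series:
    """Relative Strength Index with Wilder's smoothing (alpha = 1 / lookback)."""
    delta = close.diff()                        # one-period price differences
    gains = delta.clip(lower=0)                 # positive differences, negatives set to 0
    losses = -delta.clip(upper=0)               # negative differences, flipped to positive numbers

    avg_gain = gains.ewm(alpha=1 / lookback, min_periods=lookback, adjust=False).mean()
    avg_loss = losses.ewm(alpha=1 / lookback, min_periods=lookback, adjust=False).mean()

    rs = avg_gain / avg_loss                    # Relative Strength
    return 100 - 100 / (1 + rs)                 # bounded between 0 and 100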