title | text | url | authors | timestamp | tags
---|---|---|---|---|---
4 Reasons Design in Marketing Matters More Than You Think | Here are 4 ways design can be used to better market your business.
1. Creating a Connection
Your logo and how you represent yourself online can help tell your story, even to those who have never heard of you. Your logo is often the first thing potential customers see, so use the opportunity to make a good first impression and say a little bit about yourself.
2. Building Trust
Everyone can tell the difference between a website that has been done properly and one that was created in a rush, even if they don’t know anything about design. A well-done website helps inspire confidence in your customers; not only because it looks professional, but also because it shows that you care about every detail — like the experience your customers are having online.
3. Showing Authenticity
Your brand should reflect your business, not the other way around. When making design decisions, focus on what sets you apart from your competition, and use this to represent your business authentically in your branding.
4. Creating Consistency
Make sure your business is represented consistently, be it on your website, business cards, or Facebook cover photo. Pay attention to details, especially the language you use online. If your business has a friendly tone, make sure it’s consistent on every page of your website. Lastly, remember the importance of design. Think of your branding as an outfit that your business wears. It tells a lot about you. Imagine a job interview. Even if you are the most qualified person for the job, dressing or representing yourself poorly might just be the reason that you don’t get the job. I hope that helps. Now let’s get designing!
“Think of your branding as an outfit. It tells a lot about you. Even if you’re the most qualified person for a job, if you dress poorly for an interview, you might not get the job.”
— Alborz Heydaryan, UX, The Incubator
What does your visual branding say about you? Book a complimentary consultation with one of our specialists. Email hello@theincubator.io or call 1–888–713–2826 to get started. | https://medium.com/insights-from-the-incubator/4-reasons-design-in-marketing-matters-more-than-you-think-c26bb65c526c | ['The Incubator'] | 2016-09-13 22:04:00.352000+00:00 | ['Branding', 'UI', 'UX', 'Marketing', 'Design']
The only shortcut to success… | Is to show up every single day.
And if you can’t show up every single day, try to show up more than anybody else in your industry, field, niche or whatever.
This is the only shortcut to success.
That’s the only way to hack the learning curve.
That’s the only real growth hack you need to know.
While others write one book, try to write 3 books.
While others record one song, try to record 4 songs.
While others shoot one video, try to shoot 5 videos.
While others give one talk, try to give 6 talks…
But what about quality?
Quantity leads to quality… | https://medium.com/thought-pills/the-only-shortcut-to-success-3d5490272ddc | ['Yann Girard'] | 2017-09-16 19:49:26.841000+00:00 | ['Life Lessons', 'Writing', 'Life', 'Entrepreneurship', 'Poetry'] |
Defamiliarization | The Reflective Eclectic
Defamiliarization
See with new eyes what you thought you knew
Image of the Korean War Memorial, Washington DC, by the author
One of the advantages of being a reflective eclectic is I can borrow techniques from other fields and apply them to my own, psychotherapy. Some of these techniques come from surprising sources. Today I’d like to talk about something I learned from being a photographer: defamiliarization.
Photography can be a simple reproduction of the object photographed, or it can be art. When I get all artsy-fartsy with my pictures, I'm trying to enable the viewer to see something in an object that belongs to the object but that she has never seen before. I'm trying to cut through an overfamiliarity with the world that numbs us to delight and creativity.
The easiest way to do this in photography is often to shoot in black and white. When you look at a black and white photo of a familiar colored object, you can usually recognize the object, but it's presented in a new way. This unforeseen appearance causes you to look closer and become more mindful of seeing. Suddenly new possibilities come into view. You might enjoy the play of shadows, the gradations of gray, and the stark contrasts that a black and white photo brings out. It's funny how stripping things down to basics can enrich them.
Unfortunately, if you have seen a lot of black and white photos you can become immune to them. Black and white can become too familiar. Photographers have always got to come up with something new to stop people in their tracks. They crank up the saturation levels, adjust the tint, blow out the background, make something fuzzy or sharp, or find a new camera angle and frame things in a different way. However, they can’t make the new images so strange that the viewer cannot understand them. The art photographer has got to fit into a small window: familiar enough to be understood and strange enough to be intriguing.
It’s not hard to see defamiliarization at work in all the arts. The term itself comes from literature. The plot of a typical novel can be summed up in a few lines: boy meets girl, they fall in love, boy loses girl, they make up, and live happily ever after; so ordinary, you can see it happening every day. You’ll read a novel with that plot for 363 pages because the novelist has made it original. He’s added sparkling dialogue, unexpected twists, and quirky characters, all to keep you guessing. When you finish a good novel, you will have gained an understanding of the course of love as you have never understood it before.
Poetry and song do the same thing by setting thoughts you've had a million times before in meter, verse, and startling vocabulary. This is why the same song is better in concert than it was when you heard it on your CD. The concert experience adds something new. For that matter, have you ever wondered why a singer or a musical instrumentalist doesn't sing or play a well-known piece straight up, as it was originally written? He's trying to make it fresh, so you can hear it as people first heard it when it just came out.
Have you ever wondered why some people, like me, prefer to live in a place like Rochester, New York, where the weather changes every day, from one extreme to another? There’s nothing like a new blanket of snow to make the world refreshed. Then we get sick of the same snow in February that we enjoyed in December. It’s gotten so familiar that we can no longer find the joy we once had in it.
Did you ever wonder why this person who you once fell in love with can do nothing but annoy you now? She’s gotten too familiar. Did you ever wonder why you get along so well when you’re on vacation? Just enough changes then that the relationship is renewed.
When you come to therapy and tell me something you've been thinking a million times before, you might think that going over it once more won't do you much good. Oh, but it does. Just hearing your voice say it, rather than your thoughts think it, may be just defamiliarizing enough to let you look at the situation in a whole new way. Then when I respond, you get another shot of defamiliarization. You see how that happens? The whole purpose is to wake you up.
To understand how defamiliarization works, you have to understand what’s happening when the opposite occurs. When you are familiarizing yourself with something, you’re taking it in and making it your own, making it part of the family. You’re fitting it in comfortably in your schema or world view. Once you have familiarized yourself, you no longer can do anything more with it. It’s become too close to you. You’ve lost objectivity. Defamiliarization gives you some distance, so you can see it more clearly and notice things you have not noticed before or have forgotten. When familiarization happens all over again, perhaps you fit it in a new place or have allowed it to change your schema. Generally, your world view becomes a little bigger then. You have more choices and more ways you can look at things.
There’s a saying in medicine: the thicker the chart, the worse the prognosis. That’s often true in therapy, too. The longer the person has been in therapy, the less likely a single session will do him much good. Therapy also can get too familiar. That’s another reason I’m a reflective eclectic. I have a big bag of tricks, so that when one method starts to get old, I can try another.
In the interest of defamiliarization, let me conclude in a way I don’t usually. I’d like to quote from the master of making the familiar fresh, J.R.R. Tolkien, from his lecture titled: On Fairy-Stories. You probably know Tolkien as the author of the Lord of the Rings trilogy. In this lecture, he surprisingly talked about recovery.
Recovery (which includes return and renewal of health) is a re-gaining — regaining of a clear view… as things apart from ourselves. We need, in any case, to clean our windows; so that the things seen clearly may be freed from the drab blur of triteness or familiarity — from possessiveness…This triteness is really the penalty of “appropriation”: the things that are trite, or (in a bad sense) familiar, are the things that we have appropriated, legally or mentally. We say we know them. They have become like the things which once attracted us by their glitter, or their colour, or their shape, and we laid hands on them, and then locked them in our hoard, acquired them, and acquiring ceased to look at them.
Since Tolkien’s thing was building fantasy worlds, he puts in a plug for his way of writing as the best defamiliarizing agent since sliced bread.
Creative fantasy, because it is mainly trying to do something else (make something new), may open your hoard and let all the locked things fly away like cage-birds. The gems all turn into flowers or flames, and you will be warned that all you had (or knew) was dangerous and potent, not really effectively chained, free and wild; no more yours than they were you. | https://medium.com/passive-asset/defamiliarization-9e347e408d1b | ['Keith R Wilson'] | 2020-12-14 18:51:15.398000+00:00 | ['Psychology', 'Psychotherapy', 'Photography', 'Mental Health'] |
The Pros and Cons of Trying to Find Your Passion | Why Trying to Find Your Passion is Dangerous and Counterproductive
You can’t eat passion. Passion doesn’t pay your bills. You can’t enter “finding your passion” into an application for medical assistance. Passion doesn’t keep the circumstances that affect your life at bay — the economy, politics, personal traits that negatively affect you, etc.
Who the hell are these millennials with no life experience to be telling you how to find your passion and live your bliss? They don’t know what they’re talking about.
Someone has to wash the dishes, haul the garbage, do your accounting, construct your roads, and wait your tables. The world spins because of people who don’t follow their passion. On top of that, finding your passion is all good and well until it doesn’t work. You can waste a lot of time and energy trying to find it only to get zero tangible results.
Again, there are a ton of people like this who follow self-improvement advice without doing anything about it. In search of their passion, they waste time on a half dozen side hustles that never work. Then, because they have an entitled mindset, they never quite understand what real work ethic means.
If that wasn’t enough cold water splashed on your dreams, here comes the tidal wave: focusing on your passion doesn’t work. It doesn’t work because it comes with a poor underlying assumption. The assumption is that your level of love dictates how dedicated you’ll be to the journey. You think that once you find that ultimate passion, things will fall into place, and you’ll do the work necessary to succeed. This is backward.
In reality, you don’t find passion until you get good at something. When you develop competence in something you enjoy, you build more confidence to help you tackle larger challenges, and you continue to grow, which fuels more passion to repeat the process.
Most people want the results without the effort. They want passion to fall in their lap. You shouldn’t chase or seek your passion because that means it’s trying to evade you. Often, you’ll end up chasing your own tail, running on the advice treadmill, and making no progress toward building a life you love.
Passion is for the birds. Do your job, be thankful you have a roof over your head in the first place, and stop being so entitled. | https://medium.com/mind-cafe/the-pros-and-cons-of-trying-to-find-your-passion-fe7f3bc8c5c0 | ['Ayodeji Awosika'] | 2020-12-25 14:11:32.794000+00:00 | ['Productivity', 'Psychology', 'Self Improvement', 'Advice', 'Life Lessons'] |
Doing the Dishes Can Help You Become a Better Achiever | How Will Washing Dishes Make Me a Better Achiever?
1. A sense of achievement
When the dishes move entirely from the sink to the rack I have a sense of having sorted something; a diluted feeling of having achieved something good. It is like a small trigger that goes to my brain saying ‘I can do it’.
Takeaway:
Our mind requires motivation in small doses all the time. According to Ralph Ryback in Psychology Today,
“The satisfaction of ticking off a small task is linked with a flood of dopamine. Each time your brain gets a whiff of this rewarding neurotransmitter, it will want you to repeat the associated behavior”
And every time the brain tastes this, it would want more of it. So it translates into actions leading to completing a task and relishing this feel-good emotion of achieving that.
2. Control over things
Everyone is repulsed by the sight of a sinkful of dirty dishes from time to time. Sometimes, the brain finds the easiest route to resolve that — not to do it. And one thing leads to another and one loses control over other associated chores as well, like cooking.
Doing the dishes not only gives me the message — I am in control of things — but it also prevents me from feeling daunted by other factors. Rather than letting them change my course, I tend to think about ways to tackle them.
Takeaway:
In our jam-packed schedules, losing control is quite easy, leading to feelings of anxiety and overwhelm.
Alicia H. Clark, Psy.D., a licensed clinical psychologist and author of Hack Your Anxiety: How to Make Anxiety Work for You in Life, Love, and All That You Do, talks about how she advises her patients to do cleaning exercises. She goes on to explain that doing these things gives us a feeling of control, which, when practiced consciously, reflects on other aspects of our lives.
3. Deal with procrastination
In order to avoid a pile of dishes, the best way I have figured out is to sneak in some time and just do a bit of it. This avoids falling into the procrastination trap once it appears too much to handle.
Takeaway:
Rather than viewing every task in its entirety, it can be broken down into smaller parts, doable one at a time. I use it in every phase of my life; be it cooking, managing the house chores, or at the office. I have a ‘sinkful of jobs’ and pick a bit and do it whenever I can.
4. Just start; end eventually follows
The toughest part of a job is starting it. Everyone agrees to this. But, I have observed that if I just clean one dish, I end up cleaning the whole lot. It seems like an orchestrated act my mind and body follow.
Takeaway:
Who doesn't relate to the pain of starting? As Kendra Evin says in Psychology Today, it happens because of a mental leap that can sometimes be challenging: forcing ourselves to disconnect from what we are doing right now (which might be enjoyable) and do something that, at least initially, may not be enjoyable. To cut a long story short —
You have to do that first dish.
I have conditioned my brain into linking every job to this simple act of washing dishes which may look so daunting but is sorted once you start with the first dish.
5. Increased mindfulness
Washing dishes engages a lot of senses — the temperature of the water, the smell of the soap, touching and seeing the dishes.
Takeaway:
Just as Thich Nhat Hanh has claimed, it is a conscious way to train the mind to be in the present. Going by Buddhist Zen beliefs, the mind, like muscle memory, can be trained to behave in a certain way.
Today there is no debate over the effect of mindfulness on achieving a desired outcome. From social relationships to corporate engagements, everyone is aware of the significance of 'being there'. It not only increases the focus on the task but also brings about better problem-solving environments. | https://medium.com/narrative/doing-the-dishes-can-help-you-become-a-better-achiever-38e62a221a0e | [] | 2020-12-21 09:14:48.734000+00:00 | ['Self-awareness', 'Mindfulness', 'Self Improvement', 'Motivation']
When ‘Being’ Brings True Happiness | When ‘Being’ Brings True Happiness
1.3m steps later, I had reached an unintended destination
The work of Samy Benmayor, at La Galería Gabriela Mistral de Arte Contemporáneo, Santiago de Chile in March 2017
I conquered over 1 million steps in May 2018. After issuing myself with an even greater challenge, I ended up walking nearly 1.3 million steps the following month.
However, it all came to a sudden stop. On July 1st, I woke up and I could not walk. I had sprained my right foot.
I felt a sense of deflation, almost anger because I had been actively pushing myself to a place that was high on the ladder of ambition. What was I going to do now?
I then realised that whilst it was a great thing to have on my resume of life (to have walked 1,000km+ in 30 days), everything in life needed to be in moderation, even something as fulfilling as completing 7+ hours of walking spread out through the day, resulting in around 30k+ steps every 24 hours.
I’m settling back in to walking 10k steps a day
Being able to accomplish something like this, came with a great sense of achievement. It was amazing, but it ended on that one Sunday morning when I woke up, and I couldn’t put my right foot flat on the ground.
I was not forced to stay at home, but I had made it a choice. Instead of spending so much time outdoors in the parks that I had grown to love, I was now mostly inside, choosing to be between the garden and the kitchen. And this prompted me to start looking closely into my living environment inside.
It's the one place that I had not been taking notice of for some time. I found it interesting that I had allowed myself to get used to the unconscious ways of the surroundings I had created, choosing to buy and keep more of everything, as I fell back safely on cupboards and wardrobes as my 'safe places' to bury my life in.
A few months ago, I started to undo that cycle of clutter. It all stemmed from the first Audible audio-book I ever bought, ‘The Life-Changing Magic of Tidying’ by Marie Kondo on February 2nd 2017. The British narrator brought to life in great, yet relatable details the KonMari method™ that Marie had developed.
I listened attentively and started to learn that, in the midst of everything that I had collected and stored over many years, the majority of it did not have any meaning or purpose anymore. Their time had passed, so why was I still choosing to live in that time?
I could live without 95% of it, to be honest. As nothing lasts forever, no-one needs to keep everything forever. So, I bravely took the decision to re-start my de-cluttering and elimination exercise on a grand scale. I chose to be aware — of the need to clear up, and of my bad foot. | https://ashluchmun.medium.com/when-being-brings-true-happiness-6e249971f2ef | ['Ash Luchmun'] | 2018-07-12 07:17:02.694000+00:00 | ['Life Lessons', 'Self Improvement', 'Wellness', 'Happiness', 'Mental Health'] |
6 design elements that encourage visitors to convert | Perhaps the most important aspect of online marketing today is conversion. Getting a visitor to convert to your brand is just as important as earning new clients, as it seems everyone already has a preference and aligns themselves with another company. It's always a tight race and competition is fierce.
This means a good website — one that is eye-catching and piques the interest of a visitor — plays a pivotal role in business success.
Want more branding & design tips in your day? Subscribe to Lucidpress, right here on Medium.
In a study conducted at the Missouri University of Science and Technology, it was observed that users only take two-tenths of a second to form an opinion about the websites they visit, and the chance they will click away decreases immensely after the first 30 seconds.
If you want to attract and keep visitors on your website longer, here are 6 design elements that will encourage visitors to convert to your brand.
1. Keep it fresh
A fresh new layout is bound to capture a visitor’s attention, whether she is new to the site or has been a regular. Take CloudSponge as an example. They had an outdated and old-fashioned website, but once they upgraded to a newer, more up-to-date version, their conversion rate increased by 33%.
Basically, the more frequently you change the style or layout of your website, the more opportunities your conversion rate has to grow. It’s like buying fruit — the fresher it is, the more people want to buy it.
2. Have catchy headlines & calls-to-action
The first things that attract a visitor's attention when she lands on your home page are the headline and call-to-action. Not the contact info, articles or product specs, but these two elements. For this reason, the more action-oriented your headline and CTA are, the better your chances of success.
CTAs are designed to incite an immediate response from a customer. That’s why clear, concise CTAs are more effective. One software company reported that their site’s conversion rate increased by 106% after it got a makeover that included a clear, direct call-to-action.
3. Make it visually appealing
Visual appeal doesn’t mean over-crowded or complicated pages. Increase conversion rates by keeping your landing pages simple and appealing. Your page should show the customer what your company does without making them read a lot of content. One way to achieve this easily is by adding videos or image sliders. This also has the benefit of prolonging the time a visitor spends on your site.
Device Magic, a mobile software company, conducted an A/B test using VWO to determine whether an image slider would improve conversion rates for their website. The results indicated that the image slider increased completed sign-ups by 31%.
4. Don’t ask for too much information
People are usually very skeptical about sharing personal information with a new website. If you want people to subscribe to your site or services, it’s best not to ask for too much information up front.
Dropbox, one of the leading data storage platforms, has an incredibly simple sign-up form. It asks new users for nothing more than their name, email and a password. It doesn’t even ask you to re-enter the password.
When looking for new subscribers, it’s important to remember that new visitors aren’t familiar with your company or product. Gain their trust by giving them some incentive for free. It could be an eBook, a discount or something else. This tactic works like a hook, so that when they are looking to buy, they’ll seek you out.
5. Floating sidebars and drop-down menus
It’s very frustrating for users to have to move all the way to the top or bottom of a page to navigate the site. What many websites have now are floating menus: menus that move along the top or side of the screen as you scroll, making navigation a lot easier.
AMD, a giant in the computer hardware business, uses floating “share” buttons that visitors can use to share the content they find interesting across a variety of social sites like Twitter and Facebook. This helped AMD drive a whopping 3600% increase in social sharing, as more and more people found and shared their pages.
6. Establish contact
Creating a dialogue between you and the prospective clients is essential. If a user inquires about your service or product and gets a response quickly, the probability that they will buy from you increases immensely. Keep in mind that a customer is less likely to stick around if they don’t get the desired response in time.
Sending quick responses to customer inquiries can be simplified with CRM software. This is because CRM software segments and then converts leads from various channels. These leads can be segmented and organized to receive automated responses that save your time and energy — while improving conversion rates.
Getting people to convert to your brand is no small feat, but rest assured, the 6 design elements mentioned here will definitely give you some headway towards that goal.
Want more branding & design tips in your day? Subscribe to Lucidpress, right here on Medium.
About Erica Silva
Erica Silva is a blogger by choice. She loves to discover the world around her. She likes to share her discoveries and express herself through her blogs. Currently, she is associated with airG for development work. Check out her firm’s performance reviews. Find her on Twitter: @ericadsilva1. | https://medium.com/lucidpress/6-design-elements-that-encourage-visitors-to-convert-64796210db9d | [] | 2017-05-24 18:38:48.758000+00:00 | ['Design', 'Web Design', 'Digital Marketing', 'Conversion Optimization', 'Marketing'] |
3 Reasons Your Business Should NOT Go Digital | 3 Reasons Your Business Should NOT Go Digital
Make sure digital marketing is right for your business before committing
Our content manager is going to hate me.
He told me to write an article about why all small and medium sized businesses (SMBs) should go digital to market their product/services. But instead, I’m using this article to tell you the exact opposite.
While utilizing digital channels such as social media, ecommerce sites, and blogs can be extremely helpful for increasing reach and revenue for most businesses, digital channels are by no means a silver bullet that can solve all business problems.
In fact, being successful in the digital marketing world requires not only a coherent, well-constructed digital strategy, but also committed team members in your business who are constantly optimizing multiple channels according to this digital strategy.
In short, while digital marketing is an extremely attractive marketing method, it requires significant investment in time and resources to get it to work for your business.
For this reason, before investing time and money into digital marketing, it is absolutely crucial for all businesses to carefully examine whether digital channels are a good fit for their current objectives and commitment level.
Don’t waste time with channels that don’t fit your business. Image via Bob Lee Says.
Otherwise, you risk not only wasting money and effort, but precious time that is essential for your company’s survival.
During my time as a consultant and as cofounder of Humanlytics, I have run into this exact situation countless times. That's why in this article, I will show you three of the most common cases I've encountered of a digital strategy mismatch, to prevent it from happening to you.
Case 1: When going digital does not magnify your “core competency”
A couple of weeks ago, I got a call from one of my clients in China, who runs a weight-loss chain with over 200 stores across the country. The client told me, “Bill, I want to create a product line for digital channels such as TaoBao and Jingdong because there are so many opportunities there!”.
TaoBao and Jingdong are popular Chinese ecommerce sites.
While it sounded like an attractive venture, I strongly advised against it. Why? For one important reason — selling online is not the core competency of their company.
A “core competency” is the key capability or resource that makes a company successful. In my client’s case, it includes their customers (which are mostly concentrated in one city), their service expertise, and their vast experience in weight loss coaching. It is these core competencies that make them so good at what they do — making sure each person that comes through the door finds a weight loss solution that works.
If they were to enter the fierce ecommerce space, they would not be able to leverage their core competency in service expertise and coaching experience. Ecommerce would therefore take a significant amount of energy away from their core business, instead of contributing to it.
Given the competitiveness of the weight-loss related ecommerce market in China, combined with the lack of human resources and capital to launch this brand new e-commerce division, the future does not look so bright for my client’s ecommerce venture.
Instead, I recommended that they launch an online personal weight-loss coaching service. This service will leverage the vast experience they have in helping people lose weight, along with the expertise that helped them excel in weight loss.
In addition, this service will also help my client use their underutilized therapists and consultants more effectively, and improve the quality of service they can offer to their customers.
How does this case apply to you?
When you are deciding to go digital, don’t simply pursue the most popular method on the market. Don’t just follow the method that your friends or competitors have succeeded with either.
Instead, look inside your company, and ask yourself, “what is the ‘core competency’ that makes my company successful?” Then build a digital strategy around magnifying that core competency.
Only this way can you have your new digital strategy working with, instead of against, your current business to achieve greater success for your business.
If you don’t think any digital strategy will help you leverage your “core competency,” then digital might simply not be the correct channel for you at the moment. That’s completely okay — I have seen many companies succeed over the years with minimal digital presence.
Case 2: You’re growing fast without a digital presence
If you see me speaking at a conference, or pitching a client, I will usually tell them “having a digital presence is extremely crucial to the success of your business, as over 50% of companies now advertise via digital channels.”
Just because digital is surpassing other ad channels, doesn’t mean it’s right for your business. Image via eMarketer.
While I strongly stand by every one of these words, I admit that I omitted a little fact — almost 50% of businesses still do not advertise through digital, and many of them are doing okay.
A recent experience with a potential client illustrated this case perfectly.
At Humanlytics, while we are developing our AI marketing analytics product, we are looking for companies who will help us test our product as focused beta testers. In return, we help them with their digital strategy and analytics (if you are interested, email bill@humanlytics.co. You get the point :D).
We pitched this idea to a very fast-growing organic pasta company within our personal network, trying to convince them to come on board with this beta testing program. They said no, and they are smart to do so. | https://medium.com/analytics-for-humans/3-reasons-your-business-should-not-go-digital-5df3836a9726 | ['Bill Su'] | 2018-06-08 19:55:52.616000+00:00 | ['Digital Strategy', 'Marketing', 'Digital Marketing', 'Startup', 'Critical Thinking'] |
Why Learning Java is a Starting Point For Big Data Developers Of The Future | Why Learning Java is a Starting Point For Big Data Developers Of The Future
Java is a big friend to Big Data scientists and developers. Here I am going to tell you why is that so
Photo by Franki Chamaki on Unsplash
Considering the impressive pace of big data growth over the last 2–4 years, it's clear that this subset of data science will dominate future tech. In this post, I decided to take a closer look at the technologies that are widely used for big data projects.
Java is a leading language when it comes to handling big data projects. Here’s why and how a beginning developer can learn Java for handling complex BD tasks.
A Few Words About Big Data Objectives
Before discussing the impact of Java in the big data of the future, let’s take a look at what types of projects data scientists will be focused on in the next 3–5 years. Here’s my personal take on what big data will mature into in the near future.
1. It’s going to converge with analytics
Now that businesses can gather terabytes of data on every user, tracking on-site behavior, communication preferences, and other relevant metrics, companies are more encouraged to invest in analytics than ever.
Gartner, for one, predicts that, if a business owner doesn’t invest in reporting tools by the end of the year, the company will no longer be competitive by the end of 2021.
Adopting big data algorithms will improve the precision of analytics and give business owners a big-picture view of brand reputation and customer relationships. The introduction of decision trees, linear regression, and other visualization and prognosis methods will help business owners anticipate customers’ needs and increase the quality of brand interactions.
2. Big data will help fight climate change
Although the impact of climate change is real, scientists lack the understanding of the most immediate threats humanity will face in the next 20 years. Big data is a way for the researching community to consolidate their efforts and stay connected via a stream of reliable real-time insights.
Needless to say, the ability of big data tools to process large datasets will improve the precision of approximations and the efficiency of contingency plans people can build based on these insights.
3. The impact of data cloud will grow but not dominate
According to statistics, the value of the global cloud computing market is expected to exceed $623 billion by the end of 2021. Right now, an increasing number of companies are switching to cloud from on-premise solutions.
Having said that, fully migrating to the cloud is a complex process, not to mention the security concerns that come along with trusting online third-party vendors with huge datasets.
In the big data world, hybrid environments are a common solution, with cloud tools used to host dynamic datasets and on-premise storage used to keep track of the static ones.
Why is Java Still The Best Choice For Big Data Projects?
If you are a software developer considering a career in big data, learning Java should be your starting point. Let’s take a look as to why saying “Java is Big Data” wouldn’t be an exaggeration.
1. Big Data tools for Java are accessible
When considering big data implementation, most business owners are looking for the cheapest tech stack possible. Since most Java tools used in big data (Hadoop, Spark, Mahout) are open-source, such a tech stack is free and highly flexible. As a result, most employers looking for big data engineers will focus on Java proficiency and working knowledge of the tools that use the language.
2. Java is Type-Safe
Almost every data scientist out there would confirm that understanding what types of data you are dealing with is crucial when the set of information to process is huge. Being a type-safe language, Java is a first choice for a fair share of developers and business owners, since type-safety allows spending less time on unit testing and facilitates codebase maintenance.
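To make that concrete, here is a minimal sketch of my own (not from the original article; the class and field names are invented) showing the compiler rejecting a wrongly typed value before it can ever reach a large dataset at runtime:

```java
import java.util.ArrayList;
import java.util.List;

public class TypeSafetyDemo {

    // Hypothetical record describing one row of a dataset (records require Java 16+).
    record Measurement(String sensorId, double value) {}

    public static void main(String[] args) {
        // The generic type parameter states, and the compiler enforces, what the list may hold.
        List<Measurement> readings = new ArrayList<>();
        readings.add(new Measurement("sensor-1", 23.5));
        readings.add(new Measurement("sensor-2", 19.0));

        // readings.add("not a measurement");  // rejected at compile time, so a malformed
        //                                     // row can never slip into a huge pipeline

        double total = 0;
        for (Measurement m : readings) {
            total += m.value();   // m is statically known to be a Measurement
        }
        System.out.println("Total: " + total);
    }
}
```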
3. Java is scalable
Most projects big data professionals work on are ambitious, designed with upscaling in mind. Thanks to its robustness, wide toolkit, huge community, and cross-platform compatibility, Java is unmatched by other languages in terms of scalability and makes a perfect fit for designing complex big data infrastructures.
4. Wide range of built-in features (not to mention libraries and frameworks)
If most of the other languages are only beginning to acknowledge the importance of machine learning and data science, Java was the first one to jump on the bandwagon. As a result, it has more tools for data science projects than most alternatives.
Beyond libraries, the language becomes more suited to data science with every new release. Java 8's lambdas, for example, help make code concise and to the point, and Java 9 introduced a REPL (JShell) that makes iterative development faster and more efficient.
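As a small, hedged illustration of that conciseness (my own sketch, not the author's), Java 8 streams and lambdas can filter and aggregate a toy dataset in a single pipeline:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> events = List.of("click", "view", "click", "purchase", "view", "click");

        // Group and count event types without any explicit loops or mutable counters.
        Map<String, Long> counts = events.stream()
                .filter(e -> !e.isBlank())
                .collect(Collectors.groupingBy(e -> e, Collectors.counting()));

        System.out.println(counts); // e.g. {purchase=1, view=2, click=3}
    }
}
```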
Whether you prefer Hadoop or Spark, knowing Java will be crucial to be proficient in either platform. To make the most out of BD tools, developers often need to add new features to the source code — that’s where Java knowledge is essential.
Reviewing Top 5 Java Tools For Big Data Projects
After having explained the benefits business owners get from choosing Java for big data projects and the reasons why developers should mark the starting point of a data science career with ensuring Java proficiency, let’s take a look at the most widely used Java tools in big data projects.
1. Apache Hadoop
Hadoop Ecosystem and Their Components by Data-flair.training
Hadoop is the go-to big data processing technology for most business owners — there are dozens of libraries and tools dedicated to sorting through and storing large datasets. Despite its popularity, Hadoop development job openings are known among talent managers as some of the hardest to fill, since most developers lack an in-depth understanding of MapReduce.
Although Hadoop is one of the most complex technologies out there, the gain of becoming proficient in it is definitely worth the pain. According to statistics, the median yearly salary of a Hadoop developer is $103,000.
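For readers who have never seen MapReduce, the sketch below shows the shape of the classic word-count job against the standard org.apache.hadoop.mapreduce API. It is a simplified illustration: a real job would also need a driver class that sets input and output paths and submits the job to a cluster.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Map phase: emit (word, 1) for every word in an input line.
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```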
2. Spark
Spark is a common Hadoop alternative that is attractive to developers thanks to its high speed, agility, and smooth learning curve. Typically, Spark is preferred over Hadoop for large-scale SQL projects, data streaming, and machine learning tasks.
It’s worth mentioning that Spark isn’t fully written in Java but in Scala — however, the interfaces of both languages are similar (as a proficient Java developer, you will not need a lot of time to get the hang of Scala).
As for workplace opportunities, Spark is a lucrative technology to master since some of the biggest names in tech are looking for professionals in the field — Facebook, Microsoft, Apple, or IBM.
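To give a taste of Spark's Java API, here is a minimal local sketch of my own (the input file and column names are hypothetical) that reads a CSV and aggregates it with the DataFrame API:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("OrdersByCountry")
                .master("local[*]")        // run in-process for the example
                .getOrCreate();

        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .csv("orders.csv");        // hypothetical input file

        // SQL-style aggregation: count orders per country, largest groups last.
        orders.groupBy("country")
              .count()
              .orderBy("count")
              .show();

        spark.stop();
    }
}
```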
3. Mahout
From the Bauman National Library wiki
Since big data projects are closely intertwined with machine learning, big data developers often cross paths with Mahout — an open-source, Java-based library of ML tools. Mahout gained a tremendous following thanks to its scalability and a large data processing toolset.
Typically, developers start learning Apache Mahout after understanding the functionality of Hadoop, which makes sense (a fair share of the library’s infrastructure is made up of repurposed Hadoop code).
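The snippet below is a hedged sketch of Mahout's classic "Taste" collaborative-filtering API from the older 0.x releases (newer Mahout versions have shifted toward the Samsara math environment). The ratings file and user ID are made up for illustration.

```java
import java.io.File;
import java.util.List;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class MahoutRecommenderSketch {
    public static void main(String[] args) throws Exception {
        // ratings.csv (hypothetical): userID,itemID,preference
        DataModel model = new FileDataModel(new File("ratings.csv"));

        UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
        UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
        Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

        // Top 3 item recommendations for user 42.
        List<RecommendedItem> items = recommender.recommend(42, 3);
        items.forEach(item -> System.out.println(item.getItemID() + " -> " + item.getValue()));
    }
}
```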
4. Storm
Compared to Hadoop or Spark, Apache Storm is a narrower big data toolset, focused predominantly on enabling real-time distributed data streaming. Equipped to deal with high-volume, high-velocity data, the platform is repeatedly praised for its scalability and fault tolerance.
Other than that, Storm’s compatibility with most popular queuing and database systems makes learning how to use the platform a must-have for a beginning big data developer.
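To show what wiring a Storm topology looks like in Java, here is a small sketch written against what I believe is the Storm 2.x API; it uses the built-in TestWordSpout and a hand-rolled counting bolt, and runs in an in-process LocalCluster rather than on a production cluster.

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class StormSketch {

    // Counts how often each word has appeared in the stream so far.
    public static class WordCountBolt extends BaseRichBolt {
        private final Map<String, Integer> counts = new HashMap<>();
        private OutputCollector collector;

        @Override
        public void prepare(Map<String, Object> conf, TopologyContext ctx, OutputCollector collector) {
            this.collector = collector;
        }

        @Override
        public void execute(Tuple tuple) {
            String word = tuple.getString(0);
            int count = counts.merge(word, 1, Integer::sum);
            System.out.println(word + " -> " + count);
            collector.ack(tuple);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // Terminal bolt: nothing is emitted downstream.
        }
    }

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("words", new TestWordSpout(), 2);       // built-in test spout
        builder.setBolt("counter", new WordCountBolt(), 4)
               .shuffleGrouping("words");

        Config conf = new Config();
        try (LocalCluster cluster = new LocalCluster()) {         // in-process cluster for testing
            cluster.submitTopology("word-count", conf, builder.createTopology());
            Thread.sleep(10_000);                                 // let the stream run briefly
        }
    }
}
```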
5. Deeplearning4j
Deeplearning4j is a Java-based tool neural network developers swear by. This platform is well-done on so many levels, from the ease of use to the quality of documentation. Deeplearning4j is scalable — you can integrate it with Apache Spark or run the platform on GPUs. The tool supports microservice projects as well — it’s one of the few platforms out there with a robust microservice infrastructure.
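As a final sketch, here is a minimal Deeplearning4j configuration (assuming a recent 1.0.0-beta or later release; the layer sizes are arbitrary) that builds and initialises a small feed-forward classifier. Training would then be a matter of calling fit() with a DataSetIterator over your data.

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class Dl4jSketch {
    public static void main(String[] args) {
        // A small feed-forward classifier: 784 inputs (e.g. 28x28 images) -> 10 classes.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .updater(new Adam(1e-3))
                .list()
                .layer(new DenseLayer.Builder()
                        .nIn(784).nOut(256)
                        .activation(Activation.RELU)
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
        System.out.println(model.summary());
    }
}
```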
List of Resources to Learn Java For Big Data
If you are planning to build a career in big data, becoming proficient in Java is essential. However, since there are so many language learning resources around, developers often struggle to distinguish between the good and the bad ones.
Here’s the list of resources I compiled for my students over the years — in my opinion, these are all the tools you need to get from a newbie to a professional in Java.
Coding Games and Courses:
Codegym is a gamified platform for beginning and advanced Java learners. Using gamification and practical tasks to explain the core concepts of the language, this tool is your way to start coding from day one. My students repeatedly complimented the platform, saying it improves knowledge retention and gives a clear idea of how the concepts they learn are useful in the workplace.
Although Code Combat doesn’t offer programming learners a solid theoretical framework, the tool is perfect for using your skills to complete real-life tasks. Combined with high-quality graphics and an intelligent plot, Code Combat is an enjoyable RPG you wouldn’t mind spending your weekends with.
Being a data scientist, as well as a programmer, is about collaboration and teamwork. If you are anxious about coordinating with peers at the office, check Coding Game out. The idea behind the product is both simple and ambitious — uniting developers from all over the world to build a game together.
When it comes to learning from the best Java developers and getting noticed by top-notch employees, CodingGame is a decent resource.
This retro-style game might seem basic at first glance — yet there's more to this robot battle than it lets on. Personally, I like this platform exactly because of how raw and stripped down it is. Other than enjoying robot battles, you can join hardcore challenges and chat with fellow developers via a dedicated forum.
Online courses
Learn Java For Big Data:
Books
Learning the basics of Java
Learning Java For Big Data:
Forums and social media
Learning Java for Big Data:
Conclusion
For me, Java and big data have always gone together. Although I acknowledge the potential of Python as well, in my experience, a successful data scientist shouldn't choose one over the other — rather, you learn how to handle both languages and draw on their respective strengths.
Since Java is one of the best-taught languages online, I am confident that, with enough determination and the understanding of basic programming concepts, you can be good at using its tools for DS projects.
Hopefully, the resources I linked above will be a good starting point to fuel a beginner developer’s journey. Good luck discovering the full potential of Java and using the language in your projects. | https://towardsdatascience.com/why-learning-java-is-a-starting-point-for-big-data-developers-of-the-future-9a9b6d240dea | ['John Selawsky'] | 2020-06-12 16:02:14.882000+00:00 | ['Big Data', 'Data Science', 'Java', 'Big Data Analytics'] |
What Rainbows Can Teach Us About Philosophy | Rainbows are a consequence of real things happening in the world, but they are not quite real themselves. You can see them, but you can’t touch them.
A rainbow is caused by a chance combination of light, a downpour, and being in the right place to see it — with your back turned to the light.
Human beings see different coloured bands in the rainbow but in reality the rainbow is a seamless spectrum of colour. The coloured bands are a product of human “colour vision” — the way we as a species see colour. The hardware of our eyes and brain makes sense of the world in a particular way and the bands of the rainbow show that.
We can all see a rainbow when one appears but no one person sees the same rainbow. This is because what you see is an optical illusion that exists in relation to where you are looking from. Rainbows look different to every eye beholding them.
Concepts are the same. What do I mean by concepts? Freedom, love, happiness, wisdom, gender and ethics — these are all concepts.
They exist only insofar as we speak about them. If we didn't speak or write about these things they wouldn't exist. And I would wager that if every single person in the world were asked to write an essay about love, they would have a different idea about what love is.
That’s why concepts are just like rainbows. They are not real, but caused by real things happening. You can speak about them, but you can’t touch them. They exist differently to every beholder, yet we also have a shared sense for them in the same way we see rainbows in that distinctly-human colour band way.
A dog would have no understanding of love in the way we do. Sure, a dog can show affection, form relationships and rear puppies and such like, but “love” is a human idea that we understand only on our own terms.
One of the biggest misconceptions about philosophy is that it’s a body of knowledge, that you have to study for years to understand it. People think you have to learn about philosophers: those long-dead pale males who wrote sleep-inducing texts with long words.
But you don’t. Philosophy is in fact an activity. It’s something everybody does, even if they don’t know it.
We spend most of our lives thinking directly about our lives and the world. We think about relationships, work, politics and a whole assortment of our experiences directly. We think about people we love, we think about our tax bills, work, death and jealousy.
Think about all the concepts that make emotions well-up inside you. Happiness, love, evil, duty, justice and honour. These are complicated concepts that cause us sleepless nights because they are concepts that imbue the thoughts that occupy us.
But sometimes we level up our thinking, we ask ourselves questions like, what really is “love”? Or — are we really “free” if we need to work to survive? Or — is paying tax a moral obligation? Or — why don’t I feel “happy” when all the signs say I should be?
We suddenly consider these things we think about in our everyday lives from a conceptual standpoint. That is what philosophy is. It’s thinking about thinking.
Philosophy is the craft of concepts — it is our means of making and moulding concepts. Philosophical contemplation is higher level thinking. We all do it, but more rarely than we ought to.
It allows us to better understand all these confusing concepts that cause us sleepless nights.
If we do it, and we talk (and write) about it, we start to better shape the world we live in. We can compare the way we understand these concepts. We can make sense of it a little more together.
We can chase rainbows together.
In doing so, we can be more empathetic, since we know more about each other. We can also feel less aggravated by our own limited understanding of the concepts that weigh so heavily on our lives, yet only exist in our minds.
Have a great Christmas. | https://medium.com/curious/what-rainbows-can-teach-us-about-philosophy-94ee97a2705a | ['Steven Gambardella'] | 2020-12-25 23:08:25.974000+00:00 | ['Philosophy', 'Culture', 'Psychology', 'Self Improvement', 'Science'] |
Branding Your Own Startup | This article starts by covering some of my process and thinking when it came to branding a startup I co-founded earlier this year. It then goes on to look at the visual results with images of the key assets and an example of the brand in action.
Designing For Others
I’ve spent a large part of my career working on or creating brands for other people. At the beginning of this year I co-founded a startup, Akord, and had the unusual experience of being my own client, as it were. Creative freedom with no one telling you to make the logo bigger, this is every designer’s dream, right? Well, I won’t lie, it was pretty great.
Initially, though, I was almost dreading undertaking the work and considered outsourcing it. It felt like it would be too much pressure, that I would be too closely invested to be able to step back and make good decisions.
When you create a brand for another company, even though your name will be attached to that work and you strive for the best outcome, you won’t have to work with that brand day in and day out for potentially many years to come. In that sense, there’s a liberating distance from your work. When it’s a company you’ve co-founded and now you’re designing the brand you’ll have to work with on a daily basis, there’s suddenly a lot more pressure to get it right.
I started to think of the ‘client’ as our small team, which helped reframe the project. I work with three others and part of the unwritten brief to myself was to create a brand we could all be proud of and that captured our values. I also imagined working for future team members, knowing how powerful a strong brand is in attracting talented people to a company, and how it can give people a sense of pride to be part of that story.
I've always felt that when I was consciously designing for others, telling myself things like, 'I'm doing this to help this person achieve this goal', as opposed to feeling, 'I'm doing this to advance my career/portfolio/bank balance', the design process became a lot more enjoyable, freer and easier because of it.
Branding at a Startup
An early-stage startup can go through brands like a snake shedding skins. I don’t mean to suggest these startups are somehow devious or ‘snakey’ by doing this, but like the snake shedding its skin it’s a naturally occurring event when something has served its purpose.
When branding is superficial it truly is like a skin — an outer layer that doesn't really have too much to do with the real inner workings, the heart of the operation. When branding is done right it is not only a visual representation of the product, values and mission, but also manifests itself through all the company's interactions with employees, customers, partners and the outside world in general.
The reality of a startup is that the first few years will inevitably be an unpredictable rollercoaster, where carefully considered business plans will be torn up, the product will pivot, and the ideal customer will change as the company is trying to figure out its product market fit. Even those startups who hire professional designers to work on the first version of their brand will find it hard to feel like they haven’t outgrown it within a few years.
With that in mind, I believe the most effective strategy for branding an early-stage startup is to focus on the core values and mission to create a strong set of foundational branding assets. These foundational assets I define as logo, colours, typography and graphics (or illustrations). Other elements, such as tone of voice, I believe are better developed later on when product market fit is closer or already achieved.
These foundational assets should be broadly and simply defined, starting with a few simple rules to ensure a solid coherence runs through all work. This will allow for a certain amount of room for these assets to evolve, with the aim that the brand can adapt rather than be overhauled as the company goes through its growing pains.
Our Values
Everything in a brand should stem from its values, because that’s where the vision, product and company itself ultimately all come from. The values are often the thing about a company everyone thinks they know, but when the question is posed you never get two answers that are the same. It isn’t necessarily that people aren’t aligned, but that there are so many different ways to describe something so subjective. People can also end up listing an endless amount of values, so it’s worth prioritising or distilling a company’s values down to a core set that really captures the soul.
At Akord we’ve captured our values in the following three points.
Commitment & Caring
We're committed to our mission, the fundamental right to privacy, and to building a company that adds real value to the world with an exemplary culture. We're committed to operating as carbon negative and creating an equitable company, caring for our working environment as well as the world outside. We are a small committed team that cares about what we do.
Quality & Craft
We aim to produce a quality product and constantly sweat the details. We adopt a learning attitude, continuously looking to improve. We believe in the power of new technologies and design — the tools of our craft.
Conviction & Confidence
We have strong beliefs. And while those beliefs mean sometimes doing things the hard way, it’s also where we derive our conviction, knowing that our mission and our product will add value to people’s lives. We want people to feel confident in owning their data. We want our customers to have confidence in us.
The Brand Concept: Lock & Flow
The Lock & Flow concept represented in the Akord graphics.
As well as the company values, the visual elements of the brand are also focused around a concept I named, Lock & Flow. I like paradoxical ideas, and these two seemingly opposed forces, locking and flowing, nicely capture the core of what we are trying to achieve with Akord.
We want to secure data, keep that sensitive information safely locked down, and at the same time we believe that work, or processes in general, can only truly flow when you have confidence that you're in a secure context. When you don't trust your tools, you never feel at ease, second-guessing what's right and resorting to convoluted methods to cover your bases.
The main graphics used throughout the marketing site and at points in the product are intended to capture the Lock & Flow idea, as well as a modern feel overall that’s relevant to the technology we’re working with.
We use flowing organic shapes that can morph from solid gradient colour to fine lines, as a visual representation of dynamic flow. Occasionally overlaid and for separate use, we have crosses, squares and triangles that evoke a more machine like process of locking in (as well as tracking and mapping), which relates to our encryption and internal blockchain.
The Logo
The Akord logo and breakout branding elements.
The padlock iconography has multiple meanings. As a standard representation of encryption, it's a clear, easily understood and constant reminder that wherever you have Akord you have encryption, end to end. We also want you to feel secure and confident whenever you interact with our product or the company in general. That little padlock on a browser tab is your marker, telling you, 'OK, here you're safe'.
The padlock has a small stylistic twist, mirroring the triangle from inside the capital ‘A’ as the keyhole in the padlock. This upwards pointing mark is another reminder to keep pushing onwards and upwards with everything we do. When we use just the A and padlock together, it becomes like an asterisk or star, that brings to mind an A* grade — symbolic of the standard we hold ourselves to.
The padlock is relatively small, a mark at the end of Akord, as a reference to the fact our encryption, while ever present, should never get in the way of the other product features.
The Colours
Akord brand colour palettes.
Our brand leads with a dark-mode aesthetic, which lends itself to a contemporary feeling of security and a technologically-focused product. We offset that dark mode with a bright red-orange, or ‘redange’, as our primary brand colour.
The secondary palette is a set of vibrant colours carefully chosen for a dual purpose. First, we assign different colours to members of a data room, and these colours are used for elements such as the speech bubbles used for when people post messages. Second, the colours capture the creativity and collaboration element of Akord, as well as enabling us to make the product more visually striking.
We also have 9 greys that range from almost black to almost white. Having a wide range of greys to draw on for product design is incredibly useful. They allow you to create subtle effects of depth and get contrast right when choosing borders and backgrounds, as well as providing another dimension to create type hierarchy. I never use opacity to create greys as you will encounter issues when elements are overlaid on colours other than white. I also like to give grey colours a slight tint of the brand colour, so they’re not pure monochrome, giving them a warmer more cohesive feel with the whole product.
The Typography
Type hierarchy and specimen examples for Akord’s typography.
For the product and marketing website I set basic type hierarchies covering just what I need. I normally get some way into the designs before cycling back to set this hierarchy. This way I have a clear idea of what I need, then I create some framework for consistency. Setting a type hierarchy before you have an idea of what the needs are has always seemed to me like an illusion of efficiency and an unnecessary hindrance to getting going.
I use Larsseit as the main typeface for the brand. It’s a sans-serif but it has enough quirks with the double-storey lowercase ‘g’, ‘a’ and the compact ‘s’, for example, to not have a completely cold and machine-like feel that many popular modern geometric sans-serifs have. I wanted Akord to feel current but personable at the same time.
Larsseit is supported with Everett Mono, a contemporary and versatile grotesque mono-spaced font. It’s only available to purchase by contacting the designer, so it’s nice to know that this element of the brand is not going to be ubiquitous. The ‘A’ in the Akord logo is redrawn along the same lines as the Everett font. I’m in discussions with the type designer of Everett to work on a custom drawing of the Akord logotype, and this will most likely be one evolution of the brand in 2021.
The First Evolution
All the core brand assets in action on the homepage of the Akord website.
The nature of a tech startup means we could, and probably will, be in a very different place in a few years from now. In the context of a startup it’s wise to establish a solid foundation for your brand but not to attempt to construct something on the level of corporate guidelines. We need to be nimble in our product development and the brand needs to be able to flex and pivot alongside if necessary.
Hopefully these core elements are strong and yet flexible enough to last longer than a few years. The relative simplicity of the logo is an attempt to hold its relevance through the early years. Likewise the concept for the graphics is sufficiently open whereby we can add and evolve those elements to keep the brand fresh and relevant.
I hope this first expression of our values and product is the first step in a meaningful relationship with our brand for our customers. | https://uxplanet.org/branding-your-own-startup-c10a16e8dfdd | ['Pascal Barry'] | 2020-12-15 20:41:26.318000+00:00 | ['Branding', 'Startup', 'Visual Design', 'Design', 'Design Process'] |
The Importance of Keeping Your Promise | Wellington : Night or the Prussian Must Come
Meeting of Wellington and Blücher, from The Wars of Wellington. The British Library.
Two days prior to the Battle of Waterloo, General Gebhard Leberecht von Blücher, Prince of Wahlstatt and commander of the Prussian army, was on the back foot, retreating after his defeat at the Battle of Ligny. Hot on his heels was Marshal Grouchy, with a third of Napoleon’s army, brimming with confidence from the earlier victory. Grouchy had been ordered by Napoleon to pursue the Prussians “with your swords on their backs”, an order which, history would tell us, Grouchy unfortunately followed stubbornly to the letter. He was eventually led by the Prussian rearguard to a minor battle at Wavre, which he won, but too far away to return and aid Napoleon at Waterloo.
Grouchy had refused to listen to his subordinate General Gérard’s advice to “march to the sounds of the guns”.
With his ally in full retreat, the Duke of Wellington himself was in no better shape. Wellington had been forced to fall back after the Battle of Quatre Bras two days earlier, and his motley crew of the “Seventh Coalition” (the United Kingdom, the Netherlands, Prussia, Hanover, Nassau, and Brunswick) was about to be driven into the sea!
Wellington was also outgunned, and his army, in his own words, “was an infamous army, very weak and ill-equipped, and a very inexperienced Staff”. Defeat was imminent. In fact, Napoleon was so confident of victory that he had the time and peace of mind to dine on silverware at breakfast on the fateful Sunday of June 18, 1815 at Le Caillou, Waterloo.
History would tell us that Waterloo was Napoleon’s final attempt at glory: to restore his emperorship and the French Empire. Wellington and the rest of the continent must yield to France and France only. When the dust settled, 41,000 Frenchmen, two-thirds of Napoleon’s finest, including the undefeated Imperial Guard, lay dead, wounded, captured or missing.
The Battle of Waterloo remains the most studied battle in history among military strategists and all who seek the answer as to how Napoleon’s Armée du Nord, comprising the finest and most professional soldiers of their time, was soundly defeated.
What was the key that unraveled the brilliance and the might of Napoleon’s army?
Blücher kept his promise
“Night or the Prussians must come,” Wellington at Waterloo
A promise kept made the difference. Blücher had made a promise to Wellington that he would come to his aid, not with one or two corps but with his entire army, even though he was in full retreat. Whilst Napoleon had grossly underestimated the conviction of the Prussian, Wellington put his overwhelming faith in the promise of a friend and an old warrior. Wellington would proceed to retreat to a ridge at Mont St. Jean, two and a half miles south of the village of Waterloo, and face the might of Napoleon’s army, betting all on a promise made.
“It was apparent to Wellington, as he was dispatched back to London after the war, that the Prussians were the deciding factor that won the epic battle in Waterloo. When Prussians came to his aid at the 11th hour, most of Wellington’s lieutenants were either killed or wounded. And he was trapped in an infantry square, a defensive position. Wellington’s center was about to be smashed, and with no reserves at hand, defeat was imminent if not for Blücher, keeping his word in coming to Wellington’s aid. After the battle of Ligny the Prussians had crucially retreated northwards, parallel to Wellington’s line of march allowing them to continue to support Wellington throughout, instead of going east and back toward their supply line.” (Yong & Lee, 2019)
The rest, from that point, was history. Many know Waterloo as Napoleon’s final defeat, but few know about Blücher’s promise to Wellington and how, based on that single conviction, history was written.
In today’s culture of broken promises and broken trust, we need to reach back to the past and cultivate the conviction that our word is our honor. The business world today needs to be built on trust, and the popular concept of authentic leadership is, in its essence, about keeping one’s promise.
References
Yong and Lee. 2019. Department of Startup: Why Every Fortune 500 Should Have One. Business Expert Press. New York | https://medium.com/history-of-yesterday/wellington-night-or-the-prussian-must-come-4c80ad18741d | ['Ivan Yong Wei Kit'] | 2020-09-10 08:34:16.833000+00:00 | ['Human Resources', 'History', 'Leadership', 'Trust', 'Startup'] |
How I Managed to Save A Stalled Story | Photo by Tim Gouw on Unsplash
My SF novel Hunter began life as 15 ‘seat of the pants’ chapters. I created a complete outline and knew my characters. I knew what the story was, where it was supposed to go and exactly how it ended.
And then I got to chapter 16 and everything fell apart. I tried for several weeks to restart the story but each attempt failed. I tried the trusty advice of letting the manuscript sit for more than a month but that didn’t help either.
Then I made a horrifying realization — I’d stalled because I had no idea what was supposed to happen next.
That was 10 years ago.
I have Scrivener folders full of projects like that…
Recently I looked over Hunter with fresh eyes. I recognized that, despite the flaws, I have a start on a salable book.
The premise — alien invasion — is old hat. I added my own spin in an attempt to take the story in an original direction.
I believe the story is salvageable. It’ll take time and effort to fix but so what? That’s what writing costs.
Blowing 5 years of dust off of the manuscript was enlightening. At first I didn’t realize how long I’d let this story sit. But when I began my re-read I realized that it’s easy to allow years to slip by with an unfinished project. It was as disheartening as it was eye-opening.
I decided that I was going to apply some of the new stuff I’ve learned from my writing coaches to see if I could save this one.
Sitting down with a handful of 3x5 cards and my favorite mind mapping program, I turned myself loose on the story.
The premise is simple and solid. After aliens invade earth a rag tag team of military types investigate the ruins of Seattle to see what the aliens are up to.
The humans are hilariously outgunned and desperate for answers. The aliens have devastated the world in a few short days but have never once communicated or even been seen. They just showed up and started wrecking shit.
As I reread through the story the problems became obvious.
Not only had I written myself into a corner, the story was boring. There were four main problems:
1. At 15 chapters in, nothing significant had happened. Sure some minor incidents occurred, but it wasn’t enough to pull the reader in and keep them reading.
2. The story wasn’t alive in my mind. I was writing it but it wasn’t moving me. If the story doesn’t have a pulse, it will show in print.
3. I’d picked the wrong protagonist. The story contained two protagonists, a powerful antagonist and a bunch of individuals all trying to do their own thing. There was no real ‘hero’ to root for or care about.
4. In my desire to write a ‘blockbuster’ I was doing too much. I tried to throw in everything including the kitchen sink right off the bat.
The story was muddy and had no direction.
A story like Hunter should be linear and simple:
Basically — Something horrible has happened. People have no choice but to respond. They discover things are much worse than they thought. Now they have to make dangerous choices without knowing if things will get better.
Linear yes, BUT written properly, not boring at all.
I don’t know how it works for you, but my stories appear on a giant movie screen in my head. All I do is write what I’m seeing.
Hunter stalled when I made the story more difficult than necessary. The screen image in my head had more blank or missing moments than visible moments.
I was trying to do too much.
There’s a natural ebb and flow to the process. The story almost never pops out whole. Some of the dark screen moments occur because that part of the ‘movie’ hasn’t been ‘shot’ yet.
Nothing to show = blank image.
That’s normal because it takes time to develop what you’re trying to say.
What happened here was different. The blank moments existed because I wasn’t connecting with the story. It was too complex and my brain backed away from it.
If you’re having trouble with ‘imaging’ your story and getting it onto paper, I’d like to suggest slowing things down and making it less complex.
I use the analogy of the human body to help me think through the process. A body starts with the skeleton. Everything else, from placement of the organs to the operation of muscles and tendons is dependent on the skeleton to give it form and foundation. No skeleton = no body - or at least a form unrecognizable as human.
The basic through-line of your story is your skeleton.
The moment to moment of your story occurs within the skeleton framework. Without the framework you end up with what I had, a hot mess of people running around without a clue.
Create the skeleton in the simplest possible terms. You don’t need to know every single bone in the body to create your skeleton. Even if you know it’s called a femur (example — your character works at a job for some multinational company selling widgets to polar bears) call it a bone during your draft (your character has a job).
Later (after you get the entire story out) you can go back and explain that the bone is actually a femur, two inches around and 8 inches long with tendons and ligaments and marrow — get the idea? This is how I keep myself from bogging down on the details.
I overthink things. It’s how my brain is wired and I have to guard against getting carried away with adding too many details at the outset. Once that happens I vapor lock and stall.
It doesn’t matter what the first draft looks like, they’re usually hot garbage at first anyway. If you have to sit there and type ‘Wayne walked down the freaking driveway like he’d done thousands of times blah blah blah’ (And I MEAN adding the blah blah blah) just to get the words out of your head then please don’t hesitate to do so.
Don’t worry about the color of the driveway or what kind of car is sitting there or if its raining or if Wayne’s hair is on fire. At least not initially.
Starting a story from scratch or saving one that is worth saving is really that simple. Leave in all of the um’s and ahs and sections that aren’t even from this story until AFTER you get the rough draft out.
Don’t even let yourself stop because what you wrote doesn’t make sense. Worry about that later.
Every time I sit down to write I’m reminded of a line from a western I saw years ago. Robert Culp and Raquel Welch in Hannie Caulder.
After Raquel’s character is attacked and her husband murdered, she hires Robert Culp to teach her how to shoot so she can get revenge. She’s VERY motivated and wants to do everything fast.
As a way to slow her down and keep her on track he utters a line that has since become part of my soul — ‘First comes right, then comes fast.’
To me that line symbolized getting the story out of me and on paper (First comes right) and later I can fill in the blanks, add the blue sky and make it awesome (Then comes fast).
I hope that helps anyone who may be stuck somewhere in a story or considering tossing one to the side. Not every story is worth the effort, but if you feel that you have a shot then why not?
Good luck!
| https://medium.com/the-book-mechanic/how-i-managed-to-save-a-stalled-story-45a5d3d261e6 | ['Ronn Hanley'] | 2019-08-21 13:47:07.230000+00:00 | ['Rescue Story', 'Storytelling', 'Writing', 'Stalled Story', 'Fix Story'] |
Protecting Peru’s natural heritage — permanently | In Peru, we’re proud of our cultural heritage and world-famous cuisine. But alongside Machu Picchu and ceviche, nature is right up there as part of our national identity. From the Pacific coast to the Andes to the Amazon, we’re blessed with an amazing natural heritage, much of which remains unspoilt.
After Brazil, Peru holds the second largest area of the Amazon rainforest. We rank number one in the world for our variety of butterflies and freshwater fish, second for birds, fourth for amphibians and fifth for mammals and reptiles. The whole planet benefits from the vast amount of carbon stored within the Peruvian Amazon, and its ecosystems help guard against both droughts and floods, which appear to be intensifying in Peru as result of climate change.
But we need to look after these natural treasures. As the economy expands, our nature is coming under increasing pressure. We’re losing more than 150,000 hectares of forest every year to agriculture, and (often illegal) gold mining and logging, and the roads built to reach these developments. Deforestation also accounts for half of Peru’s greenhouse gas emissions.
One important way of conserving nature and benefiting people who depend upon them is through protected areas like national parks. For every $1 invested in the effective management of natural resources in protected areas, we get $100 worth of value in benefits for people — in the form of clean water, food, natural medicines and more. Peru has a good network of 76 protected areas. Half of these are in the Amazon. But for protected areas to really be effective, they need to be well managed and properly funded over the long term.
While there’s been increased government support over the last decade, there remains a large gap between current spending levels and the funding actually needed for an effective protected area network — from the costs of staffing and equipment, to infrastructure, wildlife monitoring and engaging local communities.
© Hugh M. Smith
But a solution is in sight in the shape of Patrimonio Natural del Peru (PdP), known in English as “National Parks: Peru’s Natural Legacy.” This innovative public-private partnership led by the Peruvian government aims to provide funding to ensure the long-term stability of the country’s protected areas, using a mechanism called Project Finance for Permanence. WWF is one of several partners involved.
It works like this. Together, the partners agree on a long-term vision and plan for managing the whole protected area network, and work out how much it will all cost. Donors from the public and private sector, including international development agencies, foundations and businesses, contribute funds to cover the shortfall, on the condition that after an agreed period of time, the country’s government will take on all the costs itself. It’s an approach that’s being used successfully in a growing number of countries — WWF has previously helped set up similar programmes in Brazil and Bhutan.
This month, the PdP project celebrated a major milestone — a commitment of US$140 million in funding from international donors and the Peruvian government, to strengthen and expand Peru’s protected areas network, starting in the Amazon.
This is a huge cause for celebration. It means we can pursue a unified, integrated vision for Peru’s protected areas for the long term with agreed plans and priorities. Rather than funding being insufficient and piecemeal, guaranteed revenue streams will be available where they are needed. This will allow us to build a stable structure and a strong system for conserving Peru’s natural heritage.
Better management of our protected areas will also bring greater benefits for people living in or near them. One of the PdP conditions is that all protected areas should have a management committee that represents local communities and other stakeholders. A long-term aim is to create more opportunities for local people to benefit from the sustainable use of natural resources, including nature tourism.
© Jeffrey Dávila / WWF Peru
It’s my hope that PdP will lead to a greater appreciation of the importance of our protected areas. I would love to see more Peruvians as well as international travellers visiting protected areas in the Amazon. And for a country that cares passionately about food, we should remember how many of our fish, fruits, nuts and wild crops originate from protected areas — we need them to continue to produce the flavours we love and that are unique worldwide.
And this is a milestone not just for Peru but for the whole Amazon. PdP follows in the footsteps of the ARPA for Life programme, which secured US$215 million for 60 million hectares of protected areas across the Brazilian Amazon. And WWF is currently partnering in a similar effort in Colombia, where the next largest area of Amazon rainforest is found.
Combined, ARPA, PdP and Heritage Colombia will ensure permanent protection of around 12 per cent of the whole Amazon biome. That’s something we can all be proud of. | https://medium.com/wwftogetherpossible/protecting-perus-natural-heritage-permanently-42eea7fc8fc8 | [] | 2019-06-03 02:26:23.666000+00:00 | ['Nature', 'Environment', 'Amazon', 'WWF', 'Forest'] |
6 Tips for Developers to Handle Imposter Syndrome | 6 Tips for Developers to Handle Imposter Syndrome
The things that worked well for me
“Every one of my successes is no big deal and due to luck.”
“I feel like a fake because I still don’t know [xxx].”
“Every failure is due to my lack of expertise and I should give up.”
“I’m lacking experience in that topic, I better keep my mouth shut.”
Hi! You are not alone… I went through that too, and a lot of developers suffer from imposter syndrome as well! I will be honest with you: it took me more than a year to truly own the job title of developer.
During my first year of employment as a developer, I’d never felt more like a fraud. Even though I had my share of knowledge, of course, every developer around me seemed way more talented than me.
Which is a problem when you constantly try to compare yourself to others. Everyone seems better than you. Out of respect for my coworkers, my feeling was that I did not deserve that title yet.
How many days did I go home feeling like a fraud? A lot.
Was it justified, at least once? Nope.
Remembering this today really seems absurd. What was I thinking? Just because I don’t have the same expertise as they do, does it make me a fraud?
Photo by Brooke Cagle on Unsplash
Today, I feel way better about my knowledge. I’m fine with my current expertise, my learning curve, and don’t punish myself when I don’t know something.
Here are some tips that helped me to overcome the imposter syndrome and I hope you find them interesting and helpful! | https://medium.com/better-programming/6-tips-for-developers-to-handle-imposter-syndrome-7473ea7924f6 | ['Thomas Guibert'] | 2020-01-28 19:42:52.129000+00:00 | ['Programming', 'Technology', 'Imposter Syndrome', 'Startup', 'Software Engineering'] |
The brain is not lazy it’s a survival mechanism. | Brandt and Berge both explain a lot of human (marketing related) behavior with the core message: “The brain is lazy”. Although the brains ability to ignore information is an important feature, it doesn’t explain all marketing related scenarios, and it doesn’t explain WHY it is lazy.
If the brain’s end state is laziness, then why does it stress itself to find a reason for a purchase when the purchase was impulsive and emotional (not rational)?
“In the choice between changing one’s mind and proving there’s no need to do so, most people get busy on the proof”.
- John Kenneth Galbraith, http://www.gresham.ac.uk/event.asp?EventId=512&PageId=108
If the brain is lazy, then why is it so adept at detecting minute changes in everything from graphic detail, to objects in motion in the distance, to unexpected outcomes of events?
“The human brain is a difference detector”
- Stephen M. Kosslyn, http://io9.com/357063/how-cognitive-science-can-improve-your-powerpoint-presentations
Looking at a lot of references we can see that the brain doesn’t relax in order to not get “overheated”, it relaxes in order to conserve energy in case something unexpected happens. The brain does everything it can to stay in control of its surroundings (so that it can stay alive longer).
If one starts looking at the behaviors of the brain from the perspective of its inbuilt ability to survive, it all makes a lot more sense:
Our ability to conserve and focus energy in order to stay alive:
Gladwell writes in his book Blink about our ability to “shut off” some senses in extreme situations in order to concentrate all our energy on the important ones. His example is of police officers remembering seeing the trajectory of the bullet they fired, but not remembering any sound. Chip and Dan Heath write in their book Made to Stick about how humans react when they are surprised: eyebrows lift in order to open the eyes to the maximum for larger vision, and at the same time the jaw drops and the mouth opens so that no processing is needed to maintain that function. The dog often does everything it can to avoid trouble (often misunderstood by humans because of differences in sign systems). In order to avoid a fight it barks to say “stay off my territory” and it goes to great lengths to show submission.
At the same time we can see that the brain does everything to remain in a constant (and safe) status quo:
We naturally reject change as it forces us to use energy in order to redefine our present stance. If we are being bullied we will invent reasons to make the bully’s comments untrue. The same thing does not happen if it is a friend being bullied, which is why we feel worse when being a spectator to bullying. (Gilbert) We synthesize happiness in order to be happy. People who lost a leg rate themselves as just as happy as lottery winners a year later. (Gilbert) On the other hand we look up to risk takers and embrace novelty, the reason being Darwinism. In order to survive we know that it is the species most adaptable to change that will survive. Therefore, even though most people are focused on just remaining in the status quo, we admire and adore risk takers, as they are the ones who will take the species forward.
On control and discovery in order to control the surroundings to stay alive:
According to Gilbert the brain wants to exert control, not for anything other than control itself. People who are given control are better off; people who lose control are worse off than if they had never had control at all. Instinctively we want to figure out how things work, in order to predict their outcomes (Godin). Children can watch the same TV programs over and over again — because they want to be able to accurately predict what is going to happen. Adults are the same: if we are exposed to something new, we try to figure it out in order to be able to anticipate its outcome. We bet more money on the outcome when we throw the dice ourselves and before the dice have been thrown. The only people who don’t are the manically depressed. (Gilbert)
The brain uses a lot of energy when something disrupts the existing “real”.
In order to use as little energy as possible, the brain creates “schemas”. These are predefined interpretations of stimuli from the senses and are stored in Wernicke’s area. As soon as new information from the senses reaches this part of the brain it is recognized and puzzled together to create a response. Most processing in the brain is done by this almost subconscious “puzzling”. (Eisenberg) But every time the puzzling is disrupted by unexpected pieces, the brain turns on its attention in order to understand, and to create a new predictable schema.
My statement is that although the brain often acts lazy, the reason for this is that it conserves energy when it can in order to have reserves available when something new or unexpected happens — so that it can invest in figuring it out, predicting it and controlling it — which it does to a great extent.
So the brain is not lazy, it is a very selective energetic survival mechanism… | https://medium.com/137-jokull/the-brain-is-not-lazy-its-a-survival-mechanism-d361a89ce890 | ['Helge Tennø'] | 2017-01-21 05:07:47.283000+00:00 | ['Psychology', 'Marketing', 'Research'] |
Can Social Media Help Us Overcome Our Social Anxiety and Become Better Marketers? | The world has become a noisy place
Maybe it is easier now to make a name for yourself as a writer. But there are so many of us that it’s hard to find an audience in the middle of all the creative souls scattering their words to the four winds.
This all seems so awesome in theory.
The shattering of the glass ceiling, the downfall of the middlemen, the easy access to free and powerful means of communication.
There are no obstacles anymore… except for the ones we’ve built for ourselves in this new world.
In the land of opportunities, attention has become the world’s most coveted currency. Clickbait, fake news and “get rich quick” courses keep popping up all over our social media feeds like mushrooms after a heavy storm.
But I have to wonder if we didn’t end up where we started?
Aren’t the middlemen algorithmic entities who scavenge our feeds looking for valuable content? Aren’t our glass ceilings our social media stats? Aren’t our powerful and free channels crowded with noise?
And what does this mean for introverts like us?
Marketing for introverts — it’s not what you think
Some days I feel I’m 10 years old and back at school.
I feel the same paralysis I once felt when I looked around me and saw my classmates chatting happily and expressing their views freely in front of the entire class.
As for me, it terrified me even to think the teacher would remember I was there.
So my mouth remained shut. And my shoulders bent. But my eyes and ears were wide open capturing every single excruciating detail so I could write all about it when I got home.
Most of my teachers thought there was something wrong with me. Most of them would take pleasure in mocking me and my red cheeks in front of the entire class.
For a long time, I thought there was something wrong with me too. I wanted to be like everybody else… but I couldn’t.
I always stood out like an awkward moment of silence at a dinner party.
I knew I was different, but I would only find out about introversion a decade later. I first heard about it from the lips of Susan Cain, an introvert like me. And at that point, I’d completely forgotten which had started first: my introversion or my social anxiety.
Did I become anxious in social settings because my colleagues and teachers mocked me for being an introvert? Or was my social anxiety there all along?
I can’t remember at all.
Social media was supposed to make things easier. But it hasn’t. Not in the slightest.
Most days I feel I’ve been stuck in a giant classroom with thousands of people screaming at each other at the top of their lungs and ignoring everyone else around them.
To be honest, I feel the same fear and insecurity I once felt. I feel ignored and heartbroken. I feel just as powerless as before.
But there’s a quiet and solid strength to introverts that hardly anybody cares to recognize.
We’re good at creating comfortable and intimate environments around us.
Sure it might take us a while to adapt to new people and new circumstances. But once we start building bridges towards others, we’ll always make sure to keep our foundations strong.
We work at a different speed and start our journeys from unusual places.
But we know where we stand and we know what we want because… while others were screaming for attention, we were searching for our dreams and strengths deep within our souls.
How we can use introversion to become good marketers
Does this mean we’ll be bad at social media? Probably. Or at least we’ll suck at social media for a while.
But don’t let this discourage you. We’ll be bad because we need to learn how to express ourselves in this new medium in a way that’s lined up with our vision and our strengths.
We can’t do social media the same way everybody else is doing. The same way we can’t scream for attention in a crowded room.
But this doesn’t mean we can’t achieve the same or better results as natural extroverts. In the end, it’s all about the conversion and the engagement rates, not the number of fans, likes or comments.
To understand how our particular personality can help us become better marketers, we need to understand what marketing is all about.
Real marketing
Real marketing is about people. And it’s about change.
Seth Godin said it himself. Marketing is about foresting change. And change starts by creating a relationship with others. And a relationship starts by listening to what others have to say.
Good marketing is about going the extra mile for our clients and collaborators.
It’s not about standing out in a crowded and noisy room. It’s about knowing how to find the little and quiet rooms and building something valuable in there and making sure it is found by the right people.
Real marketing starts in one-on-one conversations.
Photo by Joshua Ness on Unsplash
As introverts, we may hate crowds, but we can understand others just as well as we understand ourselves. So don’t try to grab a microphone and scream into a room full of people. Grab a single person and listen to what she has to say.
Big changes can start small. They don’t need to go viral in the first hour. They don’t need to have millions of fans.
Social media has created the illusion of effortless success. But there is nothing easy about good marketing.
You can try all those shortcuts splashed all over the internet. You can choose to do what everybody else is doing. Or you can try to be different and try to find ways to go the extra mile.
The extra mile is where you should be. Find what everybody else is doing and do something else, something harder, something unscalable, something that cannot be replicated by others.
I’m still discovering what this means for writers. But as a web marketer I’m starting to understand that the real opportunities are not in short posts on our social media accounts… but in genuine human interaction and collaboration.
Go out there. Get out of your feed and try to look for opportunities in the real world. Literary cafes, workshops, conferences, art exhibits…
Pick up that phone and start talking to people, interviewing them, creating collaborative content.
In the end, you can have a beautiful Instagram feed and hilarious tweets. You may have the numbers and the stats and the social proof.
But if you don’t have the relationships, if you don’t have a network, none of it will matter in the long run. | https://medium.com/the-ascent/can-social-media-help-us-overcome-our-social-anxiety-and-become-better-marketers-2d3d5f9af274 | ['Ana C. Reis'] | 2019-09-23 15:17:37.732000+00:00 | ['Social Media', 'Writing', 'Marketing', 'Social Anxiety', 'Introvert'] |
How much can one make with Airbnb in Bristol, UK: an exploratory analysis | Airbnb has changed the way we travel, with more affordable prices for guests, who can experience real local customs while choosing the option that best suits them. Hosts also benefit from an increase in their revenues.
In July 2019 I visited the city of Bristol, UK. Bristol is the largest economic, cultural and educational center of southwest England. Historically the city’s development was linked to its seaports; more recently the economy has relied more on the creative media, electronics and aerospace industries, and the docks and ports in the city center have become historical and cultural heritage sites (https://pt.wikipedia.org/wiki/Bristol). This trip made me excited to analyze some data about Airbnb in Bristol.
Airbnb makes listings, reviews and calendar data available to download (http://insideairbnb.com/get-the-data.html), which is an interesting source to use in data science toy projects. The data was used to try to answer relevant business questions. The data was cleaned using the pandas library in Python: columns with more than 60% missing data were removed, and categorical data was transformed into dummy variables. Further details can be found at https://github.com/LPontes/Udacity_DSND.
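A minimal sketch of that cleaning step (the file name and exact column set are assumptions; the full notebook is in the repository linked above):

```python
import pandas as pd

# Load the Inside Airbnb listings file (file name is an assumption).
listings = pd.read_csv('listings.csv')

# Drop columns where more than 60% of the values are missing.
missing_share = listings.isnull().mean()
listings = listings.drop(columns=missing_share[missing_share > 0.6].index)

# One-hot encode the remaining categorical columns into dummy variables.
categorical_cols = listings.select_dtypes(include='object').columns
listings = pd.get_dummies(listings, columns=categorical_cols, drop_first=True)
```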
Estimating income per month
Airbnb guests may leave a review after their stay, and these reviews can be used as an indicator of Airbnb activity. A conservative estimate of monthly earnings can be obtained by multiplying the number of monthly reviews, the minimum nights and the price. However, the review rate is estimated to be between 50 and 72% (http://insideairbnb.com/about.html#disclaimers).
Figure 1. Histogram for minimum income per month, estimated with price, number of reviews and minimum nights.
Figure 1 presents the histogram of the estimated income per month for values below £2,000. It shows a right-skewed distribution, which means that there are many more properties with a small income and only a few with a higher income. The mean and maximum income per month are £236.58 and £4,950.00, respectively.
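A sketch of how this estimate can be computed (column names follow the Inside Airbnb schema, which is an assumption for this extract):

```python
import pandas as pd
import matplotlib.pyplot as plt

listings = pd.read_csv('listings.csv')  # file name is an assumption

# Price arrives as a string such as "$85.00"; strip the symbols and convert.
listings['price'] = (listings['price'].astype(str)
                     .str.replace('[$,]', '', regex=True)
                     .astype(float))

# Conservative estimate: reviews per month x minimum nights x price per night.
listings['income_per_month'] = (listings['reviews_per_month'].fillna(0)
                                * listings['minimum_nights']
                                * listings['price'])

# Histogram of the estimates below £2,000, as in Figure 1.
listings.loc[listings['income_per_month'] < 2000, 'income_per_month'].hist(bins=50)
plt.xlabel('Estimated income per month (£)')
plt.ylabel('Number of listings')
plt.show()
```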
Another interesting question is about the seasonality of Airbnb activity, which can be seen in Figure 2.
Figure 2. Number of reviews per month
From Figure 2 it is possible to observe that between December and February the number of reviews is lower than in the other months; since reviews are a good indicator of Airbnb activity, this pattern is probably related to the winter season.
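The counts behind Figure 2 can be reproduced from the reviews file, roughly like this (again assuming the standard Inside Airbnb column names):

```python
import pandas as pd
import matplotlib.pyplot as plt

reviews = pd.read_csv('reviews.csv', parse_dates=['date'])

# Number of reviews per calendar month, used as a proxy for Airbnb activity.
reviews_per_month = reviews['date'].dt.month.value_counts().sort_index()
reviews_per_month.plot(kind='bar')
plt.xlabel('Month')
plt.ylabel('Number of reviews')
plt.show()
```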
Which neighborhoods have the highest income?
In Figure 3 we can observe the estimated income per month for each neighborhood.
Figure 3. The income per month for each neighborhood in Bristol, UK.
The neighborhood with the highest income is Hotwells and Harbourside, which has a relatively high price (£92.7) and a review rate of 2 per month. On the other hand, Hartcliffe & Withywood has a much lower income.
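Continuing from the earlier sketch, the per-neighbourhood figures can be obtained with a simple group-by (the neighbourhood column name is an assumption):

```python
import matplotlib.pyplot as plt

# 'listings' and its 'income_per_month' column come from the earlier sketch.
income_by_area = (listings.groupby('neighbourhood_cleansed')['income_per_month']
                  .mean()
                  .sort_values())

income_by_area.plot(kind='barh', figsize=(8, 10))
plt.xlabel('Mean estimated income per month (£)')
plt.show()
```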
Figure 4, generated with the folium package in Python, shows the heatmap of income per month, where red colors indicate higher income while blue means lower income.
Figure 4. Heatmap of income per month in Bristol, UK.
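A sketch of how such a map can be built with folium (the latitude/longitude column names follow the Inside Airbnb schema; the coordinates are central Bristol):

```python
import folium
from folium.plugins import HeatMap

# 'listings' with the computed 'income_per_month' comes from the earlier sketch.
m = folium.Map(location=[51.4545, -2.5879], zoom_start=12)  # central Bristol

heat_data = (listings[['latitude', 'longitude', 'income_per_month']]
             .dropna()
             .values.tolist())
HeatMap(heat_data, radius=12).add_to(m)
m.save('bristol_income_heatmap.html')
```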
Which features indicate greater income?
Besides neighborhood, which other features can be related to income? This is an important question for identifying what determines the income amount, and for supporting feature selection for machine learning models.
Initially I had considered using the Spearman correlation score; however, some of the dataset variables are categorical. Therefore, bar graphs will be used for categorical data and correlation indices for numerical variables.
As categorical variables we will consider the rent and property type, and whether the host is a superhost.
Airbnb offers two main rental types: the entire home and the private room. This is the main characteristic that determines the income per month, i.e., the entire home type provides a higher income (Figure 5).
Figure 5. The mean income per month for each room type.
Another relevant categorical feature is the property type (Figure 6).
Figure 6. The income per month by property type.
Super hosts seem to have a higher income than non super hosts (Figure 7).
Figure 7. The income per month for superhosts and non superhosts.
Amenities are another interesting feature to analyze; however, they will be addressed in a future post where I will use machine learning models that enable feature selection, such as random forest.
For numerical variables, a log transformation was used to normalize the data and then the Spearman correlation score was calculated (Figure 8).
Figure 8. Spearman correlation for the normalized numerical features and income per month.
It is also possible to visualize the correlation between each pair of numerical features by using a correlation matrix heatmap (Figure 9).
Figure 9. Correlation matrix heatmap for the numerical features.
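A sketch of this last step (the list of numerical columns is an assumption; the notebook in the repository has the actual selection):

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# Assumed numerical columns, including the income estimate computed earlier.
num_cols = ['income_per_month', 'price', 'minimum_nights',
            'reviews_per_month', 'accommodates', 'bedrooms', 'beds']

# Log-transform as described above (log1p handles zero values safely).
log_data = np.log1p(listings[num_cols].dropna())

# Spearman correlation of every feature with the income estimate, plus the matrix.
corr = log_data.corr(method='spearman')
print(corr['income_per_month'].sort_values(ascending=False))

sns.heatmap(corr, annot=True, cmap='coolwarm', center=0)
plt.show()
```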
In this way, it is possible to conclude that the main features related to income per month are the rent type (entire home or private room), the property type, whether the host is a superhost, and the number of beds and bedrooms.
Are you an Airbnb host in Bristol? If so, is this information accurate? How can it help you to improve your income? | https://medium.com/how-much-can-one-make-with-airbnb-in-bristol-uk-an/how-much-can-one-make-with-airbnb-in-bristol-uk-an-exploratory-analysis-7363f2f6452 | ['Lucas Machado Pontes'] | 2019-08-31 15:19:20.926000+00:00 | ['Python', 'Data Science', 'Exploratory Data Analysis', 'Airbnb'] |
[Chapter-1] The Machine Learning Landscape — Part-1🧙🏻‍♂️ | When most people hear “Machine Learning” or “Artificial Intelligence”, they imagine a robot: a deadly Terminator. But Machine Learning and Artificial Intelligence are not futuristic fantasy. By the way, many people do not know that Machine Learning and Artificial Intelligence are not the same thing: Machine Learning is a subset of the Artificial Intelligence domain.
So, let’s start with the term definition…
What is Machine Learning? 🤖
Machine Learning is the science (and, we can say, the art) of programming computers so they can learn from data.
A more general definition:
[Machine Learning is the] field of study that gives computers the ability to learn without being explicitly programmed.
— Arthur Samuel, 1959
And this is a more engineering-oriented one:
A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
— Tom Mitchell, 1997
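To make this definition concrete, here is a minimal, hypothetical sketch (not from the book) of the spam-filter example discussed in the next paragraph, using scikit-learn on an invented toy dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Experience E: a tiny, invented set of labelled emails (1 = spam, 0 = ham).
emails = ["win a free prize now", "cheap pills online",
          "meeting agenda for monday", "lunch tomorrow?"]
labels = [1, 1, 0, 0]

# Task T: flag spam for new emails.
vectorizer = CountVectorizer().fit(emails)
model = MultinomialNB().fit(vectorizer.transform(emails), labels)

# Performance measure P: accuracy, the ratio of correctly classified emails.
# (Measured on the training set here only to keep the sketch short.)
predictions = model.predict(vectorizer.transform(emails))
print(accuracy_score(labels, predictions))
```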
The examples that the system uses to learn are called the training set. Each training example is called a training instance (or sample). In the case of a spam filter, the task T is to flag spam for new emails, the experience E is the training data, and the performance measure P needs to be defined; for example, you can use the ratio of correctly classified emails. This particular performance measure is called accuracy and it is often used in classification tasks. | https://medium.com/analytics-vidhya/chapter-1-the-machine-learning-landscape-part-1-3590ede2d034 | ['Vishvdeep Dasadiya'] | 2020-12-28 16:38:27.810000+00:00 | ['Machine Learning', 'Python', 'Artificial Intelligence', 'Deep Learning', 'Data Science'] |
5 Steps to Becoming the Face of Your Company | Photo by Kazuky Akayashi on Unsplash
Being the “face” of a company is an intimidating role, and as a result, many CEOs and founders shy away from the spotlight. They either consider it a move toward humility or personal protection against possible future embarrassments — either way, it’s a safer play, but it forces their company to exist without a personal brand at the head of the organization.
In today’s world, having a strong personal brand leading your company is radically important — some would even argue essential. Consumers are becoming more and more distrustful of corporate brands and advertising, yet always inherently trust people to a higher degree. Case studies show that just the presence of a human face can increase conversion rates, and personal brands on social media always have an easier time attracting and communicating with followers.
If executed properly, your personal brand can provide a secondary outlet of traffic to your site, increase the influence and authority of your brand, and improve your consumer-brand relationships all at the same time. Still, becoming the “face” of your company isn’t a straightforward or easy process, and it’s going to take a lot of time. When you’re ready, get started with these five steps and move your company toward a more personal future:
1. Feature Yourself on Your Company Website. The first step is also the simplest. All you have to do is create a miniature profile for yourself on your company website. Include your name and a brief biography on your About or Team page, and include links out to your social media profiles (Twitter and LinkedIn are musts, as is a professional Facebook page for your personality — don’t just accept friend requests on your personal page). You can (and should) also set yourself up as an author on your company blog if you aren’t there already, and be sure to include your headshot. People prefer seeing faces to bland descriptions.
2. Publish Content — As Yourself. Next, start publishing content regularly, under your own name — not your brand’s. Stick to industry topics, and always write in the first-person perspective. Reveal bits of your personality throughout your writing process so people get to know you through your material. Aim for at least one new post every week, either on your company’s blog page or on a separate, personal blog that you’ve set up. A separate personal blog will be more effective in the long run, serving as an additional opportunity for conversions, but if you’re not ready for that step, stick with what you have.
3. Get Active on Social Media — as Yourself. Next, start posting more frequently on social media. Be sure to syndicate all your new posts, and work your old posts into a recurring rotation. Respond to anyone who reaches out to you, and thank people when they like or share your posts. You should also get involved in your industry by jumping into existing conversations and engaging with other influencers. The more you post on social media, the more visibility you’ll earn — again, just stay within the industry you want to be known for.
4. Network Frequently and Take Advantage of PR Opportunities. Whether in-person at professional networking events or online through LinkedIn Groups and one-off webinars, work to increase your network of contacts. Attract more followers to your personal social media profiles, and keep a running list of contacts handy for when you want to market a webinar or seminar or your own. Take advantage of any PR opportunities you can find, too — submit press releases, speak at events, or host free workshops to people in your area. It will attract a great deal of attention to your personal profiles.
5. Casually Layer in References to Your Company. When you start building a reputation, start making more references to your company. List it on all your social profiles for sure, and mention your company whenever you network or attend speaking events. You can even use your company as a major point of reference for examples and case studies as you start writing more content. Your goal here isn’t to advertise your company, but to make the association clear — remember, people trust you, and they’ll naturally trust whatever you’re associated with as long as you don’t try to jam it down their throats.
Building and managing a personal brand is an ongoing process that demands continued upkeep and dedication. The more time and effort you invest in this strategy, the more it’s going to pay off for you in the long run. As long as you maintain the quality of your content, respond appropriately to your followers, and keep reaching out to new people, you should be able to accumulate tens of thousands of followers and earn a line of new, relevant traffic to your corporate site. It comes with a bit of extra pressure, and one new thing to manage, but you’ll likely find that it’s well worth the effort. | https://jaysondemers.medium.com/5-steps-to-becoming-the-face-of-your-company-e1ac59973e32 | ['Jayson Demers'] | 2020-11-09 23:41:14.348000+00:00 | ['Branding', 'Personal Branding', 'Brand Strategy', 'Entrepreneurship', 'Startup'] |
Observing | Sign up for American Haiku Steamship To Writing History
By American Haiku
Writing takes practice. American Haiku is a great way to put your words from your fingers to your piece of paper. Don't quit, you can do it. Take a look | https://medium.com/american-haiku/observing-c285952ff1bf | ['Sean Zhai'] | 2020-12-19 14:44:30.804000+00:00 | ['Growth', 'Poetry', 'Mental Health', 'Psychology', 'Programming'] |
What is Product-Market Fit and Why Achieving It is Essential For Your Startup | Product-Market Fit and Startups
Image by Syda Productions
Failing to achieve a product-market fit is a major reason why 9 out of 10 start-ups fail. For a startup to be successful, it is essential to embrace the value of this least understood concept.
When customers are willing to stand in line for hours outside a store to buy your product, or when your product sells out on online stores, i.e., when you as a brand are unable to match customer expectations (demand) with the supply of your product, you have achieved product-market fit.
Achieving product-market fit starts with understanding your customers, their needs, how they feel about your product, and whether they believe that hiring your product would help them with their job at hand (as I mentioned in my article on Jobs To Be Done Framework: https://medium.com/swlh/incorporate-jobs-to-be-done-framework-with-buyer-personas-up-your-marketing-game-f5b0414bd878 ).
Designing a product that no one needs is a surefire recipe for failure. That is why it is very important to divide your time equally between product development, customer research, collecting feedback and identifying key traction channels.
To get to a product-market fit it is essential to start with a ‘Minimum Viable Product’ as explained by Eric Ries in his book “The Lean Startup”. A minimum viable product is the very first draft of what you actually want to put out in the market. Testing this product with your customers, asking them for feedback and collecting all the meaningful data on how this product could be improved is the best practice that would lead you to create products which your customers actually want.
Achieving this product-market fit, i.e., designing products that blow your customers’ minds, is not a task limited to your business’s design or product development team. This task requires the product development team, the design team and the marketing team to join hands and work on it collaboratively.
Another important thing to remember is that achieving a product-market fit does not mean that you cannot lose it. This changes as the market changes and therefore, constant upgrades must be made to your product in order to be well received by the market. In order to be able to make these upgrades, constant testing, iterations and analyses of both the product and customers’ needs is of high importance. In other, words it is a constant loop of gathering information and data and then validating your product based on this information via constant iterations and testing.
Image by Bruce Mars
Let us take a look at how one of the most loved apps was born and how it achieved product-market fit. Daniel Ek and Martin Lorentzon were two close friends from Sweden who were frustrated with the limitations of finding and listening to music using a computer. The year was 2006, and the music industry was in a crucial state of flux with two extremes in the market. At one extreme was Sean Parker’s Napster, which was highly popular but was suffering from tens of thousands of copyright infringements. At the other extreme was Apple’s iTunes, which charged users as much as $2 per track. Ek and Lorentzon identified and carved out a niche between these two extremes and, as a result, Spotify was born. Ek and Lorentzon believed in testing their product and started out by developing a peer-to-peer music sharing service like Napster. However, from their constant testing and feedback loop, they identified that piracy’s limitations existed beyond illegality (infringements): it took several minutes to download a single song, the audio quality of pirated tracks varied wildly, and even popular torrents were infested with viruses and malware.
What made Spotify so brilliant was that it, at its core, improved on the Napster experience in every way. Spotify delivered music instantly, with high-quality audio, no downloads, and completely legally. Ek and Lorentzon invested heavily in engineering to nail down every aspect of Spotify’s user experience.
“We spent an insane amount of time focusing on latency when no one cared because we were hell bent on making it feel like you had all the world’s music on your hard drive. Obsessing over small details can sometimes make all the difference. That’s what I believe is the biggest misunderstanding about the minimum viable product concept. That is the V in the MVP.” — Daniel Ek
Ek was obsessed with making the Spotify user experience so good that users would happily pay $10 per month and would not turn to the free music that existed out there. What Ek and Lorentzon created with continuous testing and an insane focus on the MVP and user experience is what we know as Spotify: the most loved and used music app worldwide.
Image by Omid Armin
Being open to feedback, asking customers questions and using data to create meaningful insights are key for your startup to get to product-market fit.
Change is possible, but for that you as a startup need to be open, flexible and willing to iterate and reiterate.
To conclude, let us view product-market fit in a very simple scenario. Imagine a kid trying to sail her paper boat in a river. The product-market fit could be seen as the boat actually sailing with the flowing water. The paper boat here would be the product and the water current would be the market. The boat sails if it can withstand the water current (market). If it does, the kid has reached product-market fit for her boat. If not, the boat sinks! | https://medium.com/cornertechandmarketing/what-is-product-market-fit-and-why-achieving-it-is-essential-for-your-startup-bd7cbe239359 | ['Madhur Dixit'] | 2020-06-16 00:29:07.679000+00:00 | ['Product Market Fit', 'Digital Marketing', 'Growth Marketing', 'Startup', 'Marketing'] |
AliExpress: Is Amazon under Siege in Europe? | AliExpress: Is Amazon under Siege in Europe?
AliExpress is attacking the uncontested market leader in Europe. Brands and sellers benefit by listing and advertising on the platform.
While most of the Covid-impacted Europeans have been home-shopping on Amazon, AliExpress has continued to silently yet aggressively grow into the market. AliExpress is a subsidiary of the Chinese behemoth, Alibaba Group. On the surface its marketplace allows consumers to buy mainly cheap Chinese merchandise. Its appearance is colorful and noisy to unaccustomed eyes.
There is more to the story. Up-and-coming Chinese brands, like Xiaomi, offer what might be called a brand experience tailored to European taste. The first European brands that sell direct, like Kimbo Coffee in Italy and more than 10,000 mostly smaller Spanish companies, are also selling on AliExpress. [1]
A lot of this seems like just an MVP, a minimum viable product, by a startup: only sellers registered in Spain, Italy, France, Russia, or Turkey can list their products. Most features for merchants are still free or very cheap. Functionality is obviously not fine-tuned to the European consumer; rather, the lowest possible price is mostly the core differentiator. Consumers have to tolerate longer shipping times and the somewhat awkward user experience. Consequently, AliXBlog features numerous articles on getting the best prices, cheapest products, and how to deal with issues on the platform. [2]
But when you hear there is a partnership with El Corte Inglés, the gigantic Spanish department store chain, this might get us thinking.
“Collaboration with AliExpress is a lever to boost online sales, and especially expansion in the international market. For its part, AliExpress will be able to take advantage of our logistics infrastructure in Spain.” — Eduardo Sotillos, Purchasing Director at El Corte Inglés [3]
More than a startup
Sales figures for AliExpress are hard to come by. Typical statistics only show Alibaba Group. This only gives a sense of the deep pockets behind AliExpress but no clue as to how big it really is.
Analysts expect AliExpress sales in Russia to reach $6bn in 2020. [4] It has been the number one e-commerce site in Russia for years. In November 2019, AliExpress was number two in Spain. [5] That compares to $2.6bn of sales on amazon.es in Spain in 2018. [6] Note that the two countries have similar GDPs, although Russia has 3x more inhabitants.
In July 2019 AliExpress was already the number three marketplace in Italy — only weeks after allowing Italian companies to sell on its platform for the first time. Amazon, the number one, was almost 20 times as big. [7] Even without newer figures, it is clear AliExpress has left a considerable footprint in Italy in the meantime.
Disruption at work
Economically speaking, we are facing a disruption scenario. [8] AliExpress targets people who cannot afford to be choosy. More than 60% of its customers are under 35 years old. [9] They do not have a lot of money — a superficially unattractive target group. But these individuals will put up with long delivery times, an unknown platform, difficult search, etc. As AliExpress is gaining a foothold, it is learning about the market, improving the platform, and drawing more brands that are excited about the target group.
AliExpress has chosen the countries in Europe which are still expecting stronger e-commerce growth rather than those with a higher e-commerce penetration, such as Germany and the U.K. [10] It is always easier to grow in an expanding market. This is only the opening act.
“Given the popularity of the AliExpress marketplace in Spain, the Alibaba Group is using Spain as the jumping-off point to grow its marketplace in Western Europe.” — Peter Vahle, forecasting analyst at eMarketer. [11]
The curtain on the second act is already being lifted. E-commerce decision-makers at major European brands said that AliExpress is in talks with them to bring them onto the platform. Similarly, an AliExpress Russia spokesperson said it wanted to increase the share of local sellers to 50% by 2022–2023. And it is working to cut delivery times from China from 20 to 10 days. [4]
Initially, all this is likely not a (very) profitable endeavor for AliExpress, but the company can fight this. It has resources. Alibaba Group owns 58.2% of all retail e-commerce in China [12], contributing $49bn to its $75bn overall sales in fiscal 2020 (ending March 2020). [13] Alibaba has stated the intent to invest $15bn into its global expansion within the next 5 years. [14]
More importantly, they are used to a competitive market, while Amazon is used to being the leader. A leader with a dent in its pride, as Amazon had to withdraw from China in 2019. [15]
“Our ambition is to always be the leader, although we do not see this as a competition. We believe the market is large enough for there to be two, three or more large companies of ecommerce.” — William Wang, CEO of AliExpress in Spain and Portugal [5]
The real play will be on sooner than we think. AliExpress has built access to a highly cost-conscious consumer base across Europe; it is adding major brands and driving higher spending to the platform. At the same time, the platform is continuously innovating and gradually gearing more towards local tastes.
Wang Mingqiang, President of AliExpress, announced she intends to grow its global seller base 14-fold by 2024. Expect her to follow through. [9]
The Perfect 10: AliExpress marketing
To get your products on stage and to turn the spotlights on, AliExpress offers a growing number of marketing tools. They show their heritage in Chinese e-commerce culture. Therefore, they look and feel different to Amazon and other familiar Western marketplaces. While this presents an entry barrier to some, it is also a great opportunity for everyone willing to try. It is easy.
1. Free Storefront
In China brands usually do not have their own online shops but rely on trusted marketplaces, e.g., Tmall or TaoBao. Like malls, some of those marketplaces offer individual stores.
AliExpress has a similar offering. Selling your products on AliExpress, therefore, is a lot more like creating your own website than just listing products. While this certainly means more effort, it also allows a brand to display itself. On AliExpress a brand is more than an item on a shelf. It is a message, value, content, and lastly, solution, i.e., product.
How can brands keep customers engaged with science?
The coronavirus pandemic has catapulted science into the cultural lexicon, shining a light on it like never before. You could even argue that it’s one of the few silver linings born out of Covid-19, as we place more emphasis on understanding how to separate information from misinformation.
Businesses find themselves in a similar boat, needing to offer authenticity and trust at a time where both are lacking severely. These traits have become just as important as shiny new product features, with the consumer experience evolving from mere A-B interactions.
Now, customers want more engagement from the brands they spend their money with. But that interaction only comes if it’s worthwhile, which means brands need to continue being relevant and find new ways to add value to the customer lifecycle.
Which leaves the question: how can brands keep customers engaged, and what role does science have to play? This is a story about brands and how they can improve relationships with customers through engaging, science-led content.
Why is customer engagement important for businesses with a web presence?
Everything about a brand comes down to the value it offers, and not just from a monetary point of view. Social media has blurred the lines for businesses, with customers expecting brands to get more involved in conversations.
Forty-four percent of consumers said they feel more connected to a brand when it creates and participates in conversations online. And the more engaged a customer is, the more likely they are to spend money with your company.
Brands should embrace this desire for connection, especially as they have an ability to create a tribal feeling amongst their audiences. You only need to look at Apple and Samsung to see how brand allegiances are forged.
That’s not to say you need to start a culture war with your competitors. But regularly engaging online with customers can lead to higher loyalty and a better bottom line.
What are the most common customer engagement mistakes businesses make?
With more than half the planet using the internet, reach has never been an issue for businesses online. Instead, it's often a case of oversaturation. Too many available communication options often lead to common customer engagement mistakes.
It’s a bit like logging onto Netflix, spending 15 minutes trying to find something to watch, only to end up doing something else entirely. There isn’t a lack of great shows; the problem lies with the sheer amount of options available.
Brands face similar problems online. You’re constantly being told you need to communicate on social media, send out regular email marketing newsletters, create on-site content, write an ebook, and it can all become a little overwhelming.
The result is often disjointed messaging: the focus on conquering the entire sphere of digital marketing, rather than excelling in one or two areas at a time, leaves muddled messages.
Data from multiple channels often doesn’t align and, consequently, the result is poor analysis, hasty decision making and a lack of customer engagement as you struggle to create content that resonates.
How to fix engagement issues with AI chatbots, customer intelligence and science content marketing tools
One of the primary issues around customer engagement comes from trying to understand just what it is that consumers want to see and interact with. And improving technology means there are smarter ways to hone your message. Take chatbots, for example…
Why AI Chatbots help with engagement
By 2021, it’s expected that chatbots will handle 85% of customer interaction. And 64% of internet users favour an approach that provides a 24/7 service — something you get with an automated chatbot.
AI chatbots are helping to resolve engagement issues, acting as the first port of call to decipher a customer's needs. It's a form of automation that leads to improved service from humans, who then take over the customer interaction at a more defined stage in the process.
How customer intelligence can help you better understand your audience
A chatbot can collect customer intelligence, such as data and valuable insights into customer behaviours. Data is a primary source for brands to understand their target market better, and having insights about your customers from chatbots will help refine marketing approaches.
Why Science marketing tools provide compelling content
If you’re a brand who relies heavily on science, a chatbot can provide engaging, science-based content that captures your audience’s attention with informative, factual-led information. You can also use the data received from customers to create better content.
Use insights to craft compelling content in the form of emails, blog posts and reports in a much more concise way than if you were trying different marketing approaches across several channels. With the right data points, everything ties together more seamlessly.
How to convert website visitors into leads
Chatbots are proving their worth for customer engagement, helping move consumers to the next stage in the pipeline. But they can also act as a conversion tool, boosting revenues and increasing customer count.
Find out more about our Science Chatbots here.
Breakout Startups #20 - Clarisights
About
Clarisights helps performance marketing teams by —
1. Unifying reporting — Modern marketers want to unify isolated data into a consistent structure. With Clarisights, they can centralize all of their marketing data from different sources (not only their advertising but analytical, attribution as well as custom internal sources) in one place which increases transparency, reduces manual effort and eventually helps them make better-informed decisions on their marketing initiatives.
2. Granularity — One of the biggest advantages Clarisights has over all its competitors is the power to drill down the data until the maximum granularity level (even the ad creative level). Marketers can compare and see which creatives worked for them and which didn't — all of these in real-time.
3. Integration — Joining data from different analytics channels with advertising channels and backend data to give marketers a complete funnel view of their marketing, enabling them to understand where they're spending the money and what kind of ROI they get from their money spent.
Product
Currently, an enterprise Performance Marketing team relies on their Business Intelligence and Engineering team to get their analysis done —
1. Data Collection — The engineering team either does API integrations with the channels the marketing team is running advertisements on, or uses tools such as Supermetrics, StitchData, or Funnel.io to build the data pipeline. Once the data starts coming in, they store and transform this data in a data warehouse such as Google BigQuery or Amazon Redshift.
2. Data Visualisation — And finally, they use a generic Business Intelligence platform to build dashboards, like Tableau (acquired by Salesforce for $15.3B), Looker (acquired by Google for $2.6B), etc.
The main problem with this is that the marketing teams do not own their data. They need to be dependent upon either the analysts or the engineering team to get their data or even to maintain the dashboards. To top that, the dashboards on generic BI tools lack marketing context and require users to learn and understand SQL to use them properly.
Clarisights unifies all these features together in one platform for marketing teams so that they can get answers to all of their questions instantly without wrangling Excel scripts or depending upon an analyst.
Founding Team
The Clarisights Founding team comes with an immense amount of experience in working with Ad Products from teams such as Facebook, Google, and Walmart.
Arun Srinivasan — Founder and CEO, Clarisights. Marketer with more than 12 years of experience. Ex-Hostelworld and Zivame. He was also behind the first-ever ad placed on Facebook.
Ankur Gupta — Co-Founder of Clarisights. Ex-Google, Walmart. He holds multiple patents for his work in MarTech.
Ashu Pachauri — CTO of Clarisights, Ex-Rocket Fuel, Facebook. He was a part of the DB and Scalability team at Facebook.
It wouldn’t be wrong to say that Clarisights is a result of the insights they developed and the problems they faced during their work. This very much makes them the ideal team to build a product such as Clarisights.
If you want to understand more about what the team’s motivation behind the product is, I would highly recommend reading this blog post 😄
Funding
To date, Clarisights has raised a total of $2.3 Million in seed funding and is backed by marquee European VCs including Signals Venture Capital, Cavalry Ventures and Techstars Ventures along with an incredible group of Angels. Additionally, they were 1 of 10 startups in the 2018 class of the SAP.iO Foundry, powered by Techstars Accelerator in Berlin. Their investors have invested in companies like Delivery Hero, DigitalOcean, SendGrid, Zalando, Algolia, and many more.
Clarisights in the Market Landscape
If you look around, the market space is extremely crowded with the likes of Nugit, StitchData, Supermetrics, and Funnel.io, but no one is trying to solve the entire problem of reporting at once.
The company has been able to onboard big-ticket customers such as Delivery Hero (with a marketing budget of more than $300M) and high-growth startups (like About You, Mindvalley, Livspace) paying thousands of dollars monthly.
This blog post is a good read if you want to see how the product has evolved.
At this point, the company deals with an almost unbelievable amount of data for an early-stage startup.
The Marketing Analytics Industry has blown up in the past few years. We are seeing a new wave of tools ranging from Mixpanel to Looker (acquired by Google), however, there have not been many developments in how marketing reports are made.
Clarisights is looking to fill that gap in the ecosystem. This is what makes them the perfect rocketship to be at if you are looking to break into tech 🚀
Jobs at Clarisights
Clarisights currently is a 32 Member strong team and is actively hiring across Engineering, Product and Design Roles.
Is It Game Over for the x86 ISA and Intel?
Now, ironically, the PC itself and Intel have become the new RISC workstations. They may hold the performance crown for now.
However, smartphones, tablets and all sorts of other embedded devices are where the volume is. Chip makers such as TSMC are able to outspend Intel in large part because ARM gives them that volume.
Now Intel is the one trailing behind in the nanometer race. A game they used to totally own, because they could outspend everybody else.
Does Intel Even Have a Performance Advantage?
It has become a truism that ARM chips are weak. Yet in the laptop space we saw that Apple's iPad Pros, when they came out, beat most of Apple's own Intel-based laptops on performance. That was insane, as those ARM chips cost a fraction of the Intel chips used in their laptops. Not to mention they were passively cooled.
That makes you wonder what performance you could get from an ARM chip with the same power/watt budget as an Intel chip in an Apple laptop.
I don’t think it is a stretch to imagine that Apple’s ARM laptops will outperform Intel laptops. It depends a bit on what they aim for. Perhaps they only seek to match Intel performance but give superior battery life time. Either way, Intel laptops will likely have a hard time competing with ARM laptops in the future.
"So what," you say, "Intel gets most of their income from Cloud Data Centers. Apple laptops are a drop in the ocean!" OK, not quite most, but data centers are a very large and growing portion of their revenue.
Except this Intel stronghold is under siege by multiple ARM armies.
Amazon AWS has with the custom Graviton2 chips completely demolished intel on price/performance:
If you’re an EC2 customer today, and unless you’re tied to x86 for whatever reason, you’d be stupid not to switch over to Graviton2 instances once they become available, as the cost savings will be significant.
Ampere is coming out with a 128-core ARM beast for cloud computing, twice the number of cores of any x86 offerings.
Yes, they may not outperform x86 in every setting, but the threat of ARM is at this point no longer a joke. x86 is being assaulted on its home turf.
This process will begin to work in lockstep. As more people are seeing cost savings going with ARM cloud solutions, they are also going to want to have ARM laptops to develop on.
Running and testing on the same hardware platform locally as you deploy on is always an advantage. To quote Linus Torvalds:
That’s bull***t. If you develop on x86, then you’re going to want to deploy on x86, because you’ll be able to run what you test “at home” (and by “at home” I don’t mean literally in your home, but in your work environment).
With ARM based laptops Apple provide that final piece of the puzzle that threatens the x86 server dominance. As Linus points out, once you got viable ARM laptops, ARM in the server room starts to make sense:
my argument wasn’t that ‘ARM cannot make it in the server space’ like some people seem to have read it. My argument was that ‘in order for ARM to make it in the server space, I think they need to have development machines.’
This is where the advantage of ARM based Mac laptops really start to come together. Let us look at that in more detail.
The Triple Advantage of ARM Mac Laptops for Professionals
When I worked as a consultant doing mobile app development, we often used Macs for the simple reason that it meant we had one computer which could be used for Java server development, Android and iOS development.
This advantage will simply be amplified with ARM. An ARM based Mac will be able to run both Android and iOS applications natively. There is no need for emulators, simulators or whatever. Mobile App developers are going to like that.
When you throw in that the same laptop can also be used to test solutions for the AWS instances with the best bang for the buck, the proposition only gets stronger.
But we are not done yet. These laptops will also likely have the best battery life.
Why on earth would you then go with a PC laptop to run Windows? Windows for ARM will most likely be available anyway. Sure it may not run every Windows app, but so what? The money for developers is not in making Windows desktop apps. The money is in cloud deployment and mobile Apps. Both areas where ARM will dominate.
How Apple Will Get the Ball Rolling
This is the point where you should realize that laughing at the importance of Apple's 9–10% market share is premature.
They have a big part of the profitable premium segment. It is like their phones: they have a relatively small part of the market but take 66% of the profit of the smartphone market. Apple gets 60% of the PC hardware profits, despite only selling 7.5% of computers. It is easy to forget just how big Apple is by just looking at market share.
With these advantages Apple will be able to crush the remaining competition in the premium segment and steal ever more of the total profits in the laptop market.
Not to mention anybody who wants to run Windows for ARM on higher-performance hardware will have to get a Mac, as Microsoft's ARM offerings have been pathetic.
Once this trend develops, PC makers and Microsoft are going to start throwing bucket loads of cash at an ARM transition to make sure they are not experiencing an iPod or iPhone moment for the second/third time.
This will only further increase the ARM spending advantage over x86.
ARM is doing to x86 what the PC did to everybody else in the 90s. By being an open platform with multiple chip makers, you get fierce competition, which will drive prices down and boost innovation.
x86 is Outgunned
ARM is coming at x86 from all corners. There is a wave of different companies throwing money at the problem. x86 is getting outgunned, and like the RISC workstations in the 1990s, there simply is no quick fix or silver bullet to get out of this predicament.
If they try to become another ARM maker they will have to sacrifice way too much profit. Hence they face the same predicament NOKIA faced when deciding on whether to become an Android maker or not.
They didn’t want to take the hit, but the end result was that the whole company collapsed. Intel may risk the same if they double down on x86 instead of doing a strategic withdrawal and regrouping.
What About AMD?
AMD enjoys a certain niche dominance on game consoles and gaming rigs. I don’t see any ARM guys coming after that market any time soon.
Also AMD has the benefit of not manufacturing their own chips. They can utilize the big chip makers.
Thus in many ways AMD may survive longer than Intel. But it is a tough call to make. Intel sits on a large pile of cash and can sustain quite a lot of losses for some time.
Covid-19 May Have Started Before Dec 2019, Increasing Evidence Shows
It may also explain its unusual early adaptation to humans, unlike other coronaviruses.
Home vector created by freepik — www.freepik.com
The first globally identified case of Covid-19 was on December 26 in the Wuhan Hospital in China, where a respiratory physician suspected a new infectious disease owing to his previous experiences with the 2003 SARS outbreak. Then on December 31, Chinese authorities informed the WHO of pneumonia with an unknown cause.
But as more data is collected over the year, increasingly more evidence suggests that Covid-19 might have started much earlier than December.
The new CDC study in the U.S.
The U.S. surveillance team detected the first case of Covid-19 on January 19: a 35-year-old man who returned from China. But even this was not the actual first emergence of Covid-19. In a subsequent sample of 12 early cases of Covid-19 in the U.S., two of them had symptoms that started on January 14. Taking into account the incubation period — the time gap between virus infection and symptom appearance — of about 5–6 days (and up to 14 days), the novel coronavirus SARS-CoV-2 might have been circulating in the U.S. earlier than January 14.
SARS-CoV-2 antibodies take 1–3 weeks to form following infection encounter…So, the true infection encounter in the CDC study might have even been three weeks before December 13.
Maybe even as early as November, hinted a study from the CDC published a few days ago in the Clinical Infectious Diseases journal, titled “Serologic testing of U.S. blood donations to identify SARS-CoV-2-reactive antibodies: December 2019-January 2020.” In this study, researchers collected leftover sera from 7,389 donated blood samples from donors without suspected viral or bacterial respiratory infection.
The CDC then performed antibody testing — with validated sensitivity and specificity — on the blood sera. Results detected antibodies specific for the spike protein of SARS-CoV-2 in 1.43% (106 out of 7,389) of samples. Of these 106 cases, 39 belonged to blood samples collected between December 13–16 from California, Oregon, and Washinton. The other 67 cases were sampled from December 30 to January 17.
However, the study cautioned that none of the 106 infections qualifies as true positives or true Covid-19 cases, which can only be confirmed via a positive RT-PCR test on respiratory specimens. Another caveat is that whether these 106 infections were transmitted by traveling or community spread is unknown. Nonetheless, “The findings of this report suggest that SARS-CoV-2 infections may have been present in the U.S. in December 2019, earlier than previously recognized,” the study concluded.
A concern the paper did not address is that SARS-CoV-2 antibodies take 1–3 weeks to form following infection encounter — the window period. This is because antibodies are made by B-cells that belong to the immune system’s adaptive arm, the second line of defense that requires time to activate. So, the true infection encounter in the CDC study might have even been three weeks before December 13. But this may be relatively rare given that the median window period for SARS-CoV-2 antibodies is 10 days.
Looking at other countries
Based on Chinese government data the South China Morning Post examined, the earliest detected Covid-19 case was on November 17 in a 55-year-old person in Hubei. By the end of November, there were nine cases of Covid-19. This data corroborates a study published in The Lancet that describes a Covid-19 patient with symptom onset dated December 1 in China. But even in those nine Covid-19 cases in November, there's insufficient evidence to pinpoint patient zero — the first carrier of the Covid-19 outbreak. So, it's still possible that there were undetected cases of Covid-19 before 17 November 2019.
Researchers in Lombardy, Italy, also did a study similar to the CDC's; it was published in the Tumori Journal with the title, "Unexpected detection of SARS-CoV-2 antibodies in the prepandemic period in Italy." Herein, the study caught antibodies specific for the SARS-CoV-2 receptor-binding domain (RBD) in 11.6% (111 out of 959 persons) of blood samples, of which 14% were sampled during September 2019. This interests the WHO, which has contacted the authors for further investigation.
Thus, “SARS-CoV-2 might have cryptically circulated within humans for years before being discovered,” researchers suspect.
There’re two pre-prints analyzing wastewater samples for traces of SARS-CoV-2 genetic material. One pre-print from Santa Catalina, Brazil, found SARS-CoV-2 RNA in two independent sewage samples collected on 27 November 2019. This data implies that people in Brazil might have been infected and shed the virus before December. The other pre-print is even more outrageous: Researchers from Barcelona, Spain, detected SARS-CoV-2 RNA in a sewage sample gathered on 12 March 2019. But note that preprints are not peer-reviewed, and there’s a critique that contamination may have occurred during sewage sampling and analyses.
Explaining the evolutionary leap
The early SARS-CoV-2 circulation theory also helps explain many odd facets of the pandemic. For one, SARS-CoV-2 binds to the ACE2 receptor with an efficiency at least 10 times higher than SARS-1. This is despite the fact that SARS-CoV-2 genomes have been relatively stable in early 2020, with low mutation rates. In contrast, rapid genetic changes happened in the genomes of SARS and MERS when they first spilled over into the human population, which only stabilized over time.
Thus, “SARS-CoV-2 might have cryptically circulated within humans for years before being discovered,” researchers suspect. If this suspicion is correct, then SARS-CoV-2 may have completed its host-switching adaption in humans before December. This also explains why SARS-CoV-2 already has a very stable genome in early 2020 and why SARS-CoV-2 has an unusual binding efficiency for the human ACE-2 receptor.
And it may also explain why attempts to pinpoint the intermediate host of SARS-CoV-2 have failed so far, given that the real intermediate host (if it exists) might not be among animals sampled in December or early 2020.
Short abstract
A new study from the U.S. CDC found SARS-CoV-2-specific antibodies in donated blood samples between December 13–16. Given that SARS-CoV-2 antibodies take 1–3 weeks to form, the actual infection in this study may be earlier than December 13. Indeed, recent government data detected Covid-19 cases from November 17 onwards in China. Further, a study from Italy has also found SARS-CoV-2 antibodies in blood samples sampled from September. Using wastewater samples, two preprints have also found traces of SARS-CoV-2 genes in samples collected in November in Brazil and March in Spain.
These data suggest that Covid-19 may have jumped to humans before December. In fact, the theory of early SARS-CoV-2 circulation in humans helps explain some oddities of the pandemic. For instance, SARS-CoV-2 genomes were already stable in early 2020, which also enables highly efficient binding to the human ACE2 receptor. In contrast, SARS and MERS genomes underwent drastic genetic changes when they first adapted to humans. In sum, increasing clinical and theoretical evidence indicates that Covid-19 may have emerged earlier than presumed.
A “Historic Week” on Nightingale
Well, historic in the sense that we published some amazing history-themed content.
Olivia Vane wrote “Strange Times: Visualising the Oddities of Time Data” which builds upon her experience visualizing museum collections and dives into the unique challenges of working with historic objects and data. So often, temporal data is inexact, uncertain, or inaccurate, which raises tricky questions for practitioners. This article covers some of these situations as well as the strategies designers can take to either conceal or expose gaps in data.
Attila Bátorfy analyzed political cartoons from communist-era postwar Hungary in “The Imperialist Dogs Bark, But The Communist Graph Goes On.” Charts were an important part of the communist regime, used as data-based evidence of skyrocketing production and efficient social programs, and they were a powerful tool for information warfare between the communist government and its enemies.
Additionally, Nightingale was proud to publish an excerpt from Catherine D’Ignazio and Lauren F. Klein’s upcoming book, Data Feminism, about two different maps of Detroit. One map stems from the racist and discriminatory practice of Redlining, and the other, “Where Commuters Run Over Black Children on the Pointes-Downtown Track,” was created by community members to shed light on the city’s inequality.
Where these pieces look back at historical moments in time, Stephen Spiewak’s piece looks back on a timeless moment: Super Bowl LIV. When it comes to Super Bowl tickets, price is probably the first data point you’ll want to know. But if we look deeper, there’s a hidden data point that can tell a much more compelling story. At Vivid Seats, they’ve developed a metric that has accurately predicted the last five Super Bowl winners based on ticket sales.
One of the main goals of looking to the past is understanding where we are headed. So naturally, Allen Hillery’s ongoing series on the future of BI was a fitting way to close the week. Allen spoke with Duncan Clark, the founder of Flourish, about his winding career path, data journalism vs. data storytelling, and Talkies!
As always, if you are interested in writing for Nightingale, please contact one of our editors or write to us at pitchnightingale@gmail.com. We are always looking for new writers and open to exploring new ideas, and we encourage you to get in touch!
How to Easily Write 1–3 Articles Every Day
Here’s What I Do
First, I sit down, and I get a new document open, and I start writing
I have my topic in my head, and I start writing everything I can think of.
This is how I get into the flow state that is so important when writing. I find that if I can get going, I can get into a flow and stay there long enough to make a dent in the article.
I turn off my inner editor and judge, I make sure I let them know that they will have their turn but just not yet.
I allow everything that needs to come out, come out, and I try not to interfere. I make spelling mistakes, grammar errors, and the story might not make total sense at this point, but that’s okay for now, it doesn’t have to. I get it all out, a creative, intellectual purge where everything that was in there spills out all over the page, and I’m okay because I know that I’ll be back later to clean it up.
Once I do that, I go back straight away and start editing
I don’t take a break, I get right in there.
I can do it straight away because the first thing I do is go through all of the “correctness” mistakes. These get underlined in red.
I go through and correct everything highlighted in red. I don’t even read it. Grammarly works two ways: you can either click on the word in the text and it will take you to the correction you have to click on, or you can go to the list of corrections and click on them there. Either way, at this point I go through and correct the obvious spelling and grammar mistakes.
I love this part because it feels like a game, detect, and correct. You don’t even have to think about it, and you can get so much done quickly.
These corrections used to do my head in; the minutiae of all the commas, and semicolons. Figuring out where to end run-on sentences and finding every little needle in the haystack of tenses. Grammarly finds and suggests corrections for you on these, all you have to do is click.
After that, I go back to the green lines; these are engagement
I read the single sentences and correct these suggestions.
Sometimes they’re right, and sometimes the ideas don’t make sense, so you do have to pay attention. Make sure the words make sense in the context of the sentence this time around.
After that, I look at the purple lines, which are delivery
A lot of this is taking out the extraneous words.
I seem to use the word “just” a lot. The purple corrections prompt you to take out many of these words, so your writing is more concise and confident sounding. These corrections are also fast and easy.
Then the blue lines, for clarity
This correction helps make things more concise. It points out run-on sentences and helps identify passages that are awkward or too long and wordy. I seem to get a lot of these. That is my biggest problem. So I go back, break them up, and make those sentences work.
After I do all of those corrections, I take a break, or if I feel up to it, I start the real editing
I begin my re-read after all that.
Making those corrections first allows me to avoid engaging my inner critic, which takes a lot of the emotional charge out of the process because I am just clicking and revising small parts.
It gets the bulk of the work done effortlessly and unemotionally.
I like processes that take the emotional charge out of things. I find that what usually holds us back from getting things done is the feeling of being overwhelmed or not knowing where to start. If you find a way to bypass those feelings, everything is more attainable.
With all of that work done, I can walk away and take a break.
I come back to it later with fresh eyes, and I’m able to cut it down.
Then I do what everyone says to do, which is to cut the crap out of it. At this point, every time I read it, I read aloud to hear how it sounds
Ashley Nicole has written a wonderfully helpful article called: “Measure twice, cut once: My guide to thoroughly self-editing your writing.” It is a quick read jam-packed with sound advice for editing. That’s my bible for editing beyond this point.
After that, I just read and re-read and cut cut cut.
I try to make it as clear and concise as I can. At this point, I let my inner judge and critic loose. I let them go, give them free rein, and, most importantly, listen to them. It’s easier to let them loose at this point because I already have a far better product than when I started, so it’s not as scary.
Your critic and judge can be handy helpers at this point if you welcome their input rather than fearing it
I can’t tell you how to say exactly what you’re going to say. You have your own unique voice and experience, but if you’ve found yourself compelled to write, then you have something to share. The goal is to get your thoughts organized so you can share them, and then practice every day. My method can definitely help with that.
I hope this article helps you get the words out of your head and onto the page as efficiently as possible so you can start uplifting your readers and shining your light out into the world.
Let’s Make REST Into a Protocol
What is a Protocol?
HTTP 1.1 is a Protocol.
If you want to use HTTP 1.1 you simply use a component like an HTTP 1.1 Web Server to serve your content. Then you use an HTTP 1.1 Browser to consume your content. Did you have to code HTTP 1.1 to do any of this? Nope. HTTP 1.1 as a protocol was coded for you by someone, or likely a number of someones, you never met. This is the beauty of using an existing Protocol. It just works. And you get to use it. And by now it has been accepted by everyone.
Why has REST never achieved this same status?
Why are so many programmers blindly coding all that REST Interface goop over and over and over?
Every single part of every single REST API is 100% the same except for the function dangling at the end of each and every RESTful Endpoint. And still in 2020 REST has not been elevated into being just a Protocol like HTTP 1.1, for instance. Why? Or better, why not?
Introducing the Dynamic Pluggable Microservice Framework !
This is a pre-coded Framework you can use, right now, to publish all your Microservices or Web Services with very little effort and very little code.
This Dynamic Pluggable Microservice Framework could make REST into a full-fledged Protocol !
Why waste all that time coding URL Specs?
What is a URL Spec? Huh?
This is a typical set of URL Specs for one of the more popular Web Frameworks known as Django:
from django.urls import include, path
from . import views
urlpatterns = [
path('index/', views.index, name='main-view'),
path('bio/<username>/', views.bio, name='bio'),
path('articles/<slug:title>/', views.article, name='article-detail'),
path('articles/<slug:title>/<int:section>/', views.section, name='article-section'),
path('weblog/', include('blog.urls')),
...
]
Now if you had a bunch of REST Web Services to publish you would have to code one of these “path” statements for each and every endpoint, each and every function you wanted to expose to the REST Protocol. This might be a lot of work to code and maintain. How well do you know regex, anyway? Is this how you want to spend your professional time as a programmer? Is this where you want to spend your money as an Organization or Company? This is only part of the work required to get your REST Web Services up and running.
But wait, there’s more code to write.
Consider the following blob of code:
from rest_framework.decorators import api_view

@api_view(['GET', 'POST', 'DELETE'])
def tutorial_list(request):
    # GET list of tutorials, POST a new tutorial, DELETE all tutorials
    ...

@api_view(['GET', 'PUT', 'DELETE'])
def tutorial_detail(request, pk):
    # find tutorial by pk (id)
    # GET / PUT / DELETE tutorial
    ...

@api_view(['GET'])
def tutorial_list_published(request):
    # GET all published tutorials
    ...
These are actual functions that could be exposed to REST but so far you have not even begun to head in that direction yet. There is so much more to do when coding a RESTful Interface. So much to do and so little time.
There is an easier way to do this !
You could use the Dynamic Pluggable Microservice Framework.
Take a look at a simpler way to get your functions exposed to REST by clicking here.
Sample code:
Step 1: Import the Python decorator.
from vyperlogix.decorators import expose
Step 2: Expose your function as a REST endpoint.
@expose.endpoint(method='GET|PUT|POST'.split('|'), API='hello-world')
def foo(*args, **kwargs):
    return {'response': 'hello-world'}
This makes the function “foo” into a REST endpoint via the “GET, PUT and POST” HTTP 1.1 Methods. The “foo” function will be accessed via the API name of “hello-world” rather than the name “foo” however both can be used interchangeably. The function “foo” will return some data that will be magically turned into JSON for you with no need to be concerned about serializations. You will, however need to ensure all the data you want to return via JSON is composed of simple data types but this is how JSON works.
Step 3: Plug-in your Python Module.
Take a look at the plugins directory here.
There is no need to restart the web server when you plug in a new Module. This is all 100% dynamic. This means you can deploy a Development REST Server so you can do your development work. Then deploy a Staging REST Server where you can test your deployable REST APIs. Then deploy a Production REST Server where your customers will consume your REST API. Your CI/CD Pipeline will simply copy your Python Modules into the “plugins” directory for each environment and you can immediately test or use your exposed APIs with no downtime. Easy, huh?
How do I use this newly exposed REST endpoint, anyway?
Take a look at the sample unit tests for this framework by clicking here.
Take a look at a simpler test environment by clicking here. This “rest.http” file is more useful when you use vscode and the REST Client extension. And “yes” vscode extensions are dynamic plugins. See how useful modern programming techniques can be in the 21st Century? Even Microsoft knows how to leverage dynamic pluggable frameworks which is exactly what vscode is.
Get a directory of your plugins.
Notice that the URL Spec of “/rest/services/__dir__/” is baked into this framework.
What you get from this is some nice-looking JSON that shows you everything you need to know about your plugins.
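As a quick smoke test, you can hit that directory endpoint from plain Python using only the standard library. This is just a sketch: the host and port below are assumptions, since they depend on how you run the web server.

import json
from urllib.request import urlopen

BASE_URL = 'http://localhost:8080'  # assumption: wherever your framework instance is listening

with urlopen(BASE_URL + '/rest/services/__dir__/') as resp:
    plugins = json.loads(resp.read().decode('utf-8'))

print(json.dumps(plugins, indent=2))  # one entry per plugged-in module and exposed endpoint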
Let’s use one of our sample APIs from the framework distro.
This calls the “/hello-world” API that has been exposed via the “foo” function.
Notice the URL Parameters “/1/2/3/4/5/6/7/8/9/10/”. These parameters are elastic and configurable with either default names, configurable names or mappable names. Do some of your REST APIs use up to 1000 Parameters? No problem, this framework handles all that.
Notice the Query Parameters “a=1&b=2&c=3&d=4”. These parameters are also elastic which means you can issue any number of them but this is built into the HTTP 1.1 Protocol.
How do these Parameters get into your function “foo”, anyway? This is done via the **kwargs. In Python “**kwargs” is a dictionary of keys and values. All those named URL Parameters and Query Parameters get mixed into a single dictionary and you can consume them at runtime via the “**kwargs”. Take care not to reuse the same Parameter names as Query Parameter names, or you may lose some values, but again, this is how this framework works.
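To make that merging concrete, here is a small, hypothetical endpoint that simply echoes back whatever it receives. The host, path layout and parameter names in the comment are assumptions for illustration, not the framework's documented contract.

from vyperlogix.decorators import expose

@expose.endpoint(method='GET|PUT|POST'.split('|'), API='echo-params')
def echo_params(*args, **kwargs):
    # A call such as (assumed host and path layout):
    #   GET http://localhost:8080/echo-params/1/2/3/?a=1&b=2
    # arrives here with the named URL Parameters and the Query Parameters
    # merged into the single kwargs dictionary, keyed by their default or mapped names.
    return {'response': dict(kwargs)}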
Module Aliases and API Versioning
The very same REST API call can be issued using the Module Alias for module1.py instead of the module's file name; both forms resolve to the same endpoint.
This means you can build and deploy versioned APIs for your users. Either the Module Alias or the module file's name can serve as a version identifier.
URL Parameter Mapping
This only works for HTTP 1.1 Methods other than “Get”.
Parameter mapping remaps the first 10 URL Parameters to use a dynamic set of names at runtime. Again, this is configurable. This means you can use your own sets of URL Parameter names for your specific APIs, the way you want to use them. This also makes your exposed modules more self-documenting, especially if you add a docstring to each exposed function. Now you can generate programmer's docs for your modules using existing Python tools. Self-documenting code.
Private Functions or Private Variables
What if you want to embed private functions in your exposed modules?
Easy.
Private functions are those beginning with “__” or those that are not exposed via the “expose.endpoint” decorator.
Python classes and variables are not exposed and not exposable unless you do so via an exposed endpoint.
You can also build and deploy your own Python Packages and put them on your own Python Path and then use them from your exposed endpoints using this Framework. You could plug in Modules that do not expose endpoints, but doing so is not beneficial.
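As an illustration only (the file and function names below are made up), a plugin module can mix one exposed endpoint with private helpers that never become part of the REST surface:

# plugins/greetings_module.py  (hypothetical plugin file)
from vyperlogix.decorators import expose

def __normalize(name):
    # private: the name begins with "__", so it is never exposed
    return (name or 'world').strip().title()

def internal_helper(name):
    # also private: no @expose.endpoint decorator
    return len(name)

@expose.endpoint(method='GET'.split('|'), API='greet')
def greet(*args, **kwargs):
    # exposed under the API name "greet" (the function name works too)
    return {'response': 'Hello, {}!'.format(__normalize(kwargs.get('name')))}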
Want to read more… ?
Click here to read more.
REST has become a Protocol
You should be able to see how REST has become a Protocol much like HTTP 1.1 became a Protocol.
Automate your RESTful APIs by using this Dynamic Pluggable Microservice Framework.
Automate your URL Specs. Don’t waste your time coding them.
Automate JSON. Don’t waste your time using JSON Libraries, this has been done for you.
Automate API Versioning. Don’t waste your time building this code yourself.
Want to see this framework in action via a Docker Container? Click here.
docker pull raychorn/microservices-framework:0.7.0
This container has all the required modules. Update the git clone and give it a whirl.
Add functionality to this framework.
Become a sponsor and make new feature requests.
Startup Spotlight Q&A: Evolve Energy
Michael Lee is the CEO and Co-Founder of Evolve Energy. He’s worked in renewable energy for the past 10 years and earned his MBA from Harvard Business School. In 2018, Michael founded Evolve Energy to help consumers save on energy costs while also laying the groundwork for the energy infrastructure of the future. Evolve recently won the Grind Startup of the Year award at Global 2020, an honor well-deserved by Michael and the Evolve team. Check out what Michael had to share about Evolve’s biggest moments, advice for founders, and what’s coming up next for Evolve.
— In a sentence, what does Evolve Energy do?
Evolve helps customers cut their electricity bill by 50% and decarbonize their footprint by unlocking the value of smart home products.
— What makes Evolve different in this market?
Other companies sell “fixed priced power,” which means every hour of the day is the same price. We sell wholesale power — this electricity is very cheap, but changes price every few minutes based on grid conditions. We then pair these signals with IOT-enabled products (EVs, smart thermostats, appliances, smart plugs) and optimize the timing they use electricity to align with the cheapest times.
— How did Evolve came to be? What was the problem you found and the ‘aha’ moment?
I’ve been helping build renewable energy projects for the past 10 years. I’ve seen these projects have a 70% reduction in cost over this period — but electricity prices that we all pay haven’t decreased as much. I realized the reason is that renewables create massive surplus and shortage of electricity on the grid. We could easily fix this if devices could respond to these wholesale prices.
— What milestone are you most proud of so far?
Getting to market and selling electricity is very challenging for startups. It traditionally requires large pools of capital for creditworthiness. We have a unique workaround to bypass this through a partnership with a large energy company. Finding the right type of partnership with established players is important to enable scale for startups. But it requires that one knows the nuance of the industry landscape since some of these partners could be competitors.
— What are people most excited by when it comes to using Evolve?
Customers will call thinking that we made a billing mistake and didn’t charge them enough. Yea, we can save people that much money that they think it’s an error how little they pay. Those are the best conversations.
— Have you pursued funding and if so, what steps did you take?
We raised a pre-seed. We’re loosely related to the “climatetech” investment trend since the lowest electricity times are when renewables are abundant. And so we’re effectively decarbonizing footprints without customers knowing. Raising a seed can be challenging — investors want to pattern match and we don’t fall under any existing patterns. The most important part is to get in front of investors and have a two-way conversation. Capital raising is a full-time process.
— What KPIs are you tracking that you think will lead to revenue generation or growth for Evolve?
We already have customers and so retention/renewal rate is an important KPI. It influences our LTV and decreases our CAC (through referrals) at the same time.
— What is one thing every founder should ask themselves before walking into a meeting with a potential investor?
Founders should ask “Does this investor have any companies in their portfolio that look like our model?” I’m not saying it needs to be the same company in the same industry. But if you’re a B2C company and the investor only has B2B in their portfolio, it’s going to be a short meeting. Create a spreadsheet CRM to track your conversations. It’s added work but helpful in the long-run.
— How do you build and develop talent on your team?
Finding great people is the first step. Look for people who are great at what they do yet open to new ideas. Specialists know their craft well and can limit your headaches down the road because you can trust them to make great decisions independently. Having a product that has positive externalizes (carbon reduction in our case) helps keep employees excited about what we’re all building.
— How do you manage growth vs sustainability?
Burnout is a risk for any startup. But growth while the window is open is also really important too. Having KPIs and goals helps keep the team focused on what they can control. The leader’s role is to push the team to achieve something that may feel just out of reach while also managing relationships of investors and stakeholders.
— What are the biggest challenges for the Evolve team?
By definition, in a startup all resources are scarce. The challenge is always about doing more with just enough resources. This is why rapid lean testing is so important — you don’t want to overspend just to find out it was the wrong path.
— What advice would you give to other founders?
Mental health is important. Eat healthy. Get a workout. Get some sleep. Doing this will enable you to ride through the highs and lows without the extremes.
— Have you been or are you part of a corporate startup program or accelerator? If so, which ones and what have been the benefits?
Urban-X (Brooklyn, NY).
— Anything else you’d like to share?
We’re thrilled to win the “Grind Startup of the Year” award. We see a new wave of companies that are using capitalism to unlock benefits for society. Carbon reduction is the next trillion dollar industry and startups have a huge role in this!
End of Day ceremonies: the key to sleeping better and staying focused during quarantine
Joshua Mauldin · May 18
Remember the last time you couldn’t get something out of your head and you woke up thinking about it in the middle of the night? The last few weeks of quarantine have been one continuous stretch of undifferentiated time and it’s become hard to turn things off. So I’ll share what I did about it in hopes of it helping you too. After putting this self-care routine into practice, I’ve been much more effective and focused during the day. In these times, self care is key.
I had been waking up every night at 2:30am thinking about something going on at work, the industry, my family, or the world we’ll be living in after this passes. It’s not that what’s going on is bad necessarily; it’s that I can’t shut it off.
I hoped if I could take a few minutes at the end of each day to shut my brain down, I’d be able to sleep better. So I came up with an experiment: an End Of Day ceremony where I empty out my head so there’d be room to sleep. Experiment #1: journaling.
Here’s what I write about
Anything notable from today How’ve I’ve been feeling What I’ve been wondering about What I need to do next
I write about the good, the not so good, the seemingly banal (because that’s the stuff that’s caught in my head in the middle of the night). If it’s in my head, it’s going to become text in this journal entry.
Here’s how to set it up
Find a journaling tool.
Invest no more than 20 minutes here. It’s important that your choice of software gets out of your way and makes writing pleasant. I use Day One (but something as simple as Google Docs or Notes could be enough to get you started).
Set up a reminder each day at 5 PM to write.
Day One, thankfully, handles that for me.
Create a template that includes all the topics you want to cover.
If you’re a director like me, this could be how your direct reports are doing, or initiatives you need to track. If you aren’t, some ideas could be tracking how you’re feeling, or noting tiny wins you’ve achieved. Just keep it simple.
Ten minutes later, I’ve emptied out my mind, leaving me free to enjoy a more relaxed sleep.
Delightful Surprises
This approach already yielded a few unexpected gems.
The first is that every morning I can start off knowing exactly what I need to focus on. I don’t spend any time looking for anything to do because Yesterday-Josh already figured that out and kindly set it in front of Today-Josh. It’s a huge boost to my focus.
Another bonus is the record of what I’ve been up to. It’s much easier to recall interesting insights from a project or decisions that I made so I don’t have to backtrack.
And finally, it helps me be more intentional about setting and accomplishing my goals.
In closing
After two weeks of this End of Day ceremony, my experiment in self-care did exactly what I wanted it to: I sleep much better than before. My brain is a little less frenetic and I’m able to focus more at work. Turns out, just taking time to purge my brain was the key to it all.
—
Photo by Alejandro Escamilla on Unsplash
8 Tips For Writing A Fantasy Novel
Photo by Tim Rebkavets on Unsplash
I’m no expert when it comes to writing in the fantasy genre. In fact, I write and read mostly in the mystery department. I do love a good fantasy now and then and have even written a few stories myself in the fantasy genre.
When I think of fantasy, a few things pop into my head.
The Lord of the Rings
Harry Potter
Marvel and DC comics
Dungeons & Dragons
These are vastly different from one another, don’t you think? Yet, they all have quite a few things in common. Magic, adventure, mythical creatures, superpowers, and more. Fantasy is a world we can only find in our wildest dreams.
Yet, we all want it to be real. Who doesn’t want superpowers? Who is still waiting for their Hogwarts letter to arrive? At this point, I’m waiting for Gandalf to knock on my door.
Everyone has a different definition of fantasy. If you can make it up and have the world and rules make some sort of sense to your readers, then congrats. You’re a fantasy writer.
Whether you’re writing superheroes or magical begins going on a quest, here are a few things to keep in mind.
1. Keep it “real”
Fiction is fake and fantasy is out of this world. There’s still a little bit of truth in everything we write. We base characters off of ourselves or someone else we know — even if only slightly. We take real-life places we know and love and put a spin on them making them fantastical.
Sometimes ideas come out of thin air, but most of the time you find inspiration in something or someone else. Take what you already know — a person, place, thing, or even another story — and add a twist to it.
It won’t be non-fiction and yet, there will be something familiar about it to your readers.
2. Mythical creatures
In addition to my first point, you can do a lot with real-life people, places, and creatures. Unicorns and dragons don’t exist, but they do in the fantasy world. Where did they come from? Unicorns are horses with a horn slapped on their forehead. Boom. Cool mythical creature.
You can take animals you know and love from the real world and turn them into your own. You can even take mythical creatures and turn them into your own as well. Look up the lore behind some creatures and do some research on real-life animals. Make your own lore behind the creatures in your story.
Did you know mermaids are not actually like Ariel in The Little Mermaid? They’re actually quite nasty. Your childhood may be shattered, but that’s some interesting information to use to your advantage.
3. Magic
Magic is everywhere. The spells in Harry Potter are derived from the Latin language. J.K. Rowling twisted a well-known language into something magical.
I’m not saying you need to create a magic system like Rowling did. You can name your spells and potions whatever you want. However, if there’s going to be magic in your story, keep a few things in mind.
It should have rules
It’s no fun when everyone can use magic for everything all the time. Magic should have some sort of boundaries. Can only certain people use magic? Does it need to be learned or are people born with the ability? Can spells be said in your head or do you need to speak it out loud? Can do you snap your fingers or do you need to use a wand?
If there are no rules to your magic, it’s not going to make sense to the readers. Fantasy is all about not making sense, but there a line needs to be drawn somewhere.
It should be easy to pronounce
Spells and types of magic are often weird words. At least, to us. They don’t need to be super long to make it seem fantastical. They also don’t need to have a jumble of letters to make it sound important. You can do whatever you want as long as readers can pronounce it. Or else, they’re not going to understand what it means.
4. Create character names that can be easily read and pronounced
How do you pronounce Flbergsted? What about Hyckls? Or Abcdef?
Yeah, I don’t know either. Like with the magic system, have your characters and places be easy to read and pronounce. No one wants to get stuck halfway down a page because they’re trying to pronounce the protagonist’s name.
There are plenty of fantasy name generators out there. My personal favorite? Fantasy Name Generators. That website has all the generators you could ask for.
Or, you can make something up yourself, but use your vowels wisely. Jumble letters around from already existing names. For example, my name is Rachel and I often use the fantasy name Lehcar because that’s my name backward. Some names won’t be easy to pronounce like mine, but you can move letters around. Work with what you have.
5. Know your world inside and out
If you’re creating a fantastical world, then you need to know it inside and out. You need to see it as though you’ve traveled there before. You need to know it like the back of your hand as though you’ve lived there your whole life.
Figure out the logistics of how the world works. Ask yourself a lot of questions you already know about the world we live in.
What language(s) do they speak? How do they communicate?
What sort of currency do they have?
What foods do they eat?
Are there different biomes? Do they have the four seasons?
There are a lot of other questions to be asked, but these are just a few. You may not need to know all the answers, but it’s helpful to figure them out anyway. You need to know your world better than your readers.
6. Use a map
Piggybacking off the previous tip, use a map. Maps are important. Your novel may not need a map in the front of the book and your readers may not necessarily need to see it, but it’ll be helpful for you in any case.
You’ll be able to keep track of where your characters go, especially if they’re on an adventure and split up. Plus, it’s a handy list of all the place names in your world and which place is next to another.
7. Know your fantasy genre and subgenre
Fantasy is a vast genre and there are so many sub-genres to it. The Lord of the Rings and X-Men are totally different from one another. Are you writing superheroes saving the world or talking about a Hobbit going on an adventure?
Research different sub-genres of fantasy and know which one your story falls under. This will not only allow you to understand your novel better, but it’ll help you with your target audience as well.
8. Do your research
Research never hurts. There are a lot of fantasy writing craft books out there. The Internet is a vast place. No, unicorns are not real, but there is information about them in various places.
Read other fantasy books in your genre and sub-genre. What do those authors do? How do they handle their magic system or how do they portray unicorns?
Brush up on your fantasy knowledge. There’s no wrong way to write a book and, in the end, the story is yours. But it never hurts to be prepared.
Overall, writing a fantasy novel is a lot of fun. I’ve written Dungeons & Dragons-like stories and I’ve also written about a team of superheroes in Marvel fashion.
It’s fun to dabble in all these sub-genres, though writing fantasy is not a walk in the park. It’s a fun challenge and always a good idea to immerse yourself in a brand new world you created.
This article has been updated and was originally published on RachelPoli.com. | https://medium.com/swlh/8-tips-for-writing-a-fantasy-novel-133d9e7c8a9f | ['Rachel Poli'] | 2020-06-15 14:25:31.049000+00:00 | ['Novel Writing', 'Books', 'Writing', 'Fantasy', 'Writing Tips'] |
Can Your Partner Be Your Best Friend? | Relationships
Can Your Partner Be Your Best Friend?
I didn’t think so. And then I did. And now I don’t.
Photo by Kirill Vasilev on Unsplash
In my first marriage, my husband and I told each other we were best friends. I felt like such a liar. Was I lying? I wanted it to be true. I really did. Your husband is supposed to be your best friend, right?
But he did things and said things to me that none of my friends ever would have. I remember some of our fights where I was sitting there just sobbing and saying, “You’re not my friend. You’re not even close.” He’d just call me a drama queen.
I remember feeling (somewhat) relieved when some sort of “expert" would say that “Your husband is not meant to be your best friend, ladies. That’s what your girlfriends are there for.”
One even said, “Your husband’s only obligations to you are food, shelter and clothing. Everything else is a bonus.” Fuck you, too, sir. You’re an asshole. My only obligation is to get my ass out of there, because I am his bonus.
But honestly, at the time, I took that man’s words to heart. I used his words to keep me going in my marriage and tell myself that what was happening was okay.
I became fine with him not being my best friend. I didn’t even hope for that anymore. And every time he said things that cut me, I told myself, He’s only obligated to provide for me. And obviously, since I refer to him as my first husband, I left him…after 21 years.
My current husband I love more than I ever thought I could love a man. And I truly felt we were best friends. The reason is that I trusted him with more of myself than I ever have with anyone, ever, even after I decided to never trust anyone with that part of me again.
This is also the reason why I now do not believe your husband or partner can be your best friend.
Because I trusted him with more of myself than I ever have with anyone. He held that in his hands. It was sacred.
Your best friend — you don’t have that fragile piece of the puzzle. That’s why you’re still best friends after 43 years. With something that sacred, when it’s fucked with, the scars are great and deep.
Photo by Yannick Menard on Unsplash
And if you stay together after that one person shattered that fragile piece, and it’s because you truly do love that person, and not out of obligation or because you’re scared to leave, but because you forgive the fact that you couldn’t trust them, that takes incredible strength. That’s a shitload of pain and vulnerability on both parts.
No, you’re not best friends. You’re so much more. There is so much more at stake. Best friends are amazing and soul-deepening. You add the kind of intimacy that comes with being together as life partners and it’s a whole other level.
For an “expert” to simplify it the way he did by saying, “Your husband was never meant to be your best friend,” and, “his only obligation is food, shelter and clothing,” and we should be grateful for anything else — I feel sorry for them.
And I feel sorry for anyone who accepted that and never went looking for more. I hope they somehow got the chance to experience something deeper. As much as it can hurt, I think it’s better than not knowing. | https://medium.com/survivors/can-your-partner-be-your-best-friend-9bf3c53b213b | ['Tracy Busby'] | 2020-10-04 01:53:19.582000+00:00 | ['Life', 'Life Lessons', 'Psychology', 'Relationships', 'Mental Health'] |
The gentle cycle’s hidden menace: Or why how you do laundry matters to the earth | The gentle cycle’s hidden menace: Or why how you do laundry matters to the earth
Micro-plastics in highly unusual places
First it was the tea bags, and now… the gentle cycle?
Researchers at Newcastle University ran tests with full-scale machines to show that a delicate wash, which uses up to twice as much water as a standard cycle, releases on average 800,000 more microfibres than less water-hungry cycles. “Our findings were a surprise,” said Prof Grant Burgess, a marine microbiologist who led the research. “You would expect delicate washes to protect clothes and lead to less microfibres being released, but our careful studies showed that in fact it was the opposite.”
Colour me suitably bleached. Assuming bleach adds colours, which, given the state of laundry research, it just might.
Apparently what makes the difference is the amount of water used. The more water, the worse the effect on clothes, as far as microplastics are concerned.
Well, damn. I don’t own my washing machine. It comes with the apartment I rent. It’s a small, apartment-sized stacked unit. It has one setting for water level. Full. I’m stymied.
Are you, like me, wondering where plastics in the clothes come from?
The clothing industry produces more than 42m tonnes of synthetic fibres every year. The vast majority, about 80%, are used to make polyester garments. Previous tests have found that washing synthetic items can release between 500,000 and 6m microfibres per wash. Because many washing machines lack filters that can remove microplastics from their wastewater, the fibres are carried into water treatment plants and can eventually reach the seas. The particles, which come from a variety of sources, are now ubiquitous in the environment, from the deepest marine trench in the Pacific Ocean to the pristine wilderness of Antarctica. Scientists have found the plastics in organisms at every level of the food chain from plankton to marine mammals.
Gah.
Washing in small amounts of water (or waiting until I have enough dirty clothes for a full load), then, and not using the delicate cycle. What else?
Thankfully, the New York Times came up with some fine eco-laundry tips, and a few scary stats. U.S. households do on average 300 loads of laundry a year. This amounts to 179 million metric tons of carbon dioxide, equal to the total annual energy use of more than 21 million homes.
They say you should use cold water because “about 90 percent of the energy a washing machine uses goes toward heating water.” For reasons that are explained but that I still don’t understand, using cold water counter-intuitively involves water heating (I know, I know) so if your machine has a “tap cold” setting on it, use that.
Mine does, yay!
Not only will you save money using tap cold water to wash most of your laundry, but you’ll help save the planet, too:
One calculation from the cleaning institute, using Energy Star data, estimated that a household could cut its emissions by 864 pounds of carbon per year by washing four out of five loads in cold water.
Using cold water also apparently – bingo, full circle – helps with the microplastics thing. As for drying the clothes, you already know this one: the best way to save on energy and carbon emissions is to air dry as much as possible.
3 Existentialist Lessons to Help You Deal With COVID-19 | 1) Don’t waste time searching for a “meaning”, there isn’t any.
On social media, you surely have encountered people trying to make sense of the situation. Whether it is Earth taking its revenge on polluters, God punishing us for our sins, or Capitalism finally collapsing, everyone seems to come up with their own explanation for the events of these last weeks. You certainly are trying to make sense of the situation yourself, either by adhering to one of the aforementioned ideas, or by coming up with your own. And it is normal to act that way: to quote Camus, “Plagues are a common thing, but you hardly believe in plagues when they fall on your head”[2].
You thus want to understand it and that search for meaning is a perfectly human mechanism — but is it what you should spend time on? No, existentialists would tell you.
Indeed, one of the first existentialist postulates is that there is strictly no sense to our world. God, destiny or whatever you may call it — there are no such things. No superior entity is driving your life and giving it meaning. We, humans, are truly left by ourselves in a world that is thus absurd. Hence, the first existentialist lesson on how to deal with the pandemic is to stop wasting time searching for its meaning, since there isn’t any to anything anyway.
Quite destabilizing, isn’t it? The feeling caused by the realization of the world’s absurdity is what Sartre would call the Nausea — but don’t worry, you’ll overcome it thanks to the second existentialist lesson.
2) You’re completely free, be responsible with that liberty.
Existentialism just freed you from the search for meaning in which you were stuck. But the meaninglessness of the world should not lead you to extremes — such as suicide, which Camus tackles in The Myth of Sisyphus [3]. On the contrary, as explained by Camus in the same book, you should enjoy the fact that this absurdity gives you complete freedom. Indeed, because the world is absurd, you are completely free to act according to your own will!
However, does this mean you should do anything you want? In the present day, disrespect confinement rules, hang out with your friends, and ransack the toilet paper stocks of your local shop? No. True existentialism means acting as you would like others to act: in a way that enables society to function. So, even though you are free to do all the things mentioned previously, you should consider not doing them, as they would cause the virus to spread and shortages with it.
Think of the Kantian categorical imperative: you should act only as if your action was to become a universal rule!
The strength of existentialism is thus to stress how each one of us is responsible for their action. As the existentialist thinker Jean-Paul Sartre puts it [3], “man is nothing but what he does of himself” and is thus “entirely responsible for what he is”. Yes, you are free to act, but it should be in a conscious way. Because by acting in a certain way, one agrees with the values behind their action. The existentialist man is thus also “responsible of all men”: the values he defends through his actions are ones he would accept to become universal.
Hence during this pandemic, you are free to act. But bear in mind that all your actions are impactful. And they reflect the values you defend: either you help the virus to spread, or you fight against it.
3) You have the power to act, use it.
That is why, by confronting us with our responsibility, existentialism might be frightening. The burden of our total liberty can seem unbearable, creating anguish in ourselves. “Anxiety is the dizziness of freedom”, says the existentialist pioneer Kierkegaard. However, existentialism needs to be rather understood as a philosophy of empowerment; something particularly helpful during the current sanitary crisis.
In The Plague, written by Camus during his existentialist phase or “cycle of revolt”, the doctor Rieux indeed keeps trying to heal people that will die from the disease, even though it is meaningless and absurd. He does what he believes to be right and fights for it at all costs. Going back to existentialism at the time of COVID-19 can thus help you find the strength to fight for what you think is fair. Even though you do not need to be as brave as Rieux, there are some small actions you can do. If you want the end of this pandemic, of its deaths, isolation and constant fear, you should act accordingly. | https://medium.com/inspired-writer/dealing-with-coronavirus-three-lessons-from-existentialism-7a1e3865d08b | ['Jeanne Briatte'] | 2020-06-12 11:29:01.467000+00:00 | ['Albert Camus', 'Existentialism', 'Covid 19', 'Psychology', 'Coronavirus'] |
This is One of the Rare Habits That Profoundly Changed My Life | This is One of the Rare Habits That Profoundly Changed My Life
And is responsible for much of my success
Every single day, you see approximately 1,023,872 articles about habits that will make you more successful.
These articles aren’t bad, per se, but most of them miss the fundamental point in writing articles about habits.
Habits in and of themselves will never make your life better. If you become the type of person who adopts habits just to become Mr. or Ms. Habit, then you’ll never move the needle in your life when it comes to your actual goals.
You can use certain habits as a means to an end.
The habit I’m about to share with you has helped me build a life and business I love.
It helped me overcome major challenges in my life. And I credit much of the ‘wisdom’ I’ve accumulated to this habit.
I won’t bury the lede here. Keeping a personal journal has had a more profound impact than most of the other habits I’ve ever tried or adopted.
Why Journaling is So Effective
They say if you want to reach your goals, write them down.
If you want to remember something, write it down.
If you want to discover what’s really going on in your mind, write it down.
I use journaling to serve all three of those purposes.
I don’t know the science behind journaling, but there seems to be something special about the connection between your brain and your hand physically writing something down.
Also, if you’re looking for new ideas or you want to get to the bottom of something that’s bothering you, journaling helps you tap into your subconscious and discover some of the issues that were in your blindspot.
The act of journaling — having to move a part of your body — seems to signal a real effort toward the end you want it to serve. It’s a step above thinking and daydreaming.
And if you can turn it into a habit, you’re subtly telling yourself, “I have committed to doing something.”
Commitments build confidence, self-esteem, and make it more likely to reach whatever goals you have.
Each positive little commitment or habit you adopt, you’re saying “I trust myself.” That’s key. That’s huge. It’s pretty much what self-help boils down to.
How you journal doesn’t matter much but here are some ideas if you’re feeling stuck.
My Journaling Routine
My routine is pretty simple.
Every morning, I write down three things I’m grateful for. I do this because I’m very ambitious and have a hard time being content with my progress. I use this gratitude exercise to realize how many good things have happened in my life. It keeps me grounded — for about a day — then I have to do it all over again to refocus. It helps.
Then I use James Altucher’s idea-generating technique.
Here’s how it works. You write down 10 ideas per day. These ideas can be about anything you want. You can create ideas to improve your own life. You can also create ideas for other people’s lives and businesses. James says he often uses his ideas as a networking technique. He’ll create ideas for others and send them (tactfully) as suggestions.
I usually write ideas for articles, books, and ways to reach some of the goals I have.
You can use this technique to build your “idea muscle.” Most of your ideas will be bad, but some will be good. If you do this every day for a year, you’re bound to have a great idea or two out of 3,650 tries.
For those of you looking for a journaling routine you can use without having come up with everything yourself, there are authors and entrepreneurs who’ve created journals with pre-defined sections you can use to improve your life.
The Daily Stoic Journal
Ryan Holiday is famous for bringing the ancient philosophy of stoicism into the modern mainstream. His book, The Daily Stoic, teaches one lesson per day from stoicism and uses examples from the real world to illustrate them.
What is stoicism? It’s the art of keeping yourself sane in an unfair and chaotic world.
The Daily Stoic comes with a companion, The Daily Stoic Journal, which has an accompanying section for each lesson where you can write down your own thoughts.
If you’re feeling stuck, anxious, or afraid and full of doubt, this is the journal for you.
The Self Journal
The Self Journal, created by Cathryn Lavery and Allen Brouwer, helps you reach your goals and come up with cool ideas.
It provides a systematic approach for both setting goals and tackling them.
It includes items like:
Major goals
Daily targets
Lessons learned
Daily activities
Morning gratitude
Evening gratitude
Daily quote
You can even get a pdf version of the journal for free right here.
These are the ones I’m familiar with, but there are many more you can find online.
Famous Journaling Routines
Julia Cameron, author of The Artist’s Way, created an extremely popular morning routine called morning pages.
Morning pages involve free-writing for three full pages about anything you want. Free-writing that number of pages usually elicits creativity. Also, many have attested to the routine leading to major emotional breakthroughs.
It makes sense. If you’re feeling a little bit down but don’t know why and free-write about it for three pages, something’s going to come up.
Try it and see if you like it. I’ve done it before, but I like my short and concise routine.
Benjamin Franklin — one of the original self-help gurus — created a ‘virtues journal.’ It contained thirteen virtues charted on the page for each day.
He’d focus on one virtue per week and try to maintain the others as well. If he failed to be virtuous in one area, he marked an x on the chart.
In the beginning, the chart was filled with x’s. After a time, there were fewer. He credits the journaling technique for making him a better person:
Tho’ I never arrived at the perfection I had been so ambitious of obtaining, but fell far short of it, yet I was, by the endeavour, a better and a happier man than I otherwise should have been if I had not attempted it.
With Journaling, the Possibilities Are Endless
Certain habits and routines get promoted too much, like journaling and reading, but I don’t mind because they’re life-changing habits that I hope people adopt.
You can structure your journal any way you want. Keep a journal for six months and I bet you’ll improve your life in some shape or form.
Why is it so powerful?
Again, the commitment alone builds credibility with yourself. Also, there’s power in monitoring yourself on a daily basis.
A great example of the power of monitoring — one of the best ways to eat less is to start tracking your food. Don’t even try to change your habits at first, just track what you’re putting into your body and it might inspire you.
The same goes for your finances…
…and your goals.
…your happiness.
…your life.
If anything, journaling helps you address what’s going on in your life. That’s a start. A great start. | https://medium.com/curious/this-is-one-of-the-rare-habits-that-profoundly-changed-my-life-67e4894c344c | ['Ayodeji Awosika'] | 2020-10-08 20:01:34.975000+00:00 | ['Self Improvement', 'Life Lessons', 'Psychology', 'Productivity', 'Advice'] |
My data science template for Python | I’ve been learning data science and AI for the past year, during this time my way of working was to search for the code I needed at every step of my data science projects, copy-paste it and adapt it to my project. I thought it would be really useful for me to have some kind of template containing all the code I could need for a data science project.
In this post I will show my data science template. It is a Python file with most of the code needed for a data science project, structured in a way that makes it super easy to follow through.
Let’s begin with the ending part. You can find this template in my Github:
Now that you have easy access to the code, I’ll explain how it is structured. Keep in mind that I’ll keep updating this template on Github but I won’t update this Medium article, so some parts of what I write here might become outdated.
First of all, I followed the structure for a data science project that you can find in Appendix B of the book Hands-on Machine Learning with Scikit-Learn and TensorFlow by Aurelien Geron (https://amzn.to/2WIfsmk)
After creating an empty file, following the structure outlined in the book and adding most of the text from Appendix B as comments to structure the code, I started filling every part of the document with relevant code snippets (still working on it).
While making it I was participating in the CareerCon 2019 — Help Navigate Robots Kaggle competition. While doing my first tests, I decided to go for the fast.ai strategy of launching a model as quickly as possible and getting a baseline metric. While doing that I tested a random forest model and got a 39% accuracy. Then I started following this template and achieved a 65%!!!
Right now I’m looking to add even more snippets to the template and make it useful for different kinds of data (right now it only has code for tabulated data).
Let’s dive deeper into the code.
This is intended to be a summary of the structure, commenting on the most important parts:
As always, we’ll begin with the necessary imports:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
sns.set()
Most of them are the typical data science imports. The ones worth talking about are seaborn, a data visualization library that works on top of matplotlib, adding extra functionality, different kinds of plots and overall prettier visuals, and tqdm, a library that gives you progress bars so you can see how long your functions are taking to run.
A quick tqdm example
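If you haven’t used tqdm before, here is a minimal sketch of how it is typically wrapped around a loop (the DataFrame and column name are just placeholders):

from tqdm import tqdm

results = []
for value in tqdm(df['column'], desc='Processing rows'):
    results.append(value * 2) #Any per-row work goes here; tqdm prints a live progress bar while the loop runs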
Then we load our data. There can be many variations of this step depending on how your data is structured; we won’t cover them in depth here.
df = pd.read_csv('file.csv')
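Purely for reference, a few common variations you might reach for — the file, sheet and column names below are only illustrative:

df = pd.read_csv('file.csv', sep=';') #A different delimiter
df = pd.read_csv('file.csv', parse_dates=['date_column']) #Parse date columns while loading
df = pd.read_excel('file.xlsx', sheet_name='Sheet1') #Excel files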
Now we visualize our data in order to get a quick glimpse of what we have in our hands:
#Visualize data
df.head()
df.describe()
df.info()
df.columns
#For a categorical dataset we want to see how many instances of each category there are
df['categorical_var'].value_counts()

#Exploratory Data Analysis (EDA)
sns.pairplot(df)
sns.distplot(df['column'])
sns.countplot(df['column'])
Example of a pairplot result
Data pre-processing
The first step after loading and visualizing the data is to pre-process it and give it an appropriate format for passing it to the machine learning models.
First, let’s check for errors in our dataset and fix them: NaNs, infinite numbers, duplicated values, etc.
#Fix or remove outliers
plt.boxplot(df['feature1'])
plt.boxplot(df['feature2'])

#Check for missing data
total_null = df.isna().sum().sort_values(ascending=False)
percent = (df.isna().sum()/df.isna().count()).sort_values(ascending=False)
missing_data = pd.concat([total_null, percent], axis=1, keys=['Total', 'Percent'])

#Generate new features with missing data
df['feature1_nan'] = df['feature1'].isna()
df['feature2_nan'] = df['feature2'].isna()
#Also look for infinite data, recommended to check it also after feature engineering
df.replace(np.inf,0,inplace=True)
df.replace(-np.inf,0,inplace=True)

#Check for duplicated data
df.duplicated().value_counts()
df['duplicated'] = df.duplicated() #Create a new feature

#Fill missing data or drop columns/rows
df.fillna(0) #Or whichever fill value/strategy suits your data
df.drop('column_full_of_nans', axis=1)
df.dropna(how='any')
Then we pass to a feature engineering phase. I’m not going to copy any code from the template here because this section will be totally different for every project you work on. The template contains all the feature engineering snippets I’ve used in previous projects, but of course these are only a few examples, since the amount of different feature engineering that can be done is almost infinite and will vary completely depending on your project and kind of data.
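That said, to make the idea concrete, here is a minimal, hypothetical sketch of the kind of generic transformations that show up in many projects (the column names are just placeholders, not from any specific dataset):

#Decompose a datetime column into simpler numeric features
df['timestamp'] = pd.to_datetime(df['timestamp'])
df['hour'] = df['timestamp'].dt.hour
df['dayofweek'] = df['timestamp'].dt.dayofweek

#Simple ratios between existing numeric features
df['feature_ratio'] = df['feature1'] / (df['feature2'] + 1e-6)

#One-hot encode a categorical variable
df = pd.get_dummies(df, columns=['categorical_var'])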
Model selection and evaluation
After data pre-processing is done and we have the data in the required format, we can start working with models.
We must define a validation strategy like K-Fold Cross Validation or dividing the dataset into train/validation sets. Depending on your dataset and your objectives you might opt for one option or the other. Here’s the code for some of them:
#Define Validation method
#Train and validation set split
from sklearn.model_selection import train_test_split
X = df.drop('target_var', axis=1) #Note: with inplace=True, drop() returns None, so don't assign its result
y = df['target_var'] #The column we want to predict
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size = 0.4, stratify = y.values, random_state = 101)

#Cross validation
from sklearn.model_selection import cross_val_score
cross_val_score(model, X, y, cv=5)

#StratifiedKFold
from sklearn.model_selection import StratifiedKFold
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=101)
for train_index, val_index in skf.split(X, y):
    X_train, X_val = X.iloc[train_index], X.iloc[val_index]
    y_train, y_val = y.iloc[train_index], y.iloc[val_index]
Finally, we jump to the model fitting section, where we can try many different models, evaluate their performance and compare them to one another so we can choose the most promising ones. The template includes implementations of many different algorithms. I’m not gonna show them all here since that would be 100+ lines of code. However, I will show the implementation of Random Forest as an example, since it is one of the most versatile algorithms used in Machine Learning.
#########
# Random Forest
#########
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(n_estimators=200, random_state=101, n_jobs=-1, verbose=3)
rfr.fit(X_train, y_train)

#Use model to predict
y_pred = rfr.predict(X_val)

#Evaluate accuracy of the model
acc_rf = round(rfr.score(X_val, y_val) * 100, 2)

#Evaluate feature importance
importances = rfr.feature_importances_
std = np.std([tree.feature_importances_ for tree in rfr.estimators_], axis=0)
indices = np.argsort(importances)[::-1]
feature_importances = pd.DataFrame(rfr.feature_importances_, index = X_train.columns, columns=['importance']).sort_values('importance', ascending=False)
feature_importances.sort_values('importance', ascending=False)

plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices], yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
We should decide what performance metrics we will use to evaluate the model. There are many different metrics and, as always, depending on your problem you might choose one or another, or maybe several of them. I won’t post all of the template’s metric code here since there is so much of it.
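Just as a quick illustration (scikit-learn offers many more), computing a couple of common metrics on the validation predictions could look something like this; which ones apply depends on whether your problem is classification or regression:

from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error

#Classification problems
accuracy_score(y_val, y_pred)
f1_score(y_val, y_pred, average='weighted')

#Regression problems
mean_absolute_error(y_val, y_pred)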
To end up with a great algorithm, we can add hyper-parameter tuning on top of the chosen algorithms. Here’s an example of doing so by using a Grid Search algorithm.
from sklearn.model_selection import GridSearchCV
param_grid = {'C':[0.1,1,10,100,1000], 'gamma':[1,0.1,0.01,0.001,0.0001]}
grid = GridSearchCV(model, param_grid, verbose = 3)
grid.fit(X_train, y_train)
grid.best_params_
grid.best_estimator_
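Once the search has finished, the tuned model can be used like any other estimator; a small usage sketch, reusing the validation split from earlier:

best_model = grid.best_estimator_
y_pred = best_model.predict(X_val)
best_model.score(X_val, y_val)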
Conclusion
With this I’ve covered all the steps you’ll need for most of your data science projects. Every section should be expanded with code to treat your specific dataset and you should use your expertise to decide which steps you should follow and which steps you shouldn't. | https://medium.com/saturdays-ai/my-data-science-template-for-python-59a67cba4290 | ['Albert Sanchez Lafuente'] | 2019-04-14 21:16:54.029000+00:00 | ['Kaggle', 'Machine Learning', 'Data Science', 'Python', 'Technology'] |
The Mystery of Madness Throughout the Ages | Madness can never be truly understood, and madness is something that is still very mysterious, unique, and belonging to the other world, a world that only few of us have access to.
By Ekaterina Netchitailova, PhD
It was Michel Foucault, a French philosopher, who claimed that madness is a social construction, and that how we look at it is a direct result of the social forces at any given time.
Thus, during the Renaissance period, according to Foucault, madness was sometimes perceived as possession of a different kind of wisdom, where the mad were viewed as interesting people, deemed of admiration by some artists.
The famous painting by Bosch, The Ship of Fools, or The Satire of the Debauched Revelers, clearly shows this different view on madness.
In it we can see the debauchery caused by some distinguished members of society. The two figures in front are a Franciscan friar and a nun, quite unthinkable at the time of the painting (1490–1500).
But this painting, in particular, has an additional meaning. The ship itself holds the biggest symbolism. Because it was on this kind of ship that the mad were put and sent into the fools’ paradise (into nowhere) in the Middle Ages.
In this painting, however, there is only one fool, who is put there with a purpose: to remind the viewers that it is the ship of fools indeed which is depicted. But by placing other characters, so called ‘sane’ members of the society in it, Bosch made his view on madness quite clear:
It is not the mad who should be sent away or treated but all the hypocritical members of the society who harm others in the name of God.
Moving from the Renaissance to the Age of Enlightenment, the view of madness started to change. The Age of Enlightenment was characterised by the predominance of reason, where all manifestations of weirdness started to be frowned upon. During the Enlightenment mad people started to be institutionalised, put away in secure facilities but still depicted as curiosities, where the public seeking entertainment could get it through the visits to asylums. In asylums the ‘normals’ could watch the mad, laugh at them, and think perhaps that they were lucky to have escaped such a predicament.
The art of that time still shows though the dilemma of madness, where the artists can be seen to reflect on the existential question: “Should we lock away these people, or should we instead admit the possibility of being different, look at it as something mysterious, something which can never be understood?”
One such painting by Tony Robert-Fleury depicts this puzzle of the question of madness, called La Salpêtrière (1795):
In this painting we can see a famous clinician, Philippe Pinel, who was a chief physician at the famous Hospice de la Salpêtrière, an asylum for the insane in Paris. Philippe Pinel advocated a more humanistic approach to the treatment of psychiatric patients, and there are some rumours that he even managed to liberate some inmates from their ordeal.
In this painting the artist shows how Pinel orders the chains to be removed from a patient, which also demonstrates the growing power of psychiatry. Give the chains to one kind man in power, and he will liberate the oppressed. Give it to someone who wants to abuse the same power, and you are chained for life.
Moving back to the modern age, we don’t see psychiatric patients in physical chains anymore, but the power is still strongly in the hands of the psychiatrists, who can, by the act of simple words, deny a person of his freedom, independence, and joy of living. There are no physical chains, but there are still walls: walls of the psychiatric hospitals, walls of the forced injections, walls of diagnoses which create stigma and put a mental burden on one’s mind.
The story of modern psychiatry really begins in the middle of the last century, but its roots go back to the age of reasoning when madness was reduced by scientists to an ‘object’ of mind — an object which could be studied, analysed, and as some of them claim nowadays, even understood.
We can see it in the famous painting by Pierre Aristide André Brouillet (1887), called A Clinical Lesson at the Salpêtrière. In it we can witness a clinical demonstration given to postgraduate students by the famous neurologist Jean-Martin Charcot.
The patient, who is being studied, is depicted as an object and nothing more. She is there for demonstration purposes, reduced from being a person to an object of curiosity, an artefact on display. Her name is Blanche, and it is only the field of academic studies called ‘mad studies’ which calls for attention to her. The academics in this area want to know about Blanche, they are not interested in what Charcot tries to say.
Modern psychiatry reduces all humans coming under its attention to these objects of study, entities which are lost in the background behind the diagnoses assigned to them. Nowadays they are called ‘bipolar’, ‘schizophrenics’ or ‘schizoids’ and it is only when the individuals behind these labels start to speak that we see a person. We can learn then that every ‘bipolar’ is a different ‘bipolar’, that ‘schizophrenia’ and ‘bipolar disorder’ are almost the same thing, and that behind the diagnoses there are fascinating lives, spiritual journeys, but also confinements to psychiatric hospitals for life.
Psychiatry reduces us to objects of study, to victims of some mysterious ‘brain disease’, by its narrative of ‘mental illness’ which claims that “mental illness is like any other illness.” Not only does this make madness a purely scientific domain, deprived of its mystery, it also makes it extremely boring.
But madness is never boring. And it shouldn’t be boring. By reducing it to purely biological explanations (for which psychiatry has failed so far to provide any concrete proof), it removes the personal narratives behind it. It reduces Blanche to an object, it gives Beethoven and Gogol a diagnosis, it sees the whole oeuvre of Van Gogh as a battle of ‘mental illness’.
But as a mad person myself, as a person who doesn’t believe in the narrative of ‘mental illness’, I am interested in Blanche. I want to hear her story. I see the paintings of Van Gogh as the paintings of a true genius. I hear in the music of Beethoven the glory of an unusual and incredibly beautiful mind. I read Nietzsche with curiosity at the marvels of his unique mind and perception, and I devour Gogol for what he truly was, a talented writer, one of the best.
They say today that these artists would have a better life if they were on the medications that are available now — that they would enjoy better health if they had access to modern ‘treatment’.
But it is a big question whether they would be better off. From my personal experience, I know that under the medication that psychiatrists prescribe, one can’t really function, let alone create, write or paint. All the artists I know actively refuse the treatment on offer. Or they actively re-negotiate the terms and conditions of its use.
Would these marvelous people have left us the works of their genius minds if they had been under a heavy dose of ‘antipsychotics’?
They would probably go less mad, but I doubt that we would then enjoy so many paintings of Van Gogh, or listen to Beethoven.
It is without a doubt that many people say medication made their lives better. Some welcome even their ‘diagnoses’ because it gives some explanation, it gives a reassurance.
But behind the diagnoses, and the narrative of ‘mental illness’, continues to lie the mystery of madness. Psychiatrists can resort to their ‘medical’ explanations as much as they want, but the truth remains the same as in the previous centuries.
Madness can never be truly understood, and madness is something that is still very mysterious, unique, and belonging to the other world, a world that only few of us have access to. | https://medium.com/mad-in-america/the-mystery-of-madness-throughout-the-ages-29370ad1c485 | ['Mad In America'] | 2019-01-02 22:21:14.041000+00:00 | ['Bipolar', 'Psychology', 'Depression', 'Medicine', 'Mental Health'] |
Middle East tech: What does 2019 have in store? | via ZDNet | Jawad Abbassi, head of Middle East and North Africa region, GSMA
The Gulf states are set to be global leaders in the deployment of 5G networks, with many expected to launch commercial 5G services in 2019. According to a recent GSMA Intelligence report, 5G will account for 16 percent of mobile connections across the six Gulf states by 2025 — slightly ahead of the global average.
Early 5G offerings in the region are likely to focus on enhanced mobile broadband services and 5G-based fixed-wireless — especially in regions with limited fiber penetration.
However, there is also an opportunity to use 5G to drive developments in immersive reality, e-sports, and enhanced in-venue digital entertainment.
On the enterprise side, 5G will enable operators and governments to collaborate on smart city initiatives, focused, for example, on addressing population-related challenges. Oil and gas, mining, and tourism — each particularly relevant to the region’s economy — could also benefit from 5G networks.
Key to success will be the availability of the right type of harmonized 5G spectrum. The recent decision by the Arab Spectrum Management Group (ASMG) to release the use of the 3.3GHz to 3.8GHz band for mobile broadband was therefore an important step in accelerating 5G rollout across the region.
Encouraging investment at home and abroad
Sevag Papazian, partner at Strategy&, part of the PwC network
Last year, the region experienced several developments, especially in terms of setting the foundations of its national digital transformation. Some examples include:
Investments in infrastructure — Saudi Arabia deploying fiber broadband to more than 700,000 new households; 5G deployment in Saudi and UAE. In Saudi, Al Khobar was the first city in the region to test a 5G network.
Investments in talent — Misk Foundation’s ‘Saudi codes’ program training and Dubai’s ‘One million coders’ initiative. The Hajj hackathon has broken the Guinness World Record for the highest number of software developers in a single location.
Government digital transformation — Abu Dhabi launched the TAMM platform to offer omni-channel government services across digital and physical channels. Several end-to-end journeys are being redesigned to improve the user experience.
However, to enable large-scale socio-economic transformations, the region needs to see inorganic growth by having the large players invest in the region.
The region is investing abroad, for example, the Vision Fund — a partnership between Saudi’s Public Investment Fund and Softbank — has invested in more than 65 companies including $4.4bn in WeWork, and $2.5bn in Flipkart. But the region needs to use such investments to establish capabilities in-country, to expand operational, R&D and innovation capabilities.
There were initial discussions last year with large tech multinationals, such as Google and Amazon. These have slowed down lately because of the geo-political situation. They will have to resume, as the region needs new types of capabilities that can help it step up services and solutions at large scale, and help drive the innovation agenda.
Dawn of digital payments
Racha Ghamlouch, Innovation and Business Adviser
2018 saw the dawn of online payments in the UAE, Saudi and Egypt. Even countries like Morocco and Jordan are witnessing a fintech awakening.
Financial technology sandboxes in Abu Dhabi, Manama, Dubai, and Doha have allowed for experiments with the possibilities and limitations of financial technology, easing regulators into legalizing previously banned services.
Global partnerships have also been aiding this growth, such as: Dubai’s Fintech Hive’s partnership with Cyberport, Hong Kong; and Bahrain’s BEDB partnership with the Maharashtra government in India.
While Saudi Arabia doesn’t have a similar sandbox, Mada — governed by Saudi Arabia’s Monetary Authority — has been working to get Apple Pay into the kingdom, which is already launched in the UAE, and has launched a Mada-supported payment gateway to ease previously banned online payments.
In turn, this move has stimulated the private sector. Jordan’s Al Ahli bank fintech accelerator and Arab Bank launched a fund and public API, followed by UAE’s Emirates NBD launch of a public API.
Emirates NBD announced support for Fitbit and Garmin Pay, shortly after Google Pay launched in the UAE supported by UAE’s Network International, which in turn now supports AliPay, an indicator of the increased Chinese footfall in the country.
Egypt’s tech sector is being revitalized. Its central bank has rolled out support for online payments via the locally issued Meeza card, followed by a supporting payment solution. Egypt is also getting its own Startupbootcamp fintech accelerator.
The real relief the region has been waiting for is P2P payments: Saudi Telecom Company’s (STC) STCPay is a digital payment and P2P wallet, a regional first. However, the real winner is Careem: by launching CareemPay, which allows users to send P2P credit, and given its mass scale, it is effectively the first cross-border P2P wallet in the region.
The competition is fierce and the players are ready for 2019, so it will be exciting to watch.
AI, Cloud and plugging the skills gap
Jaime Galviz, COO and CMO at Microsoft Middle East and Africa
Over the past year, we’ve seen tremendous growth of intelligent cloud and AI solutions across the region. In fact, it’s predicted that AI could further increase the UAE’s GDP by $96bn by 2030.
However, 2018 was the year that not only demonstrated the infinite applications for AI in the Middle East but also highlighted the need for more qualified workers in the field. Indeed, 2018 was also the year that showed us how advanced technologies are creating new and different jobs, rather than eliminating jobs as many feared.
According to recent research conducted by IDC and Microsoft, cloud computing will potentially generate more than 515,000 jobs across key markets in the Middle East and Africa between 2017 and 2022, and these are not confined only to the IT profession.
Moving forward into 2019 and beyond, we must continue to take an active role in equipping the region with the skills needed to fill these jobs and evolve along with the new world of work.
This goal requires collaboration at national and regional levels to encourage governments and schools to provide all students with access to computer-science education to ensure that they are adequately prepared for jobs of the future.
For example, in the UAE, the One Million Arab Coders initiative is helping a million young Arab programmers develop digital and coding skills in areas like AI, robotics, cognitive and biological sciences, and programming.
Recognizing the unprecedented opportunity for digital transformation in the region, in 2018 we also announced plans to open data centers in Dubai and Abu Dhabi, the first in the Middle East, empowering organizations, governments, and businesses to achieve more.
This announcement marked the second data center expansion for Microsoft in the Middle East and Africa in less than a year. We see enormous opportunity in the region for cloud technology to be the key driver of economic development, while providing sustainable solutions to many pressing issues such as youth employability, skills development, education and healthcare.
We will continue to work with governments and organizations across the region to equip the workforce with the skills needed to accelerate digital adoption, and we are excited about the role these new data centers will play in this transformation.
Middle East’s startup scene continues to expand and mature
Christopher Schroeder, co-founder Next Billion Ventures and author of Startup Rising: The Entrepreneurial Revolution Remaking the Middle East
This has been a fascinating year in startups in the Middle East. More money has entered the early stages with angel networks like Dubai Angels expanding and deploying, and 500 Startups closing their fund and aggressively investing.
The region’s anchors, Wamda, Beco, MEVP, have deployed most of their capital and are raising and warehousing in parallel; STV (Saudi Telecom Ventures) has made a big splash in the size of their ability to fund later stages, and serious investments, most recently in Careem, Unifonic and Vezeeta, among others.
Rapidly growing companies, like Property Finder and Swvl in Egypt, have found interest from global investors. Growth capital — B rounds and later — remain a need. Saudi Arabia, always a coveted market, is the great question as some very interesting startups, and investment capital, are rising there as well.
Entrepreneurs have become more sophisticated, based on five to seven years’ experience and the combination of greater access to the newest technologies in blockchain and AI, solving problems not only for the region but for nearby rising markets.
And investment is crossing borders, such as Wamda’s investment in the Nairobi logistics tech company Twiga; not only are these markets growing, but such companies will also seek opportunities in the region’s markets with time.
Similarly, China has begun to look closely at tech startups in the region. E-commerce juggernaut Jolly Chic has been of significant value as a customer of, and investor in, Fetchr, and Chinese venture capital has come to tour the UAE and more. Beyond Amazon’s acquisition of Souq, AWS and other cloud providers have made real inroads in the region.
The story remains success breeding success and the significant market and massive mobile penetration attracting investment from within the region, with more global tech companies exploring ways to enter.
Personality Traits, Introverts, Extroverts and Everything in Between | Personality Traits, Introverts, Extroverts and Everything in Between
How to build your own recharge ritual and learn to choose yourself
From a young age, we are taught how to be kind, how to love and how to show up… for everyone but ourselves. We are taught that putting ourselves first is selfish. For some reason we are not taught to build a relationship with ourselves, to get to know yourself so deeply that we know what we want and more importantly, what we need.
At the age of sixteen we are asked to make a decision on what we want to do for the rest of our lives, and not knowing is not an option. This is usually the first time we are asked to make a decision for ourselves, by ourselves, that will shape the rest of our lives — and wow, what a heavy decision it is.
I see the world in a fundamentally different way from other people, for many reasons. What people say and how they interact has always intrigued me. I have known from an extremely young age what my passion is, what I need as a person to thrive and recharge. Still, being headstrong and adamant, at the age of sixteen I had to go against everything I had been taught and fight for my choices, and for myself. I had studied design and psychology from a young age, in and out of school, and wanted to go into design as a career, which at the time I didn’t know would progress into UX - but my school had other ideas. I had to have meetings upon meetings with my parents and my school to fight to do the subjects that I wanted to do, as they saw it as an unstable career… if I even managed to get a job at all.
It’s worth adding that I didn’t have a bad school; however, they are taught to make sure that the students are vaguely equipped to go into the world and have a fighting chance. This is a lesson to be learnt — most people that try to stop you in life have their own set of reasons. Realistically, they should have done their research on me and the career I knew I was going into, and educated me on the statistics and work required to succeed instead of brushing it off as high school arrogance and ego. This wouldn’t have changed the outcome for me, but it would have for a lot of my fellow students.
I was brought up with strong female and male role models; my Mum and my older brother raised me until my teens, when my Dad* stepped in and continued to raise a headstrong female without the constraints of gender stereotypes. I was well equipped with the skills I needed to know myself, to know what I wanted, what I needed and know that I was able to fight for it. And no, I didn’t have a ‘perfect’ childhood, but the resilience in my role models through uncertain times taught me to be stronger and love myself no matter how hard it might seem.
Okay, so I want to add that the Dad mentioned is technically my step Dad, or as I like to say… my chosen Dad. I am a firm believer that DNA doesn’t give you the right to be in someone’s life. If someone is a toxic influence or detrimental to your self-love, they don’t deserve you. Likewise, if someone is a genuinely amazing person and a positive influence, you can choose them. If you learn one thing when learning to love yourself, let it be that.
A mother who radiates self-love and self-acceptance actually vaccinates her daughter against low self-esteem - Naomi Wolf
Self-worth comes from within and we cannot rely on others to validate us, but self-love needs to be nurtured at a young age to blossom. It takes time to realise your worth for yourself, not just because someone else has said it. Self-love means getting to know yourself as well as you know other people — which may seem silly but you would be surprised how much there is that you don’t know or realise about yourself.
When you take time to replenish your spirit, it allows you to serve others from the overflow. You cannot serve from an empty vessel. - Eleanor Brown
One of the most important things to remember when you are learning self-validation is that, most people are faking it. Yep, that’s right, most people aren’t as confident as you see them to be. This is one of the main reasons that you can’t compare or validate yourself against other people — you are only seeing what they are showing.
Personality traits
According to psychologists, there are five main underlying traits that define personality, including:
Openness — Which indicates how open-minded a person is.
— Which indicates how open-minded a person is. Conscientiousness — A person scoring high in conscientiousness usually has a high level of self-discipline. These individuals prefer to follow a plan, rather than act spontaneously.
— A person scoring high in conscientiousness usually has a high level of self-discipline. These individuals prefer to follow a plan, rather than act spontaneously. Introversion/Extraversion — These personality traits cover how outgoing, talkative and energetic, or reserved and solitary a person is and how they recharge and relax.
— These personality traits cover how outgoing, talkative and energetic, or reserved and solitary a person is and how they recharge and relax. Agreeableness — This trait usually indicates how warm, friendly, and tactful a person is.
— This trait usually indicates how warm, friendly, and tactful a person is. Neuroticism — A person who has a high level of neuroticism is more likely to be moody and to experience such feelings as anxiety, worry, fear, anger, frustration, envy, jealousy, guilt, depressed mood, and loneliness.
Personality traits are, for the most part, categorised into introversion and extroversion. The thing to remember about introversion and extroversion is that it’s not black or white, one or the other — it’s a spectrum in which you can sit anywhere in-between. There are plenty of tests you can take online to tell you if you are an introvert or extrovert, remembering that these are very generalised and don’t show the spectrum. For example, the Psychologies test has 14 basic questions to answer — I didn’t really connect with a few of them and just ended up picking answers randomly, and at the end was told I am more of an Introvert.
Some of the traits associated with being an introvert and extrovert are:
Extrovert
Recharges and gains energy in social situations
Makes quick impulsive decisions
Can be seen as outgoing and enthusiastic
Thrives in a team setting
Introvert
Enjoys spending time alone to recharge
Thinks before speaking and acting
Can be seen as more reserved
Prefers working independently
I am technically an extroverted introvert, I have strong traits from both ends of the scale. These traits apply to everyone who isn’t in my select group of friends and family that actually recharges me. Some of these traits include:
Generally finding people both intriguing and exhausting — in equal measures
Interactions with new people in new situations completely drain me mentally and physically
I am very selectively social
I have absolutely no interest in trying to stand out in a crowd, and not because I am shy
People tend to assume I am an extrovert
I constantly feel the need to do something
I need nature to feel grounded
I see the world in a fundamentally different way to other people — or so I am told
Self-care routine
Building a personalised self-care routine is the most important thing that we just aren’t taught to do at a young age. In fact, for a lot of people, they are told that they shouldn’t ‘spend so much time alone’ or ‘focus so much on their friends’. Most people are taught a routine, but it’s someone else’s and doesn’t work for them. As an example I hate running, like seriously… hate! But as a young teen, that’s what I was taught will help me recharge, as that’s the experience other people have.
The way that I am means that my brain doesn't stop; I don't shut down my work self at the end of the day. I don't stop feeling driven when I am at home 'chilling'. I can't just switch off. My brain is always firing 1000 thoughts and it's a lot. When I am around people my brain is on fire — its attention is drawn to micro-expressions, body language, word analysis, taking in every morsel of visual stimulus… as well as being constantly over-aware of all of my surroundings. While being so intently focussed on what's directly in front of me, my brain consumes itself with simultaneously tracking how close people are, what's going on around me and the surrounding structures. Sometimes I struggle to hold a conversation or even hear someone talking — my brain is simply overwhelmed by the number of stimuli.
Breathe
I was around 17 when I passed my driving test and this for me was a big step towards finding my self-care routine. It opened up experiences I couldn’t have before and I quickly found that one of the things that made me recharge was being alone, truly alone.
I found my happy place, my solitude, the place I could breathe again. I found this about a 20 minute drive from where I lived and visited it regularly for around 6 years.
Over the years I added a few places, mostly spots to take my labrador, Bracken — and then I moved to Wales. Well, Wales is not short of blissful solitude. My favourite of which is Moel Famau in the colder months, pictured here. You can walk for miles or drive to the top car park, walk 5 minutes and find a bench right on the edge of a hill where you can sit and watch the world pass by.
Things change
I have always been a person who doesn’t have a large group of friends — and that’s fine. I always end up with single friends from different aspects of my life, and they are mostly male. My personality just seems to mesh more with the male personality and always has, it may be because I grew up with brothers or that’s just the way I am. This seems like an issue to a lot of people — even in my school reports from primary school my friendships were mentioned as being odd and not with girls which seems to stem from the strong gender role stereotypes forced upon people. I am inclined to believe this even more as my female friends have this ‘male’ persona as a strong part of their personality.
I can say with ease and contentment that I am extremely lucky with the few true friends I have, and this changed me as I grew up. I was always the type to need time on my own to recharge, finding excuses to avoid social interactions — which played a big part in my relationships. I find adults completely draining mentally and physically. While this hasn't changed for the most part, there are certain people I can breathe around. When I say 'can breathe' that's literally how I feel around these truly amazing people.
I was never the type to go out drinking all of the time, and honestly, this is mostly because of the people I was with. For me any experience being a positive one, including a night out, 100% depends on the people I am with. Around these people, I am honestly a completely different person, with a completely different energy. It’s not about how long I have known someone or how much time I spend with them, or even how much we have in common — it’s solely the connection. Once I connect to people that connection will always be there, it doesn’t depend on how often I see them, how many good or bad times we have had or how much time I spend talking to them each day.
Learn to breathe
We all work differently, relax differently and recharge differently. The things that make us happy may not be the things that recharge us. The people we love may not recharge us and that’s okay! You have to figure out what it is that recharges you, this may mean trying something new. Be okay with the fact that the things you do to take care of yourself may change, they may evolve, they may include people or not.
If you don’t know what makes you happy — why not try some self-care tips from Women’s Health for 30 days to build yourself a new routine. You will soon realise what works for you and what doesn’t.
Try 30 days of improving your mental and physical health by
Drinking some water first thing in the a.m.
Write down five things every day that didn’t totally suck
Meal plan! Make a menu for the week
Try that new yoga/gym/boxing class
Have a mini dance party every day!
Move for at least 30 minutes a day
Sleep 8 hours a day
Start a journal and write down everything that happened that day
Sit up straight (no, really)
Plan a workout date with a friend
If you don’t like kale… don’t eat kale!
Exercise your right to say no!
Learn what helps you breathe and choose yourself | https://medium.com/curious/personality-traits-introverts-extroverts-and-everything-in-between-6ad25beec668 | [] | 2020-11-28 03:55:16.655000+00:00 | ['Mental Health', 'Self Love', 'Mental Health Awareness', 'Personality', 'Psychology'] |
Dear writers, stop pinning your stories on Medium! | Dear writers, stop pinning your stories on Medium! I noticed while browsing through some profiles I am unable to find the newest pieces. When a writer posts new content, Medium provides a notification to the reader on their homepage. When the reader clicks on a writer’s picture, it leads the reader to the writer’s profile. However, the reader has to scroll past many pinned stories until the user finally reaches the article published a few hours ago. The more pinned stories you have, the more a reader struggles to find your newest content when viewing your profile. I encourage writers not to pin more than one story, whether it is your most-read story or the one you are proud of the most. Remember, writers should provide a simple user interface for readers to find the newest content. Keep it minimalist and clean!
Photo by Philipp Berndt on Unsplash | https://medium.com/illumination/dear-writers-stop-pinning-your-stories-on-medium-e1bba4eed668 | ['Aj Krow'] | 2020-12-28 08:12:34.043000+00:00 | ['Self Improvement', 'Writing', 'Advice', 'Ideas', 'Design'] |
Mistakes You Shouldn’t Make While Pitching via Email | Mistakes You Shouldn’t Make While Pitching via Email
Avoidable mistakes that made you lose a potential customer
According to DMR Business Statistics, we send and receive 121 emails every day — that's about 15 emails for every hour of an eight-hour working day. Indeed, the number is higher if a person has more authority and responsibility in an organization.
This is why attracting someone's attention with an email sales pitch has become harder and harder, especially nowadays, during the COVID-19 pandemic, when everyone is desperate to make sales and email remains the most reliable option for engaging with your audience.
Every morning I come to the office to at least 40–50 new emails. These can be newsletters from publications I subscribe to, meeting invitations, internal email correspondence, invitations to events, awards, or other activities, and some are also random sales pitches from other companies trying to offer their solution to my company.
I noticed that COVID-19 had brought more of the sales pitches than usual. No doubt, the easiest way to connect now is via email. Also, many companies lose clients due to budget cuts, and they still need to sustain their business. Thus, they are looking for new clients. All normal, as expected.
However, what is astonishing is the quality of the email sales pitches that I’ve been receiving. By all means, I’m not an expert in pitching, but here are some examples of what to avoid. | https://medium.com/better-marketing/mistakes-you-shouldnt-make-while-pitching-via-email-5ef502e8eef6 | ['Edgaras Katinas'] | 2020-10-21 19:13:44.755000+00:00 | ['Marketing', 'Email Marketing', 'Sales', 'Startup', 'Enterpreneurship'] |
A Basic Perceptron Model Using Least Squares Method | Just like the billions of neurons that make up the human nervous system, the perceptron is the basic unit of artificial intelligence. Every thought, action, emotion or decision that we make reflect the activities of the nervous system which is a master system that controls and communicates with every part of your body. Biological intelligence relies on this complex mechanism of billions of neurons organized in different layers that communicate with one another through electrical and chemical signals.
To understand how biological intelligence is produced, it's important to understand how the basic building block called neuron functions.
The biological neuron has 3 main functions:
Sensory input. The neuron uses its dendrites (receptive regions) to monitor changes occurring both inside and outside the body. The information gathered is called sensory input.
Integration: The cell body system processes and interprets sensory input and decides what should be done at each moment, a process called integration. If sensory input is below a certain threshold, the sensory signal is not activated.
Motor output. If the sensory input signal is above a certain threshold, the neuron produces an output that is transmitted via synaptic gaps by neurotransmitters. The neurotransmitters will either excite or inhibit a nearby neuron.
Similar to biological intelligence, artificial intelligence is produced by a complex network of basic building blocks called perceptrons. The perceptron functions using the same principle as a neuron:
Input
Integration
Output
We shall focus here on how to build a basic perceptron model using python. This knowledge is fundamental for understanding more advanced models such as neural networks, which are complex systems of thousands and billions of perceptrons, with the capability of producing artificially intelligent systems such as self-driving cars.
Basic Perceptron Model
Python's sklearn package contains several classifiers, such as the Perceptron, support vector classifiers, logistic regression, decision trees, random forests, and KNN classifiers. While it is important to use these ready-made machine learning algorithms, every beginner in the field must master the basics of how these algorithms work. A good place to start your journey into neural networks and deep learning models is by considering the perceptron.
In this example, we build a simple perceptron model in which the learning weights are calculated using the least-squares method.
The perceptron model has the following four main steps:
Training
Activation
Quantization
Prediction
X represents the attribute or predictor matrix, and y represents the class. We shall illustrate our model using the Iris dataset. The dataset contains the following attribute information:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
The three classes are
Iris Setosa
Iris Versicolor
Iris Virginica
For simplicity, we perform binary classification. We use the two flower classes Setosa and Versicolor for practical reasons. However, the perceptron algorithm can be extended to multi-class classification — for example, through the One-vs.-All technique.
Model Implementation Using Python
This code applies the perceptron classification algorithm to the iris dataset. The weights used for computing the activation function are calculated using the least-squares method. This method is different from Rosenblatt's original perceptron rule, where the weights are updated iteratively, sample by sample. For more information about the implementation of Rosenblatt's perceptron algorithm, see the following book: "Python Machine Learning" by Sebastian Raschka.
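For reference, the least-squares weights used in the fit method below come from the normal equation w = (X^T X)^-1 X^T y. A quick sketch with hypothetical toy data shows that this closed-form solution agrees with numpy's built-in least-squares solver:
import numpy as np

# hypothetical toy data: 4 samples, 2 features
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([1, -1, 1, -1])

# closed-form normal equation: w = (X^T X)^-1 X^T y
w_normal = np.dot(np.linalg.inv(np.dot(X.T, X)), np.dot(X.T, y))

# numpy's least-squares solver returns the same weights (up to numerical precision)
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w_normal, w_lstsq)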
Import Necessary Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
Define Perceptron Classifier Object
class Perceptron(object):
"""Perceptron classifier using least-square method to calculate weights.
Attributes
-----------
w : 1d-array
Weights after fitting.
"""
def fit(self, X, y):
"""Fit training data.
Parameters
----------
X : {array-like}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and n_features is the number of features.
y : array-like, shape = [n_samples]
Target values.
Returns
-------
self : object
"""
self.w = np.dot(np.linalg.inv(np.dot(X.T,X)),np.dot(X.T,y))
return self
def predict(self, X):
"""Return class label after unit step"""
return np.where(np.dot(X,self.w) >= 0.0, 1, -1)
Import Iris Dataset
df = pd.read_csv('iris.data.csv', header=None)
print(df.tail())
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', -1, 1)
X = df.iloc[0:100, 0:4].values

plt.scatter(X[:50, 0], X[:50, 2], color='red', marker='o', label='setosa')
plt.scatter(X[50:100, 0], X[50:100, 2], color='blue', marker='x', label='versicolor')
plt.xlabel('sepal length (cm)')
plt.ylabel('petal length (cm)')
plt.legend(loc='upper left')
plt.show()
Training, Testing, and Evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=21, stratify=y)
ppn = Perceptron()
ppn.fit(X_train,y_train)
y_pred=ppn.predict(X_test)
accuracy = 100*np.sum(y_pred==y_test)/len(y_test)
print("accuracy of the model:= " + str(accuracy))
In summary, we have demonstrated how a basic perceptron model can be built in python using the least-squares method for calculating weights that are then used for calculating the activation function. The perceptron model is the basic building block for more advanced neural network systems. Every beginner in the field of deep learning and artificial intelligence should master the basics of the perceptron model.
The code and dataset for this article can be downloaded from this Github repository: https://github.com/bot13956/perceptron_classifier. | https://medium.com/towards-artificial-intelligence/basic-perceptron-model-using-least-squares-method-17900e0d1eff | ['Benjamin Obi Tayo Ph.D.'] | 2019-09-16 15:45:27.320000+00:00 | ['Machine Learning', 'Python', 'Perception', 'Neural Networks', 'Artificial Intelligence'] |
Don't be Digital-Only. | When and How you Should Go Physical
Oversized Burn Down Chart
Here’s something you’re probably surprised to hear a Director of Product at a tech startup say: Digital isn’t always the answer.
How can it be?
Technological Progress = Digital, right? Everything must be faster, more integrated, more automated, and less physical!
Not true.
In a world of never-ending Slack messages, a smartphone that you check over 200 times per day, and screens everywhere you look (literally!), physical has regained a LOT of power to attract our limited attention.
5 Powerful Ways to Leverage Physical
There are many simple ways to leverage the uniqueness and attention-grabbing nature of physical today.
Here are 5 easy (and powerful) ways you can leverage physical for your startup, company, and personal life.
Handwritten Thank You’s: At Upside Travel, we’ve been doing handwritten thank you’s since we started and they have had an incredible impact on our customer’s experience with us!
Even better, handwritten notes have a 3–4x open rate compared to form letters, which makes the time creating them well worth it.
(http://www.digitaldogdirect.com/handwritten-mail-and-direct-mail/)
Handwritten Thank You’s FTW!
Add swag for a +1!
Don’t have time to write the notes yourself?
No problem, check out these sites for affordable help with handwritten notes:
https://www.handwrytten.com
2. Oversized Physical “Demos”: Are you working on something digital that can be represented physically? I’ve found that by making an oversize physical “demo” of your digital product, you can keep what you’re working on top of mind with your coworkers and tap into their collective creativity.
One recent physical demo we built at Upside was of the new home screen for our iOS and Android apps. You can pull off the “cards” in the feed section of the homepage, rearrange them, add new ones, and even create cards of your own, all thanks to the magic of Velcro!
This demo has already helped us collect more than 20 ideas for new “cards” that can be added to the Upside app home feed. | https://medium.com/startup-frontier/dont-be-digital-only-52810a885a0c | ['Alex Mitchell'] | 2018-08-18 11:03:26.768000+00:00 | ['Product Management', 'Marketing', 'Startup', 'Tech', 'Digital Marketing'] |
5 Lessons Child Abuse Taught Me | Lesson #1- Abuse Is Like Losing A Limb
Child abuse, indeed, any abuse, whether sexual, emotional, physical, or financial, permanently cripples.
People don’t easily forget negative experiences, especially ones that happened during their formative years. Chronic trauma stunts an individual’s growth and sentences them to solitary confinement… the story of my life! If you throw a frog into a pot of boiling water, the frog will reflexively leap out.
If you gently drop a frog in a pot of water and gradually raise its temperature, the frog will not jump out.
If you were constantly torn down and beaten as a kid, that burned tissue never fully heals.
Looking back, it wasn’t a handful of “acts of passion” that crippled me- but a schedule of violence that compounded over 15+ years.
No matter where you are, you cannot escape your past. Sometimes, a mind can move faster than a person’s body can handle; other times, the two hemispheres play “emotional ping pong” with each other.
Years of speaking with people makes you realize pieces of your puzzle are missing. You can feel their edges but don’t know what’s inside. You possess low behavioral control because that’s how you received attention. You can’t interpret certain thoughts or emotions, life is dreary, and probably, you harbor anger 24/7… Ad Infinitum.
But that does not mean you can’t find that missing link.
Although such crippling is daunting to rehabilitate, a person can still lead a happy and fulfilled life if he/she learns how to compensate with their remaining limbs.
Lesson #2- Be Grateful
Regardless of how “bad” you deem your situation- be grateful.
A simple mantra is easy to say but hard to do. However, a simple writing exercise every single day can help ease the process. Write 3 things you are grateful for because you need to build that resilience… I started building that resilience.
I get it. Maybe, you did not have the ideal upbringing, but you aren’t without kindred spirits in the world.
There’s no need to go float above your pain. There is someone, something, or an activity you love that will make you grounded- so keep your feet planted on firm soil.
Be grateful that although you were raised as a whipping boy or girl, that you were housed and fed. And if you weren’t housed and fed, be thankful you are alive.
According to Jocko Willink (retired Navy SEAL officer), if you are alive and breathing, you still have some fight left inside you. My Okinawan great-grandparents would say, "Ganbatte!"
Fight On.
As for me, I’m the “Wandering Warrior Poet” and found friends, acquaintances, and positive influences around the globe. Those interactions taught me to count my blessings- always to be grateful that I had food on the table and heat during the winter.
Every day I wake up, I train Martial Arts, write a little bit, travel a little, and do what I love- what more could I ask for? Happiness is in the eye of the beholder. My ancestors would say, “Ikigai!”
I’m trying to be grateful for the good moments right now. One day, my body will slow down, my brain will become mush, and my heart will malfunction.
Lesson #3- Don’t Pick Your Scabs
Sometimes, it's okay to risk reinfection from the aliens (using an analogy) of the world… other times, it's best to ward off evil.
Sometimes, it’s okay to be the assault victim that returns to berate the assailant- Other times, the fighter that fights and runs away lives to fight another day.
(This is a lesson; I continue to struggle with…. We all have our weaknesses, and that’s okay.)
People tend to be the moth that returns to the flame. Many individuals never break the cycle- It’s a challenging task. Especially when you love pitying yourself, you have blurred vision, and your aura is repellant.
I get it.
It’s more convenient to retrace the well-trod path of that abusive boyfriend, parent, friend, or toxic work. Please don’t do it.
Life is about getting better, so “progredire.”
These people wounded you, but when you scratch your scabs, you prolong their healing. I strive to break my cycle, so I prune my diseased tree’s rotten roots and branches to stimulate new growth.
Lesson #4- Many People Won’t Understand You- Let It Go
“Calm down. It’s not hard.”
“Can’t you control yourself?”
“Just forgive them already.”
“You had money; you don’t know what REAL abuse is.”
“There are people who have it tougher. Stop whining.”
“This guy’s complaining. He doesn’t know how bad it can get.”
Ad Infinitum.
People who have never been through abuse cannot understand your situation and feelings. Likewise, people who have been abused share your coping mechanisms but may be extremist and assert this is a battle you must WIN- therefore, they will chastise you.
Often, you will hear such seemingly well-intentioned or placatory statements. Many people will give you a neutral response to avoid confrontation OR challenge you to dismiss your feelings.
For better or worse, this is a world where each person’s situations, geographic location, and type of abuse are unique but share a common outcome- survivors of violence.
For example, my two older brothers always emphasized I had it “easier.” Maybe, their memories were and are far more carcinogenic than mine- I don’t know.
But you can’t agree: “Oh, maybe you’re right, I have it easier” because it never feels that way. So, let such comments be water passing under your bridge.
COLEY GELSIN.
Lesson #5- Lose The Battle, But Don’t Lose The War
Trauma is a tug of war between you (the victim) and your abuser.
But keep in mind, the tug of war of such family ties is not necessarily a Sisyphean struggle. You can go two steps forward and one step back, but as long as you don’t fall backward, you will eventually break the cycle.
The abuser is a vampire that wins if she or he infects and thereby creates more vampires. Remember, hurt people (tend to) hurt people- so tread with caution with a stake in hand.
It’s okay to lose a few battles, but don’t lose the war.
(It’s hard, I just lost my 4th battle, but I’m going to win the 100-year war.) | https://medium.com/wholistique/5-lessons-child-abuse-taught-me-ed9458ec1e00 | ['Max Takaesu Hsu'] | 2020-12-13 19:38:41.446000+00:00 | ['Personal Development', 'Personal Growth', 'Mental Health', 'Self-awareness', 'Mental Health Awareness'] |
30 Magical Python Tricks to Write Better Code | Python is quite a popular language among others for its simplicity and readability of the code. It is one of the simplest languages to choose as your first language. If you are a beginner with the basic concepts of python then this is the best time to learn to write better codes.
There are a lot of tricks in python that can improve your program better than before. This article will help you to know various tricks and tips available in python. Practice them continuously until it becomes a part of your programming habit.
Trick 01 - Multiple Assignment for Variables
Python allows us to assign values to more than one variable in a single line. The variables are separated using commas. One-liners for multiple assignment have lots of benefits. They can be used for assigning multiple values to multiple variables or multiple values to a single variable name. Let us take a problem statement in which we have to assign the values 50 and 60 to the variables a and b. The usual code will look like the following.
a = 50
b = 60
print(a,b)
print(type(a))
print(type(b))
Output
50 60
<class 'int'>
<class 'int'>
Condition I - Values equal to Variables
When the number of variables in a multiple assignment equals the number of values, each value is stored in its corresponding variable.
a , b = 50 , 60
print(a,b)
print(type(a))
print(type(b))
Output
50 60
<class 'int'>
<class 'int'>
Both the programs gives the same results. This is the benefit of using one line value assignments.
Condition II - Values greater than Variables
Let us try to increase the number of values in the previous program. The multiple values can be assigned to a single variable. While assigning more than one value to a variable we must use an asterisk before the variable name.
a , *b = 50 , 60 , 70
print(a)
print(b)
print(type(a))
print(type(b))
Output
50
[60, 70]
<class 'int'>
<class 'list'>
The first value will be assigned to the first variable. The second (starred) variable will collect all of the remaining values. This will create a list type object.
Condition III - One Value to Multiple Variables
We can assign a value to more than one variable. Each variable will be separated using an equal to symbol.
a = b = c = 50
print(a,b,c)
print(type(a))
print(type(b))
print(type(c))
Output
50 50 50
<class 'int'>
<class 'int'>
<class 'int'>
Trick 02 - Swapping Two Variables
Swapping is the process of exchanging the values of two variables with each other. This can be useful in many operations in computer science. Here, I have written two major methods used by the programmer to swap the values as well as the optimal solution.
Method I - Using a temporary variable
This method uses a temporary variable to store intermediate data. The following code is written using a temporary variable named temp.
a , b = 50 , 60
print(a,b)
temp = a+b #a=50 b=60 temp=110
b = a #a=50 b=50 temp=110
a = temp-b #a=60 b=50 temp=110
print("After swapping:",a,b)
Output
50 60
After swapping: 60 50
Method II - Without using a temporary variable
The following code swaps the variable without using a temporary variable.
a , b = 50 , 60
print(a,b)
a = a+b #a=110 b=60
b = a-b #a=110 b=50
a = a-b #a=60 b=50
print("After swapping:",a,b)
Output
50 60
After swapping: 60 50
Method III - Optimal Solution in Python
This is a different approach to swapping variables using python. In the previous section, we learned about multiple assignment. We can use that concept to swap two variables in a single line.
a , b = 50 , 60
print(a,b)

a , b = b , a
print("After swapping",a,b)
Output
50 60
After swapping 60 50
Trick 03 - Reversing a String
There is another cool trick for reversing a string in python. The concept used for reversing a string is called string slicing. Any string can be reversed using the slice notation [::-1] after the variable name.
my_string = "MY STRING"
rev_string = my_string[::-1]
print(rev_string)
Output
GNIRTS YM
Trick 04 - Splitting Words in a Line
No special algorithm is required for splitting the words in a line. We can use the string method split() for this purpose. Here I have written two methods for splitting the words.
Method I - Using iterations
my_string = "This is a string in Python"
start = 0
end = 0
my_list = []

for x in my_string:
    end=end+1
    if(x==' '):
        my_list.append(my_string[start:end])
        start=end
my_list.append(my_string[start:end+1])
print(my_list)
Output
['This ', 'is ', 'a ', 'string ', 'in ', 'Python']
Method II - Using split function
my_string = "This is a string in Python"
my_list = my_string.split(' ')
print(my_list)
Output
['This', 'is', 'a', 'string', 'in', 'Python']
Trick 05 - List of words into a line
This is the opposite process of the previous one. In this part we are going to convert a list of words into a single line using join function. The syntax for using join function is given below.
Syntax: “ ”.join(string)
my_list = ['This' , 'is' , 'a' , 'string' , 'in' , 'Python']
my_string = " ".join(my_list)
Output
This is a string in Python
Trick 06 - Printing a string multiple times
We can use the multiplication operator to print a string for multiple times. This is a very effective way to repeat a string.
n = int(input("How many times you need to repeat:"))
my_string = "Python
"
print(my_string*n)
Output
How many times you need to repeat:3
Python
Python
Python
Trick 07 - Joining Two strings using addition operator
Joining various strings can be done without using the join function. We can just use the addition operator (+) to do this.
a = "I Love "
b = "Python"
print(a+b)
Output
I Love Python
Trick 08 - More than one Conditional Operators
Two combine two or more conditional operators in a program we can use the logical operators. But the same result can be obtained by chaining the operators. For example, if we need to do print something when a variable has the value greater than 10 and less than 20, the code will be something like the following.
a = 15
if (a>10 and a<20):
print("Hi")
Instead of this we can combine the conditional operator into single expression.
a = 15
if (10 < a < 20):
print("Hi")
Output
Hi
Learn more about operators in the following article.
Trick 09 - Find most frequent element in a list
The element which occurs most of the time in a list then it will be the most frequent element in the list. The following snippet will help you to get the most frequent element from a list.
my_list = [1,2,3,1,1,4,2,1]
most_frequent = max(set(my_list),key=my_list.count)
print(most_frequent)
Output
1
Trick 10 - Find Occurrence of all elements in list
The previous code will give the most frequent value. If we need to know the occurrence of all the unique element in a list, then we can go for the collection module. The collections is a wonderful module in python which gives great features. The Counter method gives a dictionary with the element and occurrence pair.
from collections import Counter
my_list = [1,2,3,1,4,1,5,5]
print(Counter(my_list))
Output
Counter({1: 3, 5: 2, 2: 1, 3: 1, 4: 1})
Trick 11 - Checking for Anagram of Two strings
Two strings are anagrams if one string is made up of the characters in the other string. We can use the same Counter method from the collections module.
from collections import Counter
my_string_1 = "RACECAR"
my_string_2 = "CARRACE" if(Counter(my_string_1) == Counter(my_string_2)):
print("Anagram")
else:
print("Not Anagram")
Output
Anagram
Trick 12 - Create Number Sequence with range
The function range() is useful for creating a sequence of numbers. It can be useful in many code snippets. The syntax for a range function is written here.
Syntax: range(start, end, step)
Let us try to create a list of even numbers.
my_list = list(range(2,20,2))
print(my_list)
Output
[2, 4, 6, 8, 10, 12, 14, 16, 18]
Trick 13 - Repeating the element multiple times
Similar to the string multiplication we can create a list filled with an element multiple times using multiplication operator.
my_list = [3]
my_list = my_list*5
print(my_list)
Output
[3, 3, 3, 3, 3]
Trick 14 - Using Conditions in Ternary Operator
Most of the time, we use nested conditional structures in Python. Instead of using a nested structure, we can write the condition in a single line with the help of the ternary operator. The syntax is given below.
Syntax: Statement1 if True else Statement2
age = 25
print("Eligible") if age>20 else print("Not Eligible")
Output
Eligible
Trick 15 - List Comprehension with Python
List comprehension is a very compact way to create a list from another list. Look at the following codes. The first one is written using simple iteration and the second one is created using list comprehension.
square_list = []
for x in range(1,10):
temp = x**2
square_list.append(temp)
print(square_list)
Output
[1, 4, 9, 16, 25, 36, 49, 64, 81]
Using List Comprehension
square_list = [x**2 for x in range(1,10)]
print(square_list)
Output
[1, 4, 9, 16, 25, 36, 49, 64, 81]
Trick 16 - Convert Mutable into Immutable
The function frozenset() is used to convert a mutable iterable into an immutable object. Using this, we can freeze an object so that its values cannot be changed.
my_list = [1,2,3,4,5]
my_list = frozenset(my_list)
my_list[3]=7
print(my_list)
Output
Traceback (most recent call last):
File "<string>", line 3, in <module>
TypeError: 'frozenset' object does not support item assignment
As we applied the frozenset() function on the list, the item assignment is restricted.
Trick 17 - Rounding off with Floor and Ceil
Floor and Ceil are mathematical functions that can be used on floating-point numbers. The floor function returns the largest integer not greater than the floating value, whereas the ceil function returns the smallest integer not less than the floating value. To use these functions we have to import the math module.
import math
my_number = 18.7
print(math.floor(my_number))
print(math.ceil(my_number))
Output
18
19
Trick 18 - Returning Boolean Values
Some times we have to return a boolean value by checking conditions of certain parameters. Instead of writing if else statements we can directly return the condition. The following programs will produce the same output.
Method I - Using If Else Condition
def function(n):
if(n>10):
return True
else:
return False
n = int(input())
if(function(n)):
print("Eligible")
else:
print("Not Eligible")
Method II - Without If Else Condition
def function(n):
    return n>10

n = int(input())
print("Eligible") if function(n) else print("Not Eligible")
Output
11
Eligible
Trick 19 - Create functions in one line
Lambda is an anonymous function in python that creates function in one line. The syntax for using a lambda function is given here.
Syntax: lambda arguments: expression
x = lambda a,b,c : a+b+c
print(x(10,20,30))
Output
60
Trick 20 - Apply function for all elements in list
Map is a higher-order function that applies a particular function to all the elements in a list.
Syntax: map(function, iterable)
my_list = ["felix", "antony"]
new_list = map(str.capitalize,my_list)
print(list(new_list))
Output
['Felix', 'Antony']
Trick 21 - Using Lambda with Map function
The function can be replaced by a lambda function in python. The following program is created for creating square of list of numbers.
my_list = [1, 2, 3, 4, 5]
new_list = map(lambda x: x*x, my_list)
print(list(new_list))
Output
[1, 4, 9, 16, 25]
Learn more about higher order functions here.
Trick 22 - Return multiple values from a function
A python function can return more than one value without any extra work. We can just return the values, separating them with commas.
def function(n):
return 1,2,3,4
a,b,c,d = function(5)
print(a,b,c,d)
Output
1 2 3 4
Trick 23 - Filtering the values using filter function
Filter function is used for filtering some values from a iterable object. The syntax for filter function is given below.
Syntax: filter(function, iterable)
def eligibility(age):
return age>=24
list_of_age = [10, 24, 27, 33, 30, 18, 17, 21, 26, 25]
age = filter(eligibility, list_of_age)print(list(age))
Output
[24, 27, 33, 30, 26, 25]
Trick 24 - Merging Two Dictionaries in Python
In python, we can merge two dictionaries without any specific method. Below code is an example for merging two dictionaries.
dict_1 = {'One':1, 'Two':2}
dict_2 = {'Two':2, 'Three':3}
dictionary = {**dict_1, **dict_2}
print(dictionary)
Output
{'One': 1, 'Two': 2, 'Three': 3}
Trick 25 - Getting size of an object
The memory size varies based on the type of object. We can get the memory of an object using getsizeof() function from the sys module.
import sys
a = 5
print(sys.getsizeof(a))
Output
28
Trick 26 - Combining two lists into dictionary
The zip function has many advantages in python. Using the zip function we can create a dictionary from two lists.
list_1 = ["One","Two","Three"]
list_2 = [1,2,3]
dictionary = dict(zip(list_1, list_2))
print(dictionary)
Output
{'Two': 2, 'One': 1, 'Three': 3}
Trick 27 - Calculating execution time for a program
Time is another useful module in python that can be used to calculate the execution time.
import time
start = time.perf_counter()
for x in range(1000):
    pass
end = time.perf_counter()
total = end - start
print(total)
Output
0.00011900000000000105
Trick 28 - Removing Duplicate elements in list
An element that occurs more than one time is called duplicate element. We can remove the duplicate elements simply using typecasting.
my_list = [1,4,1,8,2,8,4,5]
my_list = list(set(my_list))
print(my_list)
Output
[8, 1, 2, 4, 5]
Trick 29 - Printing monthly calendar in python
The calendar module has many functions related to date-based operations. We can print a monthly calendar using the following code.
import calendar
print(calendar.month("2020","06"))
Output
June 2020
Mo Tu We Th Fr Sa Su
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
Trick 30 - Iterating with zip function
The zip function enables iterating over more than one iterable in a single loop. In the code below, two lists are iterated simultaneously.
list_1 = ['a','b','c']
list_2 = [1,2,3]
for x,y in zip(list_1, list_2):
print(x,y)
Output
a 1
b 2
c 3
Closing Thoughts
I hope you enjoyed this article. As an end note, you have to understand that learning these tricks is not a must. But if you do, you can stand out among other programmers. Continuous practice is a must to become fluent in coding. Thank you for reading this article. You can follow me on medium.
Happy Coding! | https://towardsdatascience.com/30-magical-python-tricks-to-write-better-code-e54d1642c255 | ['Felix Antony'] | 2020-06-17 15:10:33.504000+00:00 | ['Machine Learning', 'Python', 'Artificial Intelligence', 'Data Science', 'Programming'] |
10 Ways To Find Unicorn Content | The infographic shows some of the most effective ways of finding the best unicorn content.
See Content From Top Performing Facebook Pages
Check the Tops From Relevant Subreddits
Mind Quora for FAQ
Use Medium To Follow Your Interests
Find Great Video Inspiration From YouTube and Vimeo
Subscribe To Relevant Industry Blogs
Follow Hashtags on Instagram and Twitter
Tap Into User Generated Content
See Top Performing Posts From Pinterest
Repurpose Your Unicorn Content | https://medium.com/marketing-and-entrepreneurship/10-ways-to-find-unicorn-content-b5937c9a515 | ['Larry Kim'] | 2020-12-01 01:58:00.825000+00:00 | ['Self Improvement', 'Content Marketing', 'Marketing', 'Content', 'Entrepreneurship'] |
Introduction to Istio Traffic Management | Introduction to Istio Traffic Management
Traffic Routing with Istio by Example
What is Istio?
The continued adoption of microservices architectures and the move toward complex distributed applications composed of decoupled components that communicate via APIs introduces various new challenges for developers. While these types of applications help us to encapsulate, reuse, and efficiently scale discrete components of business logic, they also require a lot more network traffic between services, which makes the need for reliable, secure, and observable channels of communication all the more important.
This is where Istio comes in. Istio is an open-source Service Mesh that helps to simplify the communication between microservices in distributed applications. The core features of Istio generally fall into one of three categories:
Traffic Management: Istio's Traffic Management capabilities include support for resilience patterns like Retry Policies and Circuit Breakers, as well as request routing capabilities to support scenarios like A/B Testing and Canary Deployments. The Traffic Management components of Istio are what we are going to focus on primarily in this article.
Observability: When applications are running in the service mesh, Istio provides out-of-the-box Metrics, Traces, and Logs to provide engineers with observability of service health and behavior. We will see shortly how Istio is able to generate this telemetry without any additional work from the application developers.
Security: By default, Istio provides encrypted communication channels between microservices in our application, as well as support for various types of authentication and authorization policies that can be applied to different components of our application.
Note: While Istio is a platform-independent technology, we will be running Istio on Kubernetes for the purposes of the examples below. This writing assumes a basic understanding of how Kubernetes works.
How does Istio Work?
At a high level, Istio works by deploying a proxy alongside each service in your application. This Envoy Proxy, which is often referred to as a sidecar, intercepts all traffic to and from the service, generates telemetry for this traffic, and can be configured to enable the various Authentication, Authorization, and Traffic Management functions described above.
The two diagrams below illustrate the difference between service communication without Istio and the Envoy Proxies (top), and communication within an Isito service mesh, where traffic is routed through the proxies (bottom).
Basic communication between two services without Istio.
Communication between two services in an Istio service mesh.
Note: In Kubernetes terms, the Proxy and its corresponding Service (e.g. the Proxy on the left and Service A above) are two containers running in the same pod.
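In practice, Istio adds these sidecar proxies for you through automatic sidecar injection. A minimal example, assuming your workloads run in the default namespace: labeling the namespace is enough for Istio to inject the Envoy proxy into any pods created after that point.
# enable automatic Envoy sidecar injection for the default namespace
kubectl label namespace default istio-injection=enabled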
Istio Components
One of the great things about Istio is that it is highly configurable; however, this flexibility means that there are several different components that we need to understand in order to configure traffic routing for our services.
Here is a bird’s eye view of all of the components involved in a single request to a service within an Istio service mesh running on Kubernetes.
Overview of Istio Components
Let’s break down each component involved in this request flow:
Load Balancer: This is an external load balancer that exposes a public IP address from which external traffic is routed into our Kubernetes cluster. This is not specific to Istio; this kind of load balancer is provisioned by your cloud provider (e.g. AWS or GCP) any time a service of type LoadBalancer is deployed to your cluster. If you're running Kubernetes on-prem, there are options such as MetalLB that allow you to provision your own external load balancer.
Gateway Proxy: This is a Kubernetes service of type LoadBalancer that we can configure to customize how traffic entering the cluster is routed to our services. By default, Istio deploys a Gateway Proxy called istio-ingressgateway in the istio-system namespace (we'll use this in our example applications), but you can also deploy and configure your own Gateway Proxy if needed.
Gateway Configuration: This is one of the more confusing Istio resources we'll encounter, as it is often simply called a Gateway in Istio documentation and can easily be confused with the Gateway Proxy described above. In fact, when we create one of these resources (which we will do shortly), we provide a value of Gateway to the kind property of the resource; however, I've chosen to call this resource a Gateway Configuration in the above diagram, as I feel that better describes its function. The Gateway Configuration configures the Gateway Proxy, specifying which ports are exposed and which protocols can be used by ingress traffic. The Gateway Configuration operates only on properties of OSI layers 4–6. You can't configure application-layer routing rules here (this is what Virtual Services are for).
Virtual Service: A Virtual Service defines a set of request routing rules that can be used to distribute traffic to different destinations in the service mesh. Specifically, Virtual Services define application-layer traffic routing rules, meaning that HTTP requests can be routed to different destinations based on properties like URI, request method, and headers. Similar to Gateway resources discussed above, VirtualService resources are not standalone services running on their own set of pods; instead they are simply configuration that is applied to the proxies in the mesh that actually accept and send requests. Virtual Services can be applied either to the Gateway Proxy, or to the sidecar Envoy proxies that run alongside the services for your application that are running in the mesh.
Destination Rule: Destination Rules define routing policies applied to traffic that has already been routed to a particular service. Additionally, we can use Destination Rules to define service subsets, which allow us to group the instances of our service by version, giving us the ability to route traffic intelligently between multiple active versions of a service without changing anything in our service code.
MyApplication Sidecar: This is the Envoy Proxy that runs alongside each instance of your deployed service. Traffic to and from your service is intercepted by this proxy.
MyApplication Service: This is our application, deployed to a Kubernetes cluster as a standard Kubernetes service. Instances of this service run in pods alongside instances of the MyApplication Sidecar proxy.
This is a lot to wrap your head around, so if it doesn’t completely make sense yet, it’s okay. Hopefully the purpose of each of these Istio components will become clearer when we start to see them in action with some example services and traffic routing configurations.
Istio Traffic Management in Action
To follow along with these examples, you’ll need a running Kubernetes cluster. I’ll be using GKE on Google Cloud, but the equivalent offerings on AWS and Azure will work fine as well.
Downloading Istio
Run the following command to download Istio:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.7.3 TARGET_ARCH=x86_64 sh -
This will download, among other things, the istioctl command line tool which we will use to install Istio on our Kubernetes cluster.
Note: I’m using version 1.7.3 in these examples, but the latest version of Istio is likely higher by the time you read this.
Installing Istio on Kubernetes
Run this command to install Istio on your cluster using the demo configuration profile:
istio-1.7.3/bin/istioctl install --set profile=demo
Note: The path to istioctl may be slightly different on your machine if you’re using a different version of Istio.
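To confirm that the installation worked, you can check that the Istio control plane pods are running and that the ingress gateway service was created (exact pod names may differ slightly between Istio versions):
# the istiod and ingress gateway pods should be in the Running state
kubectl get pods -n istio-system

# the istio-ingressgateway service should have an external IP assigned
kubectl get svc istio-ingressgateway -n istio-system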
Sample Application
The application that we will be running in the service mesh to illustrate Istio’s traffic routing capabilities will be a super simple blog website using Node.js and Express.
Our website will simply display a static html page with some example text.
Here is what the site will look like when we navigate to it (I know, pretty spectacular):
Next, we’ll need to containerize this website in order to run it on our Kubernetes cluster. Here is a Dockerfile defining a container image for the application:
Creating the Istio Components
Now let’s start creating the Istio components described earlier.
Example 1: Simple Routing
The first scenario we will demonstrate will be as simple as it gets: routing requests directly to the blog site. For this example, we will need a Gateway Configuration and a Virtual Service. We’ll define these components in a YAML manifest file:
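Roughly, that manifest looks like the following sketch (the resource names myblog-gateway and myblog-virtual-service are placeholders of mine, and the line numbers cited in the breakdown refer to the original file, so they may not line up exactly with this sketch):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myblog-gateway
spec:
  selector:
    app: istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myblog-virtual-service
spec:
  hosts:
  - fakeblog.com
  gateways:
  - myblog-gateway
  http:
  - match:
    - uri:
        prefix: /myblog
    rewrite:
      uri: /
    route:
    - destination:
        port:
          number: 80
        host: myblog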
Let’s break this down:
The first resource defined in this file (lines 1–14) is the Gateway Configuration, as specified by kind: Gateway
By using the selector app: istio-ingressgateway we're saying that this Gateway Configuration should be applied to the istio-ingressgateway gateway proxy. If you wanted to use your own gateway proxy, you would modify this line to match your own labels.
The servers field contains the meat of the Gateway Configuration. Here we specify a list of Server Specifications, which is essentially an open port, an expected protocol, and a set of hosts that can be accessed through this port. If we wanted to set up TLS or Mutual TLS (MTLS), we would configure that at the Server Specification level as well. For this simple example, we're just exposing port 80 for HTTP traffic, and stating that all hosts are accessible through this gateway by using the wildcard "*"
The second resource defined is the Virtual Service (lines 16–35).
We provide fakeblog.com in the hosts field to specify that this Virtual Service handles traffic going to the fakeblog.com domain (on my local machine, I've mapped the IP address of my Kubernetes cluster's external load balancer to the domain name fakeblog.com. You could also use the public IP address instead of a domain name, or you could use the wildcard "*" to allow this Virtual Service to handle traffic meant for any host).
In the gateways field, we're using the name of the Gateway Configuration defined in this same file. This applies the Virtual Service's routing rules to traffic entering the mesh through this gateway.
The http field is where we specify our Virtual Service's routing logic. Lines 26–28 define a match condition for the following routing actions, namely that the URI begins with /myblog. Lines 29–30 describe a URI rewrite rule that replaces the URI with / (this is done so that, when the request is ultimately sent to the blog website, it will be sent to the root of the site). Lines 31–35 contain the action to be taken if the match condition is met and after the rewrite rule has been applied: the request will be routed to a service named myblog on port 80.
With our Istio routing configuration created, we now need to actually deploy the blog website to our Kubernetes cluster. Since this next YAML file contains only standard Kubernetes resources (nothing Istio-specific), we won’t go into a line-by-line breakdown, but there are a few points worth noting.
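A sketch of what that Service and Deployment likely contain (the targetPort of 3000 is an assumption about the Express server, and the line numbers cited below refer to the original file):
apiVersion: v1
kind: Service
metadata:
  name: myblog
spec:
  selector:
    app: myblog
  ports:
  - port: 80
    targetPort: 3000   # assumed Express listen port
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myblog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myblog
  template:
    metadata:
      labels:
        app: myblog
    spec:
      containers:
      - name: myblog
        image: gnovack/my-blog:v1
        ports:
        - containerPort: 3000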
Notice the name given to the service on line 4 above. This must match the value given to the host field (line 35 in the previous file) in the Virtual Service definition.
given to the service on line 4 above. This must match the value given to the field (line 35 in the previous file) in the Virtual Service definition. On line 32, the image we’re using for this service is gnovack/my-blog:v1 . This is a public Docker image repository on Docker Hub to which I pushed the blog website image defined in the Dockerfile from earlier.
For convenience, I’ve created both of the above YAML files in the same directory like so:
1_simple_access/
├── myblog-gateway.yaml
└── myblog.yaml
Assuming you’ve followed this pattern, you can create all of the resources for this example on your cluster with the following command:
kubectl apply -f 1_simple_access/
Now, we can go check out our underwhelming blog website by navigating to http://fakeblog.com/myblog (assuming the domain name fakeblog.com has been mapped to the public IP address of the Kubernetes cluster external load balancer, if not, you can simply use the public IP).
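If you haven't set up the hostname mapping, you can look up the gateway's external IP and pass the Host header explicitly (the IP below is a placeholder):
# find the external IP of the default ingress gateway
kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# hit the site through the gateway, matching the fakeblog.com host rule
curl -H "Host: fakeblog.com" http://<EXTERNAL-IP>/myblog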
So, this proves that we can access the blog website through Istio’s components, but that isn’t really all that exciting. Let’s take a look at something a little more interesting in the next example.
Example 2: Weighted Routing
In this example, we’re going to deploy a second version of the blog website, then use Istio’s traffic management capabilities to perform weighted traffic routing between both versions.
First, we’ll need to create a new version of the blog website. To keep it simple, all we’re going to change is the header color; version 2 will have a blue header.
Next, we’ll push a container image for the new website version to the same Docker image repository used previously, meaning we will now have both a v1 and v2 version of the image.
Now let’s reconfigure the Istio components created earlier to enable weighted routing. We’re going to run both versions of the blog website (red & blue) in the Kubernetes cluster, then use Istio to route 75% of requests to the original version, and 25% to the new version.
Since the Istio Gateway Configuration is just concerned with exposing ports and protocols, we won’t need to make any changes to it to support the weighted routing scenario; however, we will need to modify the Virtual Service created earlier.
Here is the new Virtual Service definition for our weighted routing setup:
A lot of this (lines 1–16) is the same as before. We’re still using the same hostname, Gateway Configuration, match condition, and URI rewrite rule. The difference is in the route field, that is, the action taken if the match condition is met. Whereas before we were simply routing the request to the myblog service on port 80, now we have two different destinations defined. They both use port 80, and both ultimately send the request to the myblog service (lines 20 & 26), but they are distinguished by the values of the subset and weight fields. Weight is fairly self-evident; this is the proportion of total traffic that will be sent to that particular destination (75% goes to the first destination and 25% goes to the second destination in the example above). But what is a subset?
Subset is a property of Istio Destination Rules. We mentioned Destination Rules earlier, but didn’t need to use them for the first example. Destination Rules allow us to create routing policies to apply to traffic bound for a particular service, and to logically group instances of that service into subsets.
Let’s define a Destination Rule now that groups the instances of our service into v1 and v2 subsets based on whether they are running version 1 (red) or version 2 (blue) of the blog website.
This Destination Rule should be pretty easy to make sense of. We’re giving it a name (line 4), specifying that it applies to the myblog service (line 6), and then defining two subsets (v1 & v2; these names must match the value of the subset field in the Virtual Service) of our blog service (lines 7–13). The real question is, “How does this Destination Rule know which instances of the blog service are running version 1 and which are running version 2?”. This is accomplished by the labels field in each subset. The Destination Rule will group particular pods running the myblog service into it’s subsets based on the labels assigned to those pods. For example, any pod with the label version:v1 will be considered part of the v1 subset.
So now the last thing to do is to redeploy the blog website, this time with some pods running version 1, and other pods running version 2, ensuring that the appropriate labels are assigned to each.
Here is the same definition from earlier for the myblog service, although this time with a separate deployment for each version of the website (notice that the deployments assign different labels and make use of different container images).
Now, if we deploy the resources to the cluster and navigate back to our blog site at http://fakeblog.com/myblog, we will see, after refreshing the page several times, that we are shown the red blog page about 75% of the time, and the blue blog page the other 25%.
By using Istio’s routing capabilities, we were able to set up a simple A/B test where a certain portion of our customers are shown a new experience, without making any modification to our application code to enable this. If the new version isn’t a success, we can easily modify the routing configuration to send all requests back to the original version, or if we feel we need more data on the new version, we can just as easily modify the weights in our Virtual Service to make this happen.
Example 3: Routing with Request Parameters
Another common scenario where we can leverage Istio Traffic Management is when we want to expose a new version of our service to only a specific set of users, such as internal users or beta testers. Let’s look at one way to accomplish this using Istio.
Instead of simply routing a specified percentage of traffic to each version of our service, Istio also allows us to configure routing based on properties of each HTTP request. In this example, we'll modify our Virtual Service to route traffic based on the value of a query string parameter: specifically, if the user query parameter has a value of internal, we will route the request to version 2; otherwise the request will go to version 1. (I chose to use query string parameters since these are easy to modify directly in the browser, but you can also use other properties like the URI or headers. A full list of properties that can be used to match a request to a destination can be found in the Istio reference docs: HTTPMatchRequest.)
Let’s see how to do this with our updated Virtual Service definition:
As you can see, we now have 2 different match conditions. When multiple match conditions are provided, they are evaluated sequentially, and the first route for which the request satisfies the match condition is chosen. For the Virtual Service above, this means that the request is first evaluated against the first match condition (i.e. does the URI begin with /myblog AND does the request have a user query string parameter with a value of internal?). If the request satisfies these conditions, then it is routed to version 2; if not, then it is checked against the next match condition (and so on, if there are more than 2 match conditions).
If we redeploy the Virtual Service with this definition, we will be able to reach the new version of the service by navigating to http://fakeblog.com/myblog?user=internal; all other requests will continue to be routed to the old version. In a real-world scenario, you might add this parameter programmatically based on the IP address of the user, or based on some kind of user identifier if your application is one that requires users to log in.
Conclusion
We’ve looked at just a few of the myriad use cases where Istio Traffic Management can be leveraged to simplify the development and deployment of microservices. Specifically, we saw how to intelligently route between different versions of a service by using either weights or request parameters to support processes like A/B testing or beta releases; and, most importantly, we accomplished this by simply configuring and reconfiguring our Istio components rather than creating (and probably later removing) custom logic in our application to support these scenarios.
We didn’t have a chance to look into many of the great features that Istio offers (maybe next time), such as Fault Injection, Retry Policies, and Mirroring, but hopefully this article helped establish an understanding of the foundational concepts of Istio that can help you start exploring its many other capabilities and components on your own.
Thanks for reading. Feel free to reach out with any questions or comments.
Source Code
All of the code used in the examples can be found here: https://github.com/gnovack/istio-routing
References | https://medium.com/swlh/introduction-to-istio-traffic-management-6b62c86f8cb4 | ['George Novack'] | 2020-12-12 00:43:21.051000+00:00 | ['Kubernetes', 'Istio', 'Service Mesh', 'Microservices'] |
Does Being an Author Change How You Read Novels? | The Negatives
The bad part of all this studying is, of course, the constant detection of even the tiniest errors. There are books which I read from start to finish in a state of immersion and suspended disbelief, and even though there are errors here and there, I know they’re made on purpose because their strict, grammatically correct counterparts would destroy the flow of the prose. So, even though I detect these errors, I don’t care about them and they disappear from my mind seconds later.
But then there are stories I don’t find engrossing to begin with. The errors in these stories completely pull me out of the immersion. The kind of errors I’m talking about are perhaps not too irksome to everyone. But they’re as annoying to an author as they would be to a chef who has to consume food he knows he could’ve prepared better. I’ll give you an example from this article itself —
Do they employ short chapters which end on cliffhangers, like Dan Brown…
If there were no comma after “cliffhangers”, it would imply Dan Brown is a kind of cliffhanger. Once you learn these little nuances, reading becomes an entirely different experience. You start reading a book, and while it doesn’t blow your mind away at the beginning, it’s a solid read. You’re just starting to get immersed in the story when you spot a mistake like the one I mentioned above. Your immersion is immediately broken. Sighing, you check the time and decide to do something else instead.
That book might have turned out much better had you kept on with it. But now you’ll never know, and it’ll be just another book you started and never finished. That’s been my experience. I’ve lost track of the number of books I’ve begun reading only to abandon fifteen minutes later because my immersion kept getting broken.
Examining novels to improve your own writing is good, but I take it too far. The number of books in my discarded pile is proof of that. | https://medium.com/books-are-our-superpower/does-being-an-author-change-how-you-read-novels-6724250c335e | ['Chandrayan Gupta'] | 2020-12-06 21:52:48.496000+00:00 | ['Self Improvement', 'Writing', 'Advice', 'Reading', 'Books'] |
The First Time I Killed My Little Darlings | Who wrote that ridiculous rule? I’d bet it wasn’t Stephen King, Master of Doorstops!
(Never mind how Carrie was 60,000 words.)
Anyone ever heard of Margaret Mitchell? Gone With The Wind! 418,000! (FIRST AND ONLY!)
Okay, okay, that was like seventy years ago. But hey, look at this, Miss Snark! A DEBUT NOVEL about vampire librarians (yes, you read that right) just came out and it’s 240,000 words!
So yeah, you can have a debut novel thicker than a grilled cheese sandwich!
(It sucked. Torturous.)
Yah sure, mine’s still longer — I’ll bet I could whack it down to 240,000. And it definitely doesn’t suck!
Miss Snark begged to differ. She listed the literary sins of the wannabe novelist’s first effort: Too many characters; subplots that go nowhere; too many useless words; too much description (especially settings); too-long-too-graphic sex scenes; plots that sag in the middle or lack dramatic tension.
Her annoying list nagged me like a persistent pet demanding attention when I had more important things to do. I pushed it away but it kept throwing its paws in my lap.
Maybe 300,000 words was too long for a debut novel, today.
Other, better writers got away with these crimes in the past but maybe longer novels were best left to the pros. She was right. Stephen King I ain’t.
Then again, Stephen King wasn’t Stephen King, either. I felt IT could have been pared down by about a quarter to a third. Having read even heavier doorstops since then, I’m done with Stephen King until some brave editor goes Freddie Krueger on every work over 100,000 words and slashes them down to Abridged.
Better writers than the King and I had committed these many sins.
Too many characters: Jane Austen, Leo Tolstoy.
Subplots going nowhere: J.R.R. Tolkien.
Plots without dramatic tension: Jules Verne
Not knowing when the story ends: Tolkien again, and King’s Rose Madder.
Too many useless words: Every Victorian writer. And Tolstoy.
Too much description, especially of settings: Tolkien again. Tolstoy — farming.
Too-long, too-graphic sex scenes: Every novel written since 1980, until, I guess, 2005. Although I happily cut my 10-page orgasm down to his slipping his hand under her halter top and ending, “I arched my back, abandoning myself.”
Stories sagging in the middle: Arthur C. Clarke’s Childhood’s End. I’ve half-read it twice and he loses me after the aliens reveal themselves.
They’d all broken Miss Snark’s One Blog Tip To Rule Them All: Kill your darlings. | https://medium.com/illumination-curated/the-first-time-i-killed-my-little-darlings-64d84d8f81a1 | ['Nicole Chardenet'] | 2020-12-26 04:34:27.232000+00:00 | ['Editing', 'Books', 'Fiction', 'Advice', 'Writing'] |
A Website without Email Marketing Is Like a House with No Roof | Image by Annie Spratt on Unsplash
I had a difficult conversation again last week with an online retailer who was struggling with the high acquisition costs of selling via online marketplaces (eBay, Amazon, etc.) but not putting the time and effort into his own site because it hadn’t generated enough sales.
He’d given up on expensive paid search marketing, could never find the time to promote new products on social media and — the ultimate sin — never even considered email marketing.
The little traffic he did generate came from affiliate partners — although much of this came from price comparison sites, which meant most sales were based on a strategy of heavy discounting and wafer-thin margins.
No Relationships
In short, the online retailer had no real relationships with his customers — their loyalty was with the online marketplaces and affiliate sites, and he only picked up the crumbs because he fought hard to be the cheapest. While this is a common business strategy for many online retailers, it’s not an easy or very lucrative path to success.
Note: Volume sales, if not managed correctly, can be incredibly expensive. The more you sell, the more staff you need to pick and pack, the more warehouse space you need, and the more costs add up and squeeze those tiny margins.
Acquisition vs. Retention
The online retailer had thrown all his marketing eggs into one strategic basket and focused his efforts entirely on the acquisition, with nothing on retention.
Note: Acquisition marketing is always more expensive than retention marketing.
The retailer had two options:
Continue down the same path and hope his competition would allow him the space to make a decent margin on a higher volume of sales (he worked in an incredibly competitive space, so this wasn’t likely to happen). Start looking at a strategy to retain and grow his own-site sales — yep, you’ve guessed it — through email marketing.
Start Small and Build
Even though sales via his site were limited, every customer engagement gave him the opportunity to build a potentially lucrative relationship and increase his margins by bypassing those expensive acquisition costs.
Note: Too many online businesses believe they have to have a substantial list in order to start email marketing. To the contrary, if you have a handful of customers, you have enough people to start marketing to. They have already made a commitment to your business, so why would you ignore them?
Insulate Your Business
Think of email marketing as insulation for your business. Like a roof on a house, it traps the heat (cash flow) inside a business and prevents the rain (unwarranted expenses) from coming in.
You wouldn’t build a house without a roof, so why would you build a website without email marketing?
How have you used email marketing to build relationships and take ownership of your business and its profits? Share your comments below: | https://john-w-hayes.medium.com/a-website-without-email-marketing-is-like-a-house-with-no-roof-2e0f1be65598 | ['John W Hayes'] | 2018-09-17 08:46:04.753000+00:00 | ['Ecommerce', 'Email Marketing', 'Marketing', 'Amazon'] |
Use BIRT to generate reports from CSV | If you achieve data from CSV or other types of flat files in BIRT, you can s use Flat File Data Source. If you generate a dynamic parameter report, you can create a second dataSet to use the parameter and filter the original dataSet. If you use the Unformatted CSV file as a data source and produce the report, you need to implement Scripted Data Source to bring the data in. How can we query data and create a dynamic parameter report as SQL does with flat files? How can we use a simple and easy way replacing Scripted Data Source? How can we process the flat files all in one? The answer is using esProc in Birt.
Let’s take an example to introduce the implementation process:
In this example, employee sales information is stored in sale.csv, and employee information with dates greater than 2015–03–02 needs to be queried based on input parameters.
sale.csv data are as follows:
userName,date,saleValue,saleCount
Rachel,2015-3-1,4500,9
Rachel,2015-3-3,8700,4
Tom,2015-3-2,3000,8
Tom,2015-3-3,5000,7
Tom,2015-3-4,6000,12
John,2015-3-2,4000,3
John,2015-3-2,4300,9
John,2015-3-4,4800,4
Here we show how to use parameters for a CSV data source in BIRT.
The integration of BIRT with esProc is not introduced here. Please refer to How to Call an SPL Script in BIRT.
Step 1: Add an esProc JDBC data source.
Step 2: Add the dataset and write the SQL query CSV file directly.
Query text:
Report parameter:
Set dataset parameters and link them to report parameters:
Step 3: Create a report
The report is designed as follows:
Step 4: WEB preview, input parameters, preview results:
(1)input parameter:Date 2015–03–02
For more examples of processing structured text files, refer to Structured Text Computing.
You’re Not Weird If You Think Trees Have Conversations | More Than What You See
Photo by Rishi Deep on Unsplash
“A forest is much more than what you see…underground there is this other world, a world of infinite biological pathways that connect trees and allow them to communicate…and allow the forest to behave as though it’s a single organism. It might remind you of a sort of intelligence.” — Suzanne Simard, forest ecologist, 2016 Ted Talk
Harris references a book called “What A Plant Knows” by plant geneticist Daniel Chamovitz. In his book Chamovitz explains that plants can react to outside stimuli in ways that are startling and mainly go unnoticed. In particular, plants can sense touch and also show indications that they have memory.
Chamovitz explains that vines can change the rate and the direction of their growth when it senses by touch something to grow around. Venus fly traps can also tell the difference between the touch of wind and rain versus the pressure from the touch of an insect or animal.
He also explains that plants show some form of memory. For instance, venus fly traps have hairs on them that when touched will close their leaves around an insect. Two hairs need to be touched before they will close. So, it ‘remembers’ that the first hair is touched before the second hair triggers the closing. Similarly, Chamovitz explains that wheat seedlings ‘remember’ they’ve gone through a winter before flowering.
Forest ecologist Suzanne Simard believes trees can communicate with each other. In her 2016 Ted Talk, she details experiments she’s done over 30 years that she says prove this.
In her first study to prove her hypothesis, she took Paper Birch and Douglas Fir trees and planted them together. She used tracer elements of carbon 14 gas and carbon 13 gas. She bagged the trees and put an individual gas in the bags with each individual species of tree. After sitting for an hour, she removed the bags and found that the birch and fir trees were passing the carbon back and forth when she analyzed them with a geiger counter.
In this particular experiment, the birch sent carbon to the fir, which she covered with a blanket and shielded from light exposure. In other instances she found the fir sending carbon to the birch trees.
Mycorrhizal Networks — Drawing By Nefronus Via Wikipedia Creative Commons
Simard explains that this carbon is being passed through mycorrhizal networks that originate from fungi. The top mushroom part of the fungus you see has threads called mycelium coming out of the bottom of it that interconnect with the roots of trees. The fungus and tree exchange nutrients and products from photosynthesis this way.
This network can also be used to trade carbon between trees and also information. For instance, trees can alert other trees of harmful insects. The network can be so dense that there can be hundreds of kilometers of mycelium under your feet.
In further experiments, Simard also found that ‘mother trees’ can recognize their ‘offspring’. Through monitoring isotope exchanges, she found that the parent tree will give more carbon to trees that were its ‘children’. The parent tree will also reduce their root competition with the related tree as well. Defense signals were also sent from ‘parent’ to ‘child’.
It appears this mycorrhizal network functions as a plant internet of sorts. Some even call it the wood wide web. | https://medium.com/discourse/youre-not-weird-if-you-think-trees-have-conversations-35c888a2002f | ['Erik Brown'] | 2019-09-23 10:21:01.161000+00:00 | ['Nature', 'Life Lessons', 'Environment', 'Technology', 'Science'] |
Draw insights from fiction books with Text Mining | Getting Started
What you’ll need: book in PDF or TXT format
Programming language and IDE: R and the IDE of your choice
Packages we’re gonna use: tm, stopwords, tidytext, tidyverse, wordcloud2, ggplot2, and others.
The book I have chosen to analyze this time is Carrie, by Stephen King. While it’s actually not my favorite book, it was written by my favorite author of all time, Mr. Stephen King. The story of the young girl Carrie is known by many people, perhaps more than any other of his books — and that’s saying something, coming from someone who wrote It, The Shining, The Dark Tower, Under the Dome, and many other masterpieces.
Carrie, by Stephen King
In addition to that, it has the added benefit of being very short, especially for King’s standards, with less than 200 pages and around 60K words.
Loading the book and preparing the data
Basically, 80% of the time you spend doing Text Mining and NLP will actually be dedicated to obtaining, transforming, and preparing the data so it can be used with the methods you're going to apply, and that's true no matter which method or function you choose.
First, we use readLines to import our book in TXT format and load it into a vector of 4500+ strings of characters, each string containing a paragraph.
carrie <- readLines("carrie.txt",skipNul = T,encoding = "UTF-8")
If your book is in PDF file format — way more common than TXT— you can use package pdftools, which would help you load the text fairly easily into your R environment. The command is pretty simple and it would look like this:
carrie <- pdf_text("carrie.pdf")
Next, we create an object called corpus — a collection of text documents. Some additional steps take as input that corpus we just created and that are necessary to transform and prepare the data. They are explained in more detail in the source code for this article, which you can find on my Github page. Some of those include:
filtering out the cover, dedicatory, preface, footnotes, etc. if your file has those sections.
removing undesired characters /, “, ”, — and others.
transforming the text to lowercase.
removing numbers, punctuation, etc.
For those transformations and others, I’ve used the package tm, which provides methods for data import, corpus handling, preprocessing, metadata management, and creation of term-document matrices. Another step you can’t miss while preparing your book for analysis is removing any stopwords.
Stopwords are mainly articles, pronouns, and prepositions such as “the”, “my”, “he”, “she”, “for”, etc. that usually don’t add any value themselves to the text, they’re there to connect other types of words. We use a well-established database of stopwords for the English language obtained from package stopwords, which has 5 different sources of stopwords for more than 40 languages.
stopwords::stopwords("en", source = "stopwords-iso")
Some of the words considered “stopwords” for the English language
When crossing the dataframe of 1298 stopwords with our corpus of 4500+ paragraphs, we end up with only words important for a thorough but meaningful analysis of the book.
Last but not least, we perform a procedure called tokenization, which breaks the text into words so then we can analyze them individually. There are other types of tokenization: by sentence, by paragraph, and others, check more about it here.
carrie %>% unnest_tokens(input = text, output = word)
text tokenized by word
Now we got something that looks promising to really begin our analysis.
Part 2: The fun stuff
The most basic step when analyzing a book, in my opinion, is looking into the most frequent words contained in the text. Let’s plot the top 10.
A few observations:
Carrie : it’s common and obvious that the main character’s name is the most mentioned word of the book. If you feel that word doesn’t add any value to your analysis, it’s possible (and easy) to remove it so it doesn’t appear in your plots.
: it’s common and obvious that the main character’s name is the most mentioned word of the book. If you feel that word doesn’t add any value to your analysis, it’s possible (and easy) to remove it so it doesn’t appear in your plots. Momma : pretty indicative of how messed up the relationship between Carrie and her mom was and how violent were their interactions.
: pretty indicative of how messed up the relationship between Carrie and her mom was and how violent were their interactions. White : “of course, it’s their last name” you might say since the last name of the two main characters is bound to appear a lot. But one of the most memorable and impressive things about this book is the contrast between white (purity) and red (love, anger, danger) and I have no question at all that the surname and the number of times the word appear are no coincidence at all.
: “of course, it’s their last name” you might say since the last name of the two main characters is bound to appear a lot. But one of the most memorable and impressive things about this book is the contrast between white (purity) and red (love, anger, danger) and I have no question at all that the surname and the number of times the word appear are no coincidence at all. Blood: I personally thought this word was going to be mentioned a bit more, but it makes sense since there are only two main events in which blood is involved (the very beginning and the very ending of the book). But don’t get me wrong, this is a bloody/gory story.
Analyzing only single words makes it easy to overlook important relations, so another way of aiding your analysis is breaking the text down into pairs of words, what we’ll call bigrams. In doing that it’s possible to see better how words are related and identify common pairs, amongst other things.
Looking past the obvious “Carrie White” and “Margaret White” combinations, the third bigram shows a very interesting phenomenon common in books written by Stephen King:
Creepy writing style (but that’s one of the reasons we love him)
If you ever read a book by Stephen King, especially the ones belonging to the horror genre, you probably remember a part like this where he pauses the story for a bit to give us an often creepy look into what the character’s thinking or feeling. Stephen King at his best!
Word Cloud
Another way of visualizing text is a Word Cloud. Although there are several authors now advocating against it — and for good reasons, it is consensus that a Word Cloud is still very useful. I usually like to construct a Word Cloud when working with text since 1.it is easy 😅 and 2. it shows unmistakenly the most frequent and important words in the text.
Using package wordcloud2 and our dataframe of frequent words, we plot 200 words of those. The intensity of the red color and the size of the font are greater for more frequent words.
wordcloud2::wordcloud2(data = words[1:200,], size = 1.6,
shape = "oval",
rotateRatio = 0.5,
color= rev(cartography::carto.pal("red.pal",n1=20))
)
Word Cloud of the most frequent words of the book
Correlation between words
Now we want to determine the most relevant correlations between our 10 most frequent words and any other words of the book. For that, we take a few steps.
First, we find associations of words with correlation greater than 0.15 for our 10 words. Then we select the top 5 most correlated words to our 10.
wordassociation=findAssocs(x=dtm,terms= head(words$word,10),corlimit = 0.15) association = as.data.frame(unlist(wordassociation)) %>%
tibble::rownames_to_column(var = "word") %>%
rename(corr = `unlist(wordassociation)`) %>%
tidyr::separate(col=word,sep="([.])",into=c("word1","word2")) %>%
mutate(word1 = factor(word1,levels=head(words$word,10)),wordno = as.numeric(word1)) %>%
group_by(word1) %>%
slice(seq_len(5)) %>%
arrange(word1, desc(corr)) %>%
mutate(row = row_number()) %>%
ungroup() ggplot(association, aes(corr, reorder(word2,corr),fill=word1)) +
geom_col(show.legend = FALSE) +
facet_wrap(~word1,scales="free_y")+
theme(panel.grid.major.x = element_blank(),plot.title = element_text(hjust = 0.5)) +
ggtitle("Relationship of top words") + xlab("Correlation") + ylab("Words")+ scale_fill_manual(values = cores)
Words more correlated with our top 10 most frequent words
For instance, the word “blood”, relates more strongly with “expiate”, “pig”, “pour”, “coppery” and “awful”. All of those make sense based on the story of the book and the properties of the blood itself, so we’re doing well. 😀
Network of bigrams
Another intuitive and eye-popping way of visualizing bigrams is a network plot, which you can make with ggraph package.
# carrie_bigrams is assumed to be the bigram count table built earlier in the full script
# (unnest_tokens with token = "ngrams", n = 2, then count); requires igraph and ggraph
bigram_graph <- carrie_bigrams %>%
filter(n > 5) %>%
graph_from_data_frame()
ggraph(bigram_graph,layout = "fr") +
geom_edge_link(color="red") +
geom_node_point(color="red") +
geom_node_text(aes(label = name), vjust = 1, hjust = 1)
Network graph of the relationships between pairs of words
This method takes as input our dataframe of tokenized bigrams (pairs) and links those pairs of words that have a high correlation. For instance, “carrie” is directly linked with screamed, white, looked, and “tommy ross”, all of those words that appear many times in the book next to it.
Other relationships are pretty apparent and make a lot of sense, such as “police x station”, “root x beer”, “dance x floor”, since those are pairs that appear several times together in the text.
Final remarks
That concludes our first part of the analysis with methods and comments on how to prepare your data, visualize the text in a more general way, and look for relationships within the data.
There are certainly many other ways of analyzing and visualizing text but those were some of the most interesting I wanted to try for this book. Stay tuned for the next articles and analysis of more books.
People don’t get better, they just get smarter. When you get smarter you don’t stop pulling the wings off flies, you just think of better reasons for doing it. Stephen King.
Shoutout to the great people that wrote these articles. I definitely couldn’t have done it without those amazing pieces of information. | https://medium.com/analytics-vidhya/drawing-insights-from-any-book-with-text-mining-in-r-part-1-ffc9788d4cf2 | ['Rafael Belokurows'] | 2020-12-28 16:40:20.429000+00:00 | ['R Programming', 'Artificial Intelligence', 'R', 'Data Science', 'Books'] |
A Comprehensive Guide To Genetic Algorithms — The ELI5 Way | Genetic Algorithms are based on Charles Darwin’s theory of natural selection and are often used to solve problems in research and machine learning.
In this article, we’ll be looking at the fundamentals of Genetic Algorithms (GA) and how to solve optimization problems using them.
What are Genetic Algorithms?
Genetic algorithms were developed by John Henry Holland and his students and collaborators at the University of Michigan in the 1970s and 1980s.
It is a subset of evolutionary algorithms, and it mimics the process of natural selection in which the fittest individuals survive and are chosen for cross-over to reproduce offsprings of the next-generation.
The natural selection process also involves the addition of small randomness to the offsprings in the form of mutation. This will result in a new population of individuals with mixed fitness.
But only the fittest individuals are chosen for reproduction, and the fitness is improved consistently over generations. | https://medium.com/towards-artificial-intelligence/a-comprehensive-guide-to-genetic-algorithms-the-eli5-way-fcc8940ae9a4 | [] | 2020-12-15 01:03:18.877000+00:00 | ['Programming', 'Python', 'Artificial Intelligence', 'Machine Learning', 'Technology'] |
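To make those steps concrete, here is a minimal Python sketch (an illustration added here, not from the original article; the fitness function, population size, and rates are arbitrary choices). It evolves a population of bit strings toward an all-ones individual using selection, single-point crossover, and mutation:
import random
TARGET_LEN = 20          # length of each individual (a bit string)
POP_SIZE = 50            # arbitrary population size
MUTATION_RATE = 0.01     # probability of flipping each bit
GENERATIONS = 100
def fitness(individual):
    # fitness = number of 1s; the "fittest" individual is all ones
    return sum(individual)
def select_parent(population):
    # tournament selection: pick the fitter of two random individuals
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b
def crossover(p1, p2):
    # single-point crossover produces one offspring
    point = random.randint(1, TARGET_LEN - 1)
    return p1[:point] + p2[point:]
def mutate(individual):
    # small randomness added to the offspring
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in individual]
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(select_parent(population), select_parent(population)))
                  for _ in range(POP_SIZE)]
    best = max(population, key=fitness)
    if fitness(best) == TARGET_LEN:
        break
print(gen, fitness(best))
In a real optimization problem, the fitness function would instead score how well a candidate solves the task at hand.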
How to Conquer Writer’s Block Today: 5 ways | After moving to another city with my wife I am trying to get my life back on track, and also my writing… Last few weeks were very stressful due to the reason why we moved, and I found it very difficult to start writing again, I had a severe case of writer’s block. After staring at my screen for a few hours (again) one day, I decided to search for ways to get rid of my writer's block. Some ways are just plain dumb, but some ways do really work wonders for me!
Below I will share 5 ways that work for me. I would love to hear how you tackle this nasty state of mind. Feel free to leave a comment below since we can all learn from each other. I would appreciate it very much!
What is it and what are the most common causes?
Fear: Many writers have problems with sharing their ideas with the world. ‘What if they don’t like it?’. Many writers are struggling with the fact that not everyone will like their articles, and some people will even criticize your writing. But it is nothing personal. And this is a big reason that a lot of aspiring writers never make it to where they want to be.
Timing: Some ideas need more time to develop, sometimes it’s a good idea to go for a long walk and read your draft again and improve it. There is always something to improve before the article is ready for publication.
It has to be perfect: At least, that is what you believe. But I disagree with you on this, I believe it is better to finish and stop making 50 changes. ‘When do you decide the article is good?’. Your article will never be perfect, this sucks to hear maybe, but I am here to help you and not kick you down.
So how do we put this enemy down?
The answer to this question is not an easy one or a short one to be completely fair. Simply because I have handled this enemy in different ways with different outcomes.
There are no ‘5 easy ways to conquer writer’s block’, which is a shame but it also gives us plenty of space to experiment and find out what works best for us.
Creative ways to kick writer’s block in the b@lls
Stretch your legs
I can almost hear you think ‘duh’ the whole world knows this, but did you know why walking improves your writing? Please allow me to explain.
A different desk brings different words
Change your working environment, go to the local coffee shop, or try out that new restaurant with their super-fast wifi. I and my wife (who also works from home, not a writer btw) try to change our home-offices at least once a week for a new inspiring place. We love to work in our local coffee-shop, it is always busy with tourists and other interesting coffee-lovers. I love to write while watching people and imagine how their holidays are going and what kind of tourist they are. ‘Are they the fancy kind that only tries out the best food in town, or are they the kind of backpackers that wants to feel part of the community?’.
Seeing different people and different places sparks my creativity which will often give me a different view on things. And this helps me to not be judgemental about the subject I’m writing about.
Stay in the zone, the ‘writing zone’
It’s very nice, having your own home office. You have everything at hand, and the coffee is always fresh. Sounds great, but there are also a few downsides. Imagine having several children running around the house and making a mess everywhere they go. Would they distract you up to the point that you cannot focus anymore? And how about other things like noise?
Does your friendly neighbor upstairs wear her high-heels on her beautiful dark-brown oak-floor? Or is there a lot of heavy traffic in the street you live? These are just a few examples of some things that can make your writing-life a living hell. So try to find a calm place to work, without any distractions like loud noises and children.
And, do not forget to turn off notifications on your phone. Once you are in ‘the zone’ you should keep going, if you need to search for things on Google to support your content, don’t! You can do this later, keep writing for now. Just put notes in your draft describing the searches you need to do.
Pump it up!
Getting your heart rate going is a good way to get those creative juices flowing, I know I know.. That saying might be lame, but I do believe it’s true for us creative artists.
For example, go for a run in the morning before taking the kids to school, or ride your bike to the park an enjoy a long break. There are many ways to get your blood flowing and I suggest regular exercise to keep your body healthy and your mind creative.
Make your inner child happy
Every person should be able to let their inner child out once in a while, to play with Lego or play video games for example. Life is serious enough and playing is a good way to get away from the hustle and bustle of being a business owner. So embracing your inner child is the perfect way to balance life and pleasure if you ask me.
Let’s look at my working-days as an example: I wake up at 7.30 AM and have breakfast with my wife. After that, I check my inbox and answer requests from project managers and questions from clients.
After getting the most urgent needs out of my way, I tackle the other tasks of the day. My tasks range from improving my LinkedIn profile to writing articles about financial issues like how to get all your employees ready for the next big change in your call center? for some of my regular clients. It takes up between 3 and 7 hours a day depending on my clients’ needs. But after working for 2 or 3 hours I usually take a long break by having lunch, playing video games, or going for a walk.
In the afternoon, I tend to finish the tasks I started for the day and I'm usually done by 17.30. After that, I have time to relax, have some food and watch a movie (or two). For me, working like this is perfect because I really take my time to deliver the quality my clients deserve, and I take the time to make myself happy, including my inner child.
How do you handle the responsibility of everyday life, how do you relax after a long day at the (home) office?
Once you found what works for you, keep on it and follow this road. Once you have mastered this way of working, writer’s block will be a thing of the past. You are able to write more and write better, enjoy your improvements ☺
Just for laughs, I made a list of things not to do…
How to let writer’s block ruin your life:
Refuse to write when you feel blocked and wait until you get Inspired.
Feel sorry for yourself, blame the whole world and f#ck the deadlines.
Binge watch your favorite TV series and ignore the needs of your clients, they can wait.
Read every article there is about how to overcome writer’s block.
Just wait a few hours more until you cannot find any more excuses, it is not your fault that you’re stuck, right?
If all else fails:
If you still do not feel like writing, then it is time to face the truth, what needs to be done need to be done. It is time to swallow that bitter pill and get on with it... You know what I mean.
Just start with it, if I cannot convince you if you cannot convince yourself.. Then it’s time to just start writing no matter how you feel…
I did not feel like writing today, to be honest, after taking my neighbor to the hospital with a dangerous infection in her mouth. But now I am waiting outside the first help area, thinking how even this situation could be turned around into something less negative by letting my thoughts lead my fingers while writing these words.
The point that I’m trying to make is that no matter how uninspiring the situation is, there is always a moment to start writing or continue writing. There is never a reason why you should not enjoy writing, after all, most of us started writing because it is fun! So whenever you really feel like you cannot write, just write down a few sentences and maybe leave it like that. You might feel instantly inspired to edit, rewrite or write some more after you took this step. And if not, just leave it at that for the time being and you will enjoy writing again at a later time.
I would love to hear how you tackle this nasty state of mind. Feel free to leave a comment below since we can all learn from each other. I would appreciate it very much!
#writerslife #writersblock #entrepreneur #kickwritersblockintheb@lls The Startup Writer’s Relief Let's get writing! Entrepreneur Magazine | https://medium.com/swlh/five-ways-to-conquer-writers-block-today-2609b50cece2 | ['Eric Jan Huizer'] | 2019-11-07 20:03:58.874000+00:00 | ['Love', 'Entrepreneurship', 'Writing', 'Conquer', 'Writing Tips'] |
Linear Regression To Solve Advertising Problems | The purpose of this tutorial is to get a clear idea on how a linear regression can be used to solve a marketing problem, such as selecting the right channels to advertise a product.
This time, we will use Google’s Tensorflow on a Docker container. TensorFlow is an open-source software library for machine learning across a range of tasks. It is a system for building and training neural networks to detect and decipher patterns and correlations, analogous to (but not the same as) human learning and reasoning.
What is a Linear Regression?
A linear regression model is one of the simplest regression models. It assumes linear relationship between X and Y.
The output equation is defined as follows:
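(The original post shows the equation as an image, which is not reproduced in this text version; the model it refers to is the linear form used throughout the rest of the tutorial.)
ŷ = WX + b
Here W is the weight (the slope) and b is the bias (the intercept), both of which are learned during training.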
How to Install Docker and run Tensorflow Notebook image on your machine
As we mentioned on our post “Learning to paint the Mona Lisa with Neural Networks” the best way to run the TensorFlow is to use a Docker container. There’s full documentation on installing Docker at docker.com, but in a few words, the steps are:
Go to ``docs.docker.com`` in your browser.
Step one of the instructions sends you to download Docker.
Run that downloaded file to install Docker.
At the end of the install process a whale in the top status bar indicates that Docker is running, and accessible from a terminal.
Click the whale to get Preferences and other options.
Open a command-line terminal, and run some Docker commands to verify that Docker is working as expected. Some useful commands to try are docker version to check that you have the latest release installed.
Once Docker is installed, you can download the image which allows you to run Tensorflow on your computer.
In a terminal run: docker pull 3blades/tensorflow-notebook
MacOS & Linux: Run the deep learning image on your system: docker run -it -p 8888:8888 -p 6006:6006 -v /$(pwd):/notebooks 3blades/tensorflow-notebook
Windows: Run the deep learning image on your system: docker run -it -p 8888:8888 -p 6006:6006 -v C:/your/folder:/notebooks 3blades/tensorflow-notebook
Once you have completed these steps, you can check the installation by starting your web browser and introducing this URL: http://localhost:8888
Our Advertising Dataset
The Advertising data set we are going to use is from "An Introduction to Statistical Learning", the textbook by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani. It consists of the sales of a product in 200 different markets, along with advertising budgets for the product in each of those markets for three different media: TV, radio, and newspaper.
Our Objective
We will do this by training an inference model: a series of mathematical expressions we want to apply to our data, which depend on a set of parameters. The values of these parameters change through training in order for the model to learn and adjust its output.
The training loop consists in the following steps:
First, we need to initialize the model parameters to some random values.
Second, we need to read the training data for each example, possibly using randomization strategies in order to ensure that training is stochastic.
Third, we need to execute the inference model on the training data, getting for each training example the model output with the parameter values.
Four, we compute the loss.
And last, we adjust the model parameters.
We need to repeat this process many times, nudging the parameters a little on each pass according to the learning rate. After the training of the model is done, we need to apply an evaluation phase.
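To see what that loop looks like without any framework, here is a small NumPy sketch (added for illustration; it is not part of the original notebook and uses made-up toy data) that follows the steps above for a single-feature model:
import numpy as np
# toy data: one feature (e.g. a radio budget) and the target (sales)
X = np.random.rand(200, 1)
y = 3.0 * X[:, 0] + 2.0 + 0.1 * np.random.randn(200)
# 1. initialize the parameters to some random values
W, b = np.random.randn(), np.random.randn()
lr = 0.05
for epoch in range(1000):
    # 2./3. run the inference model on the training data
    y_pred = W * X[:, 0] + b
    # 4. compute the loss (mean squared error)
    error = y_pred - y
    loss = (error ** 2).mean()
    # 5. adjust the parameters in the direction that reduces the loss
    W -= lr * 2 * (error * X[:, 0]).mean()
    b -= lr * 2 * error.mean()
print(W, b, loss)  # W and b should end up close to 3.0 and 2.0
TensorFlow automates the gradient computation in step 5, which is what the rest of the tutorial relies on.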
Reading the data
The first thing we need to do is load our dataset and define our training set.
# load libraries
import warnings; warnings.simplefilter('ignore')
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline

# Load data.
data = pd.read_csv('data/Advertising.csv',index_col=0)

# visualize our data set
data.head()
# Define our train dataset
train_X = data[['Radio']].values
train_Y = data.Sales.values
train_Y = train_Y[:,np.newaxis]
n_samples = train_X.shape[0]

# print training samples for Radio values
print "Number of samples:", n_samples
print train_X.shape, train_Y.shape
Number of samples: 200
(200, 1) (200, 1)
Let’s now visualize our data set
# visualize our results
fig, ax = plt.subplots(1, 1)
ax.set_ylabel('Results',
rotation=0,
ha='right', # horizontal alignment
ma='left', # multiline alignment
)
ax.set_xlabel('Radio')
ax.plot(train_X, train_Y, 'o', color=sns.xkcd_rgb['pale red'], alpha=0.7,label='Original data')
plt.show()
# import tensor flow library
import tensorflow as tf

# problem: Solve ŷ = WX + b
tf.reset_default_graph()

# Set up our training parameters
# ------------------------------
# learning rate
lr = 0.01
# training epochs
t_epochs = 10000

# Define TensorFlow Graph Inputs
X = tf.placeholder("float",[None,1])
y = tf.placeholder("float",[None,1])

# Create model variables
# ----------------------
# Set model weights
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")

# Construct a linear model
y_pred = tf.add(tf.mul(X, W), b)

# Minimize the squared errors
# we will use L2 loss
cost = tf.reduce_sum(tf.pow(y_pred - y,2))/(2*n_samples)

# Define the optimizer
'''Adam is an optimization algorithm that can used instead of the classical
stochastic gradient descent procedure to update network weights iterative
based in training data.'''
optimizer = tf.train.AdamOptimizer(lr).minimize(cost)

# Initiate the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
sess.run(init)
cost_plot = []
# Fit all training data
for epoch in range(t_epochs):
sess.run(optimizer,
feed_dict={X: train_X, y: train_Y})
cost_plot.append(sess.run(cost,
feed_dict={X: train_X, y:train_Y}))
print ""
print "Optimization Finished!"
print "cost=", sess.run(cost,
feed_dict={X: train_X, y: train_Y}), "W=", sess.run(W), "b=", sess.run(b)
fig, ax = plt.subplots(1, 1)
ax.set_ylabel('Results',
rotation=0,
ha='right', # horizontal alignment
ma='left', # multiline alignment
)
ax.set_xlabel('Radio')
ax.plot(train_X,
train_Y, 'o',
color=sns.xkcd_rgb['pale red'],
alpha=0.7,label='Original data')
plt.plot(train_X,
sess.run(W) * train_X + sess.run(b),
label='Fitted line')
plt.show()
x = range(len(cost_plot))
plt.plot(x, np.sqrt(cost_plot))
plt.show()
print cost_plot[-1]
Optimization Finished!
Cost= 9.0462
W= 0.202496
b= 9.31162
Let’s try it now with all 3 variables, Radio, Tv and Newspaper.
Using all 3 values as input, it becomes a multiple linear regression problem, but the process is similar.
# reset our graph to work with the new data
tf.reset_default_graph()

# We read our data set
data = pd.read_csv('data/Advertising.csv',index_col=0)

# Set up our training parameters
# ------------------------------
# learning rate
lr = 0.01
# training epochs
t_epochs = 10000

# Define our train dataset
# ------------------------
# we set all our variables as input vectors
train_X = data[['TV','Radio','Newspaper']].values
train_Y = data.Sales.values
train_Y = train_Y[:,np.newaxis]
n_samples = train_X.shape[0]

# Print our samples
print "Number of samples:", n_samples
print train_X.shape, train_Y.shape
Number of samples: 200
(200, 3) (200, 1)
# Define TensorFlow Graph Inputs
# ------------------------------
# we need to change our dimensions since we now have 3 inputs
X = tf.placeholder("float",[None,3])
y = tf.placeholder("float",[None,1])

# Create model variables
# ----------------------
# Set model weights
# our W changes due to our new dimension
W = tf.Variable(tf.zeros([3, 1]), name="weight")
b = tf.Variable(np.random.randn(), name="bias")

# Construct a multidimensional linear model
y_pred = tf.matmul(X, W) + b

# Minimize the squared errors
# we will use L2 loss
cost = tf.reduce_sum(tf.pow(y_pred - y,2))/(2*n_samples)

# Define the optimizer
'''Adam is an optimization algorithm that can used instead of the classical
stochastic gradient descent procedure to update network weights iterative
based in training data.'''
optimizer = tf.train.AdamOptimizer(lr).minimize(cost)

# Initiate the variables
init = tf.initialize_all_variables()

# Set up a display step for epoch log visualization
display_step = 1000

# Launch the graph
with tf.Session() as sess:
sess.run(init)
cost_plot = []
# Fit all training data
for epoch in range(t_epochs):
sess.run(optimizer,
feed_dict={X: train_X, y: train_Y})
cost_plot.append(sess.run(cost,
feed_dict={X: train_X, y:train_Y}))
#Display logs per epoch step
if epoch % display_step == 0:
print "Epoch: ", '%04d' % (epoch+1), "
Cost= ", sess.run(cost, feed_dict={X: train_X, y: train_Y}), \
"
W= ", sess.run(W), "
b= ", sess.run(b), "
"
print ""
print "Optimization Finished!"
print "cost=", sess.run(cost,
feed_dict={X: train_X, y: train_Y}), "W=", sess.run(W), "b=", sess.run(b) Epoch: 0001
Cost= 68.7282
W= [[ 0.01]
[ 0.01]
[ 0.01]]
b= 1.15629 Epoch: 1001
Cost= 1.404
W= [[ 0.04686551]
[ 0.19321483]
[ 0.00140966]]
b= 2.53457 Epoch: 2001
Cost= 1.39207
W= [[ 0.04579829]
[ 0.18867318]
[-0.00096271]]
b= 2.92653 Epoch: 3001
Cost= 1.39206
W= [[ 0.0457647 ]
[ 0.18853027]
[-0.00103737]]
b= 2.93887 Epoch: 4001
Cost= 1.39206
W= [[ 0.04576467]
[ 0.18853012]
[-0.00103745]]
b= 2.93888 Epoch: 5001
Cost= 1.39206
W= [[ 0.04576465]
[ 0.18853007]
[-0.00103747]]
b= 2.93888 Epoch: 6001
Cost= 1.39206
W= [[ 0.04576106]
[ 0.18852659]
[-0.00104105]]
b= 2.93889 Epoch: 7001
Cost= 1.39207
W= [[ 0.04577899]
[ 0.18854418]
[-0.00102315]]
b= 2.9389 Epoch: 8001
Cost= 1.39206
W= [[ 0.04576319]
[ 0.18852855]
[-0.00103896]]
b= 2.93889 Epoch: 9001
Cost= 1.39206
W= [[ 0.04576985]
[ 0.18853523]
[-0.00103229]]
b= 2.93889
Optimization Finished!
Cost= 1.39206
W= [[0.04576455] [0.18852994] [-0.00103758]]
b= 2.93889 | https://medium.com/3blades-blog/linear-regression-to-solve-advertising-problems-7e4fb96c881a | ['Samuel Noriega'] | 2017-10-31 20:55:07.547000+00:00 | ['Big Data', 'Deep Learning', 'Marketing', 'Marketing Technology', 'Machine Learning'] |
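As a final sanity check (this snippet is an addition and not part of the original notebook), the fitted weights printed above can be plugged straight back into the linear model to estimate sales for a hypothetical budget split across the three channels:
import numpy as np
# weights (TV, Radio, Newspaper) and bias copied from the output above
W = np.array([[0.04576455], [0.18852994], [-0.00103758]])
b = 2.93889
# hypothetical budgets for TV, Radio and Newspaper
budget = np.array([[100.0, 30.0, 20.0]])
predicted_sales = budget.dot(W) + b
print(predicted_sales)  # roughly 13, in the same units as the Sales column
Note how the Radio weight dominates here, which is consistent with the single-variable fit earlier in the tutorial.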
Top Writer in Fiction? | Update on 1/10/20 — I’m now a Top Writer in Movies, Travel, Reading AND Fiction now. W.T.F.?
I’ve been writing fiction since I could hold a crayon. By the time I stumbled upon Medium I’d accumulated a nice fat folder of short stories, flash fiction, long-form fiction, and three novels. I’ve had six short stories published over the course of thirty years and was nominated for a Pushcart Prize in 2011 and was interviewed by the editor who nominated me.
I never made a penny from my fiction
Then I found Medium. You’ve heard this one before. You may very well have experienced this yourself. Readers! I found readers for my stories.
Amazing!
If you’re writing on Medium you know that you have to feed the beast daily. It’s not just Medium. The entire digital world is one big, ravening maw that needs fresh content to devour daily. Once you start down this road be ready to move and move fast.
About three months ago I was thrilled and delighted to discover that Medium had seen fit to pat my head and award me Top Writer in Fiction status.
WoooHOOOOOOOOOO!
My secret? I’ve been mining my folders filled with thirty, forty years of fiction.
Literally Literary, The Non-Conformist, The Narrative, Plan B Vibe , London Literary Review, and For Members Only have all published my fiction over the past year (a humble and heartfelt thank you!). Some of that work was finished, polished, perfect and beautiful when I submitted it but a lot of it was rough and sketchy. Thanks to Medium, to the editors of these publications and to readers, I had the opportunity to work on those half-formed stories and bring them to life.
Again, thank you.
In those folders on my hard drive, there also existed a number of half-formed personal essays that I’d never bothered to complete because there was no way those were ever going to find publishers.
Here on Medium and thanks to the publishers of The Ascent, Live Your Life on Purpose, P.S., I Love You, Candour, and Tenderly I have found readers for my non-fiction work. Again…….wow!
But I’m certainly going to lose my Top Writer in Fiction status because I’ve exhausted my store of fiction.
Writing fiction takes time and concentration and, for me, it’s kind of hit or miss. I can’t tell if a story is going to work while I’m in it and there are numerous times when I have to backtrack and find my way again. It’s a process that does not lend itself to daily publication. I’m sure there are fiction writers who can and do slam new stories out and are able to publish daily.
It ain’t me, babe
Meanwhile, my mind teems with new ideas for essays. If I’m awake, I’m percolating multiple story ideas and my draft folder is filled with outlines and titles of pieces you’ll be reading tomorrow and next week and next month.
I absolutely can and do manage nearly daily publication of my essays and thanks to being an editor on The Partnered Pen and Plan B Vibe, I have the luxury of being able to immediately publish my work in widely-read publications. Huzzah!
However, even in non-fiction, I’m probably not going to nail top writer status in anything.
I can’t be bothered with niches or SEOs or killer keywords. I’m never going to even try to limit or redirect or tame that bubbling froth of ideas that fills my brain and zip out across the keyboard every day.
Nothing lasts forever, not even Medium, so the day will come when I reclaim the time, the concentration, the hunger to immerse myself in making shit up again (aka fiction writing). In the meantime, however, I’m not worried that I can’t write fiction anymore, that I’ll lose my “touch” and will only be able to slam out these ponderings and memoir lite pieces.
I may even figure out how to manage my time so that I can block off writing time for fiction in addition to writing pieces like these and promoting them across the social media spectrum. It could happen.
But here we are poised to start 2020 and I’ve renewed my subscription to Medium for another year. I’ll be reading your work and writing, publishing, and promoting my work at the kind of clip being unemployed allows. Thank you for reading my work. Thank you for responding, for highlighting, for clapping and, most of all, (to my fellow Medium writers) thanks for continuing to enrich my reading life with your work.
Onward!
© Remington Write 2019. All Rights Reserved
Here’s the first chapter of my most recent novella, “Graceless”, which I published as a series on Medium. It’s accompanied by my partner’s photography. | https://medium.com/the-partnered-pen/top-writer-in-fiction-aa6f5e811cc7 | ['Remington Write'] | 2020-01-10 22:21:10.124000+00:00 | ['Writers On Writing', 'Nonfiction', 'Fiction Writing', 'Writing', 'Reading'] |
Not As Easy As ABC | Not As Easy As ABC
A review of Judith Flanders’ unconventional history of alphabetical order, *A Place For Everything*
When I was in high school, one of my favorite things to do was to come up with “unanswerable” questions. I thought they were unanswerable because no one had Google at their fingertips quite yet, and high schoolers’ internet searching skills were and still are truly hideous. One of those questions was “Which came first, the color ‘orange’ or the fruit ‘orange’? (It turns out the fruit was named first in English.) The other one, which is a little bit more unanswerable but I still never bothered to do an internet search for: “Why is the alphabet in the order that it is, and who decided it?” Well, that’s a big reason why I picked up Judith Flanders’ new book, A Place For Everything: The Curious History of Alphabetical Order. As you can imagine if you think of how little sourcing there would be for such a historical query, that is not exactly what the book is about (although I did get an answer to that question in my interview with Flanders). However, the book is a wonderful example of great microhistory, following the development and use of alphabetical order over time and place.
You may be thinking, like me, that alphabetical order is pretty natural and doesn’t require a lot of development. It probably just happened naturally. But at some point, you had to learn how to put things (or find things) in alphabetical order, and it probably didn’t come very naturally at first. Most likely, someone taught you how it worked. Alphabetical order’s need for development becomes clear in an excerpt from Giovanni Balbi’s Catholicon (a compendium of grammar) where Balbi lays out in minute detail how he uses alphabetical order. Flanders explains:
Balbi’s long explanation brings into focus the work that goes into dictionaries, which, when we use them, seem straightforward, as though the dictionary maker has simply had to perform a mechanical task: Ab before Ac, and so on. For while alphabetical order is the most useful tool to find a word in a dictionary that has so far been devised, it is by no means the easiest tool to create a dictionary. Grouping words by type (slang, or technical words, or fields of knowledge), or by entry size (words like “go” and “run,” which require lengthy definitions and have a large number of meanings), or grammatically (verbs, nouns, adjectives) are all far easier for the dictionary writer to implement. In addition, endless decisions are required to alphabetize usefully. Where do abbreviations get placed? — does DIY go under “di,” or under “do” for “do it yourself ’? What about compound or hyphenated words? Post / Postilion / Post office, or Post / Post office / Postilion? What about words that have no letters, like 9/11? What happens to words with accents? Are those letters treated as though there is no accent, or are they ordered as though the accented letter is a separate entity? Alphabetization is easier to use than to produce, as Balbi’s explanation made clear.
In other portions of the book, Flanders also notes that absolute alphabetical order was not exclusively used until relatively recently, meaning that putting “Nesbitt” before “Nathan” would be just as correct as vice versa. It’s only later that there is a correct way to order “Nash” and “Nathan”, much less “Nat”, “Nathan” and “Nathaniel”.
Most of Flanders’ published work explores the Victorian era, so it is impressive that she delves deep into almost 5000 years of history all over the world with seeming ease. She even refuses to make the ultra-common mistake of starting the story of the development of printing with Johannes Gutenberg, choosing instead to trace the origins of movable type in China through its spread and modification in Korea before Gutenberg stole ideas from about fifteen different people to develop the movable type printing press. It’s details like this that give me confidence in a historian’s process, and Flanders proves worthy of that confidence throughout the book.
Alphabetical order, like any technology, had its detractors. Devotees of various faiths considered it a vice and a shortcut to the more meaningful memorization of locations and even ideas. Flanders relays a get-off-my-lawn-style rant against “kids these days” who used alphabetization:
The barrister Abraham Fraunce (c. 1559 — c. 1592/3), whose greater fame was as a poet under the patronage of Philip Sidney, trumpeted his discontent in an I-had-to-learn-the-hard-way-so-you-youngsters-should-too outburst: “I could heartily wish the whole body of our law to be rather logically ordered, than by alphabetical breviaries torn and dismembered. If any man say it cannot be . . . then I do not so much envy his great wisdom, as pity his rustical education, who had rather eat acorns with hogs, than breed [bread] with men, and prefer the loathsome tossing of an A.B.C. abridgement, before the lightsome perusing of a methodical coherence of the whole common law.”
In other ways, however, the use of alphabetical order does legitimately mark “a transition in worldview”. That seems so silly to say of what seems to most a benign innovation, but by the time you reach this point in A Place For Everything, you will see that alphabetization is not a neutral conduit. Flanders writes:
Just as the spread of alphabetically organized dictionaries and indexes had indicated a shift from seeing words purely as meaning to seeing them as a series of letters, so too the arrival of alphabetically ordered encyclopedias indicated a shift from seeing the world as a hierarchical, ordered place, explicable and comprehensible if only a person knew enough, to seeing it as a random series of events and people and places.
You may, like me, find that change in worldview nauseating. I would hate for my students to think world history is a “random series of events and people and places”. I would rather my students see it as “explicable and comprehensible if only a person knew enough”. But then again, that’s why I don’t arrange my world history class in alphabetical order. Doing so would be silly. And that’s where Flanders is spot on in her evaluation of the merits or demerits of alphabetization:
Alphabetical order is a means, not an end in itself. It is a system that permits us to organize large quantities of information, and to make it available to others whom we do not know, and who have no information regarding the people or ideas or intentions of those who originally produced and arranged it. There continue to be many ways of organizing, storing, and retrieving information, sometimes in the same way it originated, sometimes in a way that transforms it. Linnaeus’s taxonomy, and the periodic tables, are naming and classifying systems. The Dewey decimal system classifies, but does not name. Most museum exhibits are organizing, classifying, and displaying systems. Maps are displaying and also transforming systems, as are graphs and pie and bar charts. The importance is not the method, but a method, any method. “The human mind works by internalizing such arbitrary and useful tools, as a kind of grid onto which knowledge can be arranged, and from which it can be retrieved,” wrote the novelist A. S. Byatt. We think, therefore we sort.
In any sorting decision, there are going to be better and worse means of ordering. Some social scientists have provided evidence that the alphabetization of student names is discriminatory. In some cases, like a dictionary or encyclopedia, it may be the proper sorting technique. The key is to find a sorting means that helps you meet your ends. (You didn’t think a review of a book about alphabetical order could get this serious, did you?)
Flanders’ A Place For Everything is skilled in its job of telling a unique history through a means that is both interesting and historically sound. If such quirks of history enrapture you, I would heartily recommend this book. And if you haven’t already, check out my interview with Judith Flanders for more about how she came to the topic, the process of writing history, and more.
I received a review copy of A Place For Everything courtesy of Basic Books and NetGalley, but my opinions are my own. | https://medium.com/park-recommendations/not-as-easy-as-abc-6bfce7244f63 | ['Jason Park'] | 2020-10-19 10:25:01.733000+00:00 | ['History', 'Nonfiction', 'Microhistory', 'Books', 'Reading'] |
My Journey To Fear-Less: An Act of Self-Care & Survival | I am tired of being afraid.
That is what I definitively stated, out loud while sitting in a corner of the verandah at home in Jamaica in November 2015. I had just completed my Master’s program in England, birthed from a longtime dream to see a side of the world that I had never seen before, and a fear that I may never have the opportunity to do so again. At this point, I could feel the stirrings of the anxiety and panic that I have spent the majority of my years on Earth battling with. They were fighting the good fight to bubble to the surface, and I can’t really blame them, because I was inviting them with my own negative energy and self-doubt, considering my “What’s next?”.
The fear of embarrassment. The fear of disappointing myself. The fear of making myself seen and/or heard. The fear of not being successful. The fear of disappointing others. The fear of not living up to my full potential. The fear of rejection. The fear of not living up to all the expectations that others have of me.
These were the main players playing catch with my mind, while deep at the very core of my soul, my spirit, my essence, I knew that pursuing a conventional, sit-in-a-cubicle-under-fluorescent-lights job/career was the last thing that I wanted to pursue. And that scared the daylights out of me. It made me uncomfortable to go against what has always been expected and seen as the norm, so the initial hesitation was rampant. To be honest, when I look back through the snapshots of my childhood and beyond, my life has been inundated with “supposed to”. I lived my life as a shell of myself, because I never felt that I could be more than what was expected of me.
The seeds of fear have been sprouting for far too long, taking a hold of my life while I went through the motions that it directed me to. I can’t say what the exact moment was that I made that decision, but I decided that fear was not going to be the driver of my life any longer. My life may now be in what may seem to many as a never-ending period of transition, but what is at my core, burning so brightly, just waiting to come out and show its full potential for illumination, trumps any self-doubt that may sneak its way in.
The idea of fearing less has been my most effective catalyst for growth. Admitting to myself that I was tired of being afraid, and committing myself to live a life being fulfilled by helping others to realize the greatness within themselves, has led me down a path of educational, and many times emotional, twists and turns.
Committing yourself to fearing-less will mean that sometimes (more often than not), your ways of perceiving the world, your energies, and your vision will not align with that of others, even those that you hold close.
The manifestations of fearing-less can look like any number of things on any given day, but the roots remain the same. These are some of the foundational principles of fear-less-ness:
Create opportunities for self-fulfillment in all situations.
Be okay with saying “No” to what does not serve you.
Find peace in the fact that letting go is at times the greatest necessity to keep moving forward.
Trust your gut.
Being committed to this #JourneyToFearLESS has opened up new doors to what it means to be unapologetically me. It has allowed me to overcome my tendency to shrink myself, whether figuratively or literally, to make the world more comfortable with my existence. I now understand more clearly that my space in this world is unique and necessary. I do not need to extinguish my own fire in order to help others keep their own blazing. Instead of ruminating about unanswered job applications for roles that do not serve me, I am focusing on consistently unearthing my own opportunities for growth, side-tracking self-doubt to find the greatness in all things, all paths, and all possibilities.
Being on a journey of fear-less-ness, with the journey being treasured and no destination in sight, has brought me to a place where I remind myself each day of my own greatness, which needs no definition other than the one that I give to it myself. So, would I say that I am 100% settled in a position as a leader of fear-less-ness? Maybe not. I will not say that I’m not yet where I want to be, but I will say that I am right where I am meant to be, and that is making sure I am at the forefront of my own mind, seeing me, filling my own cup, nourishing all that is within me while on this journey to living a life of purpose and fulfillment, fearlessly.
__________
Thank you for taking the time to read the words that have been spilling from my heart onto the screen in front of you.
We get vulnerable around these parts, and it’s no easy task finding the strength it takes to reveal your truths (whether through writing or just reading a piece). | https://shanicejdouglas.medium.com/my-journey-to-fear-less-an-act-of-self-care-survival-ac9a97bc08fc | ['Shanice J. Douglas'] | 2018-08-07 18:56:35.624000+00:00 | ['Personal Growth', 'Life Lessons', 'Mental Health', 'Fear', 'Psychology'] |
Principal Components Analysis (PCA), Fundamentals, Benefits & Insights for Industry | The intuition behind the dimension reduction
Let’s start with a very simple example where the 3 features of a dataset (x, y and z) are displayed in the 3D space below (code below):
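A minimal, illustrative sketch of generating and plotting such a toy dataset (assuming numpy, pandas and matplotlib are installed; the exact values behind the figure aren’t shown here, so these numbers are made up purely for illustration):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the "3d" projection on older matplotlib)

# Illustrative toy dataset: little spread along x, much more along y and z
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "x": rng.normal(loc=0, scale=0.5, size=200),
    "y": rng.normal(loc=0, scale=5.0, size=200),
    "z": rng.normal(loc=0, scale=5.0, size=200),
})

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")  # 3D axes for the scatter plot
ax.scatter(df["x"], df["y"], df["z"])
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()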
Original dataset displayed around the x, y and z axes.
You can easily notice that despite having 3 dimensions, the scatter plot is mostly spread along “y” and “z”; the variation along the “x” axis is quite low.
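To back up that visual impression with numbers, still using the illustrative dataframe from the sketch above, we can compare the variance of each column:

variances = df.var()
print(variances)                     # roughly: x ≈ 0.25, y ≈ 25, z ≈ 25
print(variances / variances.sum())   # x carries only a tiny share of the total variance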
If we had to model this dataset, how important would the “x” dimension be in comparison with “y” and “z”? Probably not that much!
So, instead of using 3 coordinates [x, y, z] to identify one dot within this 3D space, wouldn’t [y, z] already be a good indication of where this dot is?
We could even go further and imagine a 2D plane following the alignment of dots along the x-axis and designating the dots’ positions with only 2 coordinates according to this plane! That would probably look like this:
Original dataset with a possible 2D reduction plane
Thanks to this red 2D plane and its two corresponding vectors, we only need two coordinates to accurately identify the position of each dot.
Obviously, we are losing some information regarding the “depth” of each dot along the original x-axis, but this is a risk we are willing to take!
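To see that idea in code, here is a minimal, illustrative sketch of the same reduction with scikit-learn (the library choice is an assumption here, reusing the toy dataframe from the earlier sketch):

from sklearn.decomposition import PCA

pca = PCA(n_components=2)
coords_2d = pca.fit_transform(df[["x", "y", "z"]])

print(coords_2d.shape)                      # (200, 2): two coordinates per dot instead of three
print(pca.components_)                      # the two vectors spanning the reduction plane
print(pca.explained_variance_ratio_)        # e.g. ~[0.50, 0.49]: variance kept by each component
print(pca.explained_variance_ratio_.sum())  # close to 1.0, so very little "depth" information is lost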
If you got this example right, you know what dimension reduction is! | https://towardsdatascience.com/principal-components-analysis-pca-fundamentals-benefits-insights-for-industry-2f03ad18c4d7 | ['Pierre-Louis Bescond'] | 2020-05-31 21:45:11.927000+00:00 | ['Dimensionality Reduction', 'Data Science', 'Machine Learning', 'Python'] |
Revisiting six memos | A lost memory from my childhood Christmas TV schedule, also lost in 2015
Back in January I proposed Six memos for 2015, in which I suggested six ideas that I believed would be popular or at least interesting in 2015. They were: Resilience, Ambient persuaders, Ambiguity, Mavens, Emotional sensing and Personal data rights. The thing about these sorts of clickbait ‘predictions’ is that people don’t tend to revisit them, they just push out a new set every year — as if the list itself is its very own medium. So to break with that tradition, I’m not publishing a new list, instead I’m going to look at how much traction they got last year and explore some of the ideas that surround them.
First up, Resilience.
As a concept for business it didn’t really take off in 2015. Overall it increased in use only slightly (according to Google Trends). Mostly the notion of resilience is still focussed squarely on the environment and climate change or our personal psychological state and ability to deal with stressful situations. Of course both of those things do impact the business world, whether that’s at a strategic level or personal work/life activities. I think one of the issues with idea of resilience is that it’s about ‘weathering the storm’ — it’s about coping with the bad things, the stressful things, the negative events. It doesn’t fit the upbeat, peppy business talk, where we are agile, responsive, adaptive, lean, growth hacking super bros (especially in the US). I still like it as an idea for thinking about business and design, and there‘s plenty of writing in relation to investment risk and volatility (eg: FM Global’s Resilience Report which is at a macro economic level). I’ve recently been reading Antifragile by Nassim Nicholas Taleb and I think his notion of “antifragile” over the idea of resilience is rather brilliant. It helps get past “weathering the storm” to the slightly more positive position of “what doesn’t kill you makes you stronger”. For him an antifragile system is one with the ability to gain or benefit from those things that diminish or harm fragile systems. And he describes those things as “The Extended Disorder Family”.
The Extended Disorder Family from Antifragile
Resilience is important in dealing with the effects of things mentioned above, but I think that I missed the subtlety in the meaning of ‘resilience’ before. It’s important for us not to simply rebound from adversity but to learn from it and come back stronger. I still think we are going to hear a lot more about resilience in the coming years and most likely without the nuanced understanding that ‘antifragile’ brings to the table. I believe this because our exposure to the ‘disorder family’ will grow, thanks to climate, economic, social, technological and cultural change in the coming years. Ultimately understanding and creating ‘antifragile’ systems will be the key to dealing with that change but this will start from a standpoint of being resilient.
Ambient persuaders
This was also a bit of a misnomer. The focus has continued to be about ‘nudging’ and thankfully, with a lot more being written about the ethics of nudges and how people react to being nudged. Although behavioural economics continued to be a hot topic this year, not much was said about the tactical side, about the manifestation of the nudge or indicator, about the ambient signifier (with the exception of sites like ambient-accountability.org and Dan Lockton’s work). When we did hear about these techniques most was around climate action, governmental policy change and wearable devices, especially regarding health. I chose the phrase ‘ambient persuader’ because for me a ‘nudge’ is an action and I wanted to point out the manifestation, the thing that leads to the action. I wanted to find a way to talk about those designed elements whose purpose was to nudge. In the ’50s Vance Packard wrote about The Hidden Persuaders in advertising and PR. For me much of nudging comes from those ideas. The creation of devices that trigger psychological states that nudge people towards certain actions. The ambient idea is really about separating them from the dark patterns of MadMen. These aren’t the subliminal messages telling me I’m worthless because I don’t have the latest gadget, but rather background signals that indicate to me that I’m making a good decision or that I need to change what I’m doing to help improve my life or the life of others. Ambient persuaders can also be hints to mindfulness and connectedness eg. even that my partner, although thousands of miles away, is there thinking about me, alive, and in tune. This year we saw the Apple watch and the ability to simply tap a friend on the wrist from afar and even this Kickstarter for a connected pillow to hear your partner’s heartbeat. They both are about reminding or remaining intimately connected, to ensure the other person is ‘in mind’. They aren’t strictly ambient or persuasive but they are less intrusive than the tyranny of notifications that overt nudges bring, where the contemporary manifestation of MS Office Clippy could permeate our world. Here’s a brilliant, if a little dystopian glimpse from Superflux at what that could be like.
That’s why I believe there’s still a lot more discussion to have around ambient persuaders. We need to investigate how they can be used to hint, guide and show the way, rather than overtly nag us. We need to talk openly about the techniques employed, the ways in which they manifest and the impact they may have on our already very noisy lives. So let’s see what happens in 2016. One thing already to look forward to is Dan Lockton’s book Design with Intent for O’Reilly later this year.
Ambiguity
This year we saw a lot about deep learning and attempts to teach AIs to handle nuance. Ultimately how intelligent machines deal with ambiguity will be the key to their usefulness and thus how deeply we allow them into our lives. Researchers are beginning to understand more about the types of ambiguities that arise with machines and how they can be very different to those of humans. Image content processing and understanding is an area where we’ve seen a lot written about this year. The horrendous classification mistake by Google’s Photos App wasn’t a decision that would have even registered as ambiguous to humans, we would have classified it correctly (unless deliberately making a racist slur). But the machine made a terrible mistake, one that rightly caused a lot of hard questions to be asked of the engineers. But that’s where I think the big problems lie for AI, what it may be ‘certain’ about could be an area of ambiguity for humans and things we are certain about may be very ambiguous for the machine, and in ways that don’t match what we understand about ourselves. How we ‘see’ and ‘understand’ an image is very specific and it doesn’t mean that a machine when taught to ‘see’ and to ‘understand’ images will do it in exactly the same way we do — we don’t know enough about ourselves to truly model like for like. Ultimately the machine thinks differently and what it finds ambiguous can also be different — here’s a great piece in Nautilus on exactly that.
The other side of ambiguity is the need to embrace it, as it makes us the interesting nuanced creatures we are. And that also came up a lot this year. Mostly in reaction to Big Data. Mushon Zer-Aviv in a presentation for the HKW 100 Years Project talked about the desire for ‘disambiguation’. He speaks of how we look to big data as a perfect representation of the real world and how we employ reductive approaches to ‘disambiguate’ and create a common point of understanding. But how in doing so we lose all that is real, human and valuable. He rejects this and calls for a ‘reambiguation’ of things. Which reminds me of the Swiss historian, Jacob Burckhardt and his idea that “the essence of tyranny is the denial of complexity”. That’s from the late 1800’s when he feared the “terribles simplificateurs [simplifiers]”; the employment of generalization and abstraction to divide and categorise and ultimately remove individual agency. This reflects the fear of being analytically excluded through blunt categorization and normalisation of populations (without the nuances that come with ambiguities).
So in one form or another ambiguity was a big topic this year, and I believe next year we will see this debate focus further, driven by questions about how machines can accurately and safely make decisions that impact our lives.
Mavens
I don’t think there’s any escape from these lone wolves, isolated experts, and crazed egos, especially as even more is being said about the trials of working collaboratively and Collaborative Overload. However there were a few glimmers of hope. Mostly these take the form of opening up debate and questioning those who speak as authorities. We started seeing the beginning of this with people questioning the position of the storyteller (especially in data presentation). People revealing the mechanics, the tricks and role of the unreliable narrator. One of the keys to challenging these mavens is in opening up dialogue and enabling collaborative discourse. Not closed ‘truths’ (to be accepted) but data and facts open to secondary investigation — open to all of Carl Sagan’s BS detection.
In a great piece by Catherine D’Ignazio she asks What Would Feminist Data Visualisation Look Like? One point that particularly resonated with me, especially in regards to ‘mavens’ was that we “Make dissent possible”.
… one way to re-situate data visualization is to actually destabilize it by making dissent possible. How can we devise ways to talk back to the data? To question the facts? To present alternative views and realities? To contest and undermine even the basic tenets of the data’s existence and collection? A visualization is often delivered from on high. An expert designer or team with specialized knowledge finds some data, does some wizardry and presents their artifact to the world with some highly prescribed ways to view it. Can we imagine an alternate way to include more voices in the conversation? Could we effect visualization collectively, inclusively, with dissent and contestation, at scale?
So let’s keep tugging at the curtain and revealing the reality of these wizards, let’s ensure that their ‘facts’ are not blindly accepted but rather points for discussion and where necessary, dissent.
Emotional sensing
The use of emotional sensing for UX and experience research has continued to rise, but it’s not as prevalent or talked about as I expected. Much is still focussed on simply measuring the ‘effect’ of a design and other forms of evaluative research. In fact we are still battling over the use of empathy in business and battling even harder to convince many organisations to treat people with respect. There are many that still hold the belief that empathy has no place in business, economics, data or science and that it must always be a case of pure objective scientific detachment or left to the market to sort out. Personally I believe those people are deluded if they truly believe they can dehumanise themselves and the systems they create (I know, not a very empathetic thing to say). But emotional sensing isn’t really about a generalised idea of empathy, it’s probably more closely aligned to cognitive empathy. Cognitive empathy is about recognising and understanding another’s emotional state, sometimes also known as ‘perspective taking’ (Check out Indi Young for a practical guide to its use in design and business research).
It’s this kind of emotional sensing and cognitive empathy that’s been on my radar this year. There have been a lot of articles about machines being able to recognise emotions. First up there’s a burgeoning category of applications or systems that analyse behaviour to assess emotional state such as using your smart phone behaviour to derive whether you may be suffering from depression. This is indirect emotional sensing. It’s machine learning, big data, pattern analysis after the fact and a form of diagnostic recognition. Not really about understanding the person, more just about profile matching in the model. However, there’s also been a fair amount written about the recognition of human emotions by machines in human machine exchanges. This opens up the interesting part, machines applying cognitive empathy, where they understand the emotions in the exchange and modify their behaviour based on the emotional responses from the human. And that is key to ‘authentic’ feeling, human/machine exchanges and our acceptance of robot helpers, assistants and nurses. But it’s an area full of ambiguity and difficulty as often humans aren’t particularly good at it either. The field is called ‘affective computing’. On one side you have companies like Affectiva who are building deep datasets for real-time recognition of “emotional responses to digital media”. Which does feel a little horrifying given how closely it is linked to the advertising world. On the other we have Microsoft and Azure recognising emotions in pictures. Many of the underlying techniques are being incorporated into services and technology right now, so I expect to hear a lot more about emotionally aware and emotionally responsive services, interfaces and systems this year. Mostly from the robotics domain as the costs for consumers are coming down, fancy one of these? — https://www.autonomous.ai/personal-robot.
But of course all this talk of emotions came in the year we had the brilliant Inside Out from Pixar. A film that squarely focussed our attention on an animated child’s emotional development.
Personal data rights
So that brings me to the last of my memos from 2015; Personal data rights. When I wrote about personal data rights I was thinking about the need for a shift in ownership. A model where the individual held (by default) the right of use and access to any data about them and more importantly created from or by them. There’s a great piece on data ownership here via the Quantified Self blog, but not much else out there.
However, back in June I visited QS15 (The Quantified Self Conference in San Francisco) and it opened my eyes to the sort of data people are collecting and sharing. It was incredible, and much of it was essentially people trying to hack their health and understand their minds or bodies better. At QS15 there were some amazing individuals willing to share their journeys collecting and analysing very personal data (great set of videos of the talks available here). People are tracking all manner of things, from the standard fare of location and activity, through to sleep, heart rate, blood pressure and heart rate variability (great resource for HRV analysis on Paul LaFontaine’s blog), and on to microbiome (and not just the gut), brain activity, detailed aspects of their menstrual cycle, blood markers, glucose and even building custom hardware for controlling diabetes, or monitoring the electromagnetic fields in their apartment. A lot of this was very individual and personal, often spreadsheets and notebooks and done to help with an existing condition or to try and improve performance or quality of life. But it has a very active community feel, with people sharing, helping and supporting each others’ efforts to arrive at techniques and best practices. In some cases services and companies have stepped in to support this, most notably uBiome that offers a simple and fairly cheap way to get your microbiome sequenced. And a lot more are coming. Some linked directly to sensors and devices or focused on specific issues or data. Others are looking broader and at the use of self collected data to inform en masse, as more and more people start to see the benefits from experimentation, small group analysis and sharing multiple types of data to further understand very specific issues. One company that is making a play for this collaborative personal data space is We Are Curious (not public as of writing). They promise a way to pool your data and use it to ask questions about your well being and health and its CEO is Linda Avey (co-founder of 23andme, the consumer DNA profiling company). These are all outside of the mainstream health industry and still (mostly) have a hacker or DIY ethos driving them. One of the conversations I sat in on at QS15 was concerned with the use of personal data by corporations and organisations to exclude people. The main concern was with how a deep but partial view of an individual means that they may be categorised as ‘abnormal’, essentially outside of the standard deviation for this or that measure (could be blood pressure), and thus flagged as higher risk. The fear many had was that this information may be used or shared without consent and thus end up being used to inform other machine driven scoring systems like health insurance costs or even access to resources. The problem here is the ‘population’ that you are measured against, and those who already feel like outsiders fear further marginalisation and see this as an acute issue. And it’s not quite as paranoid as it sounds, if we think about how IBM Watson’s focus of late has been heavily on its use in health care (here’s XKCD’s thoughts on that) or the fact that Health and Life insurance is a $644 billion industry in the US. In fact insurance companies are leading the charge here.
NPR ran an article in April on how John Hancock Insurers want you to trade data for discounts and in the auto insurance market, data on how you drive is quickly becoming the new model for how your premiums are calculated.
Ultimately that’s the key question, what do you get in return for sharing your data? Unfortunately that in itself can be a little short sighted as it focusses on the immediate one-to-one exchange, nothing is said of how that data will be used later. Often we have no transparency regarding how it will be mined for patterns, aggregated, deep learned and modelled, providing the as yet unknown insights and decisions to shape the business or organisation that wields it. The belief is that by understanding this big data they will be able to hold a mirror up to reality and judge your part in it. Those who are good and ‘play by the rules’ will be rewarded and those who don’t, punished by higher costs or exclusion. All driven by the algorithm, automated and no longer subject to human error or ambiguity. But as I mentioned before ‘ambiguity’ means that machines don’t always get things right and we must be careful not to simply yield to the idea of the perfect model, as the statistician George E. P. Box put it:
“The most that can be expected from any model is that it can supply a useful approximation to reality: All models are wrong; some models are useful”.
We are generating richer, more detailed, more specific and more personal data than ever before. Sharing it can be very beneficial but what will be the cost of blindly feeding the model? For me there’s still a lot that needs to be discussed about this secondary use of the data we share (whether or not it’s anonymised). I expect to see more this year on where this secondary use of data is seen as invasive or unethical and debate about who really owns it. | https://medium.com/design-strategy-data-people/revisiting-six-memos-dd2cd9e292a0 | [] | 2016-01-14 22:12:58.298000+00:00 | ['Technology', 'Artificial Intelligence', 'Design', 'Mgm Oldies'] |
Want to work in UX? Pay attention in English class. | Want to work in UX? Pay attention in English class.
With special guest Andy Welfle, Adobe Senior Content Strategist Lead
Building the Perfect Designer
There seems to be a bit of an arms race to standardize the skill sets that are required by employers when it comes to UX-ers, digital product designers, UI designers, or whatever you want to call “us”. I’m acutely aware of this arms race specifically, because I’m in it. Over the past few years we (Adobe and myself) have been working with the University of Utah on developing curriculum for their Digital Product Design program. The core question driving most of our decisions has been a simple one:
What do we want the designers that we hire in the future to look like?
Some of the answers to that question have been pretty straight forward. We need Adobe designers to have a strong understanding of design thinking principles. We need Adobe designers to have a process for discovering problems. We need them to be able to work through those problems with that process to deliver designs that improve the user’s experience. We need those designs to be visually compelling and thoroughly tested using best practices. We need those designers to be able to pitch those ideas through presentations to cross-functional stakeholders.
There’s a lot we need from our designers.
Being that this has been a topic that has been so top of mind for me, I’m constantly looking at parts of my day-to-day process and trying to incorporate the skills needed to complete those things into the curriculum we’re developing. And that’s when I had the “aha” moment a few days ago. There was one aspect of my job responsibilities that touches every single step of the process that we hadn’t accounted for: writing.
Sharpen Your Pencils
Want to set up user testing? You’re going to need to write an email. Are you defining a persona? You better write that thing up. What’s the user’s journey that you’re shooting for? You’re probably going to need to write a narrative. Need to sell some stakeholders on a design? You’re definitely going to be writing up a presentation.
However, if writing isn’t one of our strong suits, it can leave us in a bit of a quandary. So I pinged a colleague of mine, Andy Welfle, who is Senior Content Strategist Lead here at Adobe on the design team. He also happens to be a super awesome guy willing to bestow some of his writing knowledge on us. I asked him a few questions about writing in the UX world and he had some incredibly valuable insights not only just in terms of writing, but specifically for writing in a UX context.
Andy Welfle
As a UX content writer, what similarities have you observed between writing and digital product design?
It’s obvious when it’s pointed out, but a lot of times, designers don’t think about words being part of the user experience. At best, they blend seamlessly with the visuals and the UI, and at worst, they’re jarringly noticeable.
UX content strategy has a lot of parallels with UX design. In fact, when designers ask me what I do, I tell them that I write using design thinking. Instead of traditional copywriting which is usually very linear and finite (and often comes much later in the software development process), UX writing happens at or near the same time UX design does. We iterate alongside designers, and think about the system of language as they think about the visual system.
Here are a few similarities I see in our disciplines: | https://medium.com/thinking-design/want-to-work-in-ux-pay-attention-in-english-class-53245944cc30 | ['Kris Paries'] | 2018-05-21 18:03:33.369000+00:00 | ['User Experience', 'Creative Career', 'Writing', 'UX', 'Design'] |
Pickyourtrail, a self-service platform that lets travellers create, customise and book international vacations | Pickyourtrail has raised $3M in total. We talked with its co-founders Hari Ganapathy and Srinath Shankar.
How would you describe Pickyourtrail in a single tweet?
Pickyourtrail is a self-service platform that lets travellers create, customise and book international vacations in a jiffy.
How did it all start and why?
Pickyourtrail began as an attempt to break how vacations are currently being planned and booked by travellers. At our very heart, we want to create happiness and not sell packages. The seed to all this began in August 2012 when co-founders — Hari & Srinath went on a Europe trip.
Ardent travellers themselves, the duo had put in more than 2 months to fully craft their vacation. This included planning end-to-end, bookings, and visa processing. It was during their trip they stumbled upon fellow travellers who were all on packaged tours. Interacting with them, Hari & Srinath understood they were literally rushed between destinations and their urge to discover new experiences wasn’t fulfilled. The travellers, on the other hand, were amazed to see the flexibility Hari and Srinath had in their itinerary and that piqued their interest.
Once back home, the duo was bombarded with pings asking for details on how they went about planning the entire trip. These interactions slowly multiplied and this led to an idea that would forever change their career roadmaps!
What have you achieved so far?
We have been growing 80% YoY. And aim to grow 5X over the next 2–3 years.
Products: http://pickyourtrail.com — our website | https://apple.co/2mBMqI0 — our mobile app for travel concierge.
We have had about 15,000+ travellers who have planned their dream vacation with us in the Free Independent traveller segment.
We are currently a team of 185 members.
What do you plan to achieve in the next 2–3 years?
Broadly 3–4 things we would be focusing on, one obviously acquire more customers on the digital medium, try and build you know our own digital acquisition channel that’s number 1.
Number 2 is you know on tier 2- tier 3 towns where digital penetration is still not very high but still there is a significant amount of people taking trips out there, how do we go to that audience and become top of mind for that audience, So if you look at it audience wise the digital-first audience is what we want to capture and then tier 2- tier 3 towns.
The next focus is on beefing up the tech team and overall tech capabilities. how do we ensure that our product innovation efforts continue to be one of the best in the industry?
There is also a larger focus on taking this product global. End of the day there is a European going to the US or an Australian going to Bali and the product is suitable even for them. We also want to try and understand what it takes to build traction in those markets.
While all this may not be possible in the next 6–10 months but this is the broad themes we want to focus and kind of drive over the next 2–3 years. | https://medium.com/petacrunch/pickyourtrail-a-self-service-platform-that-lets-travellers-create-customise-and-book-143c7a380cc6 | ['Kevin Hart'] | 2019-10-03 11:19:40.368000+00:00 | ['India', 'Planning', 'Entrepreneurship', 'Startup', 'Travel'] |
How I Build Machine Learning Apps in Hours… and More! | How I Build Machine Learning Apps in Hours… and More!
What is new in the AI world, the release of our book, and our monthly editorial picks
Happy Monday, Towards AI family! To start your week with a smile, we recommend you check out “Superheroes of Deep Learning Vol 1: Machine Learning Yearning” by Falaah Arif Khan and Professor Zachary Lipton, an exciting, hilarious, and educational comic for everyone who works with data or has worked with it in the past.
If you are into research, NeurIPS recently posted its findings during the 2020 paper reviewing process, with some insights on the submission and historical data on primary subject areas, acceptance rate, ratings, and so on for the past two years.
Next, if you have a Ph.D. and you are in the job market for a faculty position, we recommend you check out the faculty openings in the Machine Learning Department at Carnegie Mellon. They currently have multiple tenure track and teaching track opportunities for you researchers out there!
If you are into tinkering with data and you are interested in forecasting epidemics (specifically COVID-19 in this case), we recommend you check out this post by Kathryn Mazaitis and Alex Reinhart on how to access COVIDcast’s Epidata API, which provides freely available data from CMU Delphi’s COVID-19 surveillance streams.
📊 For a limited time, we are taking discounted pre-orders on our book “Descriptive Statistics for Data-driven Decision Making with Python” — a guide to straightforward, data-driven decision making with the help of descriptive statistics. Ordering our book also gives you access to any future updates made to it — support Towards AI’s efforts and help us improve to provide you with better content. 📊 | https://medium.com/towards-artificial-intelligence/how-i-build-machine-learning-apps-in-hours-and-more-486955768aa1 | ['Towards Ai Team'] | 2020-11-12 21:32:15.603000+00:00 | ['Technology', 'Innovation', 'Artificial Intelligence', 'Education', 'Science'] |
Five Ways to Eliminate Writing Goofs | When competition is stiff—as it is in freelance writing, publishing, and marketing consultation—the losers can be doomed by details as small as apostrophes in the wrong place.
In fact, over my years of hiring writers and marketing support, attention to detail has consistently separated the strong from the weak. That means copy that’s free from grammar or punctuation errors and typos.
But the key is not being perfect. It’s knowing your weaknesses and having reliable tricks for compensating. Here’s how to come out a winner.
Where details still matter
You may think proper punctuation and the correct use of tricky combos like effect and affect or it’s and its don’t matter as much in these days of automated grammar checkers and, for instance, online news publications that prioritize quick content over editing.
But plenty of people who hire writers or marketers still care. Why? Because in many industries, details still matter. Try convincing a customer that your programmers can compile a million lines of error-free code when even your website displays glaring mistakes.
As a result, many corporate decision-makers are sticklers. A colleague of mine once rejected a writing candidate because his cover letter said, “I’m anxious to meet you,” rather than “I’m eager to meet you.”
“He should know the difference in the implications,” she explained. Since we often had to finesse fine shades of meaning, she had a point.
Many New York editors say their desks are so stacked with worthy manuscripts that acquisition decisions can be swayed by which require the least editorial work.
I’ve never been that militant, but I have deep-sixed résumés from freelance candidates and marketing support teams whose work had too many typos. Their skills couldn’t outweigh the extra work of policing their copy.
As a published novelist, I’ve also spoken with many New York editors who say that acquisition decisions can be swayed by which of several worthy manuscripts require the least editorial muscle. That matters if you’re working on publishing nonfiction in your business area of expertise. (Not to mention fiction, which is even more competitive.)
Five ways to look better
I know four good ways and one great one to find and fix slip-ups that otherwise might sink your chances:
Become familiar with the most frequent errors. Some are so common you may not know they’re wrong. Search for “tricky grammar mistakes;” this list and this one are a good start.
A key item on such lists: common verbal expressions that may fool your ear when it’s time to write them. One egregious example: “should of,” as in, “That writer should of had an editor.” It’s should have, but this goof is common.
The mistake is not being lousy at spelling or punctuation — it’s being unaware you need help from someone who isn’t.
2. Know your common troublemakers, such as it’s and its. (This combo defies even those who know the difference.) Use a search function to find them in your drafts. Thus isolated, they’re easier to reconsider and, if necessary, correct.
3. Change the look of your text to proofread it. Move it from your phone to computer or vice versa, or at least change its font, size, and color. Better yet, print it; putting it on paper or into a larger font often makes errors jump out.
4. Read your draft backward, one sentence or phrase at a time. Thus breaking the flow of what you thought you wrote makes errors more likely to show.
5. Best: Team up with a member of the Grammar Police—or at least a cold reader with copyediting savvy. (This is also the best way to identify your personal troublemakers.)
Remember, the mistake is not being lousy at spelling or punctuation — it’s being unaware you need help from someone who isn’t. Either team up with a colleague to find and correct your goofs or hire someone who can.
You don’t want to be embarrassed by your work, and neither do freelance writing clients. But you’d be shocked by how many upper managers delight in marking even dubious text errors in drafts meant only for content approval. (You’d think they had enough work running their companies, but this is so widespread I can only conclude that they don’t.)
It’s an ego game for them. But for you, writing—or selling your services as an entrepreneur—is a livelihood or identity, right? Use my five tips to find and fix more of your own typos this year, and you’re more likely to achieve success.
Plus you’ll delight Apostrophe Inspectors like me. | https://medium.com/swlh/five-ways-to-eliminate-writing-goofs-406012fa349 | ['Joni Sensel'] | 2020-01-01 12:40:58.384000+00:00 | ['Writing Tips', 'Writing', 'Entrepreneur', 'Startup', 'Freelancing'] |
How To Send Images Into Flask API via URL | After a little introduction, let’s start our project without wasting time. First of all, we will need some Python libraries to do our operations.
pip install Flask
pip install pillow
After installing the necessary libraries, let’s design an API endpoint that will take the image URLs as input with the HTTP POST method.
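A minimal sketch of such an endpoint could look like this (the route and function names, the JSON response and the use of the standard library’s urllib are illustrative assumptions on my side; only Flask and Pillow were installed above):

from io import BytesIO
from urllib.request import urlopen

from flask import Flask, jsonify
from PIL import Image

app = Flask(__name__)

@app.route("/send-image/<path:url>", methods=["GET", "POST"])  # POST as described above; GET lets you try it in a browser
def send_image(url):
    # <path:url> keeps the slashes, so the full image URL arrives in one piece
    image_bytes = urlopen(url).read()
    image = Image.open(BytesIO(image_bytes))
    return jsonify({"format": image.format, "size": image.size})

if __name__ == "__main__":
    app.run(debug=True)

With the server running, a request such as http://127.0.0.1:5000/send-image/https://example.com/cat.jpg (a hypothetical image URL) should return the image’s format and size.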
By using the “<path:url>” specifier, we ensure that the string that will come after “send-image/” is taken as a whole.
If you run this script and test it on your browser, the result will be as follows. | https://medium.com/python-in-plain-english/how-to-send-images-into-flask-api-via-url-7d4be51e8130 | ['Burak Şenol'] | 2020-12-18 19:07:16.123000+00:00 | ['Programming', 'Python', 'Flask', 'Image Processing', 'API'] |
7 Painful Reasons all Women Should be Angry | 1. They shame our powerful sexuality.
Women shine as glorious sexual beings, and when they diminish our sexuality, they reduce our humanity.
Judith Duerk, author of “Circle of Stones,” describes “the gift of sexual love” as being “the most sacred of the gifts bestowed by the Goddess.”
Far from celebrating the beautiful energy inside me, my brother called me a “whore” when I began wearing makeup at 13 years old. I didn’t understand this gross derogatory term people use for sexually-liberated women or the insecurity behind it. I just knew there must be something wrong with me.
2. They gaslight, causing us to question our reality.
A former sports doctor sexually abused over 260 young gymnasts over several decades. Several women were brave enough to share their stories earlier, only to be questioned and shamed by their families and communities. People accused these women of being attention-seekers and were unwilling to doubt the doctor’s reputation.
Women are not “crazy.” We’re intuitive and insightful. We’re brave enough to be vulnerable and real with our emotions. At an early age, we learn that our feelings are wrong. No wonder we struggle with codependency and self-trust.
This emotional abuse is not okay.
3. They belittle feminine values.
Patriarchal culture preaches the virtue of logic over emotion. Even today, men and masculine values overwhelmingly outnumber women as government officials, high school reading list authors, Nobel Prize winners, and leaders.
We’re not promoted at work for being great team players, acting supportive, or nurturing team morale. They promote us when we boost numbers or hit business goals.
4. They teach us to be silent.
“Let a woman learn quietly with all submissiveness. I do not permit a woman to teach or to exercise authority over a man; rather, she is to remain quiet.” — 1 Timothy 2:11–12 (the Bible).
I grew up in a religious environment. No wonder I struggled to set boundaries and went on to be sexually abused, then felt guilty about it. After years of leaving the church and months in trauma therapy, I can’t read this verse without my entire body tensing with anger and my teeth gritting.
With or without the glaring religious sexism, women learn to make themselves smaller for fear of criticism. Have you ever been told that women talk too much? Being told to shut up is harsh, dismissive, and destructive. No wonder we struggle with self-worth.
What if someone told you that it’s healthy to have a strong voice and opinion, even when you’re angry?
5. We can’t win at work.
Have you worked in a male-dominated environment? Did you feel like men took the time to understand you and your values? Or did you feel like you were speaking in another language, struggling to earn respect?
On top of struggling to fit into a masculine structure, they treat us like idiots, talking over us or “mansplaining.” Is something wrong with me? I didn’t know, so I pushed myself to work harder, afraid of criticism.
Men far outnumber women in leadership roles, yet women score higher in essential leadership skills. And even when we are somehow successful in this environment, we become less likable. No wonder we’re stressed out.
6. They blame women for provoking abuse.
Most men believe that women who wear revealing clothes increase their chances of being harassed or assaulted, according to a 2019 study in the UK. This widespread myth causes women so much trauma. On top of being abused, we feel ashamed, embarrassed, and afraid that we are responsible.
Women who wear sexy clothes are not asking for harassment or assault, and we are not responsible for others’ behaviors.
Women’s wardrobes have long been used as an excuse for sex crimes, however, when you look at the data on why people rape, that doesn’t hold up. These are arguments for transferring the responsibility of control and power from the perpetrator to the victim. — Sandra Shullman, Ph.D., a psychologist who specializes in harassment.
7. Our bodies are never enough.
They put us into a beauty competition we never asked to join. I grew up knowing that I could never gain weight because of all the judgment I heard. But guess what? I couldn’t be too thin either because they criticized me for that too. I learned to feel anxious about my body and skin, even with a healthy, culturally-pleasing body shape. Women cannot win at this game. | https://medium.com/an-injustice/7-painful-reasons-all-women-should-be-angry-6e06102358ff | ['Allison Crady'] | 2020-12-25 19:51:56.292000+00:00 | ['Culture', 'Mental Health', 'Anger', 'Psychology', 'Woman'] |
Everything Is Marketing is Everything | If you wait long enough, everything seems to evolve into a crab. Biologists call this “carcinisation”. On the internet, sooner or later everything seems to evolve into a marketing platform. In this article, I will show you how that happens, what consequences it has, and what it might mean for you.
Let’s start with some examples.
Facebook is an obvious one — a platform started basically as a clone of Hot-Or-Not, a platform for rating the attractiveness of your school mates — evolving into an easy way to build a personal blog/website, struggling to find its business plan even as it had millions of users. It finally found it: it is now one of the top advertising platforms worldwide, topped only by Google.
Google is, of course, known for its search engine — so much so, that “to google” is now a verb synonymous with online search. But what is their main revenue? It is advertising, plain and simple. Most of it comes directly from the search engine (“Google AdWords”), 10% or so comes from google affiliates that post the ads on their websites, and similar. And an ever-increasing part comes from Google-owned YouTube.
YouTube is a platform to upload and watch videos. For many, it has now become the main entertainment/infotainment channel, replacing traditional TV. How does it make money? By advertising of course. Just like “traditional” TV channels did before it.
So much for the obvious examples. How about this one: it started as an online bookshop, deemed by many to be a crazy idea. Now it sells everything and the kitchen sink (literally) and is arguably putting brick-and-mortar stores out of business. But did you know that Amazon makes a full 10 billion a year from advertising, taking the third place behind Facebook and Google? That, of course, apart from the fact that it serves as a major online marketplace, a logistics provider, an IT infrastructure provider, and more.
It is no surprise that “Yahoo!”, known mainly for its search engine, makes its revenue through advertising — see google — but did you know that Apple — yes, the iPhone manufacturer — has an advertising network and is making 2 billion a year from that?
This phenomenon is not limited to websites: the predominantly Asian mobile chat platforms WeChat and LINE are not only messenger apps like WhatsApp, they are also advertising platforms. And literally everything else: ride and delivery services platforms (like Uber or Grab), online market places, and, of course e-wallets, aka payment services.
By the way of Grab, the South-Asian Uber competitor: of course you can order delivery on Grab now. And Groceries. And book hotels. And coupons. In short, something that started as a ride-share application is now also quickly evolving into a crab, err, a general-purpose advertising platform. Payable, of course, via a built-in e-wallet. (Yes, Libra is coming quite a bit too late into the game. I have my suspicions as to why.)
Why does it happen?
One word: exposure.
Every platform that is used by many people daily has those people’s attention. Take Amazon, and its dozens of competitors (eBay, Lazada, Shopee, …). People go there to buy things — and do so regularly. People search for things there. People rate things there. These are all obvious points to insert advertising and targeting algorithms. After all, Amazon knows everything about your online shopping habits. Who is paying for the ads? Why, Amazon users who sell their products there, because Amazon is also an online marketplace: most products on Amazon are sold by people like you and me, who have to compete with one another in an attempt to gain exposure. Unlike Amazon, that also actually sells its own products, its competitors Ebay, Lazada, Shopee et al, are nothing but online marketplaces.
A lot of those things are true for Facebook — people go there to talk to their relatives and friends, but stay to click on articles about tagged and analyzed topics. It includes a chat application that you use to stay in touch with your friends and relatives — which of course knows the general topics that you are chatting about. Even if it does not understand the exact content of your chats yet, it can easily scan your conversations for keywords, and knows if you need to buy kitty litter once again, and allows some affiliate marketer to hand you a sponsored link. Though by now, it also has groups dedicated to trading things, ie, it, too, has become an online marketplace. Like the aforementioned LINE and WeChat.
Something similar is true for Twitter — you might not reveal some of your more private topics on there — some people do — and probably not going to buy things directly on there, but you reveal enough about your interests that the information is useful for advertisers, who will happily take you off Twitter to some darker corner to present you with their selection of watches.
And while you will probably not reveal quite as much to a ride-share service, you are still looking at that app every single day if you commute to- or from work, so it can sell you stuff. Two more words: captive audience. In fact, the same is true for any well-run, well-visited portal, as long as it actually provides an additional value to the user that he can’t easily get elsewhere. The latter is important:
If a portal serves no purpose but to gather clicks and ad views, it might do so, but it is going to be a lot less successful than pretty much any actual service. If the news site you visit is just going to dump ads on you, you are not going to visit it for very long, especially if there is an equivalent site that doesn’t.
This, coincidentally, also happens if you do offer a useful service — as soon as someone else has better exposure than you do. For example, Uber is close to useless in Thailand, because both Grab and LINE are better known and thus it is far easier to get a ride with one of those.
Wait… LINE? Isn’t that a Chat app? What do they have to do with Uber?… Yes, LINE is the dominant chat application in Thailand — dominant to the point where almost everyone has it, and almost no-one has anything else. They have a merchandise store with enormous teddybears and rabbits in the middle of Bangkok. Thais use it obsessively. You can’t walk very far without someone’s phone screaming “LLLine!” at you. Which means, the app gets incredible amounts of exposure. Which it, in turn, used to slowly grow to include everything — including a market place, an e-wallet that is now officially accepted by the BTS Skytrain system in Bangkok, and — a ride sharing and delivery plug-in that is slowly displacing a number of competing services. Including Uber.
Similar things are happening in other markets as well: for example, you can follow the jostling between Booking.com and AirBnB. You might think that they really cater to different markets — hotels vs. private people — but there are hotels that offer some rooms on AirBnB, and private people offering their one room on both. Both sites have ads for more than just rooms for rent — Booking.com will offer you everything from flights over airport taxi to guided tours and travel insurance, whereas AirBnB will also sell you packaged “experiences” at your destination. Which sometimes are also guided tours. In the end, both are simply niche advertising websites with some extra features and a lot of exposure. What they really charge the host for, is the exposure, because everything else they offer is relatively easy to imitate, and therefore easy to compete with.
Which is why there are firms like Agoda that successfully compete with both.
And why Uber spent almost 4 Billion (!!!) on marketing before its IPO.
What do we learn from this?
Everything online that has exposure has the potential to become a marketing platform, but — unless it is truly a household name, or has a truly unique product that allows it to rely on word of mouth — it needs to put effort into maintaining exposure, and one of the ways to do this is marketing. It seems marketing is an ouroboros, a snake consuming itself.
Ouroboros (image: Alchemy UK)
What does it mean for a Startup?
Here is where things get unpleasantly fuzzy. Remember that 90% of all new online businesses fail within the first 120 days, and one of the largest reasons for this is lack of exposure — be it lack of marketing, or hundreds of competing products that all vie for people’s eyes. If you don’t make yourself visible, it doesn’t matter how superior your technology, or how good your product is, nobody will find you: there are literally millions of websites and millions of apps to choose from.
But it is good to remember that marketing and exposure are not the same thing, and sometimes don’t translate well into one another — for example, the above-mentioned Uber discovered to their chagrin that 80% of their marketing spending was apparently for nothing. On the flip side, some companies excel at creating exposure without any marketing whatsoever. One of the masters of this is Elon Musk, who maintains presence in the media by creating large amounts of controversy, just like some media stars used to do before him. But you don’t need a million Twitter followers to be able to do this. For example, a little-known German company called Teekampagne — that is the world’s largest importer and one of the largest distributors of Darjeeling tea — was started by a German university professor, who created their original exposure simply by announcing their business plan in a newspaper. The business plan was completely outrageous for its time, and was picked up and torn apart by the press: Imagine that — a mail-order tea store, one that even sells one single brand of tea! — which of course announced the Teekampagne, and their entire one-item catalogue, to the whole of Germany, 80 million people — entirely for free. In contrast, check and see how much you would pay for 80 million views of your online ad.
At the same time, if your company manages to grab eyes, you might limit yourself to selling online ads from different platforms, like millions of entrepreneurs before you — or you might expand your own offering to provide people with what they might also need while they are here. You could rely on Google AdSense, or you might offer your customers the opportunity to advertise to your other customers. To sell something. You, too, could become an Ad-ridden app, a marketing platform — or an online marketplace for goods and services, like WeChat, LINE, or, yes, Grab.
Because in the end, on the internet, everything evolves into a Grab. | https://medium.com/swlh/everything-is-marketing-is-everything-89a71a2d41dd | ['J. Macodiseas'] | 2020-11-15 18:11:05.399000+00:00 | ['Advertising', 'Marketplaces', 'Market', 'Startup', 'Marketing'] |
How to Have a Healthy Relationship with Writing | I’ve always loved writing, but I’ve been afraid of writing, too.
Because I love it. Because it’s important to me. Because it’s a core expression of who I am, so I don’t want to mess it up.
Most of my life, I’ve had a love-hate relationship with writing.
Actually, more of a love-fear, dread, avoid, obsess over relationship. In other words, not a healthy one.
And I know exactly why.
Our Unhealthy Beginnings
My relationship with Writing was sporadic, strained, and codependent. I was like a fangirl trying to date a superstar.
We’d go out, me and Writing. I’d be nervous and tongue-tied, awkward and unsure, and totally not myself. After the date, I’d feel miserable and stupid; I’d go over every single thing I did and said wrong. I relived each error, each embarrassment. And I’d swear that Writing was not for me. I’d avoid the phone calls. I wouldn’t answer the texts.
Then, after a few weeks, the memory would fade. I’d get the flutters and jitters and all those infatuated feelings. I’d agree to another date, certain it would go better this time.
And, of course, it would be the same story.
I was infatuated with Writing and thought I was in love. But I was also anxious, star-struck, and uncomfortable.
Of course, I couldn’t relax and be myself and have a good time with Writing. I was way too nervous.
When Things Changed
Things changed for me and Writing when I decided I couldn’t take it anymore. I wanted to break up, once and for all, but the idea broke my heart. So, instead, I decided to commit to something more serious. More regular.
We started seeing each other every week.
Then it was every day.
At first, I was as uncomfortable and awkward as ever. I felt like every date was a complete waste of time. I was always sure Writing wouldn’t call back.
Slowly, though, something changed. I changed.
Sheer repetition creates familiarity, and guess what familiarity does? It takes away the discomfort.
The more I hung out with Writing, the less nervous and unsure I became. The more I realized that Writing wasn’t some god, some unreachable pinnacle, some flawless wonderland. Writing was sometimes complex, challenging, and intimidating, but, just as often, simple, open, and fun.
Finding Mutual Love
The more I got to know Writing, the less intimidated I was. And I began to see another side to our relationship: as much as I wanted things to work out with Writing, Writing wanted the same.
I needed Writing; Writing also needed me.
We’re in a good place now, me and Writing. We have a committed relationship. We have our bad days, sure. Sometimes Writing frustrates the crap out of me, and I’m sure I do the same. We have conflicts, but we work them out.
I’ve learned that a healthy relationship doesn’t mean you always feel good, but it does mean you don’t feel scared.
Sometimes I’m not feeling it, so we take a break. But we never take a break for very long. Spending time together and communicating regularly is what makes our relationship work.
We’re a Work In Progress
Whenever we drift apart, I start feeling like I don’t know Writing anymore. Then I start feeling nervous and unsure about our relationship again.
It’s taken me a long time to realize that those fears don’t come from Writing; they come from me. And that’s okay, too.
It’s like any worthwhile thing; you have to work at it. But the more you work at it, the better it gets. The work turns into play. You move from frustration to flow. And one day you look up and realize you’ve got a good thing going. | https://anniemueller.medium.com/how-to-have-a-healthy-relationship-with-writing-55ec104bbc05 | ['Annie Mueller'] | 2020-03-05 15:17:31.229000+00:00 | ['Writers On Writing', 'Writing', 'Psychology', 'Writing Tips', 'Writing Life'] |
C# Design Patterns — Singleton. Providing one instance for the whole… | C# Design Patterns
C# Design Patterns — Singleton
Providing one instance for the whole application
Photo by Hitesh Choudhary on Unsplash
Design patterns are common coding practices defined to solve common software development problems.
The Singleton pattern was developed to provide a common way of ensuring a single instance of an object throughout the whole application lifetime. So, as long as the application is not restarted, this instance must be the same regardless of how many times you request it.
Usage and Drawbacks
Examples
Some examples of singletons are objects that need to share resources between classes or threads, like:
Global state management
Logging service
.NET Core AppSettings
And others…
Drawbacks
Sometimes, with poor implementation, the Singleton pattern can actually become an anti-pattern. The reasons are:
It is really difficult to test if you are not using Dependency Injection, because the instance is statically created, so you can’t manually control it and, as a result, you can’t mock it.
It can lead to memory leaks if dependencies are not properly disposed of after usage.
Let’s build it
Let’s imagine that we have a service interface IGreetingService.cs :
Now it was required that this service should not change throughout the whole application.
The classic approach
There are many ways to implement the Singleton pattern in C#.
Here I’ll show you three approaches and which one I would use.
Double-Checked Locking
I didn’t consider showing this approach without thread lock because it is very unsafe to use in multithreaded applications that way.
Here we have:
Two private variables: a static variable that holds the service instance, and a read-only object that works as the thread lock.
A private constructor to prevent a new service from being manually instantiated.
A public static property that is how we access our singleton instance.
You can note that we have two null checks. The inner check prevents the instance from being recreated, and the outer null check avoids taking the lock every time we need to access the instance, thus increasing performance.
Also, we have the lock to prevent this code from being run by more than one thread at the same time, making it thread-safe.
Now if you execute it like this:
IGreetingService service = DoubleCheckedLockingGreetingService.Instance;
IGreetingService service_2 = DoubleCheckedLockingGreetingService.Instance;

service.Greet("Singleton");
service_2.Greet("Singleton");
You will get the same output for both methods.
Lazy<T>
A second approach is letting the instance be created the first time it is requested.
.NET has the Lazy<T> class, which provides a lazy initialization of objects for us.
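A sketch of this approach could look like the following (again, the Greet body is only illustrative):

using System;

public class LazyObjectGreetingService : IGreetingService
{
    // Lazy<T> creates the instance only on first access and is thread-safe by default
    private static readonly Lazy<LazyObjectGreetingService> lazy =
        new Lazy<LazyObjectGreetingService>(() => new LazyObjectGreetingService());

    private LazyObjectGreetingService() { }

    public static LazyObjectGreetingService Instance => lazy.Value;

    // Illustrative body: prints the greeting and the instance's hash code
    public void Greet(string name) =>
        Console.WriteLine($"Hello {name} from instance {GetHashCode()}");
}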
Now if you execute it like this:
IGreetingService service = LazyObjectGreetingService.Instance;
IGreetingService service_2 = LazyObjectGreetingService.Instance;

service.Greet("Singleton");
service_2.Greet("Singleton");
You will get the same output for both methods.
Object Eager Initialization
In C# there’s an option of assigning the instance for the static variable, making it possible for the object to be initialized when it is first needed.
We can go even further and not use a private variable anymore and use Auto-Property , which assigns a value for the property if it doesn’t have a value yet.
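Putting those pieces together, a sketch of this approach could look like this (the Greet body is again only illustrative):

using System;

public class SimpleGreetingService : IGreetingService
{
    // The auto-property initializer creates the single instance
    public static SimpleGreetingService Instance { get; } = new SimpleGreetingService();

    // An explicit static constructor stops the compiler from marking the type as beforefieldinit
    static SimpleGreetingService() { }

    private SimpleGreetingService() { }

    // Illustrative body: prints the greeting and the instance's hash code
    public void Greet(string name) =>
        Console.WriteLine($"Hello {name} from instance {GetHashCode()}");
}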
Note the static constructor. This is needed so the C# compiler will not mark the type as beforefieldinit. This guarantees the laziness of the class.
Now if you execute it like this:
IGreetingService service = SimpleGreetingService.Instance;
IGreetingService service_2 = SimpleGreetingService.Instance;

service.Greet("Singleton");
service_2.Greet("Singleton");
You will get the same output for both methods.
Modern .NET Dependency Injection
Modern .NET/.NET Core applications already come with a built-in dependency injection mechanism that automatically injects services with the respective life-cycle they need.
Transient — A new instance is injected every time it is requested and lives as long as its parent
Scoped — Injected once per request and lives as long as the request lives
Singleton — Injected once per application and lives as long as the application lives
So, in a .NET Core WebAPI, for example, you only need to register the IGreetingService with the GreetingService as a Singleton in the ConfigureServices method in your Startup.cs file. Like:
services.AddSingleton<IGreetingService,GreetingService>();
And for the implementation of this service we have:
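A minimal version could be as simple as this (the Greet body is just for illustration):

using System;

public class GreetingService : IGreetingService
{
    public void Greet(string name) => Console.WriteLine($"Hello {name}");
}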
Note that we don’t need an Instance accessor property to access our singleton. This happens because we delegate the job of assigning this instance to the framework, so all you need to do is inject the IGreetingService where you need it and .NET will provide you the only instance it created. For example:
public class MyClass
{
    private readonly IGreetingService _service;
    public MyClass(IGreetingService service)
    {
        _service = service; // keep the injected singleton for later use
    }
}
Conclusion
You can see how easy it is to implement the Singleton pattern from scratch with C#. Even though it is a well-known pattern with its applications, it needs to be used with care because it can lead to many system issues like memory leaks. Also, due to its many drawbacks, it is often recommended not to use the Singleton pattern, because it can become an anti-pattern.
Thankfully, with the .NET dependency injection mechanism, many drawbacks can be avoided, like the difficulty of implementing unit tests.
I uploaded the code for the normal implementation in this repository. | https://medium.com/swlh/c-design-patterns-singleton-36d746bd7b6e | ['Andre Lopes'] | 2020-08-05 16:32:18.660000+00:00 | ['Singleton', 'Dotnet Core', 'Dotnet', 'Csharp', 'Design Patterns'] |
How To Use Analytics To Identify The Business Value of Your Website | How To Use Analytics To Identify The Business Value of Your Website
3 simple steps to set up and track business objectives on your website
You just finished designing the new website for your business.
You loved everything about it — the layout, the information, the type… It’s perfect.
However, while admiring the new design, you can’t help but think:
“I love this website, but how do I know if my customer and audiences love it as much as I do? Also, how do I know if this website is truly useful for my company? How do I know if it will really increase my revenue and traction after its launch?”
These are the questions you can answer with web analytics.
Today, we are going to lead you through a simple, 3-step exercise to set up the most basic analytics for your website, so that you can:
1. Understand the true business objectives of your website.
2. Access key metrics to demonstrate the effectiveness of your new website.
3. Measure whether the new website helped you achieve your business objectives.
Step 1 — Define the business objectives of your website
Your website is like “functional art.” It must serve a purpose for your business, whether that’s creating sales, generating leads, or getting traction for your brand.
Your mission in this step, should you choose to accept it, is to identify one or two concrete business objectives for your website.
If you are an ecommerce company, this objective is most likely increasing the number of customers who complete checkout. For a B2B company, it may be submitting a lead form. For a SaaS company, it’s probably signing up for a free trial, etc.
When selecting these objectives, there are two important points that deserve your attention.
First of all, you should only select one or two objectives.
One of the biggest analytics problems for business owners is having too many objectives for their website. That’s why two business objectives should be the maximum.
Having too many objectives is bad for two primary reasons.
First of all, it creates a crammed website.
It is very hard to put enough information on your website that serves all of your objectives artfully without confusing your users. That’s why it is always better to be a specialist in one or two objectives than to be a master of none.
Secondly, too many objectives makes generating insights from analytics extremely difficult.
Very often, you will take action steps that make progress on one objective while setting you back in another, and too many objectives will make these trade-off calculations immensely complex.
As an added benefit, the exercise of condensing many goals to only two key objectives can also help you prioritize and truly understand what is important for your business. This makes executing your digital strategy much easier and more efficient.
Let’s go back to your mission for this step.
In addition to having a maximum of two goals, the second principle for setting goals is that they should also be “concrete”.
By concrete, I specifically mean that you should choose a goal that you can envision clearly and measure with ease.
For example, one of your objectives may be “gaining traction.” But what does that really mean?
In the context of your website, gaining traction could mean many things ranging from viewing your ads to signing up for a newsletter on your website.
Therefore, you should narrow down that goal by asking yourself what you mean by “gaining traction.” Then, define a clear action that your users can take that you will count as a goal completion.
This will help you not only avoid goals that do not have any concrete meaning, but also make setting up goals in your analytics tools (e.g. Google Analytics) easier down the line (see Step 2).
Here is an article we wrote if you want to learn more about how to choose the right business objectives for your website.
Step 2 — Setup Google Analytics (or Other Tools) to Access Key Metrics
Now that you have your business objectives defined, it’s time to set up the tools to actually measure how well you are doing on these business objectives.
There are multiple tools on the market that can help you analyze various aspects of your website traffic. These include your traffic statistics (Google Analytics), your search engine performance (Moz), and where people tend to click on your website (Hotjar).
A good overview of all those tools are displayed in the graphic below (credit to analytics legend Avinash Kaushik).
Out of all these options, I would always recommend starting your analytics journey with a web traffic (or clickstream) analytics tool such as Google Analytics. This is because 1) they are usually free or very cheap, and 2) it can give you a general idea of how other services fit into the big picture for your analytics (and what services you should use next).
Google Analytics is our go-to clickstream analytics tool not only because it is one of the most popular free tools out there with robust functionality, but also because it integrates very well with common analytics and advertising platforms such as Google Optimize and Google Adwords.
We have written a whole article about how to setup Google Analytics, so I won’t go in depth here, but it is a very easy and painless experience that can be done in as little as 10 minutes.
Just by setting up Google Analytics, you will already have access to a lot of information about your website including:
Who your users are, and how many there are
What sources they are coming from
How they are interacting with your website
Which pages are performing best, and more
For a more detailed overview of all business questions that can be answered by Google Analytics please reference our 4 Business Questions framework below.
However, while all of this information is already very helpful to measure the functional success of your website, we can go a step further. The next step is to understand whether your website is meeting the specific business objectives you chose in Step 1.
Step 3 — Configure Conversion Goals To Measure The Effectiveness Of Your Website
No matter how well-designed your website is, no matter how much traffic you get, if no one “converts” (i.e. completes the business objectives of your website), your website is not useful for your business.
Therefore, if you were only to measure one metric about your website, you should measure “conversions.” Conversions define how well your website meets your business objectives defined in Step 1.
Luckily, Google Analytics offers a very easy way to track conversions on your website through its “Goals” feature, and you only need to go through two simple actions to set it up.
Firstly, you need to “operationalize” your business objective by defining a concrete user action that signifies the completion of that objective.
For example, for the business objective of “completing ecommerce purchase”, a very common way to track this objective is tracking how many users visit the thank you page after checkout, since it is only accessible after a successful checkout by a user.
Therefore, if you were only to measure one metric about your website, you should measure “conversions.”
After you have identified that specific action, all you need to do is setup a Google Analytics Goal to track that specific action as a “goal completion.”
You can find a tutorial on how to accomplish that in Google Analytics below. In short, you need to go to the “Goals” section of your Google Analytics Admin and simply create a goal that corresponds to that user action you chose.
With that, you are ready to track your business objectives!
Final Thoughts
With all the action steps explained in this article, you should have a simple but robust analytics system configured on your website to track how effective it is in adding value to your business.
In fact, in each of these three steps, you may have already found actions you can take to further improve the experiences of your website users. Now it is just a matter of implementing these action steps with a systematic plan.
As you are taking these actions, it is essential to keep track of relevant metrics on a weekly basis to make sure your actions are actually creating meaningful improvements for your website.
And with these incremental improvements, you will eventually create a website that is an engine of growth for your business.
At Humanlytics, we create tools that automate the processes explained in this article to make actionable analytics accessible in only a couple of clicks.
We are looking for beta testers to test our newest “conversion goal setting” tool that will automatically setup your Google Analytics and Facebook Pixel goals without complicated configurations and monitoring. | https://medium.com/analytics-for-humans/how-to-use-analytics-to-identify-the-business-value-of-your-website-c13f9c9675c7 | ['Bill Su'] | 2018-06-08 19:54:43.929000+00:00 | ['Critical Thinking', 'Google Analytics', 'Digital Marketing', 'Startup', 'Marketing'] |
Your Emotions Are Not Your Own | Whenever you are visited by an emotion, the emotion comes to you as a teacher and a guide. The emotion is guiding you to a part of yourself that needs attention and love. Not “healing” or “fixing” or “getting rid of.” Just attention and love.
These emotional guides often show up in the form of your child self, pulling on your sleeve until you look at them. And when you look at them, when you finally give them the attention they need, the attention you need, they will only ask one question,
“Am I okay?”
You see, this is the only question children are ever really asking. This is the only question your heart has ever really asked.
“Am I okay, Mommy?”
“Is it okay to be feeling the way I feel right now, Daddy?”
Perhaps you had parents that reflected your okay-ness back at you, or perhaps you had parents that implied or explicitly said, “No, you are not okay. It is not okay to feel that way,” or perhaps you had parents that left you with only a resounding silence.
Either way, it is your job to learn how to be your own parent, now. It is your job to turn toward every emotional archetype that pays you a visit and say,
“Yes, love. You are okay. It is okay that you are here right now. Stay as long as you’d like.”
When an emotion pays you a visit, it is an indication that there is a part of you in need of loving attention, nothing more. It does not mean you are faulty or you are going backwards, it means you are being led deeper.
“Deeper into what?” you ask?
Yourself, love.
Yourself.
It’s only all you’ve ever been looking for. | https://medium.com/just-jordin/your-emotions-are-not-your-own-258bf8b90e97 | ['Jordin James'] | 2019-09-11 15:41:42.879000+00:00 | ['Spirituality', 'Life', 'Mental Health', 'Psychology', 'Life Lessons'] |
You’re Not Writing Enough… And That’s Okay Right Now | It’s Day… Whatever of Lockdown. Everywhere around me people are reaching outside of their comfort zones, trying to adjust to the New Normal. Going above and beyond to try and stay healthy and connected during this confusing time. Probably using features and apps in ways you hadn’t ever planned on. (I, for one, never expected to receive a Zoom invite from my dad.)
What I’ve found to help me is to stick to some kind of loose morning routine. To have a sleep schedule. I’ve been trying to move throughout the day and get some sunlight if I can. But because of social distancing, there’s a lot more literal and figurative space in my life. Most likely, in all of our lives.Which — as a people-loving, introverted writer — is both a boon and a burden for me.
I don’t think I have to explain the burden. But the boon? Suddenly, I’m ordered to stay home. Automatically ridding my life of a whole lotta distraction. Which makes me excited. Because maybe I can finally sit down, organize all of the projects I want to work on and then start working on them.
Because, whether you’re conscious of it or not, we’re wired to believe that any amount of “free” time is — and should be — opportunity to “be productive.” And in a capitalist society like ours (don’t worry; I’m not going to launch into a civics lesson), we’re conditioned for “not enough.” So this pressure to constantly be getting more done. Which can be exhausting.
Brené Brown describes this in her book Daring Greatly:
[F]or many of us, our first waking thought of the day is “I didn’t get enough sleep.” The next one is “I don’t have enough time.” …Before we even sit up in bed… we’re already inadequate, already behind, already losing, already lacking something. And by the time we go to bed at night, our minds are racing with a litany of what we didn’t get, or didn’t get done, that day.
Sound familiar? Have you been plagued with a niggling sense of guilt recently, ever since social distancing? That you could be — and should be — doing more?
There’s no excuse, right? Many of us have been “gifted” with this extra space and time. We should be the most productive we’ve ever been! We should be writing multiple stories a day! We should clean the entire house! We should be crafting and cooking and figuring out what else we can broadcast ourselves doing!
Well… not exactly. See, the thing is, there is an excuse. There are several excuses. Why? Because we’re living through a pandemic, people. An experience that obviously threatens people’s physical wellbeing but also their emotional, mental, and psychological health.
And as someone who has lived with mental illness for the past 11 years, I’m well aware that all of the invisible hardships we’re being put through are most certainly taking their toll. It takes energy to process what the world is going through. It takes effort to change your life for the good of the community.
It’s all work.
While we’re all probably feeling some sort of pressure to be productive during this time, it seems that writers might be feeling it more acutely. Because it seems whenever a writer has any time on their hands, they’re expected to write. If they’re truly “serious” about being a writer. We could be working to put a new story out there. Update our blog. Work on our ever-looming book.
We put this special pressure on ourselves that if we have time and aren’t writing, then that’s time wasted. It comes from capitalism, sure, but also from one of the truest (and most annoying ) pieces of advice ever:
Wanna be a better writer? Fucking write, already!
Again, it’s true! As it’s true of most forms of art or skill. But for whatever reason, writers are plagued more by this truism than others. Maybe it’s because no matter where we are, we should be able to write. Notebook, phone, computer — it’s all portable. We really have no excuse. It’s not like we need special clothing or equipment. Get inspired on the beach? Start writing in the sand! Get inspired while locked in an empty room? Start writing on the walls in blood!
And maybe some of you out there have been able to use this time in a way that supports your writing. Maybe you’ve been able to attend online writing classes or have been able to practice your craft.
But if your writing productivity hasn’t exploded, that is okay! If you haven’t been able to write at all, that also is okay.
Did you hear me? It’s okay if you haven’t been writing more. The most important thing is keeping yourself, those around you, and your community healthy. To help prevent the spread of Covid-19. Writing can wait. It’ll be there whenever you’re ready. | https://medium.com/swlh/youre-not-writing-enough-and-that-s-okay-right-now-dcc51c62a13a | ['Rachel Drane'] | 2020-03-26 17:45:56.363000+00:00 | ['Life', 'Writing', 'Inspiration', 'Mental Health', 'Covid 19'] |
How to Find Your Voice as a Writer | Photo by Gian Cescon on Unsplash
How to Find Your Voice as a Writer
And say what you have to say.
I’ve been writing for a living for close to 2 years now. While it often feels like living the dream, it sometimes feels like I’ve completely drained myself and have nothing more to give.
You’ve probably heard that writers are supposed to do just that, bleed on the page. That’s exactly what I’ve been doing.
And in the process of bleeding myself dry, I’ve also discovered something quite wonderful: I’ve found my voice.
I’ve honed in my writing style, and what I have to say, the message I’d like to be known for. I’s been a painful, joyful, wonderful process.
This is how you can find your writing voice, too:
Set your voice free, and forgive it for what it has to say
Sometimes I look back at my writing and realize my voice has perhaps said things out loud I shouldn’t have let it.
That’s too much detail, I think. Too much information that matters the world to me and absolutely nothing to everyone else.
Thoughts flow through my head:
This is too personal. Who cares? It adds nothing to the story. It sounds bitter.
These thoughts set me against my own voice, the spontaneous version of it, the version I have let unbounded, free to tell the stories it feels it needs to tell.
In the process of finding my voice, I had to learn to set itself free, and forgive it for what it has to say. It’s when I let my voice speak freely that I come up with my most authentic work, and when I try to curb or polish what it has to say that I come up with my most uninspired writing.
The more personal, the more readers relate
The more you set your voice free, the more personal your stories will get.
It’s scary at first, to be this vulnerable in front of so many people, but once you do it, you quickly discover the more vulnerable you are, the more readers relate.
There’s no better feeling than receiving feedback on your writing along the lines of “I could see myself in your story. Thank you for telling it.”
Your story and your voice complement each other
What does that even mean?
It means my story so far builds my voice, and my voice shapes the story I’m telling from now on.
I understand my voice as a writer is still developing itself, and that it will undoubtedly change — heck, I’m looking forward to seeing it evolve — but now my voice feeds off my story, it draws from my struggles, and I couldn’t appreciate that more.
My voice will never cease to develop and change.
At least I hope not.
I hope I never cease to grow and develop as a person. I have embraced change as a positive aspect of life a long time ago, and I look forward to seeing it manifest and carry me on. I’d rather keep going than stand still, paralyzed by fear.
My voice isn’t perfect — and I’m sure it will never be. I feel I have as much to learn from it as it has from me.
My voice isn’t exactly me, it’s both part of me and an independent manifestation of my mind that often acts despite myself.
We complement each other, my voice and me. That’s why I have to forgive it for sometimes speaking of things that might not be entirely appropriate, for lacking in style, for not always knowing what to say — and that’s why you have to forgive your voice for saying what it has to say.
Censoring yourself is not how you discover what kind of writer you are. Only by letting your voice say what it feels important to say can you grow in this field.
It takes opening your metaphorical veins and letting them bleed freely on the page. | https://medium.com/sunday-morning-talks/how-to-find-your-voice-as-a-writer-20c1d1b9cb46 | ['Tesia Blake'] | 2020-08-25 16:50:14.597000+00:00 | ['Writing Tips', 'Self', 'Productivity', 'Life Lessons', 'Writing'] |
Buying Eco-Friendly Is Not Always Sustainable | If you’re looking for a sign to purchase those items in your shopping cart, chances are, you don’t even need them. And by need, I mean, it is neither a necessity nor something you really really, really want.
Each passing day, a new product, an innovation, no matter how trivial, is out in the market and aims to provide relief to humans’ unending thirst for convenience and satisfaction. The capitalist industry feeds on this human instinct of purchasing what we feel is needed. And factors more than the environment suffer the consequences of the now reversed ideology of “necessity is the mother of invention” — necessity has become the child.
Despite the growing measures to combat the effects of our likewise increasing consumption, we may be — in fact, we already are, doing the opposite.
The Perfect Marketing Recipe…You Shouldn’t Fall For
Consider trends in buying sustainable products.
Most of the time, these are marketed to the public, guilting people into buying the products, facing them with how their previous practices hurt the environment. Add to this, the consumer’s urge to fit in with trends, and a business has got the perfect recipe to increase their sales.
However, as a responsible buyer, you should be reviewing and assessing how purchases could affect not only you but all other factors of the world you live in, such as the environment and society.
If you think buying that bamboo utensil set is a step towards sustainability, think again.
So, Should I Hop on the Minimalist Lifestyle Trend?
The true essence of sustainability is somehow synonymous with minimalism. Only get what you need. And get what would last you for as long as you need it.
Buying those wooden utensils is not a step towards an eco-friendlier life, especially if you already have utensils at home that you can pack for outside-home errands — and I highly doubt it if you claim not to have spoons and forks at home. Buying the set defeats its purpose of being sustainable because you never needed them in the first place.
If you weigh the benefits of consumerism against its many cons, well, you don’t even have to because the cons would easily outweigh the pros.
Consumerism does big things for a region’s economic growth — there’s no doubt in that. And what keeps the market growing is the innovative and creative minds of entrepreneurship. However, as individuals, being consumed by consumerism brings more damage to the singular than what it does good to the collective.
The Pursuit of Conscious Consumerism
This generation has been treading towards conscious consumerism, one step at a time. However, society and the market should respectively understand and inform how conscious consumerism is not synonymous with purchasing “environmental” products.
Instead, it should be focused on how both the consumers and the market can bring change to their previous and rather destructive practices by giving more attention to the impacts of buying and the behind-the-scenes in manufacturing consumer products.
Being a member of both ends of consumerism, I believe I have a big responsibility and power to give light to more sustainable practices that should be well-disseminated to the bigger public.
How to be a Smart and Sustainable Consumer
As a consumer, you should be knowledgeable enough about how certain products came to be and what these have in store for you as a shopper.
In choosing from a wide array of selections, you should analyze first if the products can be used for long — and if it is even going to be used — before questioning yourself of its quality, cost, ingredients, and convenience. After all, what would examining these factors do if you will not even be maximizing the product’s purpose?
To specify this, take that pending “sustainable” utensil set purchase into account again. From what I can observe and conclude, buying such would be impractical now that we’re in the middle of a pandemic. Why? Well, for one, you will not be using it since we are mostly in a work-from-home situation.
Secondly, do you seriously not have utensils at home? You can just pack what you already have — make it portable for whenever you are prompted to leave the house.
Or lastly, but most probably, were you just enticed by how the packaging and marketing were presented? The same can be said when purchasing other merchandise. And since we’re on the topic anyway, I strongly urge you to start thrifting. If you want unique, thrift and vintage stores are a haven for the one-of-a-kind. | https://medium.com/climate-conscious/buying-eco-friendly-is-not-always-sustainable-82468bb0444 | ['Naddine Luci'] | 2020-12-21 14:03:07.080000+00:00 | ['Lifestyle', 'Sustainability', 'Minimalism', 'Activism', 'Environment'] |
How Do I Stop Being Lazy and Procrastinating ? | Photo of Person Holding Alarm Clock
So you wake up in the morning, have your morning coffee, sit down on your desk, determined to work, slug it out, switch on your computer, and just as you are about to sort out your day’s activities, your mind wanders to Twitter, Facebook or Youtube.
Today is the morning after the elections for the President of the United States, and the entire day, I had multiple tabs open in my computer browser of various News outlets, multiple YouTube videos who are live streaming the possible results of the presidential elections, simultaneously checking out the trends in Twitter and Facebook like a maniac, I literally got “ZERO WORK DONE”.
How did I go from “Just gonna quickly check out what’s going on with the elections and then get back to work” to spending hours of YouTube, Facebook and Twitter!?
Brain
Dopamine: it’s the hormone that is released when we check our notifications on Facebook or Twitter. It’s these little bursts of dopamine released in our brain which make us feel good. It’s the same hormone which is released every time we have sex, eat fast food or play video games. Our brain has several prominent dopamine pathways that light up the moment we indulge in such activities. These pathways are called reward centers. Our brain constantly craves this “feel good” hormone, to the point it becomes an obsession. Dopamine is also released by taking drugs like amphetamines, cocaine, heroin, etc. The obsession with this feel-good hormone is so extreme that a habitual drug user literally finds it difficult to stop taking these harmful drugs; the high from the reward centers which get activated is far too great, to the point that the individual is willing to go homeless in his pursuit of that high. The reward behavior pattern, where the dopamine pathways light up every time a drug user takes drugs, compels the user to seek those drugs without regard for any consequences whatsoever. This is the same hormone which makes us constantly crave sex, drugs, fast food and gambling, basically any instant gratification activity that leads to the release of this hormone.
Researchers were surprised when they studied the brains of drug addicts and social media users, to find out that the same reward centers which were activated in social media users also get activated in drug addicts. This need for a “quick fix” or “instant gratification” is one of the reasons why we frequently check Facebook and Twitter the moment we get up in the morning. If left unchecked, this could also lead to a significant change in human behavior where there is a constant need to get that quick burst of dopamine, thus hampering your productivity. In my case, I found myself checking out Twitter and Youtube every 5 minutes. This is not a unique phenomenon at all. A colleague of mine literally had to uninstall his Instagram app every day, before coming to the office (those were the days when we didn’t have Covid or a “work from home” policy), because he found out that he was constantly checking his Instagram at work, leading to a precipitous fall in his performance. He says he uninstalls the app in the morning before coming to the office and reinstalls it as he leaves the office. It’s really fascinating and harrowing at the same time to see how this simple hormone called dopamine literally has such control over our lives.
So how do you exercise self-control?
Identify the problem:
It’s really important to understand why you are procrastinating in the first place. Not every one has the same “social media” problem as I do. It maybe as mundane as “Today’s weather is not good, I don’t feel like working” to “I want everything PERFECT, but I don’t have my lucky pen, so I will work on it tomorrow”. This happens especially in jobs that requires “creativity”. Especially if you are a composer, painter, copywriter, artist etc. Identify the problem, actively recognize that you are procrastinating and work on it.
2. Incentives:
Train your brain in such a way that if you complete a certain portion of your work within a given deadline or work continuously for 2 hours (set a timer), then you are going to reward yourself. The reward can be as benign as eating a chocolate chip cookie or playing a quick game of Counter-Strike: Global Offensive. This acts similarly to the reward centers which light up when we check our notifications on Facebook or Twitter. This trains the brain and your mind to delay the urge for “instant gratification” and instead work for it and get the sweet cookie at the end of the day’s work. This technique works great especially if you have something that needs to be done quickly. This reward-incentivized behavior is exactly how companies motivate their employees to work harder for increases in their salaries or bonuses.
3. Con Your Brain:
This technique worked wonders for me. So how do you lie to your own brain? I found that as soon as I started my project, I somehow didn’t lose my concentration and really focused on the job till it got completed. Upon introspection, I realized the trigger point was doing an “easy” part of the project first.
You see, I am a programmer, and I was working on a certain app that I wanted to build. I was stuck somewhere (I won’t bore you with the details). I literally procrastinated for a week; every time I sat at the desk, my mind literally built obstacles, imagining how it would take me days before I could solve that particular problem. And then one day, as I was going through the code, I simply came across a function that I wanted to improve in the app, and voilà, suddenly I am typing away and actually doing my work. Our minds are notorious for building and propping up obstacles; we delay starting that blog or that YouTube channel simply because we look at the end result (where we want to be) and focus on that instead of taking the first baby step. We are too engrossed in that 1 million subscriber channel and awed by the guy with the six-pack to realize that it all began with that one random video shot on an iPhone or that first pushup. We look at the peak of the Himalayas and instantly begin building up obstacles and delaying instead of taking that one small step in front of us.
Take the first baby step, work on that one simple part of the problem or take that one simple step, and before you even realize, you’d be working on that project that you delayed for days.
4. Take a break:
Research suggests that people who take a 5-minute break every 2 hours are more productive than people who work continuously for 5 or 10 hours. Taking a break or a simple stroll really releases your creative juices. Steve Jobs would often take a stroll in the park every time he felt he was stuck on a problem. It cleared up his mind. Going out, taking a simple break and tuning off would do wonders for your productivity. Every time you feel like procrastinating, I would recommend you go outside, breathe in some fresh air or just simply close your eyes and meditate; it frees your mind and rejuvenates your brain to take on those obstacles that you were finding difficult to overcome.
5. Tune everything off:
If you feel you are wasting too much time checking out what Sam is up to on his vacation in the Maldives, or you catch yourself browsing cat videos in the middle of the day for no reason whatsoever, then switch off your phone. Eliminate every distraction that puts you off course. Set a timer, it may be a three-hour interval or a two-hour one, and reserve that time exclusively for “work”, where you are not distracted by the sounds of those pesky notifications from your mobile phone. Switch off all your electronic gadgets during that interval and just focus on your work.
I just want to end by saying it’s ultimately your attitude that plays a critical role; if you are uninterested in your work or not passionate about it, no amount of tips or techniques is going to work for you.
Finally I just wanna sign off by quoting a prominent ancient Indian philosopher, economist and strategist called Chanakya, | https://medium.com/age-of-awareness/how-do-i-stop-being-lazy-and-procrastinating-f47ec0765253 | ['Jacob Daniel'] | 2020-12-17 16:58:40.916000+00:00 | ['Work', 'Procrastination', 'Productivity', 'Motivation', 'Techniques'] |
The Price We Pay To Fit In | Hi, my sweet lonely feeling.
Hi, sweet Jordin. I’m here again.
Yes, you are. How come? Tell me more.
Well, I come to visit you any time you’re pursuing fitting in rather than belonging.
Oh shit. That’s deep.
It is everything. Belonging is everything. You crave it so much — all parts of you in here long to feel like they belong. I show up when some parts of you don’t feel like they belong.
[As I was writing this, I realized this part had some important things to say to me and I listen better to them when I journal with them instead of type it out. So at the point of the conversation, I moved to my journal].
Okay, lonely part, I can hear you better now. What were you saying?
I am a part of you that feels like it doesn’t belong. So I give you a lonely feeling to let you know I am in distress.
I’m so sorry you don’t feel like you belong. I know it is something I am doing and how our internal system is being run right now. So I want to make it better. Tell me more.
Well when you go see these friends, a bunch of your other parts jump in and push me aside. They are the parts that jump in and try to impress other people. When those parts are in charge, you become more interested in fitting in that including all of us. That makes those of us not included feel lonely. It makes us feel like like you did when you were alone in your room listening to the cool kids plan the game outside that you weren’t invited to play with them.
Oh, sweetie. I know that feeling. Wow. I am so sorry. What do you need from me to feel included?
I need you to not hide away parts of yourself, including me, in order to fit in. I need you to care less about fitting in with other people and more about belonging to yourself.
Damn. Yes. Thank you for framing it this way. Wow this is a huge realization. So what does that look like?
I guess it looks like checking in with the parts like me who feel lonely and hearing our voices, including us in the game or party, not trying to hide us away but taking the time to get to know us. We are important parts of you, too.
You are. I’m sorry — another part of me in here is jumping in saying there isn’t enough time one sec.
[Another part of me kept jumping in and reminding me of the time, worried I won’t have enough time to go into too much detail with this part. It also kept reminding me that my boyfriend wanted to spend time with me this morning and so I better hurry up and make sure I do that. I tried to get this part to step back so I could talk to my lonely part more but this part refused and kept getting louder. There is more work to do with this part but since I was short on time, I just had to be honest and start bringing the conversation with my lonely part to a close.]
Okay, this part is insistant and not stepping back. I bet this makes you feel even lonelier that this part is trying to drag me away from talking with you.
Yeah it used to make me feel lonelier but now I know not to take it so personally. I know this part is just trying to protect you also. What I’ve shared with you is enough for now. I’ll keep speaking up through loneliness if you’re out of alignment with belonging to yourself. But for now can you just check in with me when we’re at the party? That would really make me feel like you understand and are working to help us all in here feel like we belong together.
Yes of course. Thank you so much for being so tender and understanding and wise. I will check in with you tonight when I have some moments to myself. Give you a hug or a high-five!
That would be great, thank you! Thank you for stopping to talk with me even. I already feel like I belong more!
Great! Yay! Yeah, this is new for me so thank you for being my teacher. I know this conversation doesn’t solve the root of it and there is still much to discuss but I am committed to taking these small baby steps toward an inner system where all my parts feel like they belong.
Love it. Muah! Now let’s go have some fun! | https://medium.com/just-jordin/the-price-we-pay-to-fit-in-8d35aecb0d53 | ['Jordin James'] | 2020-10-31 16:52:07+00:00 | ['Psychology', 'Inspiration', 'Spirituality', 'Mental Health', 'Self'] |
I Took On 10 Projects at the Same Time, Here’s What I Learned | An exact explanation of what I did
Before I start, I want to clarify that being in 10 projects at once does not mean that you would be doing everything yourself. On the contrary, being on 10 projects at once should help you to do the exact opposite. The things I did might sound unbelievable (or believable, depending on who you are). But if people like Elon Musk can do the things he does, I don’t see a reason why I can’t.
I am by no means comparable to Elon Musk, but I honestly look up to him and he is one of the inspiring figures I strive to be like. Managing chaos is such an interesting thing. A project can equate to chaos because it can go wrong unexpectedly in a multitude of ways.
What kind of projects did you do?
Some software engineers reading this might be thinking, “this guy is lying, no way he did so many things in one fell swoop”. Well, let me explain the projects then. I handled a project to integrate an existing application with a certain online video communication service, and I initiated a project to make a custom code generator because my office has a custom design pattern. The library I helped maintain was for the custom design pattern I mentioned previously. Because this was part of my full-time job, I worked on these projects 7–8 hours a day.
Lastly, I was the project manager for a learning management system (LMS). The LMS project required me to review code, and lead meetings about 3–4 hours a week depending on the work.
As for the other things I did: I am writing my thesis for my bachelor’s degree, with a team of three. We have meetings three times a week with a total of 11–15 hours. The academic paper I wrote required about the same amount of time, 11–13 hours a week.
I was selected as a soft skills trainer for university students on my campus; I have been volunteering to share my experiences for almost two years now. As for the TEDx event, I was the project manager. I had a team, of course. My commitment to the TEDx event was about 4–6 hours a week over a span of 4 months, and that includes the meetings.
That explains what I did exactly. Now here is what I learned from this experience. | https://medium.com/the-ascent/i-took-on-10-projects-at-the-same-time-heres-what-i-learned-be8074310879 | ['Agustinus Theodorus'] | 2020-12-07 20:07:20.688000+00:00 | ['Productivity', 'Self', 'Opinion', 'Self-awareness', 'Project Management'] |
Four Analytics Trends To keep an eye on in 2018 | Courtesy:Flickr
What gets you out of bed in the morning when you think of analytics in 2018? Is it the prominence of AI in our lives, democratization of data or advanced analytics that keeps you excited? Let’s accept it, last year was quite an eventful year, with the rise of self-service analytics, IoT analytics and of course chatbots becoming smarter. Having sensed these developments, 2018 should become another year of accelerated innovation in analytics industry- with some expected and unexpected disruptions of course! Excited? Without much ado read on our top four analytics trends to watch out for in 2018!
AI chatbots are no more the ‘’newbie’’ in the town, soon to become major drivers of all operations!
“Siri, which movie should I watch tonight? Or ‘’Google, show me the best route to reach office’’ Familiar with these everyday conversations? Just imagine your life without them! Can you? Not quite possible right? considering their impact on our busy lives. In 2017, there was so much noise around smart recommendation, with AI chatbots identifying our emotions and respond to us accordingly. Not only about the updates of weather or traffic congestion, chatbots will evolve and might also help in scouring financial operational metrics or getting answers to ‘why’ and ‘what if’ questions, thereby enabling the transformation of business as well as consumer space. Although this might take a couple of years to mature, we can anticipate few success stories in 2018 as well.
Augmented Reality: From reel to real, Augmented reality is and will be changing the world around us
Remember in July 2016, how millions of people crashed through parks, walked over people’s graves and entered churches to hunt for augmented-reality versions of Pokémon characters. Still fresh in the memory, right? Although the Pokémon frenzy has faded, augmented reality hasn’t, and we can expect some more advanced and dynamic modes of AR in 2018. Human-machine interaction will get a boost as businesses are already employing AR to enhance manufacturing and research processes or to offer new customer experiences. And why does it matter to the analytics industry? Well, according to Gartner’s VP David Cleary, “Augmented analytics is a particularly strategic growing area that uses machine learning for automating data preparation, insight discovery and insight sharing for a broad range of business users, operational workers, and citizen data scientists.” So, yes, in a few years all the resource-draining and time-sensitive analysis will become significantly easier and smoother with augmented analytics!
IoT Analytics: A silver bullet for every industry, in 2018?
2017, was a year of huge gains in ‘’connectivity’’. There were a lot of investments and adoptions around IoT, despite security issues galore. How about 2018? Will be as exciting as 2017 for IoT analytics? Not to ignore, IoT will continue to expand this year too, with more and more devices getting connected, almost every second. Although retail, healthcare, and industrial/supply chain industries have been using IoT to boost ROI, this year we can see an increasing number of companies use IoT for more personalized marketing efforts. Additionally, Business Insider predicts business spending on IoT solutions will hit $6 trillion by 2021. Going by this predictions, we will see many venture capitalists continue pouring funds into the promise of IoT — underscoring its potential to improve customer experience in almost every industry!
Block Chain Technology: Enabling new forms of data monetization
2017 was a year of tremendous growth for block chain, all agree? Many believe we are already in the “early majority” phase of adoption, and that we are on the aligned towards full adoption of blockchain. And as with any new technology, the importance of the data grows. This year we might see blockchain going more mainstream with sectors such as healthcare & retail also starting to use it to handle data to prevent hacking & data leaks. According to Bill Schmarzo, CTO of Dell EMC Services, blockchain technology also “has the potential to democratize the sharing and monetization of data and analytics by removing the middleman from facilitating transactions.” So, yes organizations will accelerate their data analysis process on these virtual currencies to unmask strong trends, frauds and insights and make informed decisions!
How to ride the Virtual currency rally? Read on to know more.
Though it is hard to say how fast these analytics trends will manifest in our lives, we are confident that 2018 will yet again be another eventful year. There will be issues around security, governance and most importantly consumer’s ability to accept and adapt these innovations and changes. The only thing that can be assured of is this year the future will be different and very promising! So stay tuned! | https://towardsdatascience.com/four-analytics-trends-to-keep-an-eye-on-in-2018-854646e390f6 | ['Karishma Borkakoty'] | 2018-03-16 06:44:30.904000+00:00 | ['Artificial Intelligence', 'Analytics', 'Big Data', 'IoT', 'Bitcoin'] |
The Mom Test by Rob Fitzpatrick [Book Summary PDF] | Here are 3 simple rules to help you. They are collectively called (drumroll) The Mom Test:
Talk about their life instead of your idea Ask about specifics in the past instead of generics or opinions about the future Talk less and listen more.
The questions to ask are about your customers’ lives: their problems, cares, constraints, and goals. You humbly and honestly gather as much information about them as you can and then take your own visionary leap to a solution. Once you’ve taken the leap, you confirm that it’s correct through Commitment & Advancement.
Avoiding bad data
There are three types of bad data:
Compliments Fluff (generics, hypotheticals, and the future) Ideas
Sometimes we invite the bad data ourselves by asking the wrong questions, but even when you try to follow The Mom Test, conversations still go off track. It could happen because you got excited and started pitching.
Asking Important questions
In addition to ensuring that you aren’t asking trivialities, you also need to search out the world-rocking scary questions you’ve been unintentionally shrinking from. The best way to find them is with thought experiments. Imagine that the company has failed and ask why that happened. Then imagine it as a huge success and ask what had to be true to get there. Find ways to learn about those critical pieces.
Pre-plan the 3 most important things you want to learn from any given type of person (e.g. customers, investors, industry experts, key hires, etc). Update the list as your questions change. Your 3 questions will be different for each type of person you’re talking to. If you have multiple types of customers or partners, have a list of each.
Don’t stress too much about choosing the “right” important questions. They will change. Just choose whatever seems murkiest or most important right now. Answer those will give you firmer footing and a better sense of direction for your next 3.
You might get answers 1–3 from customer A, answer 4 from customer B, answers 5–7 from customer C. There’s overlap and repetition, but you don’t need to repeat the full set of questions with every participant. Your time is valuable: don’t feel obligated to repeat questions you already have solid data on. Pick up where you left off and keep filling in the picture. | https://medium.com/bookcademy/the-mom-test-by-rob-fitzpatrick-book-summary-pdf-b0deeef61092 | ['Daniel Morales'] | 2019-10-08 19:50:47.711000+00:00 | ['The Mom Test', 'Summary', 'Pdf Book', 'Books', 'Startup'] |
Kicked Out of Your Own Company: What To Do | Kicked Out of Your Own Company: What To Do
It happens more than we admit: Entrepreneurs get kicked out of their own companies. Susan Strausberg shares what to do when it happens to you.
As the co-founder and CEO of EDGAR Online, I ran the company for thirteen years. For the vast majority of that time I was fully focused on the development and growth of the company, and firmly committed to remaining the CEO until I felt we’d achieved our vision. But after a very desirable acquirer backed out, my investors grew restless and pushed for a succession plan. Two months later, I was informed by the board that the succession plan had been accelerated and that our President would now be the CEO.
This sort of scenario happens far more often than either entrepreneurs or investors like to admit. Here’s how, when the time comes, you can be prepared:
Write Your Succession Plan
Whether your company is public or private, make sure you have a succession plan and that your interests are protected. Assume that your separation agreement will have a non-compete clause, and make sure you understand the terms of it. The non-compete should related to your company’s business specifically, and should not prohibit you from other types of ventures. Even if you live in California, where most non-competes are unenforceable, you don’t want to be heading to court just as you’re being ousted.
My non-compete restricted me from engaging in ventures in financial information, which I thought was reasonable. Since I had significant stock ownership in my company it would have been bizarre for me to try to complete with a company that I was hoping would help me realize a significant payout. In reality, broad or overly general non-competes rarely hold up in court, so it is in both parties’ interests to be clear and fair.
Keep a Stiff Upper Lip
When the rug gets pulled out from under you, you have to somehow keep it together. You have to immediately come to terms with the decision and behave with extreme dignity that befits the legacy you plan to leave. Above all, as unpleasant as it can be, you need to understand that you had anticipated this and had participated in the succession process.
When it happened to me, I called a trusted advisor for input and sympathy. His first question was, “Did you cry?” I said, “No,” and he congratulated me, saying that is the thing people fear most about woman CEOs. (Great.) His next question was, “Are you going to stay on the board?” That’s an important question, because as long as you remain on the board, you are subject to regulations concerning purchases or sales of stock in the company. You’re also setting yourself up for some pretty uncomfortable meetings. I remained on the board until it became absolutely clear that my input would either be ignored or worse.
Get Out of Dodge — At Least Mentally
You need to get a new perspective immediately. A fresh start will give you a sense of relief, which you will desperately need. I started a new venture, and we decided that New York was not the right place to do that. The conditions of my non-compete, plus my pride in EDGAR Online, made it impossible to engage my original founding team. We moved to Austin, Texas where we felt the environment would be more supportive.
Yes, You Could Do Better. Forget It.
Don’t stress over the problems of your former company. Maybe the stock is tanking and the new management is clearly clueless. You can’t do anything about it. Think about it this way: You want to concentrate on the new company, in which there are a myriad of things you actually can control.
Ask yourself this: Are you as passionate about your next innovation as you were about the previous one(s)? Are you driven to look forward, not back? Can you apply everything you learned at that earlier company to make your next one even more successful? The opportunity to feel the rush of a start-up is not limited to your first company. It’s the gift that keeps on giving.
Susan Strausberg is the co-founder of 9W Search, a next generation financial search engine aimed primarily at a mobile audience. 9W Search partners with IBM Watson. In 1995 Susan co-founded EDGAR Online, Inc. She served for 13 years as CEO until 2007, and additionally as President from 2003 to 2007. EDGAR Online is the first commercial internet distributor of SEC-based financial information. The company was a pioneer in the information industry revolution using cutting edge technology to bring high value to publicly available data and to democratize access to information that was formerly unavailable to non-professionals. EDGAR Online was acquired by RR Donnelley in 2012. Susan is an active member of the Austin technology community and participates in panels and programs focused on entrepreneurship. | https://medium.com/been-there-run-that/kicked-out-of-your-own-company-what-to-do-b3713a3a8c1a | ['Springboard Enterprises'] | 2018-10-02 15:01:01.117000+00:00 | ['Entrepreneurship', 'Startup', 'Women Entrepreneurs', 'Business Development'] |
What Product Marketers Can Learn from Product Management | The gospel we preach is clear and unchanging: “Deliver the right message to the right people at the right time.” As a product marketer, you possess a deep understanding of strategic positioning, the nuances of messaging, and the wants and needs of your customer.
You strategize your approach to pricing, ad copy, website content, tone of voice, and your product’s unique value proposition. You also leverage that holistic grasp on your customer’s profile to inform the product roadmap, and your approach to solving customer problems. Your daily conversations shift from “how do we run the show” to “what stage do we perform on, and who will sit in the audience?”
The best product marketers not only wear the hat of marketing wizard and all things internal team enablement, they can also slip effortlessly into the role of full-fledged product owner. And while product marketing and product management have tons of overlap in their day-to-day, there are a few key areas of PM expertise that we can benefit from.
1. Always be listening
Traditionally, product marketers communicate value to the market. They have a hyper-detailed knowledge of the competitive landscape, and can strategize about how it fits into the bigger picture. They define competitive messaging and strategic positioning based on the problems a product manager commits to solving.
It’s all about alignment for product marketers. Gluing together sales, product, support, engineering, and leadership with market/price fit, channel partnerships, and happy customers.
Product managers often manage the strategy behind the roadmap — marrying together raw customer feedback, stakeholder needs, and developer resources to ship the right product. The core skill here is a simple one: they are always listening.
Product managers are exceptional at gathering information. And while product marketers and product managers both involve collecting customer feedback, the two roles work to solve different problems. Product marketers and managers both communicate with the market to balance necessity and desire; what they do with the findings distinguishes the two roles.
The best product managers are constantly sourcing and learning from their target customers to ensure that the product they’re building addresses the most painful, and real, customer problems.
There’s a ton of information out there from product managers on how to get the right information out of customers, how to influence stakeholders and reach consensus across teams. But this TEDxVienna talk by Michael Stevens offers an excellent starting point for learning how we, as product marketers, can listen more and ask our customers better questions.
Further, if little to no relationship exists between PMs and PMMs within an organization, there exists an opportunity for those two functions to team up, share customer development techniques, and collaborate in research to power up their collective impact.
2. Always be testing, especially your own assumptions
A large part of what product managers do is decide what to build by testing assumptions and validating hypotheses.
In short, all marketers must adapt to taking this approach to planning. It’s too easy and too common for marketers to, say, hire a PR agency and promise stakeholders lofty placements in Business Insider or Huffington Post. We mean well, but failure to meet those expectations yields disappointment across the board, and perpetuates stereotypes of marketers being filled with hot air.
When marketers overpromise and underdeliver, it’s not always directly tied to positive or negative end results. It’s the failure to meet expectations. And there are things we can do to set better, clearer, and more realistic expectations for ourselves.
For example, at Kayako, I knew talk had swirled within the organization for years around introducing PR as a way for us to gain targeted awareness for our top of funnel initiatives such as our live chat statistics report. I wanted to try it once and for all, but I had zero information telling me that this would be a surefire way for us to achieve our goal of increased mindshare for key audiences.
So, I treated it like an experiment…
I explicitly defined three tiers of goals (indicating ideal, acceptable, poor results) for the outcome of a finite, 4-month contract with an agency I hired via personal recommendation under the condition that if we gained even a low level of early traction, we would continue working together.
Instead of getting the leadership team excited about finally introducing PR to the mix, I candidly and repeatedly positioned our venture into PR as an experiment with a defined hypothesis, goals articulated, and a course of action in place for every possible outcome — with failure as no exception.
When our 4-month trial campaign yielded very low quantifiable results, we were able to walk away still feeling proud of the experiment because it still yielded interesting results. We learned that PR actually plays a longer game, and a 4-month trial was insufficient for determining whether or not PR can actually work for us (a few bites actually trickled in months after the contract ended). Despite all this, the stakeholders were impressed with our scientific and proactive approach to solving this problem, and the experiment was considered a net success.
Photo cred: Furthermore UX Experts
This is the approach that all product managers take when trying to fix the problems they’re working to solve. Framing every statement, whether it be personal belief or widely accepted truths, must be poked, prodded, questioned, and proven to be true if it is intended to drive action.
3. Lean into deeper, more technical product conversations
It may not always feel second nature for us, but product marketers can, and should, take on the task of participating in more technical product development conversations, and back up input with findings from Voice of the Customer interviews. If you have the chance to own a customer development program, you create an opportunity to build and manage feedback loops at scale, as well as structuring process around crafting user stories and development direction.
Deepening your contribution to product-shaping conversations means you can better champion your customers both within your organization and across your communities.
As an added tip from my personal experience (or rather, something I’m actively working on in my own role as a PMM at Kayako), product marketers will benefit from developing a deep, detailed, and comprehensive understanding of competitive products as well as related technologies.
In the same way that an account executive with a strong technical understanding of their product might blow you away on a sales demo, the same goes for marketers with encyclopedic knowledge of the competition’s product offering. If you can bring this kind of deep competitor knowledge to the table, you could have a great impact on product decisions, and a chance to thicken the glue that binds your business.
While product managers still own the product development process and the roadmap in its entirety, there is an opportunity for product marketers to contribute to and influence the direction of business at a core level.
4. Put the customer at the heart of your role, at the core of every decision
Both product marketers and product managers work best by putting customers at the heart of their decisions. And it is worth our time, as product marketers, to collaborate with existing product managers across the business with a common goal of leveraging real, tangible customer insight to drive product direction. If no product manager exists at your business yet, it’s even better if you can bring this to the table yourself.
Focus on anything that puts your customer at the heart of your decisions, and you, and the business, will feel the benefit from customer happiness ratings to the bottom line. | https://medium.com/we-are-product-marketing/what-product-marketers-can-learn-from-product-management-2421cb77b45 | ['Alicia Carney'] | 2018-02-28 17:40:56.067000+00:00 | ['Product Management', 'Startup Marketing', 'Marketing', 'Entrepreneurship', 'Product Marketing'] |
Learn From Your Mistakes If You Want to Find Your Version of Success | Learn From Your Mistakes If You Want to Find Your Version of Success
It is possible to build a business or career despite hurdles and challenges like a mental or physical illness
Photo by krakenimages on Unsplash
I would blame my failure to stay focused this past week on the election, but the deeper I overthink the mystery, the more I see it’s the nature of who I have been in life that threw me for a loop.
I tend to jump from one thing to another, which has been a significant problem for me.
Take the last week, for instance.
Do you notice that I often write about my mental health? It is only because my illness’s severity keeps it top of mind. Here I am writing a book during NaNoWriMo about blogging by finishing an essay or article per day. More often than not, I’m writing about how illness makes it difficult to complete anything.
I intended to have a central theme where I would talk about the business of blogging and create a timeline that lasted two months through the end of December 2020. It would outline the rise or fall of my writing and blogging business as I push myself to find the success that eludes me.
But I find I want to talk more to the many people who follow me for one specific reason.
Many of you are so much like me that it’s scary. You battle illness daily, whether it be the horror of diabetes or the pain and fear you get from depression and anxiety. I have people who write to me with PTSD and OCD, which are debilitating, and they find writing almost an impossibility.
I find I want to talk more about building a writing or blogging business when you have too many challenges to mention. What about those who have to push themselves to produce and have to somehow find a way to focus when demons are dancing on their Medulla Oblongata?
I am figuring out that a more interesting book I could write would exist to help people with mental or physical challenges build a business and find success, whatever our definition of that word is.
The problem is that I jump from thing to thing, and I don’t want to waste any more of my valuable time spinning my wheels if I can help it. I’ve learned that if I want to be successful at anything, I have to stick with one thing and focus.
Why am I like this? | https://medium.com/free-thinkr/learn-from-your-mistakes-if-you-want-to-find-your-version-of-success-83345fa28893 | ['Jason Weiland'] | 2020-11-08 18:11:27.730000+00:00 | ['Writing', 'Mental Health', 'Success', 'Failure', 'Mistakes'] |
Image Compression using K-Means Clustering | What is K-Means Clustering?
K-Means algorithm is a centroid based clustering technique. This technique cluster the dataset into k different clusters. Each cluster in the k-means clustering algorithm is represented by its centroid point.
Left Image: Plot of the dataset, Right Image: Plot of the result of 3-means clustering, (Image 1)
The above image (image 1) describes how 3 clusters are formed for a given dataset using the k-Means clustering algorithm with the value of k=3.
Further, read this article to know more about the k-Means Clustering algorithm.
How does the K-Means Clustering technique compress the image?
In a colored image, each pixel is of size 3 bytes (RGB), where each color can have intensity values from 0 to 255. Following combinatorics, the total number of colors which can be represented is 256*256*256 ( equal to 16,777,216). Practically, we can visualize only a few colors in an image very less than the above number. So the k-Means Clustering algorithm takes advantage of the visual perception of the human eye and uses few colors to represent the image. Colors having different values of intensity that are RGB values seem the same to the human eye. The K-Means algorithm takes this advantage and clubs similar looking colors (which are close together in a cluster). Here’s an illustration of how this works:
Choosing some pixels from the input image, (Image 2)
In the above image (Image 2), few pixels are picked and expanded in the further images to continue the illustration.
Left: A maximized image of above-picked pixels, Right: Two nearby pixels x and y, (Image 3)
The pixels picked in (Image 2) from an input image is expanded in the left part of (Image 3). In the right part of (Image 3) two nearby pixels are picked (names as ‘x’ and ‘y’).
If the RGB value of ‘x’ and ‘y’ pixel is (130, 131, 140) and (127, 132, 137) respectively, then below is the illustration of how these two-pixel colors are visible to a human eye. The below illustration of RGB intensity to color is prepared from w3schools.
Above Image: Color for RGB(130, 131, 140), Below Image: Color for RGB(127, 132, 137), (Image 4)
In the above image (Image 4), it is observed that for some amount of changes in the RGB values the color resembles the same to a human eye. So k-Means clustering can club these two colors together and can be represented by a centroid point that has almost the same resemblance to a human eye.
The initial dimension of an image is 750*1000 pixels. For each pixel, the image has 3-dimension representing RGB intensity values. The RGB intensity values range from 0 to 255. Since intensity value has 256 values (2**8), so the storage required to store each pixel value is 3*8 bits.
Finally, the initial size of the image is (750*1000*3*8) bits.
Total number of color combination equals (256*256*256) ( equal to 16,777,216). As the human eye is not able to perceive so many numbers of colors at once, so the idea is to club similar colors together and use less number of colors to represent the image.
We will be using k-Means clustering to find k number of colors which will be representative of its similar colors. These k-colors will be centroid points from the algorithm. Then we will replace each pixel values with its centroid points. The color combination formed using only k values will be very less compared to the total color combination. We will try different values of k and observe the output image.
If k=64 then the final size of the output image will be (750*1000*6 + 64*3*8) bits, as intensity value ranges to 2**6.
If k=128 then the final size of the output image will be (750*1000*7 + 128*3*8) bits, as intensity value ranges to 2**7.
Hence it is observed that the final size of the image is reduced to a great extent from the original image. | https://towardsdatascience.com/image-compression-using-k-means-clustering-aa0c91bb0eeb | ['Satyam Kumar'] | 2020-06-17 20:53:37.475000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Towards Data Science', 'Image Processing', 'Data Science'] |
“Everything is Copy”- Nora Ephron | Taylor Swift quoted writer Nora Ephron the other night. “Everything is Copy” meaning: Every single thing that happens to us can be used as creative fodder.
I think we all know this as writers.
This is great news.
Yes. Every damn thing that happens to us can be a story.
I want to embrace this thought again.
I want to do what I’ve always done, which is write about it ALL.
The good
The bad and the ugly
The confusion
The scary shit
The humiliations
The sorrow
The joy
All of it. Not just the good stuff.
Oh, how boring it would be if writers only wrote happy fluffy bunny stories. lol. I know I cant relate to those writers much honestly.
I want to write about:
1. The fact that my fingers and brain haven’t been working well lately and for some reason I couldn’t write for over a month (after writing steadily for years).
2. The fact that my brain has never felt this overloaded — with family stresses, financial uncertainties, confusions in my creative life, and of course the insanity of the world.
3. The fact that my dad is dying and I don’t feel like talking to him or interact with him anymore.
4. The fact that I’m scared shitless sometimes that my boyfriend of over 24 years will drop dead (because he has some very serious health problems lately) and that I might end up a homeless bag lady with 4 cats on the side of the road holding a sign that reads:
“Homeless Artist/Writer/Musician will Work for food!”
5. The fact that while interviewing my mom for my Memoir recently she revealed some horrible unexpected family secrets about my dad, out of the blue, that spooked me to my core. This has been super hard to process. | https://medium.com/writing-heals/everything-is-copy-nora-ephron-65e812511f3 | ['Michelle Monet'] | 2019-04-27 04:23:41.765000+00:00 | ['Grief', 'Writing', 'Writing Tips', 'Productivity', 'Writing Life'] |
How To Turn a $12k Investment Into $1 Million | How To Turn a $12k Investment Into $1 Million
When Albert Einstein once said that humanity’s greatest invention is “Compound interest.”
The biggest excuse for not investing I hear is: “I have no money to invest.”
Part of the reason why people think they should hold back from investing until they have more money is that they think it’s risky.
They don’t know when to buy, what to buy, or sometimes how to buy.
Here is the main issue with that.
Nobody knows when a market goes up, down, sideways. Anyone who tells you they do is either lying, foolish, or both.
And having tons of cash will not give you that kind of expertise.
Solution?
Instead of striking one magical moment, buy regularly and consistently over a long period.
The fancy name for this is “compound interest.”
When Albert Einstein once said that humanity’s greatest invention is “Compound interest.”
He also called compound interest the “eighth wonder of the world” and stated that “He who understands it earns it; he who doesn’t, pays it.”
Compound interest is basically “interest on interest.”
It simply means saving early and letting investment compound over a more extended period.
It’s quite a liberating strategy, and it takes the pressure off needing to pick the right instant to buy.
In my view, it’s as close to financial magic as you can.
And on top of that, with this strategy, you are almost sure to put $1 million in your pocket.
The magic is in the simplicity of this strategy. It’s all about investing repeatedly and long enough to ignite the miracle of the compound effect.
For example, assume you put $100 per month into Bitcoin.
In 10 short years, your $12,000 invested will be worth $1.163 million. Yes, that’s million, with six zeros. Check out the compound calculator and play around with the numbers for yourself.
But are these expected returns realistic?
Based on historical data, they sure are.
Cathie Wood and her ARK is the most successful fund in the last five years. According to their research, Bitcoin’s return in the previous seven years was 90 percent. It’s one of the best-performing assets in a decade.
Yes, it’s that simple.
And you don’t need a big pile of cash to invest. With this strategy, you can dip your toes into the water and learn as you go.
This passive form of investing means you don’t try to buy or sell based on your research. Also, you never panic when the market crashes, but you’ll only lock in your temporary losses.
The magic is not in the complexity; the magic is in the doing of simple things repeatedly and long enough to ignite the miracle of the compound effect.
But as Jim Rohn would say, “What’s simple to do is also simple not to do.”
You have to take action. You’ve got to invest money to make money.
Because the biggest difference between successful investors and unsuccessful investors is successful investors are willing to do what unsuccessful avoid. | https://medium.com/the-innovation/you-dont-need-millions-to-start-investing-d6cfd5016acb | ['Ras Vasilisin'] | 2020-11-12 17:28:26.004000+00:00 | ['Innovation', 'Investing', 'Money', 'Entrepreneurship', 'Startup'] |
Apple Might Make a Big Jump | Apple is another name for Innovation! Be it Newton’s law of universal gravitation or Steve Jobs multinational technology firm Apple Inc. the roleplay of innovation has broadened the expanse of discovery.
For thousands of years, the most dramatic events centrally focus on human development through experimental analysis.
On comparing the previous history, there is a transcendental shift in human progress at personal fronts and a technological level. The reason for this progressive advancement is the “Think Different” and “Be Creative” approach.
In the information age, the individual’s approach to seeking knowledge, resolving queries, disseminating information, and communicating has touched the roots of modernization, especially through digital developments.
The technological company Apple Inc.’s growth boom in designing, developing and selling consumer electronics, computer software, and other online services centrally rest on thinking differently.
With this innovative and creative impulse, Steve Jobs managed to change the world of technology and design far beyond anyone’s imagination. And the company’s pace is speeding up with time, emerging as a fierce competitor and challenging the global markets.
This time Apple might make a potential-jump as a search engine giving competition to Google.
This came as no thunderbolt to me. Somewhere down the line, people might have considered the possibility of other search engines. But, precisely when was the uncertainty. Apple certainly has turned our probable analysis into reality. With technological advancements accelerating at a rapid pace, it is natural for the market to become competitive.
According to the Financial Times report — Apple has accelerated work to develop its own search engine that would allow the iPhone maker to offer an alternative to Google. For context, this behavior has been witnessed for a while as people have been observant about the feature popping up in beta versions of iOS. Jon Henshaw of Coywolf had noted back in August that the search volume is rising incredibly from Apple’s crawler.
As per the Financial Times, Apple is developing its own search engine technology as the United States antitrust authorities threaten multi-billion dollar payments, which Google makes to be the iPhone’s primary engine. As per the lawsuit, the tech giant misuses its power to shut down its competitor in search ads.
While Apple has been earlier focussing on its in-house search development, the lawsuit against Google made it explore the opportunity. To discover the opportunity requires critical analysis, market survey, taking calculated risks, and of course, thinking differently.
The Founder Steve Jobs greatly inherited these skills, and the legacy is transferred thereafter. He quoted in the Apples “Think Different” campaign:
“Here’s to the crazy ones — the misfits, the rebels, the troublemakers, the round pegs in the square holes. The ones who see things differently — they’re not fond of rules. You can quote them, disagree with them, glorify or vilify them, but the only thing you can’t do is ignore them because they change things. They push the human race forward, and while some may see them as the crazy ones, we see genius, because the ones who are crazy enough to think that they can change the world, are the ones who do.”
Thus, Apple’s success with progressing times is an eye-opener towards grabbing the right opportunities with creative thinking.
Presently delving into search engine technology, reports mentioned that Apple two years ago hired Google’s head of search, John Giannandrea, in a move designed to improve artificial intelligence capabilities and its Siri virtual personal assistant incorporated as a feature of Apple iPhones.
Siri’s increase in search activity could be explained by getting more search queries and acting as an interlocutor between Apple and other search services like Google or Microsoft’s Bing. Previously, Google began this disintermediation, had modified and expanded over the years to combat a similar kind of behavior from Siri.
As of now, unclarity resides in how Apple will execute its search engine application. It also becomes tumultuous because of Google’s global dominance in the technology industry and people’s trustworthiness.
The matter has taken a point of discussion with no conclusion. Some reports claim that Apple will compete with Google and have its own websites and apps for phones. Contrarily, other reports mentioned that it would just be a feature to boost iOS devices’ spotlight.
So let’s collectively reason out about the search engine happening, digesting, and analyzing the latest developments. Because as Steve Jobs rightly said — | https://medium.com/discourse/apple-might-make-a-big-jump-c5625634cfcd | ['Swati Suman'] | 2020-11-19 15:27:14.495000+00:00 | ['Technology', 'Innovation', 'World', 'Artificial Intelligence', 'Startup'] |
A student’s journey to one of the world’s coolest entrepreneurship events | What do you do to lighten up Helsinki in November? I got to take part in the world’s biggest tech event, Slush. I was a volunteer there and I want to tell you about my journey within Finland’s big startup community. I am studying in one of the most stimulating startup hotbeds in the world and I explored the event like a hungry learner. The first and foremost thing I took along was my entrepreneurial spirit. It’s not just about business opportunities, but about being part of the startup ecosystem. Here’s where you should dare to knock on doors that can lead to exciting, new things.
The volunteering experience of a lifetime
All volunteers had fun and got set to rock at Slush 2017 (Photo: Otto Jahnukainen — Slush Media)
Volunteering at Slush brought me to the event and more than that. Indeed, I got access to one of the world’s leading startup events where a bunch of opportunities awaited. The experience is valuable because we feel appreciated. Volunteer Day was held prior to the main event for those who were about to get their hands dirty for Slush. There were various activities that helped making teams stronger while contributing to their cohesiveness.
That’s the spirit!
People including entrepreneurs, investors, journalists all gathered in the darkest time of Helsinki just to celebrate entrepreneurship.
“Nothing normal ever changed a damn thing!” Just like this motto, the atmosphere at Slush blew my mind. People were open to talk, to share and to learn. At Slush, I felt a strong curiosity. Nobody can resist the temptation of knowledge. Gladly, this temptation was satisfied by all the great talks given by inspiring speakers. Even a field I had never been interested in is now stuck in my mind. The stories I heard at Slush are now echoing in my heart.
The Fireside Stage (Photo: Jussi Ratilainen — Slush Media)
What is greater than being surrounded by tons of technology startups? It wasn’t just about testing the latest tech gear and checking the latest products and software. Jumping into the valley of startups was like bungee jumping. You could never know what to expect, how far you would go and how much you would see from these innovative entrepreneurs.
They told me about their stories and motivations, which truly stimulated my entrepreneurial enthusiasm. They want to give a hand to medical and social health. They want to enhance travel experience. They want to help marketers increase productivity and save more time. They want to support children with learning languages in a more creative and interesting way.
All of them, with different colors and shapes, were creating an awesome entrepreneurial celebration that inspired and appealed a foreign student like me.
(Cre: Sami Heiskanen — Slush Media)
As a volunteer, it was amazing to know that I was contributing to build up the strong entrepreneurship community. We helped each other, supported attendees, made the stage run smoothly. We all delivered the spirit and values of Slush. I felt that I was a piece in the puzzle that forms this community.
It was the experience of a lifetime! | https://medium.com/the-shortcut/a-students-journey-to-one-of-world-s-coolest-entrepreneurship-events-63bd75fd9aa6 | ['Trinh Tran'] | 2017-12-13 14:00:48.166000+00:00 | ['Tech', 'Volunteer', 'Entrepreneurship', 'Slush', 'Startup'] |
10 Practical Tips for Effective Cross-Team Collaboration | 10 Practical Tips for Effective Cross-Team Collaboration
Actionable tips that you can apply to your multi-team projects
Photo by Marvin Meyer on Unsplash
It is not easy to be part of a project requiring multiple teams to work together to complete. If mismanaged, it can cost you and your organization valuable manhours and resources.
Throughout my career as a senior software engineer and engineering manager, I’ve had the privilege to lead the development of medium- to large-scale software that needs coordination between multiple teams and stakeholders. The ups and downs that I’ve experienced along the way have taught me a lot about leading cross-team initiatives — and I took notes and journaled about what I’ve learned. I thought it would good to share the common themes and what we did to minimize the chances of project delays and failures.
The tips here are not limited to teams within the same company. They could also apply to working with third-party teams; for example, your team needs to work on API integration with a team from a software service provider.
You don’t need to follow all the tips here. Pick and choose what you will need depending on the nature of your project. By the way, in this article, the word “project” could refer to a new product feature, a third-party integration, or any large piece of work that multiple teams in a company need to complete within a specific timeframe. Let’s get started. | https://medium.com/better-programming/10-practical-tips-for-an-effective-cross-team-collaboration-600fcd4e4143 | ['Ardy Dedase'] | 2020-11-11 18:09:13.408000+00:00 | ['Startup', 'Technology', 'Leadership', 'Productivity', 'Programming'] |
How Social Distancing Can Help You Live More Sustainably | How Social Distancing Can Help You Live More Sustainably
While living more simply, you’re also being more sustainable — which is good for you, and good for the planet
Let’s get one thing straight. Social distancing is not fun. A global pandemic is anything but peachy. Still, there’s nothing wrong with finding a silver lining in the clouds. As an environmental advocate who’s slowing down for the sake of all of us, sustainability is it. As it turns out, sustainability and social distancing interact in some compelling ways.
Using what we have
When we think about what’s really essential and what’s just nice to have, we tend to shop less and use what we have more. This goes for the can of vegetables in the dark recesses of the pantry just as much as the forgotten trousers in the closet. Simpler recipes reign, and yesterday’s styles reignite.
Bringing our own bags
Prepping for a trip to the grocery store now feels like preparing for battle. We’ve got the gloves, the mask (albeit homemade) and the sanitizer, just in case. We’ve got the credit card in our pocket and a pen for touching the keypad. Most importantly, we’ve got our own bags, which we’ll fill ourselves whenever possible. No one else needs to touch them.
Patching holes
We don’t need new jeans. We need a nice denim patch to iron on the inside. We need to fix what we have, not replace what could have easily been repaired. Going out is no longer the simple, convenient route. Repairing is.
Keeping a conservationist mindset
When there is only so much toilet paper to be had, we use less — or switch to a paperless bidet altogether. When there is only so much bread, we make our own, no production lines or plastic bags required. We are conservationists when we need to be.
Pivoting toward food growth and preservation
I write this with my spinach seeds planted and my cabbage on the road to fermentation. I write this with my granola freshly baked and jarred. Today, we are reminded of the importance of growing our own food and preserving what we have. We are reminded of the value of produce, and we must honor it.
The age of DIY
We can’t access masks, so we make them ourselves (or, in my case, we pay a teenage girl in the neighborhood who’s saving up for college to make them for us). We can’t go out just because we ran out of lens wipes, so we make our own cleaning spray. We sew reusable cotton rounds instead of buying a fresh bag of disposable ones. We do it ourselves, boldly and with gusto.
A smaller, gentler footprint
By driving less and walking more, our carbon footprint slims to stunning proportions. By staying within just a few miles of home, we make space for flora and fauna to thrive. We walk with calloused toes on living mulch with space for pores to breathe.
Building habits now for a more sustainable future
Habits are easier built than broken. As I return to my sustainable roots, I aim to use that to my advantage. By using this time of social distancing to make sustainability my norm, I hope to redefine my future — our future — beyond convenience. | https://medium.com/tenderlymag/how-social-distancing-can-help-you-live-more-sustainably-756fc1b5cc7e | ['Rachel Lewis Curry'] | 2020-05-22 16:01:00.984000+00:00 | ['Self', 'Environment', 'Lifestyle', 'Sustainability', 'Social Distance'] |
Using Machine Learning to Predict Subscription to Bank Term Deposits for Clients with Python | Using Machine Learning to Predict Subscription to Bank Term Deposits for Clients with Python
Bank Marketing with Machine Learning using Scikit-Learn
“No great marketing decisions have ever been made on qualitative data.” — John Sculley (CEO of Apple Inc.)
Introduction
Marketing to potential clients has always been a crucial challenge for banking institutions. It's no surprise that banks usually deploy channels such as social media, customer service, digital media and strategic partnerships to reach out to customers. But how can banks market to a specific location, demographic, and segment of society with increased accuracy? With the inception of machine learning, reaching out to specific groups of people has been revolutionized: data and analytics can provide detailed strategies that tell banks which customers are more likely to subscribe to a financial product. In this project on bank marketing with machine learning, I will explain how a particular Portuguese bank can use predictive analytics to prioritize customers who are likely to subscribe to a bank term deposit.
In this project I will demonstrate how to build a model that predicts whether clients will subscribe to a term deposit, using the following steps:
Project definition
Data exploration
Feature engineering
Building training/validation/test samples
Model selection
Model evaluation
You can see my code in the Jupyter Notebook provided on my GitHub (https://github.com/emekaefidi/Bank-Marketing-with-Machine-Learning).
This project was inspired by Andrew Long! (Check him out: https://towardsdatascience.com/@awlong20.)
Project Definition
Predict if a client will subscribe (yes/no) to a term deposit — this is defined as a classification problem.
Data Exploration
The data used in this project originally comes from the UCI machine learning repository (link). The data covers over 40,000 records from the direct marketing campaigns of a Portuguese banking institution between May 2008 and November 2010. The marketing campaigns were based on phone calls. Often, more than one contact with the same client was required in order to assess whether the product (bank term deposit) would be subscribed (‘yes’) or not (‘no’).
In this project, we are going to utilize python to develop a predictive machine learning model! Let’s begin by loading our data and exploring the columns.
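Roughly, the loading step looks like this (I'm assuming the semicolon-separated bank-additional-full.csv file from the UCI page is sitting in the working directory; the full version is in the notebook linked above):

import pandas as pd
import numpy as np

# load the UCI bank marketing data (semicolon-separated)
df = pd.read_csv('bank-additional-full.csv', sep=';')
print('number of rows and columns:', df.shape)
print(df.columns)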
Looking briefly at the data columns, we can see that there are various numerical and categorical columns!
The most important column here is y , which is the output variable (desired target): this will tell us if the client subscribed to a term deposit(binary: ‘yes’,’no’).
Now let’s define an output variable to use for our binary classification. We will try to predict if a client is likely to subscribe to a term deposit.
Let’s define a function in order to calculate the prevalence of population that subscribes to a term deposit.
Here we see that around 11% of the population has a term deposit. This is known as an imbalanced classification problem so we will address that below.
Digging deeper into the columns, we see there is a mix of categorical (non-numeric) and numerical data. A few things to note —
All the data inputted are non-null values, meaning that we have a value for every column
age, duration, campaign, pdays, previous, emp.var.rate, cons.price.idx, cons.conf.idx, euribor3m and nr.employed are numerical variables
default, housing and loan have 3 values each (yes, no and unknown)
Output (y) has two values: “yes” and “no”
We are discarding duration. This attribute highly affects the output target (e.g., if duration=0 then y=’no’). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model
Feature Engineering
In this section, we are going to create features for our machine learning model. Along the way, we will add new variables to the dataframe and keep track of which columns we are going to use as features for the predictive model. We will divide this section into numerical and categorical features.
Numerical Features
These are numeric data. The numerical columns that we will use can be seen below:
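Here is the list (note that duration is left out, for the reason discussed above):

cols_num = ['age', 'campaign', 'pdays', 'previous',
            'emp.var.rate', 'cons.price.idx', 'cons.conf.idx',
            'euribor3m', 'nr.employed']
df[cols_num].head()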
Now, let’s check if there are any missing values in the numerical data.
Categorical Features
Categorical variables are non-numeric data such as job and education. To turn these non-numerical data into variables, the simplest thing is to use a technique called one-hot encoding, which will be explained below.
The first set of categorical data we will work on are these columns:
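These are the non-numeric columns from the dataset description:

cols_cat = ['job', 'marital', 'education', 'default', 'housing',
            'loan', 'contact', 'month', 'day_of_week', 'poutcome']
df[cols_cat].nunique()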
In one-hot encoding, we will create a new column for each unique value in that column. Now, the value of the column is 1 if the sample has that unique value or else 0 . For example, for the column job, we would create new columns (“job_blue-collar”, “job_entrepreneur”, etc). If the client’s job is blue-collar, the client gets a 1 under ‘job_blue-collar’ and 0 under the rest of the job columns. To create these one-hot encoding columns, we will utilize the get_dummies function provided by pandas.
A problem that arises is that by creating a column for each unique value, we end up with correlated columns. That is to say, the value in one column can be figured out by looking at the rest of the columns. For example, if marital is not “married”, “single”, or “divorced”, it must be “unknown”. To fix this, we can use the drop_first option, which drops the first categorical value for each column. Now we are ready to make all of our categorical features.
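A sketch of this step:

# one-hot encode the categorical columns
# (drop_first removes one redundant level per column)
df_cat = pd.get_dummies(df[cols_cat], drop_first=True)
df_cat.head()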
In order to add the one-hot encoding columns to the dataframe, we use the concat function. axis = 1 is used to add the columns.
Let’s now save the column names of the categorical data to keep track of them.
Feature Engineering: Summary
Through this process we created 62 features for the machine learning model! We separated the features to the following:
9 numerical features
53 categorical features
We will create a new dataframe that only has the features and the OUTPUT_LABEL
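For example:

cols_input = cols_num + cols_all_cat
df_data = df[cols_input + ['OUTPUT_LABEL']]
print('number of features:', len(cols_input))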
Building Training/Validation/Test Samples
Up to this point we have explored our data and created features from the categorical data. It is now time to split our data. The reason we split the data is so that we can measure how well the model would do on unseen data. We split into three parts:
Training samples: these samples are used to train the model
Validation samples: these samples are held out from the training data and are used to make decisions on how to improve the model
Test samples: these samples are held out from all decisions and are used to test (measure) the generalized performance of the model
In this project, we will split into 70% train, 15% validation, and 15% test!
Let’s shuffle the samples using sample in case there was some order (e.g. all positive samples on top). Here n is the number of samples. random_state is just specified so the project is reproducible.
We can use sample again to extract 30% (using frac ) of the data to be used for the validation and test splits. An important note is that the validation and test sets should come from similar distributions, and this technique is one way to achieve that.
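# hold out 30% of the data for validation and test
df_valid_test = df_data.sample(frac=0.30, random_state=42)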
And now we can split into test and validation using 50% fraction.
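# split the held-out 30% in half: 15% test, 15% validation
df_test = df_valid_test.sample(frac=0.5, random_state=42)
df_valid = df_valid_test.drop(df_test.index)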
The .drop function just drops the rows from df_test to get the rows that were not part of the sample. We can use this same idea to get the training data.
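# everything that is not in the validation/test split is training data
df_train_all = df_data.drop(df_valid_test.index)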
At this junction, let’s check what percent of our groups are likely to subscribe to a term deposit. This is known as prevalence. Ideally, all three groups would have similar prevalence.
Now we can see that the prevalence is about the same for each group.
At this point, we might be tempted to drop the training data into a predictive model and see the outcome. However, if we do this, there's a chance that we will get back a model that is 89% accurate. But wait: we never caught any of the clients that will subscribe to a term deposit (recall = 0%). How can this be possible?
What is happening is that we have an imbalanced dataset where there are many more negatives than positives, so the model might just assign all samples as negative.
Typically, it is best practice to balance the data in some way to give the positives more weight. There are 3 techniques that are typically utilized:
sub-sample the more dominant class: using random subset of the negatives
over-sample the imbalanced class: using the same positive samples multiple times
create synthetic positive data
Usually, you will want to use the latter two methods if you only have a handful of positive cases. Since we have a few thousand positive cases, let's use the sub-sample approach. Here, we will create a balanced training, validation and test data set that has 50% positives and 50% negatives. You can also try tweaking this ratio to see if you can get an improvement.
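One way to sub-sample, shown here for the training split (the same helper idea can be applied to the validation and test splits as well):

rows_pos = df_train_all['OUTPUT_LABEL'] == 1
df_train_pos = df_train_all.loc[rows_pos]
df_train_neg = df_train_all.loc[~rows_pos]

# keep all positives and a random subset of negatives of the same size
n = len(df_train_pos)
df_train = pd.concat([df_train_pos,
                      df_train_neg.sample(n=n, random_state=42)], axis=0)

# shuffle the balanced training set
df_train = df_train.sample(n=len(df_train),
                           random_state=42).reset_index(drop=True)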
Most machine learning packages expect an input matrix X and an output vector y, so let's create those:
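For example:

X_train = df_train[cols_input].values
y_train = df_train['OUTPUT_LABEL'].values

X_train_all = df_train_all[cols_input].values
y_train_all = df_train_all['OUTPUT_LABEL'].values

X_valid = df_valid[cols_input].values
y_valid = df_valid['OUTPUT_LABEL'].values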
Machine learning models can run into trouble when the variables are on very different scales (0–100 vs. 0–1,000,000). To combat this, we can scale the data. Here we will use scikit-learn's StandardScaler, which removes the mean and scales to unit variance. I will create the scaler using all the training data, but you could also use the balanced one if you wanted.
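A sketch:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaler.fit(X_train_all)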
We are going to need this scaler for the test data, so let’s save it using a package called pickle .
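The file name below is arbitrary; any path will do:

import pickle

# save the fitted scaler so the exact same transformation can be reused later
pickle.dump(scaler, open('scaler.sav', 'wb'))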
Now we can go ahead and transform our data matrices:
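X_train_tf = scaler.transform(X_train)
X_valid_tf = scaler.transform(X_valid)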
We won’t transform the test matrix yet, to prevent us from being tempted to look at the performance until we are done with model selection.
Model Selection
Fantastic! We had to do a lot of work to prep the data, which is the norm in data science. You can spend up to 90% of your time cleaning and preparing data before analyzing it!
In this section, we train a few machine learning models and use a few techniques for optimizing them. We will then select the best model based on performance on the validation set.
We will utilize the following functions to evaluate the performance of the model — AUC (Area Under the ROC Curve), Accuracy, Recall, Precision, Specificity and F1!
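A compact version of these helpers, built on scikit-learn's metrics (specificity is just the recall of the negative class):

from sklearn.metrics import roc_auc_score, accuracy_score, \
    recall_score, precision_score, f1_score

def print_report(y_actual, y_pred_prob, thresh):
    # turn predicted probabilities into class labels at the given threshold
    y_pred = (y_pred_prob > thresh).astype(int)
    auc = roc_auc_score(y_actual, y_pred_prob)
    accuracy = accuracy_score(y_actual, y_pred)
    recall = recall_score(y_actual, y_pred)
    precision = precision_score(y_actual, y_pred)
    specificity = recall_score(y_actual, y_pred, pos_label=0)
    f1 = f1_score(y_actual, y_pred)
    print('AUC: %.3f, accuracy: %.3f, recall: %.3f, precision: %.3f, '
          'specificity: %.3f, F1: %.3f' %
          (auc, accuracy, recall, precision, specificity, f1))
    return auc, accuracy, recall, precision, specificity, f1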
Since we have balanced training data, let's set our threshold at 0.5 to label a predicted sample as positive.
Model Selection: Baseline models
In this section, we will first compare the model performance of the following 7 machine learning models using default hyperparameters:
K-Nearest Neighbors
Logistic Regression
Stochastic Gradient Descent
Naive Bayes
Decision Tree
Random Forest
Gradient Boosting Classifier
K Nearest Neighbors (KNN)
KNN is one of the simplest machine learning models. For each sample, KNN looks at the k closest datapoints and uses the fraction of them with positive labels as the predicted probability. This model is very easy to understand, versatile, and makes no assumptions about the structure of the data. KNN is also good for multivariate analysis. A caveat with this algorithm is that it is sensitive to the choice of k and takes a long time to evaluate when the number of training samples is large. We can fit KNN using the following code from scikit-learn:
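Something like this (n_neighbors is a hyperparameter you can tune; 100 is just a starting point):

from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=100)
knn.fit(X_train_tf, y_train)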
We can evaluate the model performance with the following code:
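Reusing the print_report helper defined above:

thresh = 0.5

y_train_preds = knn.predict_proba(X_train_tf)[:, 1]
y_valid_preds = knn.predict_proba(X_valid_tf)[:, 1]

print('KNN - training:')
print_report(y_train, y_train_preds, thresh)
print('KNN - validation:')
print_report(y_valid, y_valid_preds, thresh)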
To be brief, we will exclude the evaluation from the remaining models and only show the aggregated results below.
Logistic Regression
Logistic regression is a traditional machine learning model that fits a linear decision boundary between the positive and negative samples. Logistic regression uses the sigmoid function, an "S"-shaped curve, to predict whether the dependent variable is true or false based on the independent variables. One advantage of logistic regression is that the model is interpretable — we know which features are important for predicting positive or negative. Take note that the modeling is sensitive to the scaling of the features, which is why we scaled the features above. We can fit logistic regression using the following code from scikit-learn.
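A minimal version, using the default hyperparameters:

from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(random_state=42)
lr.fit(X_train_tf, y_train)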
Stochastic Gradient Descent
Stochastic gradient descent is similar to logistic regression, but it updates the model using small, random portions of the data instead of the whole dataset at once while predicting the output from the independent variables. This makes it faster than standard logistic regression, since each update only has to look at part of the data. We can fit a stochastic gradient descent classifier using the following code from scikit-learn.
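A sketch, using the logistic loss so that probabilities are available:

from sklearn.linear_model import SGDClassifier

# on older scikit-learn versions this loss is called 'log'
sgdc = SGDClassifier(loss='log_loss', random_state=42)
sgdc.fit(X_train_tf, y_train)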
Naive Bayes
Naive Bayes is a model traditionally used in machine learning. The algorithm uses Bayes' rule, which calculates the probability of an event given prior knowledge of the variables related to that event. The "naive" part is that the model assumes the input features are independent of each other given the class label. This works well in areas such as robotics and computer vision, but we can also try it here! We can fit Naive Bayes with the following code.
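For example, with the Gaussian variant:

from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(X_train_tf, y_train)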
Decision Tree
Another class of popular machine learning models is tree-based methods. The simplest tree-based method is known as a decision tree. The goal of a decision tree is to create a model that can predict the class or value of the target variable by learning simple decision rules derived from the training data. To predict a class label for a record, we start from the root of the tree and follow the decision rules down to a leaf. One advantage of tree-based methods is that they make no assumptions about the structure of the data and are able to pick up non-linear effects given sufficient tree depth. We can fit a decision tree using the following code.
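A sketch (max_depth is a hyperparameter worth tuning; 10 is just a starting point):

from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier(max_depth=10, random_state=42)
tree.fit(X_train_tf, y_train)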
Random forest
One disadvantage of decision trees is that they tend to overfit very easily by memorizing the training data. Overfitting occurs when a model learns the detail and noise in the training data to the extent that it negatively impacts the performance of the model on new data. Random forests were created to reduce this overfitting. In random forest models, multiple trees are created and their results are aggregated. The trees in a forest are decorrelated by using a random set of samples and a random subset of features in each tree. In most cases, random forests work better than decision trees because they are able to generalize more easily. To fit random forests, we can use the following code.
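A minimal sketch of the fit step:

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier()
rf.fit(X_train, y_train)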
Gradient Boosting Classifier
Boosting is a technique that builds decision trees sequentially, where each new tree focuses on correcting the errors made by the previous ones, gradually improving the predictions of the overall model. A model that combines this technique with a gradient descent algorithm (controlled by a learning rate) is known as a gradient boosting classifier. One sign of its strength is that the closely related XGBoost library has been a determining factor in winning a lot of Kaggle data science competitions! To fit the gradient boosting classifier, we can apply the following code.
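A minimal sketch of the fit step:

from sklearn.ensemble import GradientBoostingClassifier

gbc = GradientBoostingClassifier()
gbc.fit(X_train, y_train)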
Analysis of Baseline Models
The next step is to make a dataframe with the results of all the baseline models and plot the outcomes using a package called seaborn . We will utilize the AUC to evaluate the best model. This is a good data science performance metric for picking the best model since it captures the trade off between the true positive and false positive and does not require selecting a threshold.
As we can see most of the models (except decision tree) have similar performance on the validation set. There is some overfitting as noted by the drop between training and validation. Let’s check if we can improve this performance using a few more techniques.
Model Selection: Learning Curve
In this section, we can diagnose how our models are doing by plotting a learning curve. In this section, we will make use of the learning curve code from scikit-learn’s website with a small change of plotting the AUC instead of accuracy.
In the case of random forest, we can see the model has high variance because there is a large gap between the training and cross-validation scores. High variance causes an algorithm to model the noise in the training set (overfitting).
Depending on the learning curve, there are a few strategies we can employ to improve the models
High Variance:
- Reduce number of features
- Decrease model complexity
- Add regularization
- Add more samples
High Bias:
- Add new features
- Increase model complexity
- Reduce regularization
- Change model architecture
Model Selection: Feature Importance
A way of improving your models is to understand which features are important to them. This can usually only be investigated for simpler models such as logistic regression or random forests. This analysis can help in certain areas:
— inspire new feature ideas : assists with both high bias and high variance
— obtain a list of the top features to be used for feature reduction: helps with high variance
— point out errors in your pipeline: helps with robustness of model
We can get the feature importance from logistic regression using the below.
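The original snippet is not shown; one common way to do this, assuming the list of column names used for training is available as features, is to inspect the fitted coefficients:

import pandas as pd

coefficients = pd.DataFrame({
    'feature': features,          # column names used for training (assumption)
    'coefficient': lr.coef_[0]    # one coefficient per feature for binary classification
}).sort_values('coefficient')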
We can take a look at the top 50 positive and top 50 negative coefficients to get some insight.
After reviewing these charts, I realized that the features with the most impact on the model's predictions are cons.price.idx and euribor3m, due to their high importance scores. cons.price.idx is the consumer price index, which measures changes in the price level of a weighted average market basket of consumer goods and services purchased by households. A lower price index will encourage clients to subscribe to a term deposit. Similarly, euribor3m is the Euribor (Euro Interbank Offered Rate), the average interest rate at which banks provide short-term (3-month) loans. This is a metric that reflects clients' ability to pay off short-term loans.
In a high variance situation, a technique that can be used is to reduce the number of variables to minimize overfitting. After this analysis, you could keep only the top N positive and negative features, or the top N most important random forest features. You might need to adjust N so that performance does not drop drastically: for example, keeping only the single top feature will likely drop the performance by a lot.
Feature importance plots may also alert you to errors in your pipeline. You may have some data leakage in the cleaning process. Data leakage can be described as accidentally including something in the training data that allows the machine learning algorithm to cheat artificially. Similar things can also happen when you combine datasets; for example, when the datasets were merged, one of the classes may have ended up with NaN for some of the variables.
Model Selection: Hyperparameter Tuning
Hyperparameter tuning is the process of searching for the ideal model architecture: hyperparameters are the parameters that define that architecture. We are only going to optimize the hyperparameters for stochastic gradient descent, random forest, and the gradient boosting classifier. We will not optimize KNN since it takes a while to train, and we will not optimize logistic regression since it performs similarly to stochastic gradient descent. Similarly, we will not optimize decision trees since they tend to overfit and perform worse than random forests and gradient boosting classifiers.
A good tool for hyperparameter tuning is Grid search — where grid values are tested using all possible combinations. This is a computationally intensive method. Another option is to randomly test a permutation of them. This technique is called Random Search and is also deployed in scikit-learn.
Now, we can create a grid over the random forest hyperparameters.
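The exact grid used in the original post is not reproduced here; a typical random forest grid might look like this:

param_grid = {
    'n_estimators': [50, 100, 200, 400],
    'max_depth': [5, 10, 20, None],
    'max_features': ['sqrt', 'log2', None],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4],
}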
To implement the RandomizedSearchCV function, we need something to score or evaluate a set of hyperparameters. Here we will use the AUC.
The three important parameters of RandomizedSearchCV are
scoring = evaluation metric used to pick the best model
n_iter = number of different combinations
cv = number of cross-validation splits
Note that increasing the last two of these will increase the run-time, but will decrease chance of overfitting. The number of variables and grid size also influences the runtime. Cross-validation is a method for splitting the data multiple times to get a better estimate of the performance metric. For the purposes of this project, we will limit to 2 CV to reduce the time.
Let’s fit our Randomized Search random forest with the following code.
We can analyze the performance of the best model compared to the baseline model.
In the same way, we can optimize the performance of the stochastic gradient descent and gradient boosting classifiers.
Here we can aggregate the results and compare them to the baseline models on the validation set.
Looking at the results, we can see that the hyperparameter tuning improved the models, but not by much. This is most likely due to the fact that we have a high variance situation.
Model Selection: Best Classifier
In this phase, we will choose the gradient boosting classifier since it has the best AUC on the validation set. You won't want to train your best classifier every time you want to run new predictions. Therefore, we need to save the classifier. We will use the package pickle .
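A minimal sketch, assuming the optimized classifier is held in a variable called best_gbc:

import pickle

with open('best_classifier.pkl', 'wb') as f:
    pickle.dump(best_gbc, f)

# later, load it back to score new data without retraining
with open('best_classifier.pkl', 'rb') as f:
    clf = pickle.load(f)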
Model Evaluation
Now that we have chosen our best model (optimized gradient boosting classifier). Let’s evaluate the performance of the test set.
Lastly, the final evaluation is shown below!
Additionally, we can create the ROC curve for the 3 datasets as shown below:
Conclusion
Through this project, we created a machine learning model that is able to predict how likely clients will subscribe to a bank term deposit. The best model was gradient boosting classifier with optimized hyperparameters. Our model’s test performance (AUC) is 79.5%. A precision of 0.82 divided by a prevalence of 0.50 gives us 1.6, which means the model helps us 1.6 times better than randomly guessing. The model was able to catch 62% of customers that will subscribe to a term deposit. We should focus on targeting customers with high cons.price.idx (consumer price index) and euribor3m (3 month indicator for paying off loans) as they are high importance features for the model and business. Therefore, we save time and money knowing the characteristics of clients we should market to and that will lead to increased growth and revenue.
References
S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22–31, June 2014
A. Long. Using Machine Learning to Predict Hospital Readmission for Patients with Diabetes with Scikit-Learn. October 2018 | https://medium.com/swlh/using-machine-learning-to-predict-subscription-to-bank-term-deposits-for-clients-with-python-aec8a4690807 | ['Emeka Efidi'] | 2020-06-15 02:10:24.324000+00:00 | ['Data Science', 'Python', 'Machine Learning', 'Marketing', 'Scikit Learn'] |
The Brave Writer Submission Guidelines | How to become a writer?
When your post is ready, submit via this form.
We will add you as a writer within 48 hours or so. Requests to contribute via email, Facebook message, or any other means will be ignored.
Once added as a contributor, you will be able to submit your drafts to The Brave Writer directly from your Medium Profile. If you don’t know how to do it, read Medium guide here.
If you don’t hear back from us within three business days, please assume that we kindly passed on your submission.
How can I improve my chances of getting approved as a writer?
Other than following these guidelines, you can also take a look at our Editor’s Notes Column.
In this column, our editors share their advice on how to edit your pieces, how to pitch your stories, and how to improve your chances of having your work accepted for publication.
Check it out here. | https://medium.com/the-brave-writer/write-for-us-8d2e05a028d5 | ['Maria Angel Ferrero'] | 2020-12-29 11:14:28.471000+00:00 | ['Writing Tips', 'Write For Us', 'Self Improvement', 'Writing', 'Marketing'] |
Apple Once Passed on Acquiring Tesla | When notes leaked that Apple was planning to begin production of an electric “iCar” by 2025, Elon Musk responded. At first, he questioned Apple’s tactics for producing safe and efficient batteries and whether they actually gave the company a competitive advantage over Tesla.
Then, he dropped this bomb:
In 2017, perhaps the time period in which Musk is referring, Tesla wasn’t far from death. On “Axios on HBO,” Musk said his company was “within single-digit weeks” of folding as they struggled to ramp-up Model 3 mass production.
Musk also said he “personally redesigned the whole battery pack production line and ran it for three weeks,” trying to paint the picture of how dire the company’s situation was.
Musk said the amount of work he put in to save the company hurt his brain and his heart and that no one should put in the amount of work he did.
These reflections explain why Musk reached out to Apple. The company was bleeding money, as Musk said. While Tesla probably wouldn’t have actually died in 2017, it would have had to get money from somewhere.
An acquisition from a large-cap company like Apple would have taken all that stress off of Musk. He would have the financial backing that could keep Tesla afloat.
In a separate interview in 2018, Musk said ramping up the Model 3 production was a “bet-the-company” situation — a situation he didn’t see Tesla being in again.
Two years later, Tesla is worth over $612 billion, making it the most valuable car company in the world — more valuable than the next six car companies combined, mainly due to its stock market returns.
Musk said Apple would have acquired Tesla at 1/10 of its current valuation, meaning around $60 billion. Hindsight is 20/20, but that is obviously a huge miss for Apple.
Apple CEO Tim Cook passing on the meeting doesn’t make much sense. Tesla, a car manufacturer, is obviously in a much different space than Apple is with its technology. But part of what makes Tesla so unique is that it is, too, a technology company.
And, according to the Reuters “iCar” report, the two companies seem to now have similar goals.
The report states that Apple’s new car plans, referred to as “Project Titan,” have been on-and-off since 2014 when the company originally planned to produce a vehicle. Apple eventually stepped away from the plan but reignited it in 2018 by re-hiring veteran employee Doug Field, who returned after working at Tesla, to head the project.
After more than a year of building a team behind the scenes, Apple now feels comfortable moving forward with making a consumer product — an electric, self-driving vehicle.
There is doubt within the report, however, that Apple could see a profit on vehicle production in any reasonable timeframe. It took Tesla 17 years to become profitable. As discussed earlier with Tesla, mass-producing these types of vehicles can become a money pit.
Apple will supposedly not begin production until 2025, giving them plenty of time to find suppliers of technology and manufacturers for the car. By that time, though, every other company working to produce electric and autonomous vehicles will be miles and miles ahead of Apple.
“As we see with Tesla and the legacy auto companies, having a very complex manufacturing network around the globe doesn’t happen overnight,” Trip Miller, managing partner at Gullane Capital Partners, said in the report.
In 2020, Musk became the second-richest person in the world thanks to Tesla’s extreme success in the stock market — returning nearly 700% in the past year.
Had the company been sold to Apple three years ago, Musk would not be in the position he is in now and Tesla might not be either.
Apple is doing just fine with a valuation of over $2.2 trillion. Moving forward, though, it has to try to find a way to compete with the tech-auto giant that it could have had a stake in. | https://medium.com/swlh/apple-once-passed-on-acquiring-tesla-d78b2f58d389 | ['Dylan Hughes'] | 2020-12-24 22:33:20.102000+00:00 | ['Transportation', 'Technology', 'Apple', 'Sustainability', 'Tesla'] |
Achieving material efficiency | A new economic model which puts the emphasis on the recycling and reusing of materials and products is emerging, as concerns for the environment escalate. The circular economy calls for a radical shift in production and consumption. Continual cycles recover and restore products, components and materials through strategies such as reuse, repair, remanufacture and, ultimately, recycling.
Material efficiency is an essential part of the circular economy. It consists of the preservation of materials by making products more durable and repairable. It also facilitates the recovery and recycling of material at the end of the product life. The ultimate objective of material efficiency is to keep materials in use for as long as possible — and potentially forever.
Material efficiency can be placed into a hierarchy during a product’s use and waste phases. The most favourable strategies call for the design of products associated with a longer product life using the least amount of natural resources, while the least favourable strategies represent the loss of a material resource by incinerating the material and recovering its energy. In a truly circular economy, landfills are not an acceptable option.
Hierarchy in the make and use phases
The highest value is given to strategies associated with longer product life and the minimal use of natural resources. Products should be constructed to consume the least amount of resources and be designed to last for very long periods of time. In the use phase, strategies are identified to keep materials in use by extending the lifetime of a product.
Strategies associated with making the product can be extending the lifetime of products or using less raw materials. This is possible by designing products that make use of fewer raw materials and that can last for very long periods of time.
Strategies associated with the use phase of the products are developed such that the lifetime can be extended through for example, reuse, repair and upgrades, as well as refurbishment and the remanufacture of products. However, repair is preferred over refurbishment since the product is only minimally changed and thus fewer resources and energy are needed. With a repair, the product provides the same function, and resources are only used to bring it back to working condition. With refurbishment, however, additional resources are needed to bring the product to its original condition in addition to the resources needed for the resale, delivery and installation of the product.
Hierarchy in the waste phase
When a product reaches the waste phase, much of the value of the material has already been lost since the product is no longer in use.
While it is possible to keep the materials of the product in use through recycling, a significant loss in the value of the product has occurred. Recycling should therefore be viewed as an option of last resort since significant amounts of resources and energy will be necessary not only to recycle the materials themselves, but also to make a new product from the recycled materials.
As can be expected, the greatest loss of materials occurs when the material is incinerated and the energy recovered, or when it is disposed in a landfill since it is no longer in use: the circular cycle is broken.
Designing products for material efficiency
Manufacturers can address material efficiency when designing their products. Each stage of the use and waste phases of a product should be taken into consideration to allow for material efficiency to be facilitated.
In the design phase, manufacturers should consider the materials used in the construction of a product. For example, they can try to reduce the amount of materials used by optimizing the product design, and by selecting recycled materials or reused components. Focusing on the use phase, products should be designed in such a way that their lifetime can be extended by making them easy to repair and upgrade or reuse. Since products will be reused (including refurbishment or remanufacture), and thus have multiple owners, manufacturers should also facilitate upgrades of soft- and hardware and the removal of sensitive data. Also, parts should be designed to endure multiple cleaning and dis- and reassembly cycles.
Products should be designed for an efficient end-of-life. This means that useful materials and components can be easily and safely recuperated by, for instance, making the product easy to disassemble.
How standards can help
The IEC is examining the requirements for material efficiency.
To facilitate products to last longer, standards are needed to ensure that, amongst others, product safety, performance and reliability are sufficiently taken into account. Issues such as data removal and security must also be considered as products are reused and change ownership. Moreover, a holistic approach is needed to ensure that the protection of the environment is not detrimental to areas such as product safety, EMC and performance.
Legislation is expected to require the increased use of used parts as well as products that can more easily be repaired or remanufactured. We will need standardized methods and tools to assess aspects such as the proportion of reused components or recycled content in a product, and how to assess the ease (or difficulty) with which a product can be repaired or remanufactured. Also, standards will be needed to guarantee the properties of the used material, as well as to define the requirements for parts reliability.
Within the IEC, several committees have developed standards that support material efficiency for electrical and electronic products. Some examples include:
IEC TR 62635 with information on product end of life, including the recyclability rate calculation.
IEC TR 62824 with guidance about material efficiency considerations in the eco-design of products.
IEC 62309 which examines the dependability of products containing used parts.
IEC 63077 which specifies the process for ensuring the performance and safety of refurbished medical imaging equipment.
Additional standards are currently under development. For example, in TC 111 a proposal for a new standard to assess the proportion of reused components in products is currently under vote. TC 111 is also preparing a standard covering principles of product circularity in environmental conscious design while TC 62 and TC 2 are developing standards on the refurbishment of medical equipment and rotating machinery, respectively.
New standards covering requirements for material efficiency in the design of products, such as circular ready design, are needed and plans are underway to start such standardization work in the IEC. | https://medium.com/e-tech/achieving-material-efficiency-7c31a426aac0 | [] | 2020-04-08 10:23:54.563000+00:00 | ['Economy', 'Environment', 'Circular Econonomy', 'Material Efficiency', 'Sustainability'] |
Spring Boot Microservices — Implementing Circuit Breaker | Spring Boot Microservices — Implementing Circuit Breaker
In this article, we will learn the fundamentals of one of the must-have patterns in the microservices world — the Circuit Breaker. We will build a sample implementation based on Spring Boot, Spring Cloud & Resilience4j. This is the sixth part of our Spring Boot Microservices series.
What is Circuit Breaker?
As the name suggests, the pattern derives its inspiration from the electrical switches, which are designed to protect an electrical circuit from damage, caused by excess current from an overload.
When a particular microservice or resource is not responding, this pattern helps in registering the fault, switching off the communication, and restoring it back when the service is ready to serve the requests. This helps the microservice ecosystem in multiple ways —
1. It handles the service failure and exits gracefully.
2. It helps in reducing the overload on a service that is already stressed.
3. It stops the spread of failure across other services.
Let's try to understand the pattern with a real world example. We are working in the e-commerce domain and our system is built on Microservices Architecture. For simplicity, let's consider two services.
The first is Product Catalog Service, responsible to manage product lifecycle through — create, update, delete, and get operations. And the second is Product Inventory Service, responsible to manage product inventory through add, update, and get operations.
Get Product Details — Api
Here’s the typical call from our e-commerce store. It calls the Product Catalog Service to get the product details. It has the basic product information including title, description, price but not the available quantity. To get the quantity, it calls the Product Inventory Service.
What if the Product Inventory Service is down? If you don’t handle this scenario, it will result in the failure of Product Catalog Service too. The situation can get worse if the Product Inventory Service is slow in responding. This means it's already consuming resources to its limit.
Let's say, another service called Order Management Service is consuming the Product Inventory Service at this point in time. If the Inventory Service is bombarded with multiple calls from Product Catalog Service, at the same time, it will result in the failure of Order Management Service too.
This is an example of cascading failure, which can propagate to the whole system if not handled correctly. Simple exception handling is not good enough in this case. The Circuit Breaker pattern provides an elegant, maintainable, and flexible approach to handle such failures.
How does the pattern work?
Resilience4j has done a good job in explaining how the pattern works. The Circuit Breaker pattern is implemented with three states: CLOSED, OPEN and HALF-OPEN.
Circuit Breaker — An Illustration
The Circuit Breaker sits right in the middle of the call to external service/resource. In our case when Product Catalog Service calls the Product Inventory Service, the call will go through the Circuit Breaker component.
The Circuit Breaker will be in a CLOSED state by default. Let's say the configured threshold is 10%. This means that if 10 out of 100 requests fail, the failure threshold is considered reached. At this point, the circuit breaker will move to the OPEN state. After a while, it will move to the HALF-OPEN state to check the status of the Product Inventory Service. At this point, it will open the communication channel at a limited rate. If the rate of failure continues to be above the threshold value (10%), it will move to the OPEN state again. If not, it will move to the CLOSED state and the expected communication will be resumed.
With this pattern in place, you can always exit gracefully and control the rate of transaction flows as per the service availability.
Sample Implementation
Let's do the sample implementation to see how it works on the ground. We will base the implementation on Spring Boot, Spring Cloud, and Resilience4j. This implementation is also based on Project Reactor, which means we will be using Spring WebFlux and the Spring Cloud Reactive Circuit Breaker. I have divided the exercise into the following three parts.
1. Building Product Inventory Service — We will implement the add, update, and get inventory APIs of this service.
2. Building Product Catalog Service — We will implement the create, update, delete, and get product APIs of this service. Our primary focus will be on the GET API though.
3. Implementing Circuit Breaker — We will implement the circuit breaker in the get Product Details API of the Product Catalog Service.
Also, please ensure that you have Java 11 and Maven 3.x. Any version of Java beyond 1.8 should work but I did not validate for other versions.
Building Product Inventory Service
As discussed our Product Inventory Service will consist of addProductInventory, updateProductInventory, and getProductInventory APIs. We will use MongoDB for data persistence. Let's go to the spring initializer and generate the project structure.
Add Spring Reactive Web, Spring Data Reactive MongoDB as dependencies. Generate, download, and unpack the archive at your local system.
spring initializer — product-inventory service
Let's implement the restful APIs in ProductInventoryService.java . Update the file with the following code.
@RestController
public class ProductInventoryService {

    @Autowired
    private ReactiveMongoTemplate mongoTemplate;

    @PostMapping("/inventory")
    public Mono<ProductInventory> addProductInventory(@RequestBody ProductInventory product) {
        return mongoTemplate.insert(product);
    }

    @PutMapping("/inventory")
    public Mono<ProductInventory> updateProductInventory(@RequestBody ProductInventory product) {
        return mongoTemplate.save(product);
    }

    @GetMapping("/inventory/{id}")
    public Mono<ProductInventory> getProductInventory(@PathVariable String id) {
        return mongoTemplate.findById(id, ProductInventory.class);
    }
}
We are using ProductInventory as the data object, so let's create another class for this — ProductInventory.java
public class ProductInventory {

    @Id
    private String productId;
    private int quantity;

    // getters and setters
    ...
}
We will be running this service on port 8082. Update src/main/resources/application.properties file with the server port and MongoDB connection details. I am using the hosted MongoDB instance, available as part of the free trial from MongoDB. You can choose any other MongoDB instance, as per your choice.
server.port=8082
spring.data.mongodb.uri=mongodb+srv://xxx-user:xxx-pwdY@cluster0.nrsv6.gcp.mongodb.net/ecommerce
Our Product Inventory Service is ready to function now. Start the service with the maven command mvn spring-boot:run . You can use addProductInventory API to insert some sample records. Here is the sample post request I used.
{
"productId": "test-product-123",
"quantity": 30
}
Building Product Catalog Service
Similar to the Product Inventory Service, create the Product Catalog Service. We are going to add two more dependencies — Spring Cloud Circuit Breaker (with Resilience4j), and Spring Boot Actuator (optional).
spring initializer — Product Catalog Service
Let's create the class ProductCatalogService.java to implement our restful APIs as below.
@RestController
public class ProductCatalogService {

    @Autowired
    private ReactiveMongoTemplate mongoTemplate;

    @Autowired
    private WebClient webClient;

    @Autowired
    private ReactiveCircuitBreakerFactory cbFactory;

    @GetMapping("/product/{id}")
    public Mono<ProductDetails> getProductDetails(@PathVariable String id) {
        Mono<ProductDetails> productDetailsMono = mongoTemplate.findById(id, ProductDetails.class);

        Mono<ProductInventory> inventoryMono = webClient.get()
                .uri("http://localhost:8082/inventory/" + id)
                .retrieve()
                .bodyToMono(ProductInventory.class);

        Mono<ProductDetails> mergedProductDetails = Mono.zip(productDetailsMono, inventoryMono,
                (productDetails, inventory) -> {
                    productDetails.setInventory(inventory);
                    return productDetails;
                });

        return mergedProductDetails;
    }

    @PostMapping("/product")
    public Mono<Product> addProduct(@RequestBody Product product) {
        return mongoTemplate.insert(product);
    }

    // ...
}
If you take a close look at the getProductDetails API, we are first getting the basic product details and then calling the Product Inventory Service (running at port 8082) to get the inventory details. We are combining both the results before returning the response.
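One detail not shown above: the WebClient being autowired must be exposed as a Spring bean somewhere in the project. The original configuration is not reproduced here, but a minimal sketch could look like this:

@Configuration
public class WebClientConfig {

    @Bean
    public WebClient webClient(WebClient.Builder builder) {
        // builds a default reactive WebClient that the catalog service can inject
        return builder.build();
    }
}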
Update the src/main/resources/application.properties with the MongoDB connection details as discussed above. Start the service with mvn spring-boot:run command. The Product Catalog Service is ready for use now.
Insert the sample product records with the createProduct API. Here is the sample post request, I used.
{
"id":"test-product-123",
"title":"test-product-1",
"desc":"test product 1",
"imagePath":"gc://image-path",
"unitPrice":10.00
}
Let's call the getProductDetails API by accessing http://localhost:8080/cb/product/test-product-123 . We should get the result something like —
{
"id":"test-product-123",
"title":"test product for circuit breaker",
"desc":"test product updated",
"imagePath":"gc://image-path",
"unitPrice":10.0,
"inventory":{
"productId":"test-product-123",
"quantity":30
}
}
Great! So our base platform is set. Both the services are working perfectly as expected. It's time to implement the Circuit Breaker pattern.
Implementing Circuit Breaker
Before implementing this pattern, let's shut down the Product Inventory Service and see the impact. While this service is not running, access the getProductDetails API of the Product Catalog Service by browsing to http://localhost:8081/cb/product/test-product-123 . You will see an error somewhat similar to this —
[70118947-6] There was an unexpected error (type=Internal Server Error, status=500).
This will be the result if we do not handle the service failures. With the Circuit Breaker pattern, we will be implementing a fallback mechanism. In our case, it's going to be “returning a blank object”. With this, the store can still display the product details, with no availability and we will be handling the worst-case scenario gracefully.
Let's update the getProductDetails API as below.
@RestController
public class ProductCatalogService {

    @Autowired
    private ReactiveCircuitBreakerFactory cbFactory;

    @GetMapping("/product/{id}")
    public Mono<ProductDetails> getProductDetailsV2(@PathVariable String id) {
        Mono<ProductDetails> productDetailsMono = mongoTemplate.findById(id, ProductDetails.class);

        Mono<ProductInventory> inventoryMono = webClient.get()
                .uri("http://localhost:8082/inventory/" + id)
                .retrieve()
                .bodyToMono(ProductInventory.class)
                .transform(
                        it -> cbFactory.create("inventory-service")
                                .run(it, throwable -> {
                                    return Mono.just(new ProductInventory());
                                })
                );

        Mono<ProductDetails> mergedProductDetails = Mono.zip(productDetailsMono, inventoryMono,
                (productDetails, inventory) -> {
                    productDetails.setInventory(inventory);
                    return productDetails;
                });

        return mergedProductDetails;
    }
}
When the exception occurs in the call, its returning blank ProductInventory object.
With the help of ReactiveCircuitBreakerFactory , we are creating the Circuit Breaker instance based on inventory-service configuration. We need to include the following configuration in src/main/resources/application.yml —
resilience4j.circuitbreaker:
  instances:
    inventory-service:
      failureRateThreshold: 50
      minimumNumberOfCalls: 20
      slidingWindowType: TIME_BASED
      slidingWindowSize: 10
      waitDurationInOpenState: 50s
      permittedNumberOfCallsInHalfOpenState: 3
We will touch upon all these attributes in a while. Before that, let's restart our Product Catalog Service and access the getProductDetails API. Voilà! You get the product details even though the Product Inventory Service is down.
{
"id": "test-product-123",
"title": "test-product-123_1",
"desc": "test product updated",
"imagePath": "gc://image-path",
"unitPrice": 10,
"inventory": {
"productId": null,
"quantity": 0
}
}
Though Spring Cloud and Resilience4j made it look easy, a lot of magic is happening behind the scenes, to make the pattern work.
As discussed earlier, the Circuit Breaker will be in a CLOSED state initially. When we hit the getProductDetails API for the first time and the Product Inventory Service is not available, it executes the fallback operation — returning the blank object. Let's see how each of the attributes defined in application.yml helps us.
failureRateThreshold — The circuit breaker will remain in the CLOSED state until the failure rate threshold is reached. In our case, this value is 50%, which means that if 5 out of 10 requests fail, the threshold is reached. This moves the circuit breaker to the OPEN state, which means it will stop making unnecessary network calls to the Product Inventory Service.
minimumNumberOfCalls — This attribute ensures the failure rate is calculated only once a minimum number of calls has been executed. In our case, 20 requests must be processed before the failure rate calculation starts.
slidingWindowType — This attribute configures the failure rate calculation mechanism. The failure rate can be calculated based either on time or on the count of requests. For instance, we can consider the requests from the last 5 minutes, or the last 50 requests. In our case, we are defining a time-based window — TIME_BASED.
slidingWindowSize — With this attribute, we define the window size, which is 10 seconds in our case. If 10 requests are processed in the 10-second window and 4 of them fail, the failure rate will be 40%.
permittedNumberOfCallsInHalfOpenState — Once the circuit breaker moves to the OPEN state, after a while it moves to the HALF-OPEN state. In this state, it communicates with the Product Inventory Service in a limited manner. This attribute defines that limit. In our case, the limit is 3, which means only 3 requests will be processed in a 10-second window.
Hurray! We just implemented a real-life example of the Circuit Breaker pattern. What next …?
Next Steps
The primary objective of this article is to develop a high-level understanding of how the Circuit Breaker pattern works. The article covered the basic implementation of this pattern but as you adopt it in your services, you might need to customize it from multiple aspects.
We just used one of the modules of the Spring Cloud Circuit Breaker library. It supports other implementations too including Netflix Hystrix, Sentinel, and Spring Retry. Check out the Spring Cloud Circuit Breaker documentation for more details.
Our implementation was based on reactive code, but Spring Cloud supports non-reactive implementations (for e.g Spring MVC) as well. You can again find more details on this at Spring Cloud Circuit Breaker page.
Resilience4j provides a configurable and cleaner option to implement a Circuit Breaker for java based applications. We used only a few of the attributes for configuration. You can refer to the complete list of attributes, available with Resilience documentation.
Also, Circuit Breaker is just one feature offered by the library. It also offers RateLimiter, Retry, Bulkhead, Time Limiter, and Cache decorators. You can combine multiple decorators to make the service calls. For instance Bulkhead, RateLimiter, and Retry decorator can be combined with the CircuitBreaker decorator. For more details visit the documentation here. | https://medium.com/swlh/spring-boot-microservices-implementing-circuit-breaker-16018781ce70 | ['Lal Verma'] | 2020-12-07 00:58:30.597000+00:00 | ['Microservices', 'Spring Boot', 'Circuit Breaker', 'Spring Cloud', 'Software Engineering'] |
SaaS Leaders’ Top 9 Marketing Trends to Watch in 2017 | Sangram Vajre
8. Storytelling and the Value of In-Person Connections
In-person events and conferences
I have a renewed appreciation, value, and understanding for what in-person events can do for companies. And it doesn’t need to be on the scale of Dreamforce — companies should be doing their own events whether there are 20 attendees, 200 attendees, or 20,000 attendees. At Terminus, we have put on the FlipMyFunnel roadshow conference almost 10 times and created a massive conference with over 25 sponsors and 35 speakers in Atlanta last December. We focused entirely on thought leadership and the concept of FlipMyFunnel and account-based marketing (ABM), not on pitching Terminus products. We focus on the problem and attendees know where to find a solution.
We were able to do all of this with a small marketing team and when Terminus was still very young. I can’t stress enough the importance of being in-person with your customers and the people you want to be your customers.
Storytelling
Every 5 years, we have seen a major shift in the marketing landscape. 2000 was the year of email marketing and marketing automation became the new shiny object around 2005. In 2010, we started to have lead overload, so predictive lead scoring became the new thing. Now in 2017, you look at all of this innovation and we’re still engaging with people over email and phone calls. ABM has become the new marketing technology and strategy because we now know what accounts to go after and we are being smart about engaging people “on their terms” through traditional and emerging channels.
I believe in the future, though, we will go back to the Mad Men era of marketing that focused on storytelling. Your marketing is not going to work if you’re not putting the right message in front of the right person with the right story.
Storytelling will become the main job of the marketer.
Sangram Vajre
CMO & Co-Founder, Terminus | https://medium.com/high-alpha/saas-leaders-top-9-marketing-trends-to-watch-in-2017-5ca907e2cac9 | ['Drew Beechler'] | 2020-02-11 21:29:26.767000+00:00 | ['Digital Marketing', 'Marketing', 'Venture Capital', 'Startup', 'SaaS'] |
How To Build a Payments Data Team | Paying with Apple Pay on a Square Device
In the last couple of years, we have seen a growing number of acquisitions in the Payments Industry. From large investors buying established global companies, like San Francisco Partners -> Verifone, to strategic acquisitions for capabilities, geo-expansion or consolidation purposes, like ING -> PayVision, WorldFirst -> Wyre or Worldline -> SIX. However, the companies that don’t seem to be bought up just yet, are the ones who have made leveraging their Data a key strategic objective. In this blog, I share how Payments (applies to other FinTech’s and Tech as well) companies, can build a Data Team and outperform their competitors.
Necessity vs. Luxury
To be fair, the payments industry has changed drastically in the last ten years, and like all changes this wasn’t because of the large enterprise companies, but because of lean startups innovating and finding new ways to include payments in their product and process. The enterprise “startups”, we look up to now, including Facebook, Spotify, Uber and Netflix, have been extremely crucial in defining new E-Commerce categories, that didn’t exist ten+ years ago. The major thing these new enterprises bring, is their data-driven way of working.
Instead of focussing on how much revenue was being processed, these companies focused more on the efficiency of their overall platform. From conversion rates to authorisation rates. Using their data-driven way of working, they influenced a lot of the “new” payments companies, to provide as much data as possible. As these new companies are overtaking some the older companies, Payments companies both old and new have to adapt and continuously provide data.
Building a Payments Data Team
Payments is a technical product, which means that especially in the beginning of building a Gateway or Acquirer, the Development team calls the shot. If I were to build a new payments company today, that wouldn’t be very different. However, by keeping in mind that at one point Business Users are going to start asking questions, it is best to start with an infrastructure that will be able to retrieve the data and insights necessary to answer those questions.
1. Hire a Manager of Data (Head of Data, VP of Data, etc.), preferably with experience in Payments.
Because the business is going to have to transition from Development focus to Business focus, you need a Manager of Data, who understands both sides, knows how to communicate with both and is able to assemble a team to execute on the strategy and vision. The beginning period of a Manager of Data, is spend with both management and business users. Understanding the business and what management wants to achieve, while also having conversations with the business users, to understand what they need to do their job. Having gathered all the requirements, the Manager should develop a Data Strategy, which should be the vision for the business to build on.
2. Hire a Data Architect
Too often companies try to run before they can walk. The reason you hire a Data Architect, is because the infrastructure that is in place, has been build for the Development side of the business. Payments has for a long time, been very structured in it’s data, however due to E-Commerce, M-Commerce and IoT-Commerce, an increasing size of data is becoming more unstructured. That is why if you want to retrieve data and derive insights, you need to build a separate Data Infrastructure. A Data Architect, will be able to review the current infrastructure, leverage existing parts, and design an infrastructure that does not interfere with the existing infrastructure, while at the same time give the Data Team the resources necessary to support the organisation.
I often get the question, why can’t I just let my Data Engineer design the infrastructure, to which I reply, if you are building a house would you hire an architect first or a builder. The Architect makes the plan, takes into consideration all the pro’s and con’s, uses his experience (mostly as a Data Engineer), to design an infrastructure that works in the short-term and can be build upon in the long-term.
3. Hire a Data Engineer
A great Data Engineer, is up-to-date with the latest Cloud-Technologies, and at least a master in SQL and Python. Most Data Engineers will prefer working in AWS, as this is the most technology driven platform, which has been around the longest. However, with the increase in more Modern Cloud Technologies like Matillion & Fivetran for ETL’ing (Extract, Transform & Loading) and Snowflake and Google BigQuery for Data-Warehousing-as-a-Service, a lot of infrastructures are becoming less about complexity but more about speed, agility and quickness. With the rapid change of data in its volume, velocity and variety, Data Engineers who know how to find the right tool for the job, get more done and help get your team the right results.
4. Hire a Data Analyst and or BI Analyst
After some time designing and building, the next hire should be a Data Analyst, who preferably has some business background, but is far more advanced in using Databases and querying hard to retrieve data. Using programming languages like R or Python, to do more complex Data Analysis than Excel is able to do. Another reason you want a Data Analyst with at least some programming skills, is that in most cases the data could use some cleaning and transformation, before it becomes the type of data that can be analysed by either the Data Analyst or Business User.
As the capabilities of this team will continue to grow and the number of similar query requests increases, the Data Analyst will be key in transforming his queries into Dashboards, which can be distributed throughout the organisation. By translating the language of the business into a Modelling Layer and using a Data Platform like Looker, the role of the Data Team will start to shift from controller to liberator of the data, by giving all Business Users the ability to ask as many questions as they want and the freedom to explore the data, how they see fit.
5. Hire a Data Scientist
Only when the previous four hires have been successful in developing a Self-Serve Data Platform, is when the Data Scientist can come in and help the organisation become truly Data-Driven. A great Data Scientist, can focus on working with teams to develop Data-Driven Applications. In the Payments Industry, that could mean developing Predictive Applications like Acquiring Routing (based on historical data and weights, select the optimal route), Dynamic Authorisation (inputting or removing data, before submitting a transaction) or building a Fraud Engine, that is able to predict which transactions are fraudulent and which are not.
Have you build your Data Team yet?
If you are a business owner or data falls under your responsibilities, you might be wondering, if this is actually worth it. If we look at companies like Facebook, Uber, Netflix and Spotify, we can see that being in control of your data, doesn’t just lead to better decisions and improved operations, but it can actually make you more money. For the payments industry that is no different, leaders in the space include Stripe, Square, Transferwise and Plaid, the common denominator, they use Data better than all their competitors.
So, have you build your Data Team yet?
Thanks for reading ;) , if you enjoyed it, hit the applause button below, it would mean a lot to me and it would help others to see the story. Let me know what you think by reaching out on Twitter, Linkedin or at DataBright. Or follow me to read my posts on Data Science, Payments and Product Management. | https://towardsdatascience.com/how-to-build-a-payments-data-team-8c76e0048e0 | ['Dwayne Gefferie'] | 2019-06-19 11:54:38.940000+00:00 | ['Analytics', 'Data Science', 'Payments', 'Artificial Intelligence', 'Startup'] |
Logs in Kubernetes: expectations vs reality | It’s 2020 already, and there is still no common solution for aggregating logs in Kubernetes. In this article, we would like to share our ideas, problems encountered, and solutions — all these with the help of real-life examples. Generally speaking, most of the things described here can be applied not just to Kubernetes but any kind of modern infrastructure…
For a start, I’d like to note that there are radically different ideas of what logging is:
someone would like to see security and audit logs only;
someone else prefers the centralized logging of the entire infrastructure;
while the other one wants to see the application logs excluding, for example, load balancers.
Here is how we have implemented different functionality and overcome various constraints. However, let’s start with a brief theory.
A bit of theory: Logging tools
Origins of the components of the logging system
Logging as a discipline boasts an eventful and rich history. In the end, we have got the methodology for collecting and analyzing logs that is actively used today. Back in the 1950s, the analogue of standard input/output streams was introduced in Fortran that helped programmers debug their programs. These were the first attempts at logging, and these tools made life easier for programmers of those days. Today we consider them one of the earliest components of the logging system — the source, or “producer” of logs.
Meanwhile, computer science continued to evolve: computer networks and first clusters have emerged… The first complex systems consisting of several machines have been introduced. Now, system administrators were forced to collect logs from multiple machines, and in some particular cases, they could even accumulate OS kernel messages to investigate a system failure. In the early 2000s, RFC 3164 was introduced. It standardized remote_syslog and provided the basis for defining centralized logging systems. That is how another essential component — the collector of logs and a mechanism to store them — has emerged.
The growing volume of logs and the widespread adoption of web technologies have led to the question of how to present logs to the user in the most accessible form. Simple console tools (awk/sed/grep) were replaced by more advanced log viewers. It was the third component of the modern logging system.
The increasing amount of logs made it clear: we need logs, but not all of them. Moreover, it turned out that different types of logs have varying levels of importance: some can be deleted the next day, while others need to be stored for five years or more. Thus, the logging system has acquired the data filtering and routing component; we will call it a filter.
The storage has also seen numerous improvements: regular files were replaced by relational databases, and later, by the document-oriented storages (e.g., Elasticsearch). As a result, the storage has been separated from log collecting.
In the end, the concept of the log itself has been broadened to some abstract stream of events that we would like to save for history, or rather, keep it in the case we need to conduct an investigation or make some analytical report.
Eventually, in a relatively short time, logging turned into a rather important subsystem that might be rightfully classified as Big Data.
In the past, you could implement a “logging system” with simple prints. The situation has dramatically changed today.
Kubernetes and logging
When Kubernetes became a part of modern infrastructure, the problem of collection of logs manifested itself with renewed vigor: the management of the infrastructure platform was both streamlined and made harder at the same time. Many established services started migrating to microservices approach. Regarding logging, this resulted in a growing number of log sources, their unique life cycle, and the need to track interactions of all system components via logs.
Looking ahead, I would like to note that currently there is no standardized logging option for Kubernetes that stands above the rest of the pack. Here are the most popular schemes sought by the community:
deploying an EFK stack (Elasticsearch, Fluentd, Kibana);
stack (Elasticsearch, Fluentd, Kibana); using the recently released Loki or the Logging operator;
we (perhaps, some others as well) prefer our own tool, loghouse.
In Flant, we tend to use the following bundles in K8s clusters for self-hosted solutions:
I am not going to tell you how to install and configure them. Instead, I will focus on their shortcomings and the situation with logs in general.
Logging routines in K8s
Day-to-day logging explained…
Centralized logging of a large infrastructure requires considerable resources that will be spent on collecting, storing, and processing data. While operating a diverse range of applications & platforms, we have to satisfy various requirements and deal with operating problems arising from them.
Case study: ClickHouse
Let’s consider centralized storage in the case of an application that generates a lot of logs: say, over 5000 lines per second. We will be using ClickHouse to process them.
While attempting to ensure real-time collection, we would run into the problem: it turns out that the 4-core server running ClickHouse cannot handle such a load, and its disk subsystem is severely overloaded:
The high load is caused by the fact that we are trying to perform writing into ClickHouse as fast as possible. The database responds with the increased disk load, which can cause errors such as:
DB::Exception: Too many parts (300). Merges are processing significantly slower than inserts
The fact is that MergeTree tables in Click House (where logs are stored) have their own complications when writing is performed. The inserted data generates a temporary partition that is later merged with the main table. As a result, writing tasks tend to be very disk-demanding. Hence the error represented above: no more than 300 sub-partitions can be merged per second — that’s 300 inserts per second.
To avoid such complexities, we should use as large chunks as possible when writing into ClickHouse while at the same time decreasing the frequency of writes to one time per two seconds, at least. However, writing in large batches brings another risk: the risk of buffer overflow and loss of logs. The obvious solution is to increase the Fluentd buffer. However, in this case, memory consumption will increase.
Note: Another problem of the ClickHouse-based solution mentioned here was that in our case (loghouse), partitioning was implemented via external tables linked by a Merge table. When selecting long time intervals, excessive amounts of RAM are consumed, since the meta-table scans all partitions — even those that are known not to contain the necessary data. However, for ClickHouse versions starting with 18.16, this approach has become obsolete.
Thus, it becomes clear that ClickHouse-based real-time logging solution can be resource demanding and won’t be a reasonable option for many applications. Also, you will need an accumulator (more about it later). The case we describe here is based on our real-life experience. At that time, we could not find a reliable and stable solution for collecting logs with a minimum delay that would have suited the customer.
What about Elasticsearch?
Elasticsearch is known to handle high loads. Let’s try it for the same application. Now, the load looks like this:
Elasticsearch has successfully digested the data flow. However, the process has been very CPU-demanding. It is not a problem from a technical standpoint — you can solve it by reorganizing the cluster. But we end up using a whopping 8 cores for a mere log collection and getting an additional highly loaded component in the system.
To sum it up, such an approach is quite viable in case of a large project and if the customer is ready to spend significant resources on a centralized logging system.
Then it’s fair to ask:
What logs do we really need?
What if we reframe the task itself? Say, logs must be informative and cover only the required events.
Suppose we have a thriving and successful online store. What logs do we need? We obviously want as much information as possible from the payment gateway. Meanwhile, the service that serves images for the product catalog is not so critical: for it, we can limit logging to errors and overall monitoring (for example, the percentage of 500 errors that this component generates).
The main takeaway is that centralized logging does not always make sense. Often, the customer wants to accumulate all logs in a single place, even though only, say, 5% of the messages are actually relevant (those that are critically important for the business):
Sometimes it is as simple as, say, configuring the log size and an error collector (e.g., Sentry).
Often, error alerts and a comprehensive local log are enough to investigate incidents.
We have had projects that used functional tests and error collection systems only. Developers did not need logs at all: they used error traces to figure out what was happening.
Real-world example
Here is another great example. One day, we received a request from the security team of one of our customers. This client had a commercial solution that had been developed long before they adopted Kubernetes.
They wanted us to integrate the centralized log collection system with QRadar, a security information and event management (SIEM) tool. QRadar receives logs via the syslog protocol and collects them from an FTP server. However, our attempt at integrating it with the remote_syslog plugin for fluentd did not work out (and, as it turned out, we were not unique in this). The problems with configuring QRadar were on the side of the customer's security team.
As a result, the business-critical part of the logs was uploaded to the QRadar FTP, while the rest was redirected directly from the nodes via remote syslog. To do this, we even created a basic chart (it might be useful for someone else). Thanks to this flow, the customer has been able to receive and analyze critical logs using its favorite tools, and we have been able to reduce the costs of the logging system by keeping data for the last month only.
Here is another example of what you shouldn't do. One of our clients wrote multi-line, unstructured data to the log for every user-related event. As you can easily guess, such logs were challenging to store and analyze.
Requirements for logs
The above examples lead us to the conclusion that, besides choosing a log collection system, we also have to define a standard for the logs themselves! So what are the requirements?
Logs must be in a machine-readable format (e.g., JSON).
Logs must be compact, and it must be possible to raise the logging level when debugging potential problems. Accordingly, in production environments, a level such as Warning or Error should be set by default.
Logs must be normalized, meaning that every field must always have the same type across all log lines.
Unstructured logs can cause problems with loading them into the storage and can even halt their processing altogether. To illustrate, here is an example with a 400 error, which, I believe, many of our readers have encountered in the fluentd logs:
2019-10-29 13:10:43 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch"
The above error means that you are sending a field with an unstable type to an index with a defined mapping. A prime example is the field in the nginx log that contains the $upstream_status variable: it can hold either a number or a string. For example:
{ "ip": "1.2.3.4", "http_user": "-", "request_id": "47fe42807f2a7d8d5467511d7d553a1b", "time": "29/Oct/2019:16:18:57 +0300", "method": "GET", "uri": "/staff", "protocol": "HTTP/1.1", "status": "200", "body_size": "2984", "referrer": "-", "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36", "request_time": "0.010", "cache_status": "-", "upstream_response_time": "0.001, 0.007", "upstream_addr": "10.100.0.10:9000, 10.100.0.11:9000", "upstream_status": "404, 200", "upstream_response_length": "0, 2984", "location": "staff"} { "ip": "1.2.3.4", "http_user": "-", "request_id": "17ee8a579e833b5ab9843a0aca10b941", "time": "29/Oct/2019:16:18:57 +0300", "method": "GET", "uri": "/staffs/265.png", "protocol": "HTTP/1.1", "status": "200", "body_size": "906", "referrer": " https://example.com/staff ", "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36", "request_time": "0.001", "cache_status": "-", "upstream_response_time": "0.001, 0.007", "upstream_addr": "127.0.0.1:9000", "upstream_status": "200", "upstream_response_length": "906", "location": "staff"}{ "ip": "1.2.3.4", "http_user": "-", "request_id": "47fe42807f2a7d8d5467511d7d553a1b", "time": "29/Oct/2019:16:18:57 +0300", "method": "GET", "uri": "/staff", "protocol": "HTTP/1.1", "status": "200", "body_size": "2984", "referrer": "-", "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.70 Safari/537.36", "request_time": "0.010", "cache_status": "-", "upstream_response_time": "0.001, 0.007", "upstream_addr": "10.100.0.10:9000, 10.100.0.11:9000", "upstream_status": "404, 200", "upstream_response_length": "0, 2984", "location": "staff"}
The logs show that the 10.100.0.10 server responded with a 404, and the request was then redirected to another content storage. As a result, we get the following value in the logs:
"upstream_response_time": "0.001, 0.007"
This problem is so widespread that it was even awarded a special mention in the documentation.
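One pragmatic workaround on the collector side is to force such fields to a single type before they reach the storage. Here is a rough sketch using fluentd's record_transformer filter; the tag pattern is illustrative and assumes nginx access logs are tagged accordingly:

<filter nginx.access.**>
  @type record_transformer
  enable_ruby true
  <record>
    # Always store upstream_status as a string so its type never flips
    # between number and string across log lines.
    upstream_status ${record["upstream_status"].to_s}
  </record>
</filter>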
What about reliability?
There are cases when all logs are equally vital, and the typical approaches to collecting logs in K8s discussed above have some problems here.

For example, fluentd cannot collect logs from short-lived containers. In one of our projects, the container running a database migration finished in under 4 seconds and was then deleted according to the respective annotation:
"helm.sh/hook-delete-policy": hook-succeeded
Because of this, the log of the migration run never made it into the storage (the before-hook-creation policy might help in this case).
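For instance, switching the annotation to the following value keeps the hook's resources (and thus the container's log) around until the next hook run creates new ones:

"helm.sh/hook-delete-policy": before-hook-creation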
Another example is Docker's log rotation. Suppose there is an application that writes to the logs intensively. Under normal conditions, we manage to process all the logs. However, when some problem arises (like the wrong-format problem described above), processing halts and Docker rotates the file. As a result, business-critical logs can be lost.
That’s why it is important to split streams of logs so that the most valuable logs would go directly to the application, ensuring their preservation. You can also create some kind of a log accumulator — it would preserve critical messages if the storage is briefly unavailable.
Finally, we should not forget that every subsystem needs thorough monitoring of its own. Otherwise, it is easy to end up in a situation where fluentd is stuck in the CrashLoopBackOff state and stops sending logs, leading to the loss of valuable information.
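A minimal example of such monitoring for fluentd (assuming the fluent-plugin-prometheus plugin is installed) looks roughly like this; it exposes internal metrics such as buffer queue length and retry counts so you can alert on them before logs start disappearing:

<source>
  @type prometheus          # exposes an HTTP /metrics endpoint for Prometheus to scrape
</source>
<source>
  @type prometheus_monitor  # fluentd internals: buffer queue length, retry counts, etc.
</source>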
Takeaways
As you can see, we have left aside SaaS solutions such as Datadog. Many of the problems touched upon here have already been solved by commercial companies that specialize in log collection. However, SaaS solutions are not suitable for everyone, for various reasons (such as high costs or legal considerations in some countries).
At first glance, the centralized collection of logs looks like an easy task, but it isn’t. Here are a few considerations to remember:
Detailed logging is justified for critical components only. For other systems, monitoring combined with error collection is usually enough.
In production, it makes sense to minimize logging to avoid an excessive burden on the system.
Logs must have a machine-readable, normalized, and strict format.
A separate and autonomous stream should be used for critically important logs.
The idea of a log accumulator looks promising. It can help you in case of load spikes and would make the load on the storage more uniform.
These simple rules, if applied wherever appropriate, allow the approaches described above to work smoothly, even if they lack some critical elements, such as a log accumulator.
Those who feel that these principles are excessive might well end up with yet another highly loaded and inefficient component in their system: the logging pipeline.
This article has been originally written by our engineer Nikolay Bogdanov. Follow our blog to get new excellent content from Flant! | https://medium.com/flant-com/kubernetes-logging-challenges-aad3f45d8eed | ['Flant Staff'] | 2020-02-01 03:52:12.226000+00:00 | ['Logging', 'Microservices', 'Fluentd', 'Loghouse', 'Kubernetes'] |