title | text | url | authors | timestamp | tags
---|---|---|---|---|---
If You Can’t Meditate, Bake Bread | If You Can’t Meditate, Bake Bread
The act of making something every day can be a powerful form of anxiety relief
Photo: Theme Photos/Unsplash
A recent comic by Luke McGarry shows a man in an apocalyptic hellscape, trying to get past a pair of armed and intimidating gatekeepers. “Please — grant me safe passage,” the man says. “I can trade medicine and precious metals.” To which the gatekeepers reply: “Ha! Fool! Don’t you know the currency of the future is homemade sourdough?”
You’re not just imagining it. Everyone is baking bread right now.
The search terms “bread” and “baking bread” have spiked to a 14-year high on Google Trends. You probably know a Bread Guy (or Bread Gal), and may even be one yourself.
But why have so many people suddenly become pursuers of the perfect olive loaf or French baguette during this pandemic? One explanation is that making bread can be an existentially healing endeavor. Stephen S. Jones, director of Washington State University’s Bread Lab, tells Wired that, over the years, he has received handwritten letters from three different people who, after visiting his lab, turned to breadmaking to cope with the grief of losing a child. He believes that baking bread is akin to a spiritual experience. “Bread is alive” and “you become one with this thing,” he suggests.
To lessen our anxiety and improve our well-being, we’re often prescribed mindfulness-cultivating practices, like yoga and meditation. But another option is to make something every day. You may choose to bake a loaf of bread or pick up an old hobby, or you could try a completely new creative pursuit altogether. Your daily creative task might be to write a poem or sketch a picture, or to fold origami.
In this thread on Hacker News, the user Internetvin writes about how the emotional overload of his father-in-law passing away within days of his son’s birth led him to make a song every day. The healing effect of a creative outlet inspired him to build Futureland, a project network for people to record their progress of making something every day.
Similarly, as a means of coping in the aftermath of the 9/11 attacks, the designer Michael Bierut started drawing something every day. This practice led him to start The 100 Day Project with his students at Yale. The project eventually found its way online, where participants have shared more than 1.4 million Instagram posts (and counting) of their projects to date. This year’s 100 Day Project starts on April 7.
The goal of making something during times of hardship isn’t to be “productive,” or to even achieve anything specific. Rather, the aim is to tap into your creativity to make meaning from the situation — or to find an outlet for your energy that makes it easier to maintain a calm, positive attitude.
Your daily creative practice can be as short as 20 to 60 seconds. The key is just to pick something you can do every day. For example, one of Bierut’s students, the art director Zak Klauck, set out to design a poster every day in under 60 seconds. Internetvin has tweeted about setting himself the goal to write just a single line of code within 20 seconds. And the artist Mike Winkelmann has so far followed through on his mandate to draw something every day for nearly 5,000 days straight, even on the day his daughter was born.
If you’re interested in setting up a daily creative practice, but have no idea how to get started, here’s some advice:
Whatever you pick, you don’t have to do it for 100 days — even 10 days will suffice. Much like mustering the energy for physical exercise through this pandemic, you will thank yourself for exerting the effort.
Instead of seeing this isolation as time that’s lost, you can decide to do something with it. And who knows, maybe you can even make something that you love. Once this is all over, you may look back on your daily creative practice as the one thing that made your isolation bearable. | https://forge.medium.com/if-you-cant-meditate-bake-bread-b99dda9dc6da | ['Herbert Lui'] | 2020-04-03 16:02:29.565000+00:00 | ['Habits', 'Meaning', 'Creativity', 'Psychology', 'Coronavirus'] |
20 Ways to Unlock Facebook Chatbots for Business in 2019 | Facebook opened the gates to build Facebook Messenger bots in 2016.
Today there are 400K Facebook chatbots in the world, helping businesses get more leads, close more sales, recruit team talent, and save money with automation.
A Facebook Messenger marketing chatbot has a wide range of functionality and at the end of the day, marketing bots have a direct impact on boosting the bottom line.
But how?
And more importantly, how can you leverage Facebook Messenger automation for your own company?
Read on for 20 ways to use Facebook chatbots for business today.
You’ll see real-life Facebook chatbots that businesses are using for a lot of these applications.
Don’t have a bot just yet? Build a free chatbot using MobileMonkey in under 5 minutes. It’s easy. Monkey see, monkey do.
How to Put Facebook Chatbots to Work for Business
Building a bot is surprisingly easy. Achieving your goals with a Facebook chatbot takes a little more strategy.
We’ve broken it down into three goals. Pick a focus to get started.
Goal I. Prospecting & Nurturing Leads with Bots
Get more contacts, qualify leads and nurture leads to customers with chatbot development.
1. Grow Your Contacts to Generate Leads
Anytime someone responds to the chatbot on your site, you’ll get them in MobileMonkey as a contact. This happens automatically. But, you can take that one step further with Facebook auto-responders. Basically, these posts auto-respond anytime a friend or fan interacts with your content.
Learn how to hack growth organically with Facebook auto reply bot here. Facebook advertisers can use Facebook Click-to-Messenger Ads to grow contacts with ad budget and laser-focused audience targeting.
Twelve ways to get more contacts for your Facebook chatbot are lined up here.
2. Run Giveaways to Get More Contacts
Who doesn’t get excited over a product giveaway? Use your bot to inform your audience and encourage them to enter. Achieve that 20% average click rate.
This golden unicorn of a chatbot giveaway generated 200 quality leads that turned into major sales:
When you’re ready to launch your own Facebook Messenger chatbot contest take a look at MobileMonkey’s guide and featured examples.
3. Share Insider Knowledge via Facebook Chatbots
Who doesn’t like being on the VIP list? Offer your future customers a way to stay on the inside track with your company. Then make your audience feel important with Messenger updates.
Send out a chat blast to let them know about a new store opening or an exciting product announcement first. Sure, put the news on social media and send an email, but give a heads-up to your Facebook Messenger bot users first to treat them like the insiders they are.
Give Gary Vaynerchuk’s VIP Facebook chatbot a spin to see how Gary Vee keeps Messenger subscribers on the inside track.
4. Make Messaging Interactive & Engaging
Messenger chatbots are a great opportunity to showcase your brand voice and personality. Keep text succinct. Use GIFs, entertainment, and witty copy to connect with users.
Want to get on the list for MobileMonkey’s engaging chat blasts? We always demonstrate the ways you can use Messenger chatbots in engaging and interactive ways. Subscribe to our updates and see what’s possible with Facebook chatbots 2x a week!
Yeah, we think our bot is pretty engaging, but don’t take our word for it. Check out what Baby Got Bot’s Kelly Noble Mirabella has to say about ways to humanize your conversational chatbot to make it super engaging and interactive.
5. Personalize Messaging with a Facebook Chatbot
With Messenger chatbots, you know who you’re talking to. That’s because it’s tied to the Facebook user. Optimize their customer journey to increase sales by sending customized content and products based on their interests.
It’s easy for marketers to leverage personalization with Facebook chatbots because you can use system attributes and custom chatbot attributes right in the messaging and to segment your audience and serve up relevant content, too.
6. Revamp Email Marketing
You have a whole new channel to reach your customers. And guess what? It has much higher rates of engagement than email. So, leverage that power to sell. Make your Messenger bot the new email. Test out drip campaigns and experiment with messaging and psychological marketing principles to get results.
7. Qualify Leads using Facebook Chatbots
Screen your inquiries with questions so that by the time they get to your customer service team, there’s already information to act on.
Ask questions of your Facebook chatbot contacts and engage them in two-way conversations where you can find out what they like and need. This is an important lead qualification that your Facebook chatbot can do in a natural, mobile-friendly conversational interface.
Follow up with your leads using a chatbot funnel designed just for them, and with your sales and marketing as appropriate.
Goal II. Increase Conversions & Customer Retention with Chatbots
Improve and groom customer relationships.
8. Highlight Promotions via Facebook Chatbots
And do it more than once! Customers are more likely to open what you send out on a Messenger blast than they are the average email. Send out a sale announcement, as well as a 1-hour left reminder, or any combination that you’d like.
There’s an art and a science behind using Facebook chatbots to send promotional messages. Read the guide to Facebook Sponsored Messages Ads to learn everything there is to know.
9. Extend Special Offers to Key Audiences
Send out discounts and offers through Messenger, exclusively to your chatbot audience.
Check out how IMStonegifts sends exclusive offers to Facebook chatbot subscribers to drive sales and brand loyalty.
10. Send Exclusive Invitations via Facebook Chatbots
Have a rewards program or a beta test running? You can target specific customers and invite them through your Messenger chatbot. It sounds simple, but it can do a lot for your customers’ experience.
Celebrity and social media influencer Christina Milian uses a Facebook chatbot as an ecommerce storefront and a channel for sending exclusive invitations to connect more deeply.
11. Make Checkout / Conversion More Engaging
Improve your customer’s experience with order processing. Using Facebook Messenger chatbots, you can offer up engaging order status updates every step of the way. Showcase your brand voice and make the waiting step of the process more enjoyable for everyone.
12. Offer Add-Ons and Upsells via Facebook Chatbots
You know you can increase sales by upselling. So, don’t limit add-ons and upsells to email marketing and your site interface. Use your Facebook chatbot. And make it fun. If you know what customers ordered, you can now target them with complementary products.
13. Get Customer Feedback using Facebook Chatbots
Ask for feedback and reviews with pre-filled buttons and easy-to-click designs. And then actually listen to that feedback. Not only can this help improve your product, but it’ll show you care about what your customers have to say.
This simple survey Facebook chatbot example will show you how easy it is to take a survey with a bot. And here’s the guide to how to run a customer survey with a Messenger bot.
Goal III. Automate to Boost Efficiency
Your Facebook chatbot is up and running. Now what? Make your job easier and start automating. This strategy is a no-brainer. Improve customer experience and save your team time.
14. Send Order Updates or Event Reminders in Messenger
If your customer or contact is connected to you in Messenger, or even signed up for that event or asked to receive order updates through Messenger, you can keep the chat convo going. Integrate your chatbot to your other business applications to give your customers a seamless experience while you save time through automation.
15. Be Available 24/7 with Answers to FAQs with Facebook Chatbots
Got a list of FAQs? Repurpose the content and add it to the Facebook chatbot. Not only will this create fast responses for your visitors (and everyone appreciates instant information) but it will improve efficiency. Now, your customer service team doesn’t have to answer one question dozens of times and can focus on other important tasks.
Yes, you already have your hours and location on your site. But, don’t make your visitors search. Add the basic details to your Facebook chatbot: hours, location, contact information, maps. For easy accessibility, install the best Facebook chatbot on your website.
Ready to roll? Here’s how to answer FAQs in Messenger in 3 steps using MobileMonkey.
16. Fix a Service or Support Bottleneck
Is there an info@company.com email address that never gets answered? A few negative reviews because of a customer service line that’s not 24 hours? Look for holdups and incorporate solutions in your Facebook Messenger chatbot. You can even notify a sales or customer support team if someone talking to your chatbot needs help. Jump into the chat conversation with live chat takeover any time. The beauty of automation.
17. Drip Content in Chat for Increased Reach using Facebook Chatbots
White papers, eBooks, guides, infographics, tools like Google Chrome Extensions — if you’ve got it, send it. Push out in automated messaging sequences (aka drip campaigns) encouraging users to download the tool or content most useful to them. The more personalized, the better.
Designing and launching Facebook chatbot drip campaigns is free to do with MobileMonkey! This interactive chatbot drip campaign example will give you a sense of what the experience is like for customers. Create a chatbot drip campaign for your business with this guide.
18. Get Better Survey Results using a Facebook Chatbot
Sending out surveys is a breeze with chatbots. You can even set them up on a drip campaign so customers get them 3 days, 10 days, or any interval, after their experience with you. Get more responses by pre-filling answers so all users have to do is click. See chatbot templates here, including a survey chatbot.
19. Repurpose Content
Don’t reinvent the wheel. If you have content, you can get creative and reuse that content to delight customers with your chatbot. Segment users and push content related to their purchase or interests.
20. Automate Appointment Scheduling using Facebook Chatbots
In a service-based industry? Have workshops or tutorial classes? Automate sign-ups and appointment scheduling for a more seamless process.
When Sephora implemented its appointment scheduling Facebook chatbot, the rate of appointments rose 11%!
Be a Unicorn in a Sea of Donkeys
Get my very best Unicorn marketing & entrepreneurship growth hacks:
2. Sign up for occasional Facebook Messenger Marketing news & tips via Facebook Messenger.
About the Author
Larry Kim is the CEO of MobileMonkey — provider of the World’s Best Facebook Messenger Marketing Platform. He’s also the founder of WordStream.
You can connect with him on Facebook Messenger, Twitter, LinkedIn, Instagram.
Do you want a Free Facebook Chatbot builder for your Facebook page? Check out MobileMonkey!
Originally posted on Mobilemonkey.com | https://medium.com/marketing-and-entrepreneurship/20-ways-to-unlock-facebook-chatbots-for-business-in-2019-30d12d56fa97 | ['Larry Kim'] | 2019-09-14 09:56:01.518000+00:00 | ['Entrepreneurship', 'Marketing', 'Chatbots', 'Facebook', 'Bots'] |
What Climate Models Tell Us About Future Global Precipitation Patterns | What are climate models?
As described by the Geophysical Fluid Dynamics Laboratory, climate models are mathematical representations of the major components of the climate system (the atmosphere, land surface, ocean, and sea ice) and their interactions. Each component is described as follows:
The atmospheric component simulates clouds and aerosols and plays a large role in the transport of heat and water around the globe.
The land surface component simulates surface characteristics such as vegetation, snow cover, soil water, rivers, and carbon storage.
The ocean component simulates ocean current movement and mixing. It also simulates biogeochemistry because the ocean is the dominant reservoir of heat and carbon in the whole climate system.
The sea ice component modulates solar radiation absorption (the albedo of the planet) and air-sea heat and water exchanges.
All climate models may contain and track different components. However, an article by the University at Albany explains that the atmosphere, hydrosphere, biosphere, cryosphere (ice-covered regions, includes ground in permafrost), and lithosphere (upper layer of Earth’s crust, both continental and oceanic), are the main drivers of climate change and are all generally represented to some extent or another in most climate models to ensure accurate simulations.
How do climate models work?
Climate models divide the Earth’s surface into a three-dimensional grid. Each grid contains equations that represent each component of the model. Therefore, there are atmospheric equations, hydrospheric equations, lithospheric equations, and so on. These equations represent how each component of the climate works in a given area for a given set of variables. These variables represent different conditions and changes to those conditions. Variables may either be calculated by the equation or may be parameterized (hard-coding the variable to save on computing power).
Because there are so many equations and variables at play, a lot of computing power is required to run the system. In other words, the smaller the individual cells of the grid, the more complex and detailed the model, the more computing power is required.
Climate models also include the element of time, which allows models to simulate climatic conditions over a range of time units. These complex mathematical equations which describe how the different components of the atmosphere interact, and within themselves, how the materials within these components interact, give scientists the ability to “look into the future” to see what the climate will be like decades into the future.
Currently, there are three types of climate models: energy balance models, intermediate complexity models, and general circulation models.
Energy balance models forecast climate change as a result of Earth’s energy budget. This type of model looks specifically at how surface temperatures are affected by solar energy and Earth’s natural reflectivity (albedo). Using this type of model, scientists use equations that represent the exchange of energy between Earth and the Sun to better determine how heat is stored on Earth.
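As a rough illustration (my addition, not from the article), the simplest zero-dimensional energy balance model just sets the sunlight Earth absorbs equal to the infrared radiation it emits:

(1 - \alpha)\,\frac{S}{4} = \sigma T^4 \quad\Longrightarrow\quad T = \left[\frac{(1-\alpha)\,S}{4\sigma}\right]^{1/4}

With a solar constant S of about 1361 W/m², an albedo α of about 0.3, and the Stefan-Boltzmann constant σ, this gives T ≈ 255 K (about -18 °C). The roughly 33 K gap between that value and Earth's observed average surface temperature of about 288 K is the greenhouse effect, which is exactly the kind of detail the more complex model classes described below have to resolve explicitly.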
Intermediate complexity models share similarities with energy balance models although they include Earth’s biogeochemical components to better simulate large-scale climate scenarios. Scenarios such as glacial melting, changes in ocean currents, and changes in greenhouse gas emission composition of the atmosphere can be simulated to determine how they affect Earth’s climate on the whole.
Finally, general circulation models are the most complex, time-intensive, and computing-heavy models, and they offer the most precise predictions about climate change. These models incorporate each component listed above, including equations concerning chemistry, the carbon cycle, and the makeup of isolated areas. This type of model uses a small three-dimensional grid, which offers more precision when it comes to understanding how the grid cells interact with each other.
The difficulty of predicting precipitation changes with climate models.
A chapter discussing the evaluation of climate models published by the Intergovernmental Panel on Climate Change (IPCC) discusses how the modeling of large-scale future precipitation patterns has improved since the days of the AR4 (the IPCC’s fourth climate assessment report conducted in 2007), although the modeling of regional precipitation remains subpar. According to their studies, the IPCC has found that climate models are prone to underestimating the sensitivity of extreme precipitation events to changes in global temperature. This implies that models could underestimate the projected increase in extreme precipitation events in the future.
The chapter goes on to discuss how the simulation of precipitation is one of the most rigorous tests of a climate model, because precipitation characteristics depend heavily on various other processes (such as cloud formation, radiation, atmospheric moisture, and aerosols) that must be parameterized (the processes are defined by a specific variable in the code instead of letting the model calculate the value of the variable itself — this saves on computing power when dealing with processes that can occur over much shorter scales than an entire grid cell) in the model. In some cases, where precipitation is concerned, this parameterization of variables must include a range of values to better account for variations in the cell in question, such as topography.
In other words, due to the ranges of error involved in modeling precipitation, there can be varying levels of certainty when predicting the future precipitation of a region. This results in studies having to use several estimations of future precipitation to draw somewhat solid conclusions that may end up being changed a few years down the road. Because of this variation in the regional results of different models (one model could say that a region will become wetter while another model will say that the same region will become drier), a multimodel mean is required to get a rough answer to questions concerning the future of precipitation patterns.
What climate models tell us about future global precipitation patterns.
Without using precipitation-focused climate models, scientists can already infer a couple of things about how precipitation will change in response to a warming climate.
According to a paper published by the National Center for Atmospheric Research, increased planet temperatures would give rise to increased evaporation and, as a result, desertification. Greater evaporation would contribute to the severity and duration of droughts. As the planet warms, the atmosphere’s carrying capacity for water also increases. This, coupled with increased evaporation, means that storms, whether they be thunderstorms, cyclones, or hurricanes, will unleash even greater amounts of precipitation. Vast amounts of precipitation that fall on parched ground will find it hard to be absorbed, resulting in greater amounts of runoff and severe flooding events.
The article goes on to mention how climate models suggest that dry areas will become drier, and wet areas will become wetter, due to a lack of change in the planet’s wind patterns as forecasted for the foreseeable future. This means that while copious amounts of water will be added to the atmosphere, its distribution as precipitation will be uneven.
When it comes to making predictions using climate models, the results generally mirror the phenomena described above. As previously discussed, a model mean is generally required to get the average prediction about future precipitation levels for a given region. However, there are some parts of the world where 9 out of 10 models will agree about future precipitation patterns. Currently, models agree that tropical and high latitude regions will see an increase in precipitation. Models predict that locations such as India, Bangladesh, Myanmar, and northern China will observe greater amounts of precipitation. Regions that will likely see drier conditions include the Mediterranean, southern Africa, and parts of Australia and South America. North America, Europe, and the greater part of northern continental Asia appear to be slightly harder to predict due to seasonal variation, with precipitation prediction amounts being on the higher end during the winter, spring, and fall and predicted precipitation amounts being lower during the summer months.
Regardless of regions becoming wetter or drier, a consensus among climate models suggests that intense precipitation events will increase worldwide by the end of the century. It’s currently projected that a particular intensification of precipitation events will occur in Eurasia and North America. | https://medium.com/predict/what-climate-models-tell-us-about-future-global-precipitation-patterns-e7b52d2447aa | ['Madison Hunter'] | 2020-11-28 00:54:09.170000+00:00 | ['Climate Change', 'Technology', 'Environment', 'Future', 'Science'] |
Finale — The IDFA conundrum. In my previous articles, I wrote about: | In my previous articles, I wrote about:
1. The ecosystem around the IDFA
2. A deeper look into why the ad industry needs a reboot
With that, let me close the series with a few thoughts for developers to mitigate any perceived effects from Apple’s recent announcement:
Internal Data Audits
Start by getting a good sense of your application’s data policies. As developers/marketers, we tend to accumulate in-app engagement events beyond what we need. Too often, I hear requests from cross-functional divisions to collect user events “just in case.” Before you know it, the app is configured to send events that end up getting stored for little to no use.
Tip: Reach out to data experts in your division to perform an audit of the events/parameters collected. As a team, you need to comb through the events and ask these questions:
1. Are we using this event/field today? If so, does it add value?
2. If not, do we have a tangible use-case in the future?
3. Can we quantify said value by running a quick experiment?
4. Can we repurpose a combination of other existing fields to achieve the same outcomes that collecting the event would?
Going through this exercise on an ongoing basis should make the process easier and productive for the long haul.
3P Education
More often than not, you may find that your app collects information unbeknownst to your team in the form of SDKs/Frameworks introduced by 3P advertising networks. Educate yourself about your ad partner’s data policies. Focus on the principles of data collection and usage alongside their retention policies. Ensure that whatever gets collected on your behalf is what your business needs and not the other way around.
Tip: Look past assurances around enforced policies such as GDPR. Simply adhering to such guidelines should not become the gold standard.
Own your Channel
Advertising is a great way to acquire users or monetize your apps. It should be done tastefully. Why would you work hard on building incredibly engaging content only to lease out your space to sub-optimal advertising?
1. Work closely with your marketing agencies to focus on content as much as performance
2. Obsess over the copy/verbiage/demo to highlight key value
3. Increase and diversify fill by working with your mediation partners
4. Focus on ad quality as much as your eCPMs
5. Keep close tabs with your customer experience team — highlight and penalize errant networks that boost fill with trashy ads
Switch to Subscription Models
I like how Ben connects product adoption to the user’s willingness to pay for content. Now might be a good time to understand your audience, whatever your business vertical, and gauge your ability to focus on users who truly value your content. Switching over to a subscription model could fetch a premium price with higher margins.
The flip side is true, though — subscription models tend to narrow down the focus of your offering, which takes away the broader reach and growth.
Tip: Analyze your customer behaviors and rethink the right opportunity for your app. Ensure that you go deep on engagement. Skimming the surface with top-line metrics such as CAC and LTV could take you down the wrong path. You need to have a solid read on your customer journeys with your product to make an informed decision.
Aim for Transparency
One approach is to be straight up transparent to the user. Customers are more receptive to giving you the benefit of doubt when:
1. You are ultra transparent about your intent
2. You provide value-driven options and let the user make the decision
In my own anecdotal experience, developers have improved conversions when they communicate their intent upfront. Take push notifications for instance. Which one of these do you think resonates well with your audience:
Obviously the one on the left is going to perform better. You may want to use the same strategy to convince your users to opt-in to sharing their IDFA. You just need to be honest and transparent.
Good Luck!
Feel free to reach out should you have additional thoughts on this topic. Cheers :-) | https://medium.com/macoclock/finale-the-idfa-conundrum-3aebb7740cd6 | ['Abishek Ashok'] | 2020-12-08 07:17:48.464000+00:00 | ['Advertising', 'Marketing', 'iOS', 'Apple', 'Startup'] |
5 Things You Buy When You Read The Sober Lush | The book that reminded me to enjoy all the things, not just one.
I’m not a sober person. Just in case you’re not sober, I don’t want you to think you can’t read this book or this essay. In fact I think you should read both. The Sober Lush, by Amanda Eyre Ward and Jardine Libaire is, in my opinion, required reading on the syllabus of adult life regardless of whether or not you drink. Do you ever think about how we just kind of stop all version of formal instruction at the very moment we enter adulthood, arguably a rather tricky path to navigate without a guide? I’m very grateful that a podcast listener of mine suggested I read The Sober Lush, it’s a book that delves deeply and truthfully into sobriety, but it also reminds you of all the delicious, varied, and luscious ways we can consume life, that have nothing to do with consuming alcohol.
As I read the book, I stumbled upon ideas, concepts, and products that were so profoundly interesting to me, that I wanted to experience so much, that I couldn’t keep reading until someone had my credit card information and shipping address. There are moments in this book that forgive you for overlooking countless joys of life in lieu of alcohol, which seems to be the most widely understood form of adulthood indulgence and celebration. The book forgives you, making it easier for you to forgive yourself, thereby freeing you to dive into things you love that maybe you haven’t let yourself love, or remembered to love, in awhile.
In finishing the book, I felt free. More free than I’d felt in a long time. Which is cool, considering the reason I read the book in the first place was my curiosity around alcohol moderation, which I’d always thought would make me feel more limited. Instead, I felt a profound freedom to be myself, and enjoy the things that are enjoyable to me, with a uniqueness and a recklessness I haven’t felt since I was kid. On top of that, I was experiencing rediscovery at the same time. I really like a lot of things! It feels weird to type it, but I was kind of ashamed of how frequently I’d previously associated feeling good with alcohol, and not much else. Which is counterintuitive, when you factor in the following mornings.
It feels like I’m operating inside a new world, which is particularly beautiful when you remember that there’s a pandemic and the only place I’ve been this year is home. I feel renewed, and a bit like an explorer rediscovering things I used to love or will start to love soon. Enjoy The Sober Lush, let it remind you of all the things there are in the world to enjoy, and if you need specifics, here are a few favorites of mine:
Raw Honey: Honestly, just jars of things in general. Do you know how many wonderful things come in jars? Take a spin around the dry goods section of the grocery store next time you’re there, with a particularly keen eye alert in the jams. When raw honey was described in this book, I was more excited than I’d been on…oh let’s just say half of the first dates I’ve ever attended. Honey is truly a gift to humans on this planet, probably as compensation for other bullshit like taxes and Twitter. Buy a jar of good raw honey, and remind yourself why everyone freaks out so much about the bees. We need them, we need their righteous work. Taste it for yourself and remember why.
*Pro tip: Drizzle some raw honey on the OUTSIDE of your grilled cheese as you cook it, or atop your bowl of pasta instead of finishing salt. Trust me.
Incense & Scented Candles: I’m big on smells. I blame my faulty eyes for an overly acute nose. I hadn’t heard anyone exalt the merits of incense since like…college, when we thought burning it qualified as making us cool and interesting. Incense is glorious. I literally cannot stop indulging in it. I now see no reason not to live in a space where smoke is perpetually drifting from the top of a stick. It’s a scent, a sight, and a mood, more powerful and transportive than redecorating your entire house. My favorites come from P.F. Candle Co but I strongly encourage you to shop around and find scents that make you feel something. You’ll know what I mean when you get it right. The scent will take you to a place or a memory that hasn’t been center stage in your mind for quite awhile. One kind of incense reminds me of summer camp in Ojai, the other one reminds me, and for the life of me I don’t know why, of a performance of Sleep No More.
I’ve typically kept a scented candle or two around the house, but I always felt guilty about buying them. So much money on something I’ll burn through so quickly always felt weird to me. But not when you realize how much you personally value the pleasure they provide. Scent matters to me, it’s allowed to matter to me, and why was I so willing to spend $25 on a bottle of wine that would be gone in a couple of hours but not a scented candle that would increase my enjoyment of my own home and not make me feel like garbage afterward? My apartment smells amazing these days, and it’s because a book reminded me that it can. Now, the difference for me between burning a candle or not burning a candle is much like the difference between a house where music is playing in the background and one where it’s not. I pay a lot more attention to all the ways I can enjoy my time inside my home, and the cost of them no longer feels like guilt, it feels like living more fully.
Press-On Nails: While not mentioned in the book, I feel like they very much fit into the Sober Lush lifestyle. I perpetually wear a full set of the most luscious press-on nails. I change them whenever I want, I buy the styles and colors that appeal to me (usually glitter). No unpleasant smells, drying time, or salon expense at all. Press-on nails aren’t necessary, not even practical at certain lengths given that I type for a living, but valuable for the enjoyment and pleasure I take in how they look and feel. I’ve got boxes of them now. Colorful little tiles I adhere to my hands effortlessly, and feel 10x more glamorous as a result. Did I almost lose one in a batch of cookie dough the other day? Yes. Was it worth it to be able to love what I see every time I look at my own hands? Also yes.
Good, Rich Chocolate: All sorts of precious flavors and moods are mentioned in the book. Chocolate is certainly one you’d expect, but I’d challenge us all to try and remember the last time we really thought about what chocolate tastes like. The last time you had some, was that all you were doing? Enjoying it, identifying it’s pleasures, or where you just popping some into your mouth while you did something else? I chose to explore my Sober Lush-inspired relationship with chocolate via a rich bar of milk and dark swirled together with bits of butterscotch laced throughout. Just stand there when you eat it, and clear your mind of anything that’s in the way of you and a moment of melting joy. You guys…chocolate is fantastic.
Halloween Decorations: Again, not explicitly mentioned, but fully within range of lush behavior. Halloween brings me joy, and sometimes in the past I’ve felt guilty about that joy, as if I shouldn’t love or celebrate Halloween so much because it’s childish. This book reminded me that what brings us joy and pleasure isn’t childish, instead it’s actually necessary for us to live full, joyous lives where we’re not denying ourselves the full range of life’s experiences. Alcohol is a super adult thing to partake in, but it hurts. None of my childhood amusements and indulgences hurt. Maybe reverting back a little bit isn’t such a bad thing. Anyway, my living room looks like a haunted mansion, and I’m thrilled about it.
Life is a lot of different tastes and scents and textures and experiences. I didn’t think a book about abstaining from alcohol would remind me just how much of life is worth getting up to our elbows in, but here we are. If you need to open the aperture of your enjoyment a little wider, and remind yourself of all the places pleasure and joy can be found, read The Sober Lush, as soon as possible.
____________
Shani Silver is a humor essayist and podcaster based in Brooklyn who writes on Medium, pretty frequently actually. Links above to The Sober Lush are affiliate links. | https://shanisilver.medium.com/5-things-you-buy-when-you-read-the-sober-lush-9d1570ee3af | ['Shani Silver'] | 2020-09-13 14:59:29.911000+00:00 | ['Alcohol', 'Nonfiction', 'Books', 'Life Lessons', 'Writing'] |
Our FAQs | Writers
What happens when I submit my article to TDS?
Thank you so much for taking the time to submit your article to our team! We will review it as soon as we can.
If we believe that your article is excellent and ready to go, this is how you will be able to add your post to our publication. If “Towards Data Science” shows up after you click on “Add to publication” in the dropdown menu at the top of the page, that means we have added you as an author and are waiting for you to submit your article. Once you have submitted your article, it will be reviewed by an editor before a final decision is made.
If we think that your article is interesting but needs to be improved, someone from our team will provide you with feedback directly on your submitted Medium article.
Please note that we only respond to articles that were properly submitted using either our form or via an email that exactly follows the instructions listed here. We don’t respond to pitches or questions already answered in our FAQs or on our Contribute page. We also ignore articles that don’t comply with our rules.
If you haven’t heard from us within the next five working days, please carefully check the article you submitted to our team. See if you can now submit it directly to TDS and look for any private notes from us that you may have missed. You should also make sure to check your spam folder.
If you just can’t reach us, the best thing for you to do is submit your article to another publication. Although we’d love to, we can’t provide customized feedback to everyone because we simply receive too many submissions. You can learn more about our decision here and submit another post in a month. | https://medium.com/p/462571b65b35#79d4 | ['Tds Editors'] | 2020-11-19 01:16:58.476000+00:00 | ['Writers’ Guide', 'Tds Team', 'Writers Guide'] |
Visualize multi-dimension datasets in a 2D graph using t-SNE (Airbnb bookings dataset as an example) | Visualize multi-dimension datasets in a 2D graph using t-SNE (Airbnb bookings dataset as an example)
Using 31 numeric features in the user booking dataset, which has 12 different travel destinations to predict — and yeah, I know it’s really messy; at least we immediately know we have some feature engineering work to do :D
t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm
First of all, what is t-SNE, and when and why do we use it? It is an unsupervised, non-linear dimension reduction algorithm. People usually use it during exploratory data analysis, an early stage in the whole machine learning pipeline. It helps us surface high-dimensional datasets (e.g. ones with many features) through a 2D or 3D plot (or another relatively low number of dimensions), and thus get a quick intuition about the data. It is NOT designed to be applied directly to a classification task.
How about PCA? In the dimension reduction area, people often compare t-SNE with PCA, or Principal Component Analysis. Actually, t-SNE is a much newer approach that was developed by Laurens van der Maaten and Geoffrey Hinton in 2008 (see the paper “Visualizing Data using t-SNE” here), while PCA was developed by Hotelling H. back in 1933 (Analysis of a complex of statistical variables into principal components), almost 3 generations ago!
As mentioned in the t-SNE paper, there are certainly some limitations to linear models like PCA: “For high-dimensional data that lies on or near a low-dimensional, non-linear manifold it is usually more important to keep the low-dimensional representations of very similar datapoints close together, which is typically not possible with a linear mapping.”
To understand this better, we can take a look at the underlying algorithm (and the many great ‘PCA vs t-SNE’ articles online). Aside from the Algorithm section in the original paper, I also highly recommend the read An illustrated introduction to the t-SNE algorithm, which provides a very intuitive but mathematical perspective on the model. In order to move on to the coding section, let’s just say here that t-SNE is more effective in handling certain types of complicated data, compared with PCA’s linear approach. As shown in the following pictures, a Kaggle script created by puyokw demonstrates t-SNE’s capabilities clearly.
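For reference, here is a compact summary of the objective from the original paper (the notation below is mine, added for clarity). t-SNE converts pairwise distances in the original space into probabilities using a Gaussian kernel, models similarities between the mapped points with a heavy-tailed Student-t kernel, and then minimizes the KL divergence between the two distributions by gradient descent:

p_{j\mid i} = \frac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)}, \qquad p_{ij} = \frac{p_{j\mid i} + p_{i\mid j}}{2N}

q_{ij} = \frac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}{\sum_{k \neq l} (1 + \lVert y_k - y_l \rVert^2)^{-1}}, \qquad C = \mathrm{KL}(P \,\|\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}

Each bandwidth σ_i is tuned so that the distribution around point i has a user-chosen perplexity, which you can read as the effective number of neighbors each point pays attention to. The heavy tails of the Student-t kernel in the low-dimensional map are what allow dissimilar points to drift far apart, which is where the characteristic well-separated clusters come from.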
Code
For R
Step 1: Install and load Rtsne package
install.packages("Rtsne") # Install Rtsne from CRAN
library(Rtsne)
Step 2. Load a dataset for our example use case
> iris_unique <- unique(iris) # remove duplicate rows
> head(iris_unique)
  Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 4.6 3.1 1.5 0.2 setosa
5 5.0 3.6 1.4 0.2 setosa
Step 3. Fit with t-SNE and visualize
Yes — it’s really that simple
> iris_matrix = as.matrix(iris_unique[,1:4]) # note: we can only pass in numeric columns
> tsne_out <- Rtsne(iris_matrix)
> plot(tsne_out$Y, col = iris_unique$Species) # graph is now generated; use iris_unique so the colors line up with the de-duplicated rows
t-SNE visualization for a simple Iris dataset; the three types of flowers are clearly divided into three different clusters
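For a quick side-by-side sanity check (my addition, not part of the original walkthrough), you can also project the same four iris features with PCA and plot the two embeddings next to each other:

# PCA on the same numeric iris features, shown next to the t-SNE embedding
pca_out <- prcomp(iris_matrix, scale. = TRUE)
par(mfrow = c(1, 2))
plot(pca_out$x[, 1:2], col = iris_unique$Species, main = "PCA (first two components)")
plot(tsne_out$Y, col = iris_unique$Species, main = "t-SNE")
par(mfrow = c(1, 1))

On a dataset this simple, both views separate the three species reasonably well; the difference between the two methods only becomes obvious on messier, higher-dimensional data like the Airbnb example below.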
Now, let’s try with another real-world dataset, but much more complicated — Airbnb’s user booking dataset in a Kaggle competition. With the following code, we can check out its visualization in a 2D space.
Step 1. Load the data: Airbnb dataset (there are 213,451 rows in training dataset)
> library(readr)
> df_train = read_csv("train_users.csv")
> numeric_columns = sapply(df_train, is.numeric) # subset numerical features
> countries = as.factor(df_train$country_destination)
> df_train = df_train[, numeric_columns]
> df_train$country_destination = countries # put destination column back
> df_train_unique <- unique(df_train) # de-duplication
> dim(df_train_unique)
[1] 213451 31
Step 2. Fit t-SNE and generate the plot
> matrx = as.matrix(df_train_unique[, names(df_train_unique) != "country_destination"]) # keep only the numeric feature columns so the matrix stays numeric
> tsne_out <- Rtsne(matrx)
> plot(tsne_out$Y, col=countries, main='t-SNE of airbnb dataset on Kaggle (31 features)')
31 numeric features and 12 different target variables
It took 20–30 minutes to run Rtsne() and unique(), and the resulting graph was bad, which implied that I hadn’t yet come up with good features to separate the 12 different country destinations for an accurate prediction.
(The speed can be improved with parameters like theta, max_iter, etc.)
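For example, here is one way to trade a little accuracy for speed on this dataset (a sketch built on the code above; the specific values are starting points I picked, not settings from the original post):

# Barnes-Hut approximation (theta > 0), a PCA pre-step, and fewer iterations
# can cut the runtime considerably on a dataset of this size.
tsne_fast <- Rtsne(matrx,
                   dims = 2,
                   pca = TRUE, initial_dims = 30,  # compress to 30 principal components first
                   perplexity = 30,
                   theta = 0.8,                    # coarser approximation (theta = 0 means exact t-SNE)
                   max_iter = 500,
                   check_duplicates = FALSE,       # we already called unique()
                   verbose = TRUE)
plot(tsne_fast$Y, col = countries, main = 't-SNE of airbnb dataset (Barnes-Hut, theta = 0.8)')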
> colnames(df_train)
 [1] "timestamp_first_active"              "age"
 [3] "signup_flow"                         "-unknown-"
 [5] "Android App Unknown Phone/Tablet"    "Android Phone"
 [7] "Blackberry"                          "Chromebook"
 [9] "Linux Desktop"                       "Mac Desktop"
[11] "Opera Phone"                         "Tablet"
[13] "Windows Desktop"                     "Windows Phone"
[15] "iPad Tablet"                         "iPhone"
[17] "iPodtouch"                           "total_elapsed_time"
[19] "-unknown-_pct"                       "Android App Unknown Phone/Tablet_pct"
[21] "Android Phone_pct"                   "Blackberry_pct"
[23] "Chromebook_pct"                      "Linux Desktop_pct"
[25] "Mac Desktop_pct"                     "Opera Phone_pct"
[27] "Tablet_pct"                          "Windows Desktop_pct"
[29] "Windows Phone_pct"                   "iPad Tablet_pct"
[31] "iPhone_pct"
Python code
For Python folks, we’ll be using the TSNE class under sklearn.manifold. A simple use case looks like the following, and there are optional parameters to play with, including learning_rate, n_components (dimension of the embedded space, default=2), and n_iter (maximum number of iterations for the optimization).
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# df_train_unique should hold only the numeric feature columns here, and
# countries should be numeric category codes for the colormap to work
X_tsne = TSNE(n_components=2).fit_transform(df_train_unique)
plt.scatter(X_tsne[:, 0], X_tsne[:, 1],
            c=countries, cmap=plt.cm.Spectral, alpha=.4,
            edgecolor='k')
plt.show()
The Tradeoff: High time and space complexity
When I passed in the complete set of 150+ features (including categorical fields added by dummyVars()) for 200K data points to build a 2D t-SNE visualization, it took forever and ate up 25GB of memory on my MBP. Pretty scary!
Other than that, t-SNE is computationally expensive. As shown in the scikit-learn documentation, within the same Manifold Learning (non-linear dimensionality reduction) family, t-SNE can take 6–100 times longer than other models such as Spectral Embedding (SE) and Multi-dimensional Scaling (MDS).
t-SNE is very computationally intensive, even compared with other non-linear dimensional reduction models
When we think about the underlying algorithm, t-SNE has to compute the distances between all the points and maintain a pairwise N-by-N distance matrix (N = number of examples). Therefore, its space and time complexity are quadratic, O(n²), and this problem has naturally become another popular research domain that people are trying to optimize. One example is Fast Fourier Transform-accelerated Interpolation-based t-SNE (FIt-SNE); the details can be found in the paper here.
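A simpler day-to-day workaround (my suggestion, not something from the FIt-SNE paper) is to fit t-SNE on a random subsample and visualize just that slice:

# Fit on a random 20,000-row subsample to keep the pairwise computations manageable
set.seed(42)
idx <- sample(nrow(matrx), 20000)
tsne_sub <- Rtsne(matrx[idx, ], theta = 0.5, pca = TRUE)
plot(tsne_sub$Y, col = countries[idx], main = 't-SNE on a 20K subsample')

You give up some global detail, but for an exploratory sanity check of your features it is usually good enough, and it should run far faster than a fit on all 200K+ rows.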
To be continued……
Going back to the Airbnb dataset example, we should be able to generate a better visualization result with t-SNE — please stay tuned, let me share more next time!
Reference
End Notes: | https://medium.com/analytics-vidhya/note-visualize-multi-dimension-datasets-in-a-2d-graph-using-t-sne-airbnb-bookings-dataset-as-824541cc5388 | ['Paul Lo'] | 2020-01-15 12:23:46.859000+00:00 | ['Machine Learning', 'Dimensionality Reduction', 'Clustering', 'Data Visualization', 'Data Exploration'] |
2020 Sucked, But We Also Made Some Astounding Scientific Progress | 2020 Sucked, But We Also Made Some Astounding Scientific Progress
In a year of lows, it’s important to remember the highs
Photo sources: enot-poloskun, Tang Ming Tung, Odd Andersen, Mlenny, and dowell via Getty Images
In a recent conversation about the year in science, a colleague made the inevitable joke: “There was this little thing called the coronavirus…” Grim, but impossible not to smirk: Ten, twenty years from now, when we look back on this moment in science, we will remember Covid-19 and the maddening dash to understand it.
But now and in the future, it will be important to remember that the dominant narrative of 2020 was in fact a culmination of science stories we’ve been aware of — and complicit in — for a long time.
Humans have been well aware of their destructive tendencies for millennia — this year, our relentless march into the habitats of wild animals created the conditions that scientists have long warned would allow a zoonotic disease to spread. A disproportionate number of Americans who died from Covid-19 in the U.S. were Black or other people of color, a result of the racist healthcare access and geographical redlining that is part of this country’s dark legacy. Now, as two vaccines roll out across the nation, we face the consequences of mounting mistrust in public health and government: hesitation, in many cases warranted, to receive vaccines that scientists have vetted as safe and effective.
Despite itself, 2020 was a staggering year for scientific achievement. You might have just missed it because of all the bad news.
Covid-19 was just one culmination of what we already knew. There will inevitably be others if we don’t change the way we react and respond to what scientists are continually telling us. But that’s not to say that there aren’t people out there, building on the knowledge that previous research has amassed. Despite itself, 2020 was a staggering year for scientific achievement. You might have just missed it because of all the bad news.
In a year of lows, it’s important to remember the highs: In 2020 in particular, scientific progress was a reminder of what people are capable of if we “trust the process,” to quote NBA star Joel Embiid. Just look at the vaccines from Pfizer and Moderna, which were developed in record-breaking time and vetted by multiple panels of experts as safe and 95% effective: There’s so much we can do if we decide we actually want to do it. My hope for 2021 is that we’ll finally want it badly enough.
The Lows
The botched U.S. public health response to Covid-19. When the global pandemic was declared in March, it was surreal to watch U.S. leaders flip-flop on the utility of shutdowns and mask-wearing despite the strong recommendations of scientists. There was a shocking dearth of Covid-19 tests and the distribution of faulty tests; today, there still aren’t enough. Contact tracing, a tried-and-true public health intervention that helped get the virus under control in countries like Taiwan and Japan, has ramped up unevenly and is still not considered robust enough to stop transmission. As of this writing, there are over 17.2 million cases and over 310,000 deaths in the U.S., and while Vice President Mike Pence has received the vaccine on public television, President Trump still hasn’t shared plans to get vaccinated.
The U.S. left the Paris Climate Agreement. This year tied 2016 as the hottest year on record and had the worst West Coast wildfire season on record. It was the worst possible time for the U.S. to leave the Paris Climate Agreement, which it did, officially, in November, at the behest of Donald Trump’s three year-old promise. This means the U.S. is no longer bound to keep global temperatures below 2 degrees Celsius above pre-industrial levels. The Trump administration is also responsible for rolling back numerous other environmental regulations, like those protecting the country’s largest pristine landscape from drilling, and most recently rules for limiting soot pumped into the air by power plants. President-elect Joe Biden has said he will rejoin the Paris Agreement on his first day in office.
Gene editing human embryos shown to be unsafe. Gene editing holds promise for treating and eliminating genetic disease — perhaps even permanently. But researchers at the Francis Crick Institute in London used the technique in human embryos this year, to disastrous results. To be clear, researchers had no intention of allowing these embryos to develop; they just wanted to investigate the role of a single gene in early development. They learned that making even tiny edits can lead to major unintended edits — changes that could lead to genetic disease or cancer later in life, as Emily Mullin wrote in June. “This is a restraining order for all genome editors to stay the living daylights away from embryo editing,” Fyodor Urnov, PhD, an expert in the field, said at the time.
The pandemic within a pandemic. U.S. data on Covid-19 made it clear that the coronavirus disproportionately impacts Black people and people of color in the U.S. This discrepancy, as Drew Costley wrote in April, is the sad consequence of longstanding systemic racism, which reaches into every realm of daily life: As a result, Black and POC Americans experience a lack of access to healthcare, clean air, and healthy food; the tendency for these groups to have essential jobs that force them onto the frontlines. These inequalities, which cleared the way for the deadly virus to wreak its most severe consequences, were well documented long before the pandemic. Covid-19 and the Black Lives Matters protests, which displayed the full ugliness of racist police brutality, forced us to confront them. Now, the rollout of the Covid-19 vaccine is testing whether any of these difficult truths have been taken to heart: Experts have recommended prioritizing minorities for the vaccine, but whether states will implement this guidance in their distribution plans remains to be seen.
Waving goodbye to the Arecibo Observatory. In November, the National Science Foundation announced plans to demolish Puerto Rico’s Arecibo Observatory, the renowned radio telescope facility that had served the astronomy community for almost 60 years. At the beginning of December, the massive telescope collapsed, bringing the legendary facility to a devastatingly pitiable end. One researcher, speaking to the New York Times, angrily insisted that its destruction wasn’t inevitable: “If they had properly maintained it, it’s likely that wouldn’t have happened.” Funding issues had plagued Arecibo in the last few years, with the NSF transferring its care to the University of Central Florida. Its end marks the end of an era in which public funds were routed toward this kind of public research, wrote Brian Merchant in Future Human. The baton has been passed to private companies like SpaceX and Blue Origin, for better or for worse.
The Highs
Vaccines were developed in record time. The development, testing, and distribution of multiple safe and effective Covid-19 vaccines this year was an astounding achievement. Normally this process would take at least a decade; to accomplish it in fewer than 12 months is essentially miraculous. Pfizer’s and Moderna’s vaccines, both approved in the U.S., show roughly 95% efficacy — experts had hoped for 50%. They’re both also the triumphant conclusion of three decades of research on mRNA-based vaccine technology, which has never been applied in a human vaccine until now. It paves the way for a vastly more nimble and efficient vaccine-making process in the future.
Proof that an HIV cure is long lasting. In March, researchers announced that a man who became the second person to be cured of HIV in 2019 is still free of the virus after 30 months. The man, previously known only as the “London patient,” revealed his identity to the New York Times as Adam Castillejo, wishing to become an “ambassador of hope.” (Timothy Brown, the “Berlin patient,” was cured in 2011.) The technique involved infusing Castillejo with stem cells carrying a genetic mutation that conferred protection against HIV. Doing so essentially replaced his immune system with one that was HIV-resistant. The treatment won’t be available to the majority of people, but it opens the door to genetic therapies involving the same resistance-conferring mutation.
SpaceX sends people to space. In May, a few months before Arecibo’s closure sounded a death knell for publicly funded space exploration, the private company SpaceX successfully carried two NASA astronauts to the International Space Station, marking what many called a “new era” of spaceflight — one where private vehicles primarily bring people to orbit, and NASA’s role diminishes. While there are many reasons to be critical of privatization — especially with super-billionaires like Elon Musk helming space companies — it could rapidly speed up the development of new spacefaring technology, much as private pharmaceutical companies accelerated a Covid-19 vaccine.
CRISPR pioneers win the Nobel prize. In October, Jennifer Doudna, PhD, and Emmanuelle Charpentier, PhD, won the Nobel Prize for Chemistry for their pioneering work on the gene-editing tool CRISPR — the first pair of women ever to win a science Nobel. CRISPR, which allows scientists to make precise changes to the genomes of living organisms, is considered a revolutionary discovery. When perfected, it could allow scientists to eliminate genetic disease, engineer climate change-resistant crops and animals, and diagnose illness. In a conversation with Emily Mullin in October, Doudna discussed one exciting future of CRISPR: using it to regulate, but not permanently edit, human genomes. | https://futurehuman.medium.com/2020-sucked-but-we-also-made-some-astounding-scientific-progress-bc247aa9fcfa | ['Yasmin Tayag'] | 2020-12-21 19:20:10.300000+00:00 | ['Science', 'Future', 'Climate Change', 'Space', 'Health'] |
Inspired Writer’s Christmas Challenge Winners Announced! | Inspired Writer’s Christmas Challenge Winners Announced!
The best of the best of our third writing contest
Image source: Massonstock on Freepik — Caption: Canva
We’re happy to announce that our third writing challenge was a success and that we have chosen our winners!
Thanks to all those who participated and went the extra mile to make their stories shine like diamonds! You gave me and our dear chief editor Kelly Eden a wonderful time as we immersed ourselves in your creative and thought-provoking stories. That alone makes all of you winners in our hearts.
Naturally, with dozens of outstanding entries to choose from, it was tough to settle on our twelve finalists. Selecting the winners was even more challenging. To see what I mean, imagine being asked which of your children you love most. Okay, maybe it wasn’t that hard or dramatic, but you get the idea.
Before we unveil the names of our winners, we would like to thank The Writing Cooperative for making this possible. It was great to see so many of you in The Writing Cooperative’s Facebook group supporting each other with entries.
Enough chit-chat! Here’s what you came here for: These are Inspired Writer’s 2020 Christmas Writing Challenge winners, editors’ picks, and honorable mentions! | https://medium.com/inspired-writer/inspired-writers-christmas-challenge-winners-announced-9058e3fdbd49 | ['Joe Donan'] | 2020-12-22 05:37:17.561000+00:00 | ['Writing', 'Nonfiction', 'Challenge', 'Creativity', 'Writing Tips'] |
Startup Spotlight Q&A: Deepzen | Taylan Kamis is the CEO and co-founder of DeepZen, an artificial intelligence company focused on publishing and producing audiobooks.
He is an experienced tech leader and entrepreneur. He has consulted with private equity and venture capital-backed tech companies as their CFO and was part of the international online media division leadership team at Microsoft where he led strategy, FP&A and business planning for 50 plus markets.
He is curious about exploring the limits of deep tech and excited by the challenge of applying it in real-life scenarios to make life better for all humans. In his limited free time, he enjoys reading and listening to contemporary politics and economics commentary and supports several education and research charities.
DeepZen is a British company made up of technical, language, and business experts who are bringing a new generation of AI-driven voice technology to businesses and individuals.
— In a sentence, what does your company do?
We have developed exclusive AI technology which synthesizes the human voice in order to replicate emotions and intonations. The technology, which is being applied across multiple verticals including book publishing, gaming, podcasting, voiceovers, apps and education, is revolutionising the way audiobooks and other forms of content are produced.
— What makes your company/product different in this market?
Our core differentiators are the level of technical and NLP expertise we bring to bear, which results in a more advanced quality of product and therefore a superior listening experience for customers and a more authentic brand representation for clients. We have emphasis and intonation control overlaid with emotion control and additionally pronunciation control through our editing suite, which allows us to adapt our technology to fit our clients’ specific needs. No other player in the market provides this level of a complete technological solution. Our key focus is on simultaneously delivering quality, speed, simplicity and scalability.
— Describe how and when your company came to be. In other words, what was the problem you found and the ‘aha’ moment?
We could see the developing importance of voice across many different sectors; however, creating emotion in synthetic speech wasn’t technically possible before 2017, when deep neural networks became more advanced. The aha moment was the realisation that a deep-learning-based system could generate complete, human-sounding audio recordings, which would overcome the time and financial constraints of doing so at scale through current processes.
— What milestone are you most proud of so far?
We have built our deep-learning-based, end-to-end technology, which identifies the emotions within text and synthesizes the text based on those emotions. We built our editorial tool, which gives us the ability to control the synthetic voice in any way we want. We also introduced a library of voices, both male and female, with different accents. All these things happened in the last 12 months. Additionally, we have signed four co-publishing deals in the UK, signed a worldwide distribution deal, and digitally produced tens of books, with hundreds more in the pipeline to be delivered this year.
— What are people most excited by?
People are very excited by the product itself and the fact that we have built a system which can read any text as a human would. The financial and time savings have also been very exciting for the publishing community: we can significantly reduce the process of creating an audiobook from an average of four weeks to a few days, at a significant cost reduction versus traditional studio production methods.
— Have you pursued funding and if so, what steps did you take?
We have pursued funding, yes.
— What KPIs are you tracking that you think will lead to revenue generation/growth?
Sales KPIs: # of new contracts signed, both publishing and non-publishing; the $ value of these contracts; the # of engaged, qualified leads in the sales funnel; and the average time to conversion on net sales
Financial KPIs: revenue growth and net profit margin
Customer KPIs: # of customers gained and retained, market share %, net promoter score
— How do you build and develop talent?
As a small business, you need to be very clear on what your key goals are in the upcoming three-year window. You then need to recruit the talent you need to achieve those goals, whilst ensuring you provide an environment where you can continually develop the talent you have. Employees need to understand the company’s aspirations, the part they play in that success story — i.e., how their role helps to achieve those aspirations — and what the development plan is for them within the company. People don’t want to stand still; they want to learn and improve their own skills and knowledge, and you need to show them that your company is the right place for them to achieve that.
— How do you manage growth vs sustainability?
We think of our sustainable growth rate as the “ceiling” for sales growth or the most our sales can grow without new financing and without exhausting our cash flow. It is a balance we are very conscious of maintaining at this stage in our lifecycle. Our focus is on choosing the right target verticals with whom to work in order to manage the returns on investment of time and money in an optimal way.
— What are the biggest challenges for the team?
Developing the technology fast enough to meet the demand and to meet our own exacting standards in terms of what we deliver to our clients.
— What’s been the biggest success for the team?
After eighteen months of development of the platform itself, the first range of books is now ready for release, comprising original and digital narration and produced by DeepZen in collaboration with publishers including Endeavour and Legend Press.
— What advice would you give to other founders?
Increasing areas of our lives are being touched by technology, so to be successful in this space you need to identify a key problem and then believe you can build a solution to that problem in a way which is superior to any other options out there for that customer base. Then it is all in the planning. Your business plan needs to be rock solid and built for sustainable growth. Protect your downsides, quickly identify potential added value and execute.
— Have you been or are you part of a corporate startup program or accelerator? If so, which ones and what have been the benefits?
We are in the Oracle for Startups program. (DeepZen will be one of the startups featured at Oracle OpenWorld Europe taking place Feb. 12 and 13 in London.) We are also in the start-up programs of Google, Amazon, NVIDIA and IBM. The main benefits are the support, advice, engagement and contacts derived from these associations. | https://medium.com/startup-grind/startup-spotlight-q-a-deepzen-1195603775a1 | ['The Startup Grind Team'] | 2020-06-01 22:24:11.594000+00:00 | ['Startup Lessons', 'Entrepreneurship', 'Startup', 'Startup Spotlight', 'Artificial Intelligence'] |
How to Act Around a Person with Depression | Things You Should Not Say to a Depressed Person
There are specific phrases and statements you shouldn’t be saying to someone struggling with depression. This should be common sense, but depending on how educated you are on the topic, you may not be aware of the consequences these words carry. So, please sit down and take notes.
“I don’t believe in depression.”
It doesn’t matter if you believe in it or not — it exists anyway. Depression isn’t something people made up; it’s a medical diagnosis and a legitimate illness. Your personal opinion on the topic doesn’t override the official position of the entire medical community worldwide.
This is one of the worst statements you can come up with. Not everyone who’s depressed is entirely on good terms with their diagnosis, especially if this is a recent development in their medical history. They may be vulnerable, lost, scared, and confused. They are doing their best to cope and accept their situation, which means the absolute worst you could do is poke at them with a sharp stick. An individual who’s diagnosed with depression is experiencing what might be the worst hardship of their life, and they might still be coming to terms with their condition. Devaluing this condition can cause great harm.
It doesn’t matter if you believe in it or not — it exists anyway.
“You just need a hobby.”
Photo by Dương Nhân from Pexels
News flash: they now have a hobby — depression! This “hobby” might stay with them for a while, and it may also go away and come back at any point. Most likely, your friend has a ton of hobbies as it is. Another news flash: none of those hobbies prevented them from getting depressed in the first place. Please don’t mix up the treatment plan, such as a more active social life and coping mechanisms, with the causes and nature of their illness.
Imagine coming over to someone with a leg cast and telling them they need a hobby. A hobby won’t cure their broken limb.
“It’s all in your head.”
Thank you, Captain Obvious. Of course, it’s in their head. Guess what? It doesn’t make it any easier. And the broken leg is a part of your body — so what? Is that a cure? No, the cast is the cure, similar to a depression treatment plan.
As Albus Dumbledore said in “Harry Potter and the Deathly Hallows”:
“Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?” — J.K. Rowling.
“You need to fill up your schedule and stop thinking about it.”
While this is a sweet sentiment, filling up their schedule won’t fix the problem. It’s a way to self-distract and avoid their illness, not a solution. The more they attempt to pretend everything is all right, the worse it gets. There is a difference between planning a social life to help with depression while receiving help and improving mental health, and pretending that everything is okay while making no progress.
Furthermore, increasing the number of work hours can lead to exhaustion, burnout, and suicidal thoughts. South Korea is well known for having a high number of suicides, with its citizens working longer hours than those of most other nations. According to Welcome to the Jungle, Koreans see 25 suicides per 100,000 citizens, and almost 20% of the population admits to having 10 or more drinks every week trying to cope with stress.
Don’t tell your friends to work harder and longer hours. It may harm them significantly.
“I don’t believe in anti-depressants (or therapy, mindfulness).”
Again, you don’t need to believe in anything — it exists, whether you like it or not. Anti-depressants are approved by medical specialists and scientists worldwide. Therapy has been a known strategy not only for those with depression but for anyone struggling with an issue. Mindfulness is a practice actively followed around the world.
There is no need for you to believe or not believe. Your friend is following the plan their medical specialist provided them. If it requires them to take medication — they have to take medication. If there are no meds, but meditation and reflection journals are a part of it — they will have to do that.
Unless you are a licensed specialist with a medical degree and many years of experience, keep your opinions to yourself.
“But you seem so happy all the time!”
Yes, it takes much effort to put on a facade of calmness and professionalism. It doesn’t mean that the depressed person isn’t slowly dying inside. You also won’t see them cry in the break room during lunch, but they still do it. If someone looks composed and well-put-together, it doesn’t change the fact they might be hanging by a thread. Most of us don’t want to affect others around us negatively.
Unless you are a licensed specialist with a medical degree and many years of experience, keep your opinions to yourself.
“Why are you depressed? You have everything!”
It’s not a matter of having everything or having nothing. One’s possessions and accomplishments don’t dictate happiness and a healthy mental state. If someone who’s depressed drives a Tesla Model X and owns three condos, it doesn’t mean they are mentally healthy, happy, and at peace with themselves.
Similarly, some people look at celebrities and expect them to be perpetually happy and well. However, a lot of famous people are admitting they have or have had mental health issues. They tell us about their struggles, which is an excellent example of how no one is safe from depression and other mental disorders. It’s phenomenal that celebrities are using their voices to discuss such important topics.
Having everything doesn’t constitute being fine. People from less fortunate countries probably look at us and think we have everything; therefore, we must always be happy. This is not always the case.
“Just snap out of it.”
I can’t just snap out of it, Karen. There is no “snapping out” of depression in a matter of seconds. Approach this the same way you would approach a patient with a broken leg. Are you going to tell him the same thing? I imagine this dialogue:
“Hey, man, just snap out of it!” “Um… My leg is broken, and I have a cast.” “It doesn’t matter. Snap out of it!”
You understand how ridiculous this is. Mental illness requires proper treatment and often involves multiple doctors’ approval. It takes months or even years. So, please, don’t suggest anything that can be upsetting to your friend who has depression. It will only make them feel worse, if not worthless.
“My friend, Kevin, dealt with his depression in a month, so can you.”
I’m super happy for Kevin, but everyone is different. Every mental issue is unique, as well. There is no one-size-fits-all approach. Even if two people have the same problem, they may have been treated differently. Maybe they have been taking different medications or were taught different coping mechanisms. Something as small as a work schedule can make a difference in one’s mental health, because depression can manifest earlier in the day for some people and later for others. Even a slight variation in diet can spark a change. Therefore, don’t mention any Kevins. You don’t know their precise diagnosis and mental state.
“You need to try this [insert mushroom/vitamin/online class].”
No, I don’t need to try the magical mushroom. If a doctor doesn’t approve a treatment, a patient with a mental illness should not be exploring it. Again, if something worked for one person, it doesn’t mean it will work for another. There are thousands of people claiming online that they have found a cure for depression, cancer — and a ton of other diseases. If the doctor doesn’t approve it, it’s not happening.
“Your anti-depressants are making it worse. Try aromatherapy instead.”
Thank you, Karen, but lavender oil won’t cure the depression. Orange oil won’t help either. While it’s nice to turn on the diffuser and spray the oils, this isn’t a treatment plan. Lavender does indeed calm you down, and sweet orange oil scent may distract you from spiralling down the negative thoughts rabbit hole. However, this is a temporary solution. Essential oils are not FDA-approved as an anti-depressant. | https://medium.com/the-ascent/how-to-act-around-a-person-with-depression-e89149b1e866 | ['Joanna Henderson'] | 2020-05-15 12:01:01.145000+00:00 | ['Mindfulness', 'Mental Illness', 'Psychology', 'Health', 'Mental Health'] |
Designed Intelligence: Enhancing the human experience | By Connor Upton, Group Design Director, Fjord at The Dock, and James O’Neill, Service and Systems Design Lead, Fjord at The Dock.
“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke
We often hear about how AI can be used to automate mundane tasks and free us up to do other things, but we rarely talk about what those other things are. What about an alternative perspective: how might AI help us do the things we love, but better? AI can be used to extend our perceptual and cognitive abilities and change how we interact with the world around us. It could be used to give us new capabilities beyond what we can achieve on our own. Enhancing the human experience is one of the pillars of Designed Intelligence, our approach to designing for, and designing with, AI at Fjord. This allows us to think about experience design in a new way, not just as a means to deliver an existing service, but potentially opening up new ways to interact with customers and generating new business models.
Artistic inspiration
The world of art is often a source of inspiration for technologists and designers. Autonomous vehicles were described in a short story by Asimov in 1953 and voice interfaces by Philip K. Dick in 1968. Technology also informs the work of artists, and AI has now become a new creative medium. When Google released Deep Dream, a computer vision application, in 2015, many creators experimented with its ability to transfer artistic styles onto their images or generate psychedelic patterns. But beyond its ability to mimic style, artists are exploring how AI can be used in more novel ways. Gene Kogan’s work on deep fakes, where he controls the faces of famous politicians, opens up questions around the trustworthiness of media. Memo Akten uses neural nets to create new forms of participatory art. In his piece “Learning to see: Gloomy Sunday”, he enables people to move mundane household objects around under a camera to generate stunning interactive scenes from nature, including seascapes, clouds and fire. These “live reality filters” are a new form of experience, and they feel magical. And like all good art, they help us see our reality through different eyes. So, how are these works of art and AI influencing the world of interaction design?
AI as interface
AI, like all new technologies, needs new modes of interaction. The early web brought drop-down selection and digital forms into the mainstream; mobile brought us the hamburger menu, pull-to-refresh and infinite scrolling. Each new type of interaction is a response to the goals people want to achieve and the constraints or abilities of the technology. While AI may seem new, it is already driving many of our interactions. Everyday experiences, like auto-complete in your search bar and song recommendations on your streaming service, are so common we forget that they are powered by machine learning. Even more recent AI advances like computer vision are becoming important interaction techniques. The first time you pointed your camera at your credit card rather than typing in all those numbers, it felt amazing. This “perceptual accelerator” got the job done faster and more accurately using the technology that was already built into the device.
We’ve been experimenting with how computer vision can be used in this way in different domains. For example, when you buy a box of medication it comes with a wad of paper that describes ingredients, dosage, side-effects and other important information. Most of us never read it. But what if you could just point your camera at the box and let it identify the drug and cross-reference it against your conditions, drug regimen, allergies and other factors? Native digital formats make it easier to highlight the parts that are most relevant to you, ensuring that you don’t miss any important information. We developed a prototype that showed that this approach is feasible. What impact might this have on patient experience and safety?
Using similar technology, AI can redefine how we navigate the physical and digital world. Working with grocery retailer Whole Foods, our Austin studio developed a proof of concept that demonstrates how computer vision, NLP and recommendation systems can be blended into the shopper experience. Enhancing the experience with these technologies allowed shoppers to better navigate the store, find products quicker, and discover new recipes and ingredients in real time.
AI and AR for grocery shopping.
The role of computer vision in user interaction will accelerate as the technology becomes more accessible. Companies like Matterport are already providing a platform that allows people to capture, edit and share 3D models of physical spaces and objects. Originally targeted at professional users, their new mobile app puts the capability into everybody’s hands. This makes it possible for designers to experiment with creating digital twins, simulations, and AR experiences.
Extending human capabilities
As well as accelerating our interactions with services, AI is allowing us to tackle entirely new interaction challenges.
Dr Peter Scott-Morgan is a Cambridge academic who suffers from motor neuron disease, a degenerative condition that attacks nerves in the brain and spinal cord. Peter has set out on a mission to become the world’s first true cyborg, undergoing a series of physical and technological augmentations to help him continue to live and work. The personality retention project is a collaborative research program that uses emerging technologies to support this mission. Working with a team of partners, Fjord helped design a new eye-tracking keyboard that integrates with text-to-voice generation technology. This solution enables Peter to continue to write and communicate even as he gradually loses his natural physical and sensory abilities.
Concept design of new visual keyboard from personality retention project.
We’re also using technology to give people skills they’ve never had before. For example with VELUX, the roof window specialist, we designed an app that lets people see how additional light could transform their living space. Combining computer vision and augmented reality with a seamless interface, the app let people scan a room and then place virtual windows into the space. This gives the average home-owner the ability to explore and imagine 3D space with the vision of an architect. | https://medium.com/design-voices/designed-intelligence-enhancing-the-human-experience-b9c60aeab0f3 | [] | 2020-09-11 10:50:09.377000+00:00 | ['Design', 'Fjord', 'Experience', 'Artificial Intelligence', 'AI'] |
Starbucks’ Founding Story Teaches Us to Leave Our Egos at the Door | With each passing day, the heaviness on Howard’s shoulders kept accumulating. He saw a world of opportunity and couldn’t let his moment slip away. He decided to leave Starbucks and build his own chain of espresso bars.
As I learn about the story of what became an American classic — our love-to-hate, yet beloved hangout spot — I find myself pondering an idea cemented by determination and audacity. It’s a story founded on humbling self-trust; on believing so much in something that it borders on infatuation, a conviction of the heart that breaks away from the comforts of complacency and into the expanse of possibility.
“There’s a fine line between self-doubt and self-confidence, and it’s even possible to feel both emotions simultaneously. Back then, and often enough today, I could be overwhelmed with insecurities, and at the same time have an abundance of self-assurance and faith.” — Howard Schultz
As Howard is due to teach us, there is no ego in greatness, but only humility and nimbleness, coupled with a deep focus on doing what’s best for the business or project at hand, whatever that may be.
When Howard founded his espresso bar in April 1986, he named it ‘Il Giornale,’ after one of the largest newspapers in Italy. He reasoned that Italian giornale, translating to daily, distilled his vision of building a loyal following that would learn to appreciate great coffee. Under the mantra ‘Everything mattered,’ Howard directed the operation like a fine orchestra conductor: opera playing in the background, international newspapers neatly displayed on rods, carefully-selected Italian descriptors spotlighting the espresso creations, and servers accessorized with bow ties.
However, even a well-executed performance could fall short of the audience’s expectations: people started expressing their dissatisfaction with the loud music, the staff felt the bow ties were impractical, and only a few customers could read and understand the Italian terminology.
Regardless of what your vision may be, it needs to be tailored to what your customers want. Educating your audience requires you to gather them in your store in the first place, and you only do that by creating an environment they want to be in. This is a life lesson, as much as it holds true in business: to get someone to listen, you first need to build an audience. And you only do that by listening first.
This is tough for an entrepreneur, or anyone who’s ever had to share an idea. You have a vision in mind that unfolds perfectly into the accumulation of your decisions, almost robotic movements that are difficult to part with. You spend all waking hours obsessing over every single detail, and once deliberated, you’ve already formed the emotional bond that prevents you from letting go. | https://medium.com/swlh/starbucks-original-name-teaches-us-to-leave-our-egos-at-the-door-bf04129d8cf7 | [] | 2020-09-03 22:21:23.792000+00:00 | ['Entrepreneurship', 'Life Lessons', 'Startup', 'Startup Lessons', 'Marketing'] |
Toxic solutioning, form vs function, blurred vision | The UX Collective is honored to have been recognized as the Best Design Publication of 2020 at the Brazil Design Awards. 💙
The results of the State of CSS survey are out, and the State of JS survey is now open.
The UX Collective is an independent design publication that elevates unheard design voices, reaching over 401,100 designers every week. Curated by Fabricio Teixeira and Caio Braga
I disguised as an Instagram UX influencer for 4 months; this is what I learned →
By Teisanu Tudor
Form vs. function: when is it okay to be weird? →
By Jason Brush
The problem of toxic solutioning →
By Nathan Robinson
Do stories make social media more addictive? →
By Anna E. Cook
Learn like a scientist, think like a PM, work like a designer →
By Boon Yew Chew
More top stories:
News & ideas
Look to speak → Google’s tech lets you speak with your eyes.
Yahoo’s icons → Refining Yahoo’s weather icon suite.
Alternative maps → Intriguing maps that reveal alternate histories.
Going deeper → Non-obvious ways to have deeper conversations.
Tools & resources
Platform abuse → A guide for safer product development.
Octopus → Generate a visual sitemap from any site.
Awkward convo → A framework for having design conversations.
Blurred vision → Stark’s new blurred vision simulator tool. | https://uxdesign.cc/toxic-solutioning-form-vs-function-blurred-vision-cffd37d8f56f | ['Fabricio Teixeira'] | 2020-12-12 13:29:29.193000+00:00 | ['Product Design', 'Design', 'Startup', 'Productivity', 'UX'] |
3 Fantastic Tips for Writers from Arthur Conan Doyle | This month I’ve been watching Granada’s “Sherlock Holmes” television series from the 1980s starring Jeremy Brett.
While I’ve enjoyed numerous different screen adaptations of Arthur Conan Doyle’s mystery stories (including Cumberbatch’s modern spin on the detective), I love how this series seems to bring the stories to life exactly as Doyle envisioned them, including his Victorian England. Jeremy Brett is absolutely fantastic as Holmes, portraying him to the letter.
Here’s a clip from one of the episodes (you can find many of the episodes on YouTube).
I also love how this show makes me want to dive back into the Holmes books all over again (I read them long ago when I was a little kid). Stay tuned! I might be typing up a blog post soon filled with writing techniques gleaned from Doyle’s stories.
But, today, I have an interesting find to share with you.
While reading about Doyle and the Holmes stories online, I stumbled across a short article by Doyle titled “How I Write My Books”. The article first appeared in The Strand Magazine in 1924.
It’s always fascinating to get a peek at the writing process of a famous author.
Here are my top three takeaways:
1. Forget Factual Accuracy, Focus on Dramatic Effect
Sometimes when I’m working on a piece of fiction, I worry over whether I’ve made any factual errors. Yes, it’s fiction, but the piece might be based in the real world.
Maybe one of my characters works in a profession that I don’t have any personal experience with. That’s when I start Googling for hours, and I might still worry that I’ve made errors once I finish the piece.
Has that happened to you too? Doyle says not to obsess over getting every little detail correct. He admits that he often made factual errors when writing his Sherlock Holmes stories. He writes,
“In short stories it has always seemed to me that so long as you produce your dramatic effect, accuracy of detail matters little. I have never striven for it and have made some bad mistakes in consequence. What matter If I can hold my readers? I claim that I may make my own conditions, and I do so. I have taken liberties in some of the Sherlock Holmes stories. I have been told, for example, that in ‘The Adventure of Silver Blaze,’ half the characters would have been in jail and the other half warned off the Turf forever. That does not trouble me in the least when the story is admittedly a fantasy.”
Doyle emphasizes that your main concern should be making sure that your story holds the attention of readers. Of course, this doesn’t mean that you shouldn’t do any research at all. If an error makes the story unbelievable, that will ruin the dramatic effect. But, in most cases, readers will overlook tiny errors in fiction pieces if the story is entertaining.
The bottom line is to concentrate on telling a gripping story. And don’t let your worries over factual errors stop you from sharing your story with the world.
Now, Doyle does believe factual accuracy is necessary if you’re writing historical fiction. That leads into takeaway #2…
2. How to Research
Doyle notes that he researched extensively when writing historical fiction,
“It is otherwise where history is brought in. Even in a short story one should be accurate there. In the Brigadier Gerard stories, for example, even the uniforms are correct. Twenty books of Napoleonic soldier records are the foundation of those stories. This accuracy applies far more to a long historical novel. It becomes a mere boy’s book of adventure unless it is a correct picture of the age.”
Essentially, Doyle is saying that if you want your readers to take your piece seriously (an article, essay, biography, memoir, etc.), devote time to research.
Doyle shares his system for researching that will be helpful to any writer who is working on a research heavy project:
“My system before writing such a book as ‘Sir Nigel’ or ‘The Refugees’ was to read everything I could get about the age and to copy out into notebooks all that seemed distinctive. I would then cross-index this material by dividing it under the heads of the various types of character. Thus under Archer I would put all archery lore, and also what oaths an archer might use, where he might have been, what wars, etc., so as to make atmosphere in his talk. Under Monk I would have all about stained glass, illumination of missals, discipline, ritual, and so on. In this way if I had, for example, a conversation between a falconer and an armourer, I could make each draw similes from his own craft.”
You could follow Doyle’s notebook system but use computer apps like Evernote or Scrivener instead.
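If you enjoy tinkering, you could even mimic Doyle’s cross-index with a few lines of code. Here’s a minimal sketch in Python — the character types and notes are invented examples, not pulled from Doyle’s actual notebooks — showing the same idea: every detail filed under the type of character who would know it.

from collections import defaultdict

# Cross-index: each character type maps to the list of research notes filed under it
index = defaultdict(list)

def add_note(character_type, note):
    """File a research note under a character type, as Doyle did under 'Archer' or 'Monk'."""
    index[character_type].append(note)

# Example notes (made up for illustration)
add_note("Archer", "Common oath: 'By my ten finger bones!'")
add_note("Archer", "Longbows were made of yew; arrows fletched with goose feathers.")
add_note("Monk", "Illuminated missals used gold leaf and lapis lazuli pigment.")

# When writing a scene with an archer, pull up everything filed under that head
for note in index["Archer"]:
    print(note)

It’s the same system as Doyle’s notebooks — just faster to search when your falconer and armourer finally sit down for that conversation.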
3. How to Become a Successful Writer
Finally, Doyle shares his secret for success as a writer: a strong work ethic.
“As to my hours of work, when I am keen on a book I am prepared to work all day, with an hour or two of walk or siesta in the afternoon…Twice I have written forty-thousand-word pamphlets in a week, but in each case I was sustained by a burning indignation, which is the best of all driving power.”
I love that Doyle broke his writing sessions up with a long walk. I wrote about how walking can stimulate your creativity in my blog post here.
Even though many of us probably can’t devote entire days to writing, Doyle’s amazing dedication to his craft is inspiring. Just like Doyle, we can try to make writing a priority in our schedules each day (even if some days that means only an hour or even just twenty minutes).
Steven Pressfield writes in The War of Art,
“This is the other secret that real artists know and wannabe writers don’t. When we sit down each day and do our work, power concentrates around us. The Muse takes note of our dedication. She approves. We have earned favor in her sight. When we sit down and work, we become like a magnetized rod that attracts iron filings. Ideas come. Insights accrete.”
Doyle also points out that it’s easier to find the motivation to write when you are working on a project that you love,
“From the time that I no longer had to write for sustenance I have never considered money in my work. When the work is done the money is very welcome, and it is the author who should have it. But I have never accepted a contract because it was well paid, and indeed I have very seldom accepted a contract at all, preferring to wait until I had some idea which stimulated me, and not letting my agent or editor know until I was well advanced with the work. I am sure that this is the best and also the happiest procedure for an author.”
Doyle’s picture of the happy writer is a fantastic goal to work towards. Many of us who are working writers are probably not yet at the point where we can write for pleasure alone. We need to take on projects that will pay the bills and put food on the table.
However, even at this point in our writing life, it’s important to make sure that we don’t abandon the writing that feeds our souls.
Do you have an idea for a story or a blog post or a book that’s tugging at your heart? Maybe you’re not sure if you should write it, if it will be successful, if your audience will enjoy it.
Write it anyway. The world needs to hear your story. And, as Doyle says, it will make you happier too. | https://medium.com/copywriting-secrets/3-helpful-tips-for-writers-from-arthur-conan-doyle-dd8425c41dc8 | ['Nicole Bianchi'] | 2020-10-17 19:02:43.110000+00:00 | ['History', 'Writing', 'Fiction', 'Productivity', 'Creativity'] |
Theories on Theory: Why Some of it is Bull**** | The world of advertising theory and research is a mystical and enticing place. Nuggets of truth pulled from studies can have immense power; they can fuel a creative brief or help you win over a client. But, we can’t forget that theory isn’t fact and must be used cautiously. To avoid the advertising research rabbit-hole, here are a few tips.
1. Check the references
A popular statistic echoed throughout advertising blogs and forums is that people only remember about 10% of what they see. Interesting, right? Well, this “fact” is extremely misleading for two reasons. First, the original study that published this finding said that people only remember 10% of what they READ but 30% of what they SEE. That’s a huge difference. Second, this article was first published in 1946. It’s ancient! The moral of the story here is that before using a compelling bit of research, always consult the original source. As a rule of thumb, anything published before 2005 is probably no longer relevant. However, you might discover that newer papers reference original studies and build off their findings.
2. Beware of the dramatic headline
You may have heard that because of the internet and mobile devices, humans now have an attention span shorter than that of the average goldfish. A 2015 study showed that participants had an average attention span of 8.25 seconds while that of a goldfish is 9 seconds. I have mistakenly used this statistic several times to colorfully demonstrate why messaging needs to be short and sweet. It has also appeared in the New York Times and Time Magazine and has been splashed all over online marketing blogs and publications. But when you dig deeper, there is not much substance to back up this “fact.” The stat, originally published in a report by Microsoft, was actually pulled from an outside source that claims to have pulled it from another source that also claims to have pulled it from another source, and so on. Long story short, there is no definitive source of this claim, nor is there a reputable study to back it up. If you want to learn more about this statistical conspiracy, check out this interesting blog post.
Maybe our attention spans really are shrinking due to our constant exposure to content. However, we probably should not make the comparison between human and goldfish attention spans until we get more data.
3. Be skeptical
Researchers are only human, and the data they dig up are not always perfect. Conducting a study with reliable results is f***ing tough. It requires a lot of time, money and a great deal of patience. When conducting my own research in college on the effects of music therapy on students with developmental disabilities, I saw how frustrating this process can be. There is endless red tape with participants and institutional review boards. My study took over six months to get approved, cutting our window for data collection in half. Consequently, the results of our yearlong study were not as exciting or impactful as we had hoped. Even when a study does not yield optimal results, researchers sometimes rush to publish in order to stay relevant in the academic community, placate clients or defend an investment in the study. So, it should come as no surprise that results are often cherry-picked to maximize their impact. This is not to say that these researchers aren’t brilliant — they almost always are. However, we have to be wary of studies with splashy and exciting claims and be cautious of taking them at face value. A few key questions to ask yourself when evaluating an advertising study:
Are the authors transparent about their methodology? Do they talk about how many subjects they used and how data was collected?
Is the research published in a peer-reviewed journal? Have experts supported the findings and checked for inconsistencies (e.g. Journal of Advertising Research, Psychology & Marketing, Advertising & Society Review)?
Is the author affiliated with any institutions or corporations? Could the language or results be biased because of that affiliation?
A good example of healthy skepticism was discussed by Ashley Ringrose in a recent Mumbrella360 debate. Ringrose pointed to research conducted by Oxford BioChronometrics that had been frequently used by his colleague to argue that digital metrics were ineffective because of the abundance of fraudulent bot clicks. The claim here is that up to 98% of clicks on Google ads are from bots. If taken at face value, this research could have an enormous effect on how we use digital. However, there are several red flags. First, the research was conducted over a seven-day period. Second, only £100 (GBP) was allocated as a budget for each platform (Google, Yahoo, LinkedIn and Facebook). Third, the study was not picked up or supported by any reputable journal or research publication. Ultimately, this tells me that the results are probably unreliable and definitely aren’t representative of the scope of digital ads. By contrast, similar studies show that only 2% of ad clicks are from bots. Moral of the story: don’t ignore a study’s methodology, and take results with a grain of salt.
4. Don’t make theory the law
While it can be perilous to oversimplify, theory should also never rule your life. Theory and research can open many doors; they can shed light on universal truths and similarities in the human experience. However, they shouldn’t close any doors. A recent article showed that machine learning algorithms often risk over- or under-extrapolating from the data they’re given. One algorithm’s answer to a simple SAT-style question was not only wrong but also came with an insane series of equations to support it. This is over-complicating things at its finest.
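If you want to see over-extrapolation for yourself, here’s a minimal sketch in Python (the data is invented purely for illustration): a wiggly, high-degree polynomial fits a handful of noisy points almost perfectly, then produces a wildly wrong prediction just outside the range it was trained on, while a simple straight-line fit stays sensible.

import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a roughly linear trend, measured between x = 0 and x = 9
x = np.arange(10)
y = 2 * x + rng.normal(0, 1, size=10)

# A degree-9 polynomial can thread through every point, "memorizing" the noise
# (NumPy may warn that this fit is poorly conditioned — that's part of the point)
overfit = np.polyfit(x, y, deg=9)
# A straight line captures the underlying trend instead
simple = np.polyfit(x, y, deg=1)

# Ask both models about a point just outside the observed range
x_new = 12
print("flexible model predicts:", np.polyval(overfit, x_new))  # typically absurd
print("simple model predicts:  ", np.polyval(simple, x_new))   # close to 24

The flexible model looks smarter on the data it has already seen and falls apart the moment it has to extrapolate — the statistical cousin of a splashy study that doesn’t generalize.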
Even gravity is still considered a “theory” and can be disproved if there is sufficient evidence. So, advertising theory should be treated as such. Humans are strange and unpredictable creatures. No one theory or algorithm can conclusively explain how all people consume media, shop for products or become loyal to brands. Use advertising theory as a thought starter and creative fuel, not as a tool to shut down divergent thinking.
For a more fun take on how we should be wary of research watch this. | https://medium.com/comms-planning/theories-on-theory-why-some-of-it-is-bull-b8fbfb9c5e1a | ['Ali Goldsmith'] | 2017-07-24 08:47:15.564000+00:00 | ['Advertising', 'Marketing', 'Science', 'Research', 'Creativity'] |
Why Antibodies Don't Tell the Whole Story of Immunity | There’s Good News About Your Immune System and the Coronavirus
When antibody levels go down, T cells have your back
T cell rendering. Image: Design Cells/Getty Images
More than any other facet of Covid-19, the question of immunity has been a stressful source of good news/bad news whiplash.
Good news: Scientists discovered early on that most people who have been infected with SARS-CoV-2 (the official name for the novel coronavirus) create virus-specific antibodies — special proteins produced by immune cells that help fight off the coronavirus and provide immunity against future infections. This finding helped guide the dozens of vaccines currently under development.
Bad news: Those antibodies may hang around for only a couple months, a phenomenon called waning immunity. There have been anecdotal accounts of a few people potentially contracting the virus a second time, and a new preprint paper — which has not yet been peer reviewed — showed that in some recovered patients, antibody levels declined to undetectable levels after three months. These reports have caused some people to speculate that a vaccine will be largely ineffective and that we may never develop herd immunity to the virus.
Before you start to doom spiral, though, let’s turn back to good news: Antibodies aren’t the only tools the immune system has to fight repeat invaders. Several recent studies have shown that in addition to antibodies, people also develop virus-specific T cells. These immune cells are an important component of long-term immunity, and in some cases they’re detectable in the body many years after antibodies dissipate. But because nothing is simple with SARS-CoV-2, the T cells produced in response to the coronavirus are a little unusual.
B cells and T cells work as a team
The immune system has two waves in its defense against an invader: the initial innate response, which looks the same for pretty much any attacker, and the slower adaptive response, which takes about a week to develop but is tailored to the current assailant. The adaptive response also serves as a type of immunological memory, so that if the same virus tries to reinfect a person, their immune system can kick into gear and immediately mount a virus-specific defense. It’s this second phase that scientists are especially interested in right now, because it’s also the one that’s activated by vaccines.
There are two main cell types involved in adaptive immunity that work as a team, B cells and T cells, both of which are white blood cells — technically called lymphocytes — that primarily live in the lymph nodes. B cells make antibodies, those coveted proteins that latch onto the virus and either disarm it or block it from entering the body’s cells. But in order to learn what the virus looks like and what kinds and shapes of antibodies to make, B cells need T cells.
Even if “antibody levels are quite low in coronavirus infection, having T cells around to get them up quickly will give you quite a bit of protection.”
“To make really good quality B cells and get good antibodies, you need T cell help,” says Richard Locksley, MD, a professor of medicine at the University of California, San Francisco. For example, he says, “When you get a vaccine, like a flu vaccine or a tetanus toxin or whatever, the T cell help is what makes you get the good antibodies by the B cells.”
There are several different flavors of T cells that play a role in the immune system, but the two variations that are especially important when it comes to immunity are referred to as memory T cells, because they remember the virus in case it comes back. The first memory T cells, known as “helper” T cells, educate the B cells about what the antibodies should look like to combat the current invader. They also provide the B cells with growth factors and general support to produce as many antibodies as possible.
The other kind of memory T cells are called “killer” T cells. These T cells also learn what the current viral threat looks like, and they go off to find infected cells and destroy them so the virus can’t spread further inside the body. One expert called them “the special forces” of the immune system.
When antibody levels go down, T cells have your back
Several studies have shown that people who were infected with the novel coronavirus create memory T cells that react specifically to SARS-CoV-2. In one paper published in June, 100% of people who had recovered from mild Covid-19 infections developed virus-specific helper T cells, and 70% developed killer T cells. A follow-up study by the same scientists similarly found that, in people who were hospitalized with severe Covid-19 infections, 100% produced helper T cells and 80% produced killer T cells. Two other groups in Germany and Sweden have released preprint papers showing similar results.
“The good news is, despite all the worries at the time that there was no immunity whatsoever, we did find immunity in all the factors that we analyzed,” says Alba Grifoni, PhD, a scientist at the La Jolla Institute who worked on both of the published studies. “One month after the infection, we were able to detect a good immune response both for the B and T cell sides.”
The discovery of virus-specific memory T cells is good news for several reasons. First, those T cells boost the immune response and help guarantee that B cells create high-quality antibodies. Potentially even more important is the fact that the memory T cells can remain in a reservoir in the body for a long time, in some cases for decades. This fact is especially critical in light of the possibility that SARS-CoV-2 antibodies might decline rather quickly. Memory T cells can remind the B cells which antibodies to make, replenishing the pool on command.
“But what we think happens is almost all of those patients will have T cell memory, and the T cells seem to be quite durable.”
“The advantage of having really good T cells is even if the antibody levels have gone down, every time you get exposed again, the T cells will clone up and provide help really quickly, and those antibodies will be expanded again,” Locksley says. Even if “antibody levels are quite low in coronavirus infection, having T cells around to get them up quickly will give you quite a bit of protection.”
Based on what scientists know about SARS and MERS, coronaviruses related to the current infection, there was speculation that SARS-CoV-2 antibodies would provide immunity for several years. Since the new coronavirus has been circulating for only six months, it’s too soon to know how long the antibodies will last, but the latest research suggests that the timeline might be months rather than years, especially in people who had mild or asymptomatic infections.
“There’s some information now emerging that antibodies may decline over time,” says John Wherry, PhD, director of the Institute for Immunology at the University of Pennsylvania. “That concerns some people, suggesting that our antibody memory, our protective immunity, may wane over time. Jury’s still out on that. But what we think happens is almost all of those patients will have T cell memory, and the T cells seem to be quite durable.”
Scientists don’t yet know how long SARS-CoV-2 T cells will last, but one recent study published in the journal Nature found that people who were exposed to the original SARS virus in 2003 still had T cells that responded to the virus in 2020, 17 years later.
Some people’s immune systems might be handicapped by the virus, while others may have a running start
Of course, it can never just be easy with this virus. While people do produce T cells in response to SARS-CoV-2, evidence has emerged that the cells are delayed, suppressed, and out of balance — a phenomenon known as lymphopenia, meaning a deficiency of lymphocytes. Disrupting the immune system isn’t unique to the coronavirus; virtually all viruses do it to some extent to help them thwart the immune response. But in SARS-CoV-2, the lymphopenia is more severe and appears to target the killer T cells.
“When you see this lymphopenia, the loss of lymphocytes, in other infections, often it’s B cells and T cells and the other minor [cell] populations. In SARS-CoV-2, it seems to be preferentially impacting the [killer] T cells,” says Wherry, who released a preprint paper on the finding. “We don’t really know why. That occurs in some other very severe infections — it can occur in Ebola, it may occur in some others — but it’s a little bit unusual. It’s also unusual for the lymphopenia to be lasting a long time. In other infections, it tends to be very transient, only lasting for maybe a few days.”
“I think this is working like lots of viruses work, and it’s going to hit these pathways we know about. And maybe tucked in there will be some new surprise that will guide the way to a better way to treat not only this virus, but other viruses.”
Lymphopenia is more pronounced in people with severe Covid-19 infections, although scientists don’t know if it is the cause or the result of the prolonged, exhausting war the immune system wages against the virus. One possibility is that without the anticipated T cell response, the initial innate immune defense goes into overdrive, resulting in the so-called cytokine storm, where inflammatory immune proteins cause irreparable damage to the body’s tissues. The adaptive immune system does rebound eventually, though, and people who recover from the virus produce adequate numbers of B cells and T cells to fight off the infection.
Another question mark when it comes to T cells is the discovery by Grifoni and others that up to 50% of people have memory T cells that respond to SARS-CoV-2, even if they’ve never had the virus. The leading theory is that exposure to other similar coronaviruses, like the ones that cause the common cold, prompted people to produce T cells that also respond to the new strain. This discovery could potentially explain why some people have very mild or asymptomatic infections — perhaps their immune systems have a running start to mount a response and quickly produce antibodies to the novel coronavirus.
Grifoni says these cases are important to keep in mind for vaccine development, because a higher baseline level of memory T cells could influence how a person’s immune system responds to the vaccine, potentially speeding up their production of antibodies and enhancing their protection against the virus. “If you have a T cell response before the vaccine, and you don’t measure that and just look at the response after the vaccination, you would not know whether [the vaccine actually worked or] you got lucky,” she says. “You need to look at what was the [T cell] response before.”
While there are still a lot of unanswered questions when it comes to the novel coronavirus, the good news is that scientists do know a lot about the immune system, and they’re working at record speed to figure out how it responds to this particular virus.
“I don’t think all of a sudden something is going to work in some bizarre way we’ve never seen before. I think this is working like lots of viruses work, and it’s going to hit these pathways we know about,” Locksley says. “And maybe tucked in there will be some new surprise that will guide the way to a better way to treat not only this virus, but other viruses.” | https://elemental.medium.com/theres-good-news-about-your-immune-system-and-the-coronavirus-7d2c1fc976c1 | ['Dana G Smith'] | 2020-07-22 17:43:13.602000+00:00 | ['Science', 'Body', 'Covid 19', 'Coronavirus', 'Health'] |
Meet Angela Friedman the invincible 102 year old from New York. | Image by Obi Onyeador on Unsplash
Meet Angela Friedman, the invincible 102-year-old from New York. She’s beaten Covid-19 twice, and she lived through the flu of 1918 as a baby.
Joanne Merola, Angela’s daughter, says her mother was first diagnosed with the coronavirus back in March 2020 whilst in hospital for a minor procedure. She spent a week in hospital, followed by a period of self-isolation, and recovered fully.
In late October, Joanne received a call from the nursing home where her mother lives. Her mother had tested positive for the virus yet again and had symptoms.
Fortunately, after a period of isolation and treatment, on 17 November she tested negative.
Maybe living through the flu of 1918 is what has made her invincible! | https://medium.com/vital-world-online/meet-angela-friedman-the-invincible-102-year-old-from-new-york-d4f2ab769ea0 | ['Rejoice Denhere'] | 2020-12-19 09:03:08.553000+00:00 | ['Covid 19', 'Wellness Coaching', 'Wellness', 'Health', 'Coronavirus'] |
Permissionless Professors #5: Nat Eliason | We believe monetization is a hidden secret, one hiding in plain sight. From the psychology of anchoring and the mathematics of power-law pricing tables to the application of demand elasticity and the market positioning of your price, your path to better monetization awaits.
Follow | https://medium.com/monetization-manifesto/permissionless-professors-5-nat-eliason-b4a72f8edd2d | ['Gary Bailey - Monetization Manifesto'] | 2020-12-17 14:56:35.631000+00:00 | ['Growth', 'Marketing', 'Entrepreneurship', 'Startup', 'Monetization'] |
new writing resource | Hi there,
Hope all of your writing projects are going well. I just wanted to let you know about the latest article published at Copywriting Secrets:
A Look Back at August: new projects, the best books I read, writing & creative inspiration, and more!
This is a different kind of guide than usual. I’m sharing a recap of my August: new projects I’m working on, as well as books and articles I enjoyed and thought you’d enjoy too.
Here are some of the things you’ll discover in the new article:
what we can learn from the classics about creating influential work
how difficult the writing process was even for famous writers like J.R.R. Tolkien
a fantastic book that will help you boost your creativity
quotes to keep you going when writing gets tough
I hope you enjoy the article. Have a fantastic week!
Cheers,
Nicole | https://medium.com/copywriting-secrets/new-writing-resource-febbddb7e8dc | ['Nicole Bianchi'] | 2019-09-12 22:18:57.370000+00:00 | ['Inspiration', 'Creativity', 'Productivity', 'Writing'] |
How I Sold More Membership Cards Than Any Other Store Clerk | How I Sold More Membership Cards Than Any Other Store Clerk
The exact scripts to use
Photo by Christiann Koepke on Unsplash
In 2007, when I was a 17-year-old sales assistant in a clothing store, I sold more store cards than entire store ‘clusters’ put together.
A cluster, in this example, was all of the staff in all of the stores for each area — competing against me. So I wasn’t just first; I was so far ahead that the company itself actually asked me to re-train its sales assistants on my technique.
After it became immoral to sell people into retail credit agreements (post-2008), these skills transitioned into my other ventures in sales and marketing. I used them less for selling ‘finance’ or ‘payment plans’ and more for making products look, sound, and feel attractive to buyers.
You see, marketing is sales. Marketing is the effort you make to create, support, or close the sale. Today, I’m going to tell you exactly what it was that helped me do so well, so that, in turn, you can use these tactics in your own career too.
A Second Coronavirus Death Surge Is Coming | In the United States, the rising severity of the current moment was obscured for several weeks by the downward drift of cases, hospitalizations, and deaths resulting from the spring outbreak in northeastern states. Even though deaths have been rising in the hardest-hit states of the Sun Belt surge, falling deaths in the Northeast disguised the trend.
It is true that the proportion of infections in younger people increased in June and July compared with March and April. And young people have a much lower risk of dying than people in their 60s and older. But, at least in Florida, where the best age data are available, early evidence suggests that the virus is already spreading to older people. Additionally, analysis of CDC data by The New York Times has found that younger Black and Latino people have a much higher risk of dying from COVID-19 than white people the same age. According to the racial data compiled by the COVID Tracking Project in concert with the Boston University Center for Antiracist Research, Latinos in Arizona, California, Florida, and Texas are 1.3 to 1.6 times more likely to be infected than their proportion of the population would suggest. It is telling that despite outbreaks all over Texas in recent weeks, the border region has been leading the state in deaths per capita.
Even with cases surging, if hospitalizations were not rising, that might suggest that this outbreak might be less deadly than the spring’s. But hospitalization data maintained by the COVID Tracking Project suggested otherwise as early as June 23. On that date, hospitalizations began to tick up across the South and West, and they have not stopped. It’s possible we’ll match the national peak number of hospitalizations from the spring outbreak over the next week.
Even if better knowledge of the disease and new treatments have improved outcomes by 25 or even 50 percent, so many people are now in the hospital that some of them will almost certainly die.
There was always a logical, simple explanation for why cases and hospitalizations rose through the end of June while deaths did not: It takes a while for people to die of COVID-19 and for those deaths to be reported to authorities.
So why has there been so much confusion about the COVID-19 death toll? The second surge is inconvenient for the Trump administration and the Republican governors who followed its lead, as well as for Mike Pence, the head of the coronavirus task force, who declared victory in a spectacularly incorrect Wall Street Journal op-ed titled, “There Isn’t a Coronavirus ‘Second Wave.’”
“Cases have stabilized over the past two weeks, with the daily average case rate across the U.S. dropping to 20,000 — down from 30,000 in April and 25,000 in May,” Pence wrote. In the month since Pence made this assertion, the seven-day average of cases has tripled. Several individual states have reported more than 10,000 cases in a day, and Florida alone reported 15,000 cases, more than any state had before, on an absolute or per capita basis.
But there’s another reason for some of the confusion about the severity of the outbreak right now. And that’s the perceived speed at which the outbreak initially landed on American shores and started killing people. The lack of testing let the virus run free in February and much of March. As my colleague Robinson Meyer and I put it at the time, “Without testing, there was only one way to know the severity of the outbreak: counting the dead.” And that is how we figured out how bad the outbreak was. Thousands began dying in the greater New York City area and a few other cities around the country in early April. The seven-day average for new cases peaked on April 10, followed by the peak of the seven-day average for daily deaths just 11 days later.
Everything seemed to happen at once: lots of cases, lots of hospitalizations, lots of deaths. But some of this is also the compression of memory. Most of us remember the deaths in March beginning as quickly as the cases, especially given the testing debacle. That’s not exactly what happened, however. The nation did, in fact, see cases rise weeks before the death toll shot up. There was a time in March when we had detected more than 100 cases for each death we recorded. This is a crucial metric because it gets at the perceived gap between cases and deaths. And it tells us that we did see a lag between rising cases and deaths back in the spring.
During the slow-decline phase in May, the case-to-death ratio fell to about 20. Then the ratio began to rise in early June. On July 6, it hit 100 again, just like in the spring. As in the spring, this was not a good sign; it was the leading indicator that a new round of outbreaks was taking hold in the country. And, indeed, a week ago, this ratio began to fall as deaths ramped up.
The U.S. came most of the way down the curve from the dark days of April, and now we’re watching the surge happen again. The testing delays, the emergency-room-nurse stories, the refrigerated morgue trucks — the first time as a tragedy, the second time as an even greater tragedy. One must ask, without really wanting to know the answer, How bad could this round get? | https://medium.com/the-atlantic/a-second-coronavirus-death-surge-is-coming-10dba630f635 | ['The Atlantic'] | 2020-07-16 14:14:01.021000+00:00 | ['Health', 'Science', 'Coronavirus'] |
5 Tips on How to Use Your Unfinished Blog Posts to Help You Write New Ones | 5 Tips on How to Use Your Unfinished Blog Posts to Help You Write New Ones
What to do with all the blog posts that you started and never finished
Photo by Brooke Cagle on Unsplash
My approach to writing is a little messier and more chaotic than most writers’.
I believe in progress over perfection, forward momentum, bouncing between projects, and constantly growing and improving. But this approach lends itself to several unfinished posts piling up in my drafts, and that’s fine; I have space for them, and they don’t hinder my current or future writing.
In fact, it helps it.
If you’re like me, you probably have a stack of unfinished posts lying around too. This post is for you; it’s here to help you answer the question of what to do with all this raw potential. Because that’s what it really is.
Keep reading to find out what I mean.
№01 Combine and turn into a new post
One great way to make use of all the sort-of-started-but-didn’t-finish blog posts you have piling up is to take a couple, or take several, and find ways to combine them.
Sometimes this will work easily, and sometimes it’ll take some effort on your part.
I’ve found several older unfinished posts that perfectly complement something I’m working on but needed a little something to feel finished, to feel polished. These older pieces are great sources, either in their entirety or through key passages, for filling in the dry and difficult parts of current work.
More often than not, we writers write on specific themes and ideas. We come back to these same topics over and over, exploring for greater understanding and also to teach more deeply on the subject. This means that all the unfinished posts were likely some kind of attempt to do the same or similar. And if so, then they likely have some good lines, solid passages, or some perfectly fitting contribution to your latest attempt at the same or similar subject.
This is a win-win for writers, and proof that you really shouldn’t delete things in haste. Let your work sit, even if it feels like it falls short, or you just couldn’t find a way to finish the piece. You never know how useful those words might be to some future piece.
№02 Find their common theme, combine and turn them into one new post
As was mentioned, we writers tend to explore common themes in our work. Sometimes our unfinished pieces will complement each other well enough to come together as one new piece entirely.
When you feel a bit dry on inspiration, or some piece you’re working on isn’t quite panning out for whatever reason, take time to hit pause and explore your collection of unfinished work.
You may find something to help finish or carry your current work along, as we discussed in the previous point. Or, and this can be a fun diversion from the frustration of your current work, you may find some unfinished work that can work together.
Here’s what I do:
Copy the first piece, create a new document, and paste the work there.
Copy the second, third, and however many others, and paste them over as well.
Be sure to leave plenty of white space between the copy-pastes; you need wiggle room to experiment and find connections.
Play around with what’s there: copy lines and passages, paste them into the empty spaces, see what can run together, and take a measure of the shape of the whole.
Work at it until you have something that begins to look more like one piece instead of a sewn-together blog post version of Frankenstein’s monster.
Rinse and repeat.
The creative lifestyle is a constant push to stretch your creative barriers, to find new experiments and to not be afraid to try new and different things. And if you followed my steps above, even if the new piece crashes and burns entirely, it’s ultimately just a copy of earlier work, and the earlier work is still safe. No harm, no foul.
№03 Find their theme, turn them into a series
Follow the same steps from the previous point, except this time, look for the theme that connects them and look for ways to make a series.
It’s a simple trick, but sometimes understanding how your work connects can become its own inspiration and driving power in helping you complete individual posts. Seeing these individual pieces as part of a series helps you see the questions that are driving the collection, and understanding these questions helps you see what is left unsaid, unanswered. Finishing these pieces then becomes a simple task of saying what’s left unsaid and answering those questions.
Plus, once you have the series finished, you have several pieces on one common theme that tie together instead of the one you originally sat down to write. It’s a solid win for you and your efforts and a great way to stretch yourself and grow into a more capable writer.
What’s more, your confidence grows too, and I have yet to meet a writer who doesn’t, from time to time, need a little boost in their confidence.
Photo by Austin Distel on Unsplash
№04 Learn from them
This might seem like the easiest of tasks on this list, but it’s deceptive in its complexity and challenge.
When you come up against your unfinished work, have the courage to ask why you didn’t finish it. And then, with each answer you come up with, keep asking more questions of yourself. There are few skills more powerful and better able to transform you and your work than honest self-reflection.
Explore why you didn’t finish a piece, look for patterns, and gauge where your interest waned. See if you can better understand these things and then take note if you’ve felt or experienced them recently or in current works.
Learn your lessons, apply them, and grow into the writer your work needs and deserves.
№05 Finish them
Sometimes, all your work really needs is time. And some pieces need more time than others.
If you’ve read enough of my work by now, you know I’m a big fan of letting your written work sit for a while before editing or continuing it. I usually do this from one day to the next, but I’ve had some unfinished pieces that took well over a month to finally find the missing parts they needed to be carried to completion.
Finishing your unfinished work can be some of the most challenging and difficult work you do. But, it’s also the most rewarding and satisfying. There’s a simple joy in finishing your work. And what’s more, there is no better way to grow as a writer than to finish your writings.
Next Steps
Stop deleting your work, play the long game, invest in the unfinished work you’ve started but couldn’t quite bring to a finish for whatever reason. Every so often, take a look at this collection, find your next piece there, find some inspiration from something you once had great momentum and passion behind, or find several pieces and find a way to combine them.
The point is this: Don’t be so quick to delete. Sometimes your work just needs to sit a while longer, and you need more thinking time to grow and develop these pieces and explore the powerful potential they already have.
My challenge to you is to explore your unfinished work, see what’s there, what you can work with and start breathing new life into it. | https://medium.com/swlh/5-tips-on-how-to-use-your-unfinished-blog-posts-to-help-you-write-new-ones-5fa50833de18 | ['Gregory D. Welch'] | 2020-02-28 05:35:03.477000+00:00 | ['Self Improvement', 'Productivity', 'Writing', 'Inspiration', 'Creativity'] |
The Studies of Heredity Started in Mental Asylums | The Studies of Heredity Started in Mental Asylums
The most important research to date
Patients from a mental asylum out for air in Victorian England. (Source: Painting by K H Merz 1843)
The science of heredity is a broad field that touches many branches of human behavior and human evolution. Historians state that the study of heredity began in the early centuries AD, but due to the lack of advanced knowledge and technology, the field evolved slowly, producing only vague results built on assumptions meant to fill the many missing gaps.
Today we can read a person’s DNA to determine their heredity, but back in the day, before DNA was known, this science was based very much on registries and statistics. In other words, researchers would look through different archives for records of a person’s ancestry and compile the data into statistics that best represented that person’s heritage.
As I always say, the best way to understand a certain event or field is to look at where things started, or in this case where heredity science started to actually become interesting and more promising.
Mental asylums could be the answer
Theodore M. Porter, a historian and researcher in this field, has stated that the deteriorating mental health of King George III led to the study of heredity being carried out in mental asylums toward the end of the 18th century. This is because the registry system in mental asylums was the best of its time: every patient was registered along with every piece of information available. It meant that mental asylums, prisons, and even correction schools held the largest archives, which gave contemporary researchers a better chance at actually making some connections.
King George III of England (Source: Wikimedia Commons)
Due to the development of psychiatry, many criminals were actually ending up in mental asylums rather than prisons, as judges would attribute their crimes to psychopathic behavior that needed to be treated, not confined. This meant that the registries grew exponentially during the 18th century in Western Europe. From this, researchers also identified a reason for the increased number of patients suffering from mental illnesses: the increased complexity of ordinary life, which also came with more stress.
However, at the time, many researchers believed that mental illnesses were actually hereditary, because patterns appeared when they traced the ancestry of patients. Some patients were found to have had relatives in previous generations who had also ended up in a mental asylum. Therefore, the directors of asylums across Western Europe were told to modify their registries to keep track of each patient’s siblings, in case they became patients in the future.
This led to some drastic interventions within the families who seemed to pass mental illnesses from generation to generation. These families were discouraged from reproducing in order to stop the spread of mental illness. A drastic solution, but even they knew that their lack of scientific knowledge, combined with the lack of technological advancement, did not allow them to come up with better ways of stopping that spread.
The evolution of hereditary studies
Another interesting fact from Porter’s latest book, Genetics in the Madhouse, is that at the beginning of the 19th century, scientists in this field had a burning desire to standardize the data gathered by mental asylums, mainly to give themselves a constant flow of information and to make the data easier to analyze. This is where the information used in Ludvig Dahl’s Pedigrees of Mental Illnesses came from.
Frederik Holst, M.D., Beretning, Betankning og Indstilling fra en til at undersøge de Sindsvages Kaar i Norge (1828), table of causes by disease form, from a census. (Source: Theodore M. Porter UCLA Department of History)
Such studies were seen as pillars of statistics and were recognized by scholars such as Francis Galton, who launched the eugenics paradigm in 1900. Another great scientist in the field, Gregor Mendel, experimented with the transmission of hereditary characteristics in different plants. This work showed real potential, and many thought the approach could be applied to people with mental illnesses; however, the theory was rejected by many contemporary scientists on the basis that it was too simplistic.
What followed in the early 20th century, before molecular genetics, was Nazi scientists who delved deep into the eugenics field and produced a large body of research (however unethical it was). Since the advent of molecular genetics, most of the research done by these Nazi scientists has been disregarded.
What Survival Experts Say About Quarantine Baking | It’s not really about being able to build a fire or carry a heavy pack over long distances, it’s a general feeling of competence that imbues you with a sense of confidence. And that realization of “I can do this” can carry over into other parts of life. Having accomplished one hard task, you believe you can problem-solve and persevere through other difficult situations as well.
Now faced with the challenge of a global pandemic, it seems that many of us are brushing up on our practical skills, although they’re more Little House on the Prairie than My Side of the Mountain. And while knitting a scarf may not directly relate to surviving a deadly virus, it does provide a feeling of competence and control. Maybe we can’t ensure that our parents will be safe or we won’t lose our jobs, but we can turn flour, water, and yeast into bread and grow our own sustenance from vegetable trimmings and seedlings. And in gaining control over that facet of our lives, we gain a little control over the rest of our lives, too.
“A lot of this pandemic and the things surrounding it are out of our control,” Robinson says. “It’s not our personal choice to stay at home, it’s not our personal choice not to go into the office. Homemaking tasks give you that bit of control and a sense of accomplishment.”
A concrete feeling of accomplishment is something that many of us were lacking even before the novel coronavirus emerged, says John Hudson, chief survival instructor for the British military. As our lives have become more virtual, outdoor survival courses and shows like Naked and Afraid have risen in popularity over the last decade. Hudson, who’s applied his survival knowledge to the current situation in the e-book How to Survive a Pandemic, thinks a big reason for this appeal is the instant gratification that living off the land offers people.
When the only evidence that the sacrifices we’re making through social distancing are working is that nothing happens, it makes sense that so many of us would yearn for a more tangible outcome from our efforts.
“We spend an enormous amount of our working hours indoors, looking at a screen, answering emails … and we never really see an immediate or even any tangible result of our efforts,” he says. “Doing survival instruction, there’s a definite reward in my world if you complete a task. It’s almost an immediate reward because you know if you’ve succeeded or failed [in building shelter or lighting a fire] straight away.”
When so much in our lives is uncertain, and the only evidence that the sacrifices we’re making through social distancing are working is that nothing happens, it makes sense that so many of us would yearn for a more tangible outcome from our efforts.
“It may be out of boredom, but when we’re so anxious and you can cook, that’s an effort-based reward,” says Kelly Lambert, PhD, a professor of behavioral neuroscience at the University of Richmond. “You’re chopping, you’re dicing, you’re stirring, and then you have this wonderful reward at the end of it, something that’s tangible, something you can see, something you can share with your family.” | https://elemental.medium.com/what-survival-experts-say-about-quarantine-baking-9b654a2d5fdf | ['Dana G Smith'] | 2020-04-22 16:00:56.620000+00:00 | ['Brain', 'Food', 'Baking', 'Psychology', 'Coronavirus'] |
Time Management: The Single Most Important Component to My Productivity | Time Management: The Single Most Important Component to My Productivity
The combined strategy of time management that applies to most artistic creators, especially during the COVID-19 pandemic
Photo by Kari Shea on Unsplash
Stop and retrace your day today, reviewing what you’ve done thus far, or, if you’re reading this in the morning, stop and think about what you did yesterday. What was the structure like? Did you find time to write? Did you set aside a few hours to churn out a few dozen or hundred words to your liking? Were you satisfied with your productive output? What do you think you could’ve done better?
Honestly, I tend to stay away from writing self-help articles that try to tell people how to work. Everyone’s workflow is different: people thrive on different things and are distracted by different things, and while some people can write in a crowded bar, others need a quiet library or home office. The Covid-19 outbreak has changed the way we do everything in our lives for pretty much everybody, including us writers. I’ve noticed that quite a few people have reported that they’re finding it more difficult to write, even if they have extra time to do so and fewer responsibilities. If this sounds like you, I think the answer might lie in how you use your interrupted versus your uninterrupted productive time, and how you balance and use the two in conjunction to create better pieces.
It doesn’t matter what your business or work is. If you’re independent and in charge of yourself, whether it’s your dream business startup, your writing career, or perhaps even your music gig, time management is an essential skill to have.
Over the years, one thing that’s been a constant for my productivity has been the effect of how I manage my time on the total output of work. This same principle applied when I was a musician and it still applies as a writer, though I wouldn’t put too much stock in the first part of that section because I was a terrible musician. Let’s just say writing is more my calling than music was.
Ultimately, what the different spaces we may write in, and the different times we may seek out those spaces, have to do with our productivity is the difference between distracted and undistracted time. We don’t analyze material the same way when we have a small amount of allotted time as we do when we have an extended period where our brains can focus.
The Divided Day
Many of my days are spent dividing the day up and putting the focus in when I can, allowing myself an hour here, two hours there, and spacing it out so I can get other things done. We all have lives that need tending, and sometimes there’s not much we can do about that. When my month is primarily composed of divided days, I tend to get a lot less finished, but I get a whole lot more started. I think this is pretty true of everyone. We start a piece, our focus is broken, we go do something, and we come back only to find it extremely difficult to get back into the frame of mind we were in when we left.
The divided day has its perks, make no mistake. This is actually a good thing and will come in handy later. For many people, starting a piece is the hard part and continuing them even harder. But we can utilize the divided day to touch up pieces that are old with no intention of finalizing them.
When I sit down to finish a piece and only give myself an hour to do it, something just doesn’t feel right, no matter how close to done it was when I started. But I can use the divided day to slowly work on pieces over time, etching them out like a sculpture slowly sculpts his finest works of art.
The Whole Day
As harsh as this may sound, I think any writer’s work will suffer if they cannot take whole days to themselves in order to write. No distractions, phone turned off, no friends, no Facebook, no social media, nothing. The whole day is necessary for the pieces that can be started and brought to completion in a single day and to really give those pieces that have been sitting on my shelf a few proof-reads before I feel comfortable enough to hit publish.
Whole days are what I designate as publishing days. When I submit or publish work in a rush, I feel uncertain and insecure about how well I did. I don’t feel confident. I tend to think this carries over into our next pieces and the ones after, and it can build up; pretty soon we’re unsure, scattered, confused about our purpose, and feeling like we’re working in a total state of distraction and chaos. This isn’t good for us as writers. Some writers prefer to have a designated writing area so they can get into the habit of treating writing like their actual work for another company (because it is) and eliminate all distractions during that period of time.
If we really want to be serious, whole days are absolutely essential to our productivity and our sense of purpose as creators, whether we’re writers or musicians or artists is less relevant than how we manage our time to maximize our productivity and focus. Not all focus is equal.
Ten minutes to glance over a piece and improve it here and there, touch up sentences, and change out words for more interesting ones is not the same as having three hours to feel comfortable in taking our time, focusing solely on the work in front of us, and publishing the finalized work once we’re happy with it.
The Combined Strategy
I actually use both of these in conjunction for my process. I give myself dates for each piece to be published by, and I build the pieces slowly in several apps (Microsoft To-Do and OneNote are essential for my business), where I can organize each piece by title, main topic, section, and then the content of each part, contributing to a lot of pieces slowly over time. This makes the respective parts better overall.
A lot of people I know tend to try to use one or the other. There are the peeps who always want to have a good chunk of time ahead of them, at least a couple of hours, before they even get started. I’m absolutely frickin’ convinced that this is why so many people rarely start anything, if ever. So, from my heart to yours, for God’s sake, start your work even if you don’t have the time to finish it. Get a paragraph out there. Writing doesn’t have to be linear; get anything out there, throw ideas at the wall, and see what sticks. Sometimes I start with what ends up becoming the 8th paragraph of a piece. You can always reassemble later, but if you struggle with getting words on paper, the thing you need to do is embrace the gaps in time, whatever they may be, to pour out your thoughts no matter how good or bad, and roll with it.
They don’t have to be masterpieces; what’s important in the downtime is tossing whatever scraps you can out there to build with later. The more notes you have, the better off you’ll be when it comes time to assemble pieces of work. If you’re a musician, take that time to think up a new melody or rhythm or what have you, and make sure you get it down by practicing it a few times. Use that time wisely.
Then, when you finally get a day ahead of you to put all of your efforts towards the completion of work, you’ll have tons of material lying around. Assembling it all into the perfect whole is the only task ahead of you. You don’t have to focus on conjuring up so much new material and writing everything from scratch; you can simply assemble the ideas you’ve already thought up in the interim moments. Completing a piece is much easier when you have a predetermined guide to follow, one that you’ve made on your own.
Takeaway
I’m actually not one of those people who believes we need to maximize every moment of every day. I’m a firm believer that rest and daydreaming are very important elements of the human psyche and imagination (I did a story on the importance of daydreaming for mental health), and they’re crucial to my productivity and overall well-being as a person. But I think that there are still going to be moments in between where we can sneak in some ideas when our brains are up for the task, and, of course, time we’ll need to set aside if we’re going to take our work seriously: uninterrupted time where we can dedicate 100% of our focus to the task at hand. As you can see, I’m a proponent of the holistic approach.
Don’t rely too much on one strategy or the other. A lot of people try to pinch whatever time they can in their already-hectic lives but aren’t willing to trade in the things they love for the whole-day approach. I’ve been this person, and, honestly, it took eliminating social media from my work atmosphere before my work really began to take off in terms of productive output (and reception).
Other people rely way too much on sitting down in front of a blank, empty screen without ideas lying around to assemble, just rolling with the assumption that they’ll be able to dream up all the right words in that very moment. But that’s not really how our concentration works. We can’t force our brains to think of interesting things in the times when they aren’t cooperating. This is why a balanced strategy is best for my productivity, and hopefully, by employing it, it might help yours as well.
Writers: How to Accomplish More by Writing Less | The struggle is real
So, why did I think this was a good idea to struggle twice as much, or ten times more?
Our conscious mind can only process one project at a time, nothing more. There’s no such thing as multitasking — only dilution of our efforts. Writing multiple books at once might feel productive and faster, but for me the net effect was horribly slow. Three years slow.
The process of writing brings a flood of creativity along with it. Whether you write fiction or non-fiction, the world looks different while you’re in writing mode. You pay attention to little details more. You look at language differently, and everything around you is fair game for a story.
The problem: the universe will try and conspire against you with everything she has.
Most people who start books don’t finish them. These are called drawer novels, because they sit in a proverbial drawer (digital or physical) somewhere. Of the people who do finish novels, most of those first books will be terrible (and should stay in the drawer). This is OK. The bad, first book is part of the process. We’ve got to get it out of our bodies, like the flu.
So, it makes sense when a new idea comes to you, and you feel it’s brilliant, that you’d want to act on it immediately. I mean, if we’ve got to write a terrible first book, why not write the bad book and the next book simultaneously?
Don’t. It’s a terrible idea. This was my thinking and it took me ten times longer to finish my first book.
I wish I’d listened to Stephen King earlier. King takes the extreme stance that writers’ notebooks are the worst idea ever — that the best ideas will rise to the top and stick in your mind. The ideas you can’t shake are the ones you should write. The ideas you forget were forgettable.
I don’t fully agree with King, but I understand his intention. When I started writing I thought I had all these amazing ideas. I’d write them in giant notebooks and spent more time generating new ideas than I did on the writing itself. I’d get ideas for novel after novel and I’d start each book.
When I got to the point where I had to pick a story I couldn’t choose just one. They all sounded so good. I figured I could peck-away at a couple novels simultaneously. How hard could it be?
Two books became three.
Three became four.
Then, I added unfinished short stories to the never-ending conveyor belt of non-productivity. The ideas would come in. I’d stop what I was writing and pick up the new project. I thought I was a genius and wondered why other authors didn’t write this way… I’d get so many more books done at once!
The process was brutal and unfulfilling. I’d lose track of storylines and put details in one book that belonged elsewhere. I thought I was working smart. I was a lunatic. Being new to fiction writing, I found the process so exciting that I didn’t recognize the madness. It’s embarrassing to type this now.
I hope you won’t follow in my footsteps.
I wrote seven manuscripts in a year and none were salvageable, due to this writing process. I had to send myself to personal writing rehab and completely re-think my process. If I had continued with my treadmill writing I never would’ve finished a single project. | https://augustbirch.medium.com/writers-how-to-accomplish-more-by-writing-less-30afdf86024a | ['August Birch'] | 2019-04-15 17:55:50.087000+00:00 | ['Writing Tips', 'Productivity', 'Creativity', 'Fiction', 'Writing'] |
A Supercomputer's Covid-19 Analysis Yields a New Way to Understand the Virus | A Supercomputer Analyzed Covid-19 — and an Interesting New Theory Has Emerged
A closer look at the Bradykinin hypothesis
Photo: zhangshuang/Getty Images
Earlier this summer, the Summit supercomputer at Oak Ridge National Lab in Tennessee set about crunching data on more than 40,000 genes from 17,000 genetic samples in an effort to better understand Covid-19. Summit is the second-fastest computer in the world, but the process — which involved analyzing 2.5 billion genetic combinations — still took more than a week.
When Summit was done, researchers analyzed the results. It was, in the words of Dr. Daniel Jacobson, lead researcher and chief scientist for computational systems biology at Oak Ridge, a “eureka moment.” The computer had revealed a new theory about how Covid-19 impacts the body: the bradykinin hypothesis. The hypothesis provides a model that explains many aspects of Covid-19, including some of its most bizarre symptoms. It also suggests 10-plus potential treatments, many of which are already FDA approved. Jacobson’s group published their results in a paper in the journal eLife in early July.
According to the team’s findings, a Covid-19 infection generally begins when the virus enters the body through ACE2 receptors in the nose. (The receptors, which the virus is known to target, are abundant there.) The virus then proceeds through the body, entering cells in other places where ACE2 is also present: the intestines, kidneys, and heart. This likely accounts for at least some of the disease’s cardiac and GI symptoms.
(Sign up for Your Coronavirus Update, a biweekly newsletter with the latest news, expert advice, and analysis to keep you safe)
But once Covid-19 has established itself in the body, things start to get really interesting. According to Jacobson’s group, the data Summit analyzed shows that Covid-19 isn’t content to simply infect cells that already express lots of ACE2 receptors. Instead, it actively hijacks the body’s own systems, tricking it into upregulating ACE2 receptors in places where they’re usually expressed at low or medium levels, including the lungs.
In this sense, Covid-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house. Once inside, though, they don’t just take your stuff — they also throw open all your doors and windows so their accomplices can rush in and help pillage more efficiently.
The renin–angiotensin system (RAS) controls many aspects of the circulatory system, including the body’s levels of a chemical called bradykinin, which normally helps to regulate blood pressure. According to the team’s analysis, when the virus tweaks the RAS, it causes the body’s mechanisms for regulating bradykinin to go haywire. Bradykinin receptors are resensitized, and the body also stops effectively breaking down bradykinin. (ACE normally degrades bradykinin, but when the virus downregulates it, it can’t do this as effectively.)
The end result, the researchers say, is to release a bradykinin storm — a massive, runaway buildup of bradykinin in the body. According to the bradykinin hypothesis, it’s this storm that is ultimately responsible for many of Covid-19’s deadly effects. Jacobson’s team says in their paper that “the pathology of Covid-19 is likely the result of Bradykinin Storms rather than cytokine storms,” which had been previously identified in Covid-19 patients, but that “the two may be intricately linked.” Other papers had previously identified bradykinin storms as a possible cause of Covid-19’s pathologies.
Covid-19 is like a burglar who slips in your unlocked second-floor window and starts to ransack your house.
As bradykinin builds up in the body, it dramatically increases vascular permeability. In short, it makes your blood vessels leaky. This aligns with recent clinical data, which increasingly views Covid-19 primarily as a vascular disease, rather than a respiratory one. But Covid-19 still has a massive effect on the lungs. As blood vessels start to leak due to a bradykinin storm, the researchers say, the lungs can fill with fluid. Immune cells also leak out into the lungs, Jacobson’s team found, causing inflammation.
And Covid-19 has another especially insidious trick. Through another pathway, the team’s data shows, it increases production of hyaluronic acid (HLA) in the lungs. HLA is often used in soaps and lotions for its ability to absorb more than 1,000 times its weight in fluid. When it combines with fluid leaking into the lungs, the results are disastrous: It forms a hydrogel, which can fill the lungs in some patients. According to Jacobson, once this happens, “it’s like trying to breathe through Jell-O.”
This may explain why ventilators have proven less effective in treating advanced Covid-19 than doctors originally expected, based on experiences with other viruses. “It reaches a point where regardless of how much oxygen you pump in, it doesn’t matter, because the alveoli in the lungs are filled with this hydrogel,” Jacobson says. “The lungs become like a water balloon.” Patients can suffocate even while receiving full breathing support.
The bradykinin hypothesis also extends to many of Covid-19’s effects on the heart. About one in five hospitalized Covid-19 patients have damage to their hearts, even if they never had cardiac issues before. Some of this is likely due to the virus infecting the heart directly through its ACE2 receptors. But the RAS also controls aspects of cardiac contractions and blood pressure. According to the researchers, bradykinin storms could create arrhythmias and low blood pressure, which are often seen in Covid-19 patients.
The bradykinin hypothesis also accounts for Covid-19’s neurological effects, which are some of the most surprising and concerning elements of the disease. These symptoms (which include dizziness, seizures, delirium, and stroke) are present in as many as half of hospitalized Covid-19 patients. According to Jacobson and his team, MRI studies in France revealed that many Covid-19 patients have evidence of leaky blood vessels in their brains.
Bradykinin — especially at high doses — can also lead to a breakdown of the blood-brain barrier. Under normal circumstances, this barrier acts as a filter between your brain and the rest of your circulatory system. It lets in the nutrients and small molecules that the brain needs to function, while keeping out toxins and pathogens and keeping the brain’s internal environment tightly regulated.
If bradykinin storms cause the blood-brain barrier to break down, this could allow harmful cells and compounds into the brain, leading to inflammation, potential brain damage, and many of the neurological symptoms Covid-19 patients experience. Jacobson told me, “It is a reasonable hypothesis that many of the neurological symptoms in Covid-19 could be due to an excess of bradykinin. It has been reported that bradykinin would indeed be likely to increase the permeability of the blood-brain barrier. In addition, similar neurological symptoms have been observed in other diseases that result from an excess of bradykinin.”
Increased bradykinin levels could also account for other common Covid-19 symptoms. ACE inhibitors — a class of drugs used to treat high blood pressure — have a similar effect on the RAS system as Covid-19, increasing bradykinin levels. In fact, Jacobson and his team note in their paper that “the virus… acts pharmacologically as an ACE inhibitor” — almost directly mirroring the actions of these drugs.
By acting like a natural ACE inhibitor, Covid-19 may be causing the same effects that hypertensive patients sometimes get when they take blood pressure–lowering drugs. ACE inhibitors are known to cause a dry cough and fatigue, two textbook symptoms of Covid-19. And they can potentially increase blood potassium levels, which has also been observed in Covid-19 patients. The similarities between ACE inhibitor side effects and Covid-19 symptoms strengthen the bradykinin hypothesis, the researchers say.
ACE inhibitors are also known to cause a loss of taste and smell. Jacobson stresses, though, that this symptom is more likely due to the virus “affecting the cells surrounding olfactory nerve cells” than the direct effects of bradykinin.
Though still an emerging theory, the bradykinin hypothesis explains several other of Covid-19’s seemingly bizarre symptoms. Jacobson and his team speculate that leaky vasculature caused by bradykinin storms could be responsible for “Covid toes,” a condition involving swollen, bruised toes that some Covid-19 patients experience. Bradykinin can also mess with the thyroid gland, which could produce the thyroid symptoms recently observed in some patients.
The bradykinin hypothesis could also explain some of the broader demographic patterns of the disease’s spread. The researchers note that some aspects of the RAS system are sex-linked, with proteins for several receptors (such as one called TMSB4X) located on the X chromosome. This means that “women… would have twice the levels of this protein than men,” a result borne out by the researchers’ data. In their paper, Jacobson’s team concludes that this “could explain the lower incidence of Covid-19 induced mortality in women.” A genetic quirk of the RAS could be giving women extra protection against the disease.
The bradykinin hypothesis provides a model that “contributes to a better understanding of Covid-19” and “adds novelty to the existing literature,” according to scientists Frank van de Veerdonk, Jos WM van der Meer, and Roger Little, who peer-reviewed the team’s paper. It predicts nearly all the disease’s symptoms, even ones (like bruises on the toes) that at first appear random, and further suggests new treatments for the disease.
As Jacobson and team point out, several drugs target aspects of the RAS and are already FDA approved to treat other conditions. They could arguably be applied to treating Covid-19 as well. Several, like danazol, stanozolol, and ecallantide, reduce bradykinin production and could potentially stop a deadly bradykinin storm. Others, like icatibant, reduce bradykinin signaling and could blunt its effects once it’s already in the body.
Interestingly, Jacobson’s team also suggests vitamin D as a potentially useful Covid-19 drug. The vitamin is involved in the RAS system and could prove helpful by reducing levels of another compound, known as REN. Again, this could stop potentially deadly bradykinin storms from forming. The researchers note that vitamin D has already been shown to help those with Covid-19. The vitamin is readily available over the counter, and around 20% of the population is deficient. If indeed the vitamin proves effective at reducing the severity of bradykinin storms, it could be an easy, relatively safe way to reduce the severity of the virus.
Other compounds could treat symptoms associated with bradykinin storms. Hymecromone, for example, could reduce hyaluronic acid levels, potentially stopping deadly hydrogels from forming in the lungs. And timbetasin could mimic the mechanism that the researchers believe protects women from more severe Covid-19 infections. All of these potential treatments are speculative, of course, and would need to be studied in a rigorous, controlled environment before their effectiveness could be determined and they could be used more broadly.
Covid-19 stands out for both the scale of its global impact and the apparent randomness of its many symptoms. Physicians have struggled to understand the disease and come up with a unified theory for how it works. Though as of yet unproven, the bradykinin hypothesis provides such a theory. And like all good hypotheses, it also provides specific, testable predictions — in this case, actual drugs that could provide relief to real patients.
The researchers are quick to point out that “the testing of any of these pharmaceutical interventions should be done in well-designed clinical trials.” As to the next step in the process, Jacobson is clear: “We have to get this message out.” His team’s finding won’t cure Covid-19. But if the treatments it points to pan out in the clinic, interventions guided by the bradykinin hypothesis could greatly reduce patients’ suffering — and potentially save lives. | https://elemental.medium.com/a-supercomputer-analyzed-covid-19-and-an-interesting-new-theory-has-emerged-31cb8eba9d63 | ['Thomas Smith'] | 2020-09-03 18:37:13.371000+00:00 | ['Health', 'Science', 'Bradykinin Hypothesis', 'Coronavirus', 'Covid 19'] |
How Long Coronavirus Survives on Hard and Soft Surfaces | This scanning electron microscope image shows the new coronavirus, SARS-CoV-2, in yellow and isolated from a U.S. patient, emerging from the surface of cells (blue/pink) cultured in the lab. Credit: NIAID-RML
How Long Coronavirus Survives on Hard and Soft Surfaces
It just sits there for hours, even days, waiting for a new host to pick it up
When a new virus emerges, among the many things scientists do not know is how long it survives outside its targeted hosts. For the new coronavirus, SARS-CoV-2, we humans are the host. And scientists now have an idea for how long this thing can remain viable when it gets deposited on various surfaces, typically by a sneeze or a cough.
Viruses are not technically living things. To endure, they need to get inside us, invade our cells, then hijack the nuclear machinery of life. The cells of a person infected with SARS-CoV-2 reproduce the coronavirus, and the person suffers the symptoms of COVID-19.
Somewhat lost amid all the news lately is new research published March 17 in the New England Journal of Medicine, results that had circulated for about two weeks prior to the formal publication, and which I noted the other day in my COVID-19 FAQ. The research reveals some figures I found startling, so it seems important to highlight it separately. The coronavirus was found to last up to…
3 hours in aerosols (airborne droplets)
4 hours on copper
24 hours on cardboard
3 days on plastic or stainless steel
“The results provide key information about the stability of SARS-CoV-2, which causes COVID-19 disease, and suggests that people may acquire the virus through the air and after touching contaminated objects,” say the researchers, who are from UCLA, Princeton University, the National Institutes of Health and the Centers for Disease Control and Prevention.
Interestingly, the stability of this new coronavirus on surfaces was found to be similar to that of its cousin that caused the SARS outbreak back in 2002 and 2003, which was contained after infecting about 8,000 people and killing 774. And that similarity “unfortunately fails to explain why COVID-19 has become a much larger outbreak,” the researchers say. “If the viability of the two coronaviruses is similar, why is SARS-CoV-2 [the new one] resulting in more cases? Emerging evidence suggests that people infected with SARS-CoV-2 might be spreading virus without recognizing, or prior to recognizing, symptoms. This would make disease control measures that were effective against SARS-CoV-1 less effective against its successor.”
That statement refers to so-called super-spreaders, who are infected with the coronavirus, but have no symptoms (or maybe are mildly sick and think they just have a cold or a touch of the flu, or that it’s nothing) and who then spread it widely. Even someone who ends up with severe symptoms can spread the disease during the incubation period of the virus, a period of 2 to 14 days (median of about 5 days) before symptoms appear.
The survivability of this new germ shows why it is so important to sanitize surfaces, avoid shaking hands, do the social distancing thing, avoid touching your face, and frequently and properly wash your hands (20 seconds of scrubbing with soap). The CDC has detailed recommendations for disinfecting hard and soft surfaces in your home, here. | https://robertroybritt.medium.com/how-long-coronavirus-survives-on-hard-and-soft-surfaces-dc32696539f9 | ['Robert Roy Britt'] | 2020-03-23 21:05:31.459000+00:00 | ['Home', 'Disease', 'Health', 'Science', 'Coronavirus'] |
Content Means Nothing Without Context | One of the first things I did as a writer was to put together a series of three articles showing people how to use Google to answer nearly any question. It was a massive guide.
I was proud of it, and I got some positive feedback from the few readers I had at the time. It must have gone to my head, because I instantly turned that praise into my next mistake: I decided to publish the guide as a book.
I spent a week laboring over it, expanding it, improving it, making screenshots — the whole nine yards. I designed everything myself (another mistake) and self-published it on Amazon.
The result? Crickets. Even at my low hourly rate at the time, I’m still waiting for that one to recoup its investment.
There were many reasons why my book was a flop, but the main one, I think, is that I broke the cardinal rule of creating in an online world:
Content is king, but context is God.
You can’t just tweet out your article and expect to get an extra 1,000 shares. You can’t just transcribe a video and hit ‘Publish.’ What does well as an article won’t automatically sell as a book.
Like the kings of history, our work can only do well insofar as it is empowered by the context around it. If a people stopped believing their king was God’s messenger, they chopped his head off. That’s exactly what happens to our content if we distribute it across dozens of platforms like a firehose, not thinking about the culture and context of each one: It dies immediately.
Why do we do this in the first place? “Because I said so.”
Influencers, trend reports, pseudo-experts, they all tell us the same thing: “Nowadays, you gotta be everywhere. Be on Instagram. Be on Facebook. Make a TikTok account. Hell, that one’s new — make two!”
Bullshit.
The only thing that happens if you promote yourself anywhere and everywhere is this:
Image courtesy of the author, based on Greg McKeown’s Essentialism
If you look at the history of how most influencers become towering giants on multiple social media platforms, you’ll see that they worked really hard on one of them — until they exploded and eventually took their giant crowd elsewhere.
It’s really easy to get 100,000 Twitter followers if you have 1,000,000 on Youtube. But to go from 0 to 100,000 on both at the same time? That’s really hard.
Each platform has its own, unique context. If we ignore it, we’ll drown.
Twitter is built around wit, around humor, sass, information density. Medium offers long-form, transformative reading experiences — people spend hours there, but only if they love words. Instagram is visual. It doesn’t require words at all, but it’s also superficial.
This doesn’t mean you can’t share your work around the web, but it’s a reminder to acknowledge context wherever you go.
If you share an article on Twitter, quote the top highlight. Deliver a 2-sentence pitch on why I should read it. Or turn it into a tweet storm and give me the whole thing!
Whatever you do, don’t walk naked into a pub. Read the room, or we’ll shoo you out the door. | https://medium.com/better-marketing/content-means-nothing-without-context-a032b06ed53f | ['Niklas Göke'] | 2020-05-27 20:15:15.036000+00:00 | ['Social Media', 'Marketing', 'Creativity', 'Content Marketing', 'Writing'] |
Try These 20 CNN Headline Templates to Draw in Readers | Try These 20 CNN Headline Templates to Draw in Readers
I browsed their site for 1.5 hrs
Illustration by Cynthia Marinakos.
You know how it is. You’ve got that huge to-do list to get through when you get online outside of work: an article to write, the banking, canceling an appointment, finding a gift for your cousin’s newborn, emailing a business for a refund…
You log into your email account with the intention of emailing that business who annoyingly billed you after the trial period — and you hadn’t decided to subscribe. But then you see all the new emails that have come in overnight, and curious, you pick out the ones that seem most interesting. Half an hour later, you forget why you logged in.
You begin researching the article, and gosh, there’s so much interesting stuff out there. You jump between papers. You get caught up on YouTube, “Ooh a new TEDTalk.” “Hey, those boots I was looking at last week are on sale.” And before you know it, an hour and a half has gone by and you’re only a quarter of the way through your research.
Why is it so hard to stay focused online?
One reason is there are people out there who are so damn good at writing headlines we can’t resist: for articles. For email subject headers. For videos. Books. Ads.
And there’s one industry whose existence depends particularly on writing compelling headlines:
The media.
Without magnetic headlines, their readership numbers drop. Without strong readership, they won’t attract big advertisers — advertisers that want to reach the right audience to sell their products and services. Without big advertisers, they can’t afford to investigate, write, and distribute content to the masses.
Magnetic headlines make all that happen.
We can learn a lot from CNN’s headlines because they write with credibility and simplicity. They grab attention — but not through clickbait. The problem with clickbait is that it entices readers with over-the-top, vague, or misleading headlines whose content doesn’t deliver.
That’s a problem because the worst thing we can do is waste our readers’ time and trample on their trust.
You and I wouldn’t dream of deceiving our readers for the sake of a click, would we? You and I know that constant sensational headlines that don’t deliver would turn off readers and damage our reputations long-term, just as it has for dodgy used car salesmen.
So CNN is a great place for us to learn about the type of headlines we want to share with our readers. Here’s what we’ll run through today:
See real CNN headlines that work
Get templates to create your own impactful headlines
Understand what makes the headlines so powerful
Get a simple summary of what’s common between CNN headlines
All this so you can add these templates to your writing swipe file — to get the readership you deserve for your precious words. | https://medium.com/better-marketing/try-these-20-cnn-headline-templates-to-draw-in-readers-632e818b9a4a | ['Cynthia Marinakos'] | 2020-08-19 02:54:00.031000+00:00 | ['Headline Hacks', 'Business', 'Writing', 'Productivity', 'Startup'] |
There are Many Uncanny Parallels Between Depression and Fascism | Both depression and fascism thrive on fear and terrorizing their host — be it your mind or your country — until you systematically question what your eyes, ears, and heart are reporting back to you; until you no longer trust your senses and either endorse the agenda of that which seeks to destroy you, or just give up.
For its part, depression gradually injects doubt into every aspect of personhood. It may undermine a once competent professional until their skills appear worthless and unemployability certain, or shred someone’s self-esteem until they believe a romantic relationship can only exist out of pity rather than love, or put the kibosh on one’s dreams — because, let’s face it, what future is there for someone who’s such an incapable and unlovable waste of space?
Both depression and fascism thrive on fear and terrorizing their host.
At its most virulent, depression corrodes your sense of self and erodes your identity, and the parasite feeds until only the physical representation of the host remains.
Donald Trump is having the same effect on America that depression has on an individual. And he’s doing it the same way: by distorting reality, strafing journalists and citizens alike with falsehoods.
In both cases, the aim is for lies to supplant reality altogether.
If the farce endures in its grotesque glory, it’s because it takes initiative, courage, and knowing exactly who you are in order to take a stand against what you’re being told to accept as the norm, whether by your mind or by the latest White House tenant.
To the unsuspecting onlooker, when I was in the throes of deepest depression, I looked as I always had. But whenever I opened my mouth, it was clear that it wasn’t me speaking, but depression — through pained, inarticulate self-doubt.
To the unsuspecting onlooker, America still mostly looks like it always has. But whenever Donald Trump opens his mouth, it’s clear it isn’t democracy speaking, but fascism, through absurd sentences almost entirely devoid of syntax or meaning.
Similarly, just as I remember a different life before depression flattened me, many of us remember a different life before our current political regime began normalizing hate.
Now that white supremacists are in charge, they believe that order can be restored by returning anyone who doesn’t fit their norm to their respective sub-human category, ranging from most similar and tolerable (healthy, able-bodied straight American-born Christian white women) to most different and undesirable (anyone else).
Plainly put, many of us are now regarded as inferior, as lesser than, based on national origin, immigration status, religion, sexual orientation, skin tone, reproductive choices, physical and mental abilities, etc…
The current administration would like us to believe that this hierarchy is “normal” — but it is not.
That we should have the audacity to define our own identities and demand equality — because America was founded on the basis of all people being created equal — is to invite shaming, if not mockery.
Shame and mockery are devastatingly powerful tools.
With depression, too, shaming wields great destructive power.
When depression became larger than life itself, it bullied me into identifying with it. The illness kept me under house arrest, stewing in shame because I couldn’t work, and therefore I couldn’t afford to consume health care and get well enough to work, a conundrum familiar to many sick Americans.
In the eyes of a staunchly individualistic society like ours, in which we’re always supposed to win, to achieve, I didn’t pass muster. I failed to measure up, I was weak, a ‘ridiculous loser’. Depression also built a wall around me to keep out other humans, chipping away at my self-esteem and declaring isolation as the new normal.
Under such conditions, staying alive — that is to say, performing the most basic human functions required to do so — becomes the greatest act of resistance you’re capable of.
Trite though it may sound, “While there’s life, there’s hope,” and your making it through each brand new day is proof of this.
Do not ever discount the hope of better days buried deep inside you.
In America, we’ve now got a Muslim ban, and soon we’ll even have a border wall to keep out other fellow humans. Those of us who refuse to fall in line with the regime are constantly being othered, divided, derided, debased — and yet we keep coming together regardless because we remember life before.
Do not ever discount the hope of better days buried deep inside you. As the intellectual ability to envisage alternatives to what is, hope is one of the most powerful weapons of all. | https://kittyhannaheden.medium.com/there-are-many-uncanny-parallels-between-depression-and-fascism-f82b999afb95 | ['A Singular Story'] | 2020-12-10 15:12:40.377000+00:00 | ['Mental Health', 'Psychology', 'Self', 'Society', 'Politics'] |
Get Started with PySpark and Jupyter Notebook in 3 Minutes | Install pySpark
Before installing pySpark, you must have Python and Spark installed. I am using Python 3 in the following examples but you can easily adapt them to Python 2. Go to the Python official website to install it. I also encourage you to set up a virtualenv.
To install Spark, make sure you have Java 8 or higher installed on your computer. Then, visit the Spark downloads page. Select the latest Spark release, a prebuilt package for Hadoop, and download it directly.
Unzip it and move it to your /opt folder:
$ tar -xzf spark-1.2.0-bin-hadoop2.4.tgz
$ mv spark-1.2.0-bin-hadoop2.4 /opt/spark-1.2.0
Create a symbolic link:
$ ln -s /opt/spark-1.2.0 /opt/spark
This way, you will be able to download and use multiple Spark versions.
Finally, tell your bash (or zsh, etc.) where to find Spark. To do so, configure your $PATH variables by adding the following lines in your ~/.bashrc (or ~/.zshrc ) file:
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$PATH
Install Jupyter Notebook
Install Jupyter notebook:
$ pip install jupyter
You can run a regular jupyter notebook by typing:
$ jupyter notebook
Your first Python program on Spark
Let’s check if PySpark is properly installed without using Jupyter Notebook first.
You may need to restart your terminal to be able to run PySpark. Run:
$ pyspark

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Python version 3.5.2 (default, Jul 2 2016 17:53:06)
SparkSession available as 'spark'.
>>>
It seems to be a good start! Run the following program:
(I bet you understand what it does!)
import random

num_samples = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()

pi = 4 * count / num_samples
print(pi)

sc.stop()
The output will probably be around 3.14: the script scatters random points in the unit square and counts the fraction that lands inside the quarter circle, which approximates Pi/4, so multiplying that fraction by 4 gives an estimate of Pi.
PySpark in Jupyter
There are two ways to get PySpark available in a Jupyter Notebook:
Configure PySpark driver to use Jupyter Notebook: running pyspark will automatically open a Jupyter Notebook
Load a regular Jupyter Notebook and load PySpark using the findSpark package
The first option is quicker but specific to Jupyter Notebook; the second is a broader approach that makes PySpark available in your favorite IDE as well.
Method 1 — Configure PySpark driver
Update PySpark driver environment variables: add these lines to your ~/.bashrc (or ~/.zshrc ) file.
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
Restart your terminal and launch PySpark again:
$ pyspark
Now, this command should start a Jupyter Notebook in your web browser. Create a new notebook by clicking on ‘New’ > ‘Notebooks Python [default]’.
Copy and paste our Pi calculation script and run it by pressing Shift + Enter.
Jupyter Notebook: Pi Calculation script
Done!
You are now able to run PySpark in a Jupyter Notebook :)
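Before moving on to the second method, it is worth running a quick sanity check to confirm the notebook really is talking to Spark. The short sketch below should work in this setup; the sc and spark variables are the ones the pyspark driver creates for you (as the shell banner earlier showed), and the sample rows are made-up data purely for illustration.

# Quick sanity check inside the notebook: sc and spark already exist
# because the notebook was launched through the pyspark driver.
print(sc.version)  # prints the Spark version, e.g. 2.1.0

# Build a tiny DataFrame from made-up rows and run a trivial query on it.
df = spark.createDataFrame(
    [("Alice", 34), ("Bob", 45), ("Carol", 29)],
    ["name", "age"],
)
df.filter(df.age > 30).show()

If the version prints and the filtered table shows only Alice and Bob, the whole chain (driver, kernel and Spark) is wired up correctly.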
Method 2 — FindSpark package
There is another, more general way to use PySpark in a Jupyter Notebook: use the findSpark package to make a SparkContext available in your code.
The findSpark package is not specific to Jupyter Notebook; you can use this trick in your favorite IDE too.
To install findspark:
$ pip install findspark
Launch a regular Jupyter Notebook:
$ jupyter notebook
Create a new Python [default] notebook and write the following script:
import findspark
findspark.init()

import pyspark
import random

sc = pyspark.SparkContext(appName="Pi")

num_samples = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()

pi = 4 * count / num_samples
print(pi)

sc.stop()
The output should again be an approximation of Pi, around 3.14. | https://medium.com/sicara/get-started-pyspark-jupyter-guide-tutorial-ae2fe84f594f | ['Charles Bochet'] | 2019-12-05 10:43:00.520000+00:00 | ['Data Engineering', 'Python', 'Big Data', 'Jupyter', 'Spark']
Our FAQs | Writers
What happens when I submit my article to TDS?
Thank you so much for taking the time to submit your article to our team! We will review it as soon as we can.
If we believe that your article is excellent and ready to go, this is how you will be able to add your post to our publication. If “Towards Data Science” shows up after you click on “Add to publication” in the dropdown menu at the top of the page, that means we have added you as an author and are waiting for you to submit your article. Once you have submitted your article, it will be reviewed by an editor before a final decision is made.
If we think that your article is interesting but needs to be improved, someone from our team will provide you with feedback directly on your submitted Medium article.
Please note that we only respond to articles that were properly submitted using either our form or via an email that exactly follows the instructions listed here. We don’t respond to pitches or questions already answered in our FAQs or on our Contribute page. We also ignore articles that don’t comply with our rules.
If you haven’t heard from us within the next five working days, please carefully check the article you submitted to our team. See if you can now submit it directly to TDS and look for any private notes from us that you may have missed. You should also make sure to check your spam folder.
If you just can’t reach us, the best thing for you to do is submit your article to another publication. Although we’d love to, we can’t provide customized feedback to everyone because we simply receive too many submissions. You can learn more about our decision here and submit another post in a month. | https://medium.com/p/462571b65b35#bc29 | ['Tds Editors'] | 2020-11-19 01:16:58.476000+00:00 | ['Writers’ Guide', 'Tds Team', 'Writers Guide'] |
Facebook Newsfeed Algorithm: 5 Ways to Recover Organic Reach | Ladies and gentlemen. We come together today to once again mourn the loss of Facebook organic reach, to share the grief all of us marketers feel. And perhaps, in that sharing, we can find the strength to look toward the future with some hope.
Yes, organic reach on Facebook is abysmal and getting worse, thanks to the latest announcement from the social network that’s visited by more than a billion users every day. Facebook will show more funny videos and baby pictures posted by family and friends instead of news and other marketing content from brands, businesses, and publishers.
How bad is organic engagement on Facebook? On average, it’s somewhere in the neighborhood of less than 1 percent.
Yikes.
Every once in a while, one of your posts might still get tons of organic engagement. But it’s fast becoming mission impossible.
Facebook: Unhackable.
Facebook’s algorithm is powered by machine learning. While I don’t know the secret formula Facebook uses, we know from a computer-science perspective that machine-learning algorithms learn by testing and figuring out how people react to those tests.
Bottom line: if people really love your content and engage with it, then they are more likely to see more of that type of content in the future. The reverse is also true — if you post garbage, and if people don’t engage with it, then those people are even less likely to see your stuff in the future.
More engagement (i.e., shares, comments, Likes) means more visibility in Facebook’s news feed. Facebook’s algorithm is more likely to give more visibility to posts that resonate well, to audition it in front of more people.
In fact, Facebook Ads, Google AdWords and even organic search work the same way.
So what’s the solution?
Your mission, if you choose to accept it, is to mitigate the loss from the latest Facebook newsfeed algorithm. You must raise your organic engagement rates.
Let’s meet your new weapons — the five crazy hacks that will help you do what’s said to be impossible: hack the Facebook newsfeed algorithm.
Note: Some of these hacks involve spending a little bit of money. Others are totally free. All of them are totally worth your time.
Facebook Newsfeed Hack #1: Preferred Audience Targeting
Listen up: Preferred audience targeting is a brand new Facebook feature that works just like ad targeting, but for your organic posts. That’s right, this new feature lets you target your organic updates as if they were ads, for free. Facebook lets you target your update so only the people who are most likely to be interested in your update will see it.
Here’s where the preferred audience targeting option can be found:
This feature is so powerful because not everyone who follows your Facebook page is going to care about every single update you publish. If you want to start raising your organic engagement, you need to stop broadcasting to all of your followers and focus on those people who are most likely to engage with specific updates.
Think about it. Why do people follow huge companies like IBM or GE? It could be for any number of reasons.
Facebook’s preferred audiences feature is pure genius for companies that have a variety of products and divisions, or that operate in multiple countries. You can narrow the targeting based on users’ interests and locations to reach the people you really want without bothering the rest of your followers.
This feature also has benefits for smaller companies and publishers. Take me for example. I post updates on a wide variety of topics, including online advertising, entrepreneurship, social media marketing, SEO, branding, and growth hacking.
Preferred audience targeting allows me to decide who sees my posts — or who won’t see my post, using audience restrictions:
Here’s another example. Let’s say you’re a French clothing retailer with locations in France, Poland, and Germany. You could make it so that only French-speaking millennial females who live near your locations will see your post announcing your latest deals.
Remember: everybody who likes your page isn’t your target market. Plenty of random people will like your page over time, but then never engage with your updates, visit your website, or buy from you.
If you can only reach 1 percent of your audience, you should more narrowly target the people who are truly interested in what you have to offer. Giving people what they’re interested in is what great marketing is all about — and, in the process, it will help you raise your Facebook engagement rate significantly.
Facebook Newsfeed Hack #2: The Unicorn Detector Pyramid Scheme
The Unicorn Detector Pyramid Scheme is the process you can use to separate your content unicorns from the donkeys.
What is a content unicorn? Well, content becomes a unicorn when it is clearly among the top 1 to 2 percent of all of your content. These are your most rare and beautiful pieces of content that attract the most shares, engagement, and views.
A content donkey, on the other hand, doesn’t stand out at all. At most, it’s average. Ninety-eight percent of your content will be donkeys that get average engagement — again, less than 1 percent is the average organic engagement on Facebook, which is insanely low, right?
To raise your organic engagement rates on Facebook, you need to post fewer, but better updates. You can test out your content organically on Twitter. Here’s how it works.
Post lots of stuff on Twitter — somewhere around 20 tweets per day. But imagine that every tweet has been infected with a virus, one that will ultimately kill them without the antidote within less than 24 hours.
The only cure for these infected tweets? They need to get a significant number of retweets, clicks, likes, and replies.
Examine your top tweets in Twitter Analytics. Those tweets with the most engagement — your top 5 or 10 percent — have survived!
Your content that got the most engagement on Twitter is also highly likely to generate similar engagement on Facebook.
Facebook Newsfeed Hack #3: Post Engagement Ads
You can use Facebook’s Post Engagement Ads to give your posts a bit of a push. Yes, that means you’re spending a little money to “earn” some free reach in the news feed.
For example, let’s say I posted the above update only on my wall. The engagement is going to be pretty low. Maybe a few hundred people will see it.
So what happens if I spend just $20 to promote it? In this case, I paid for more than 4,400 impressions (clicks, follows, likes, etc.), but also got more than 1,000 organic engagements for free as a result.
How? Whenever someone shares your promoted post, it results in more people seeing it organically in their newsfeeds and engaging with it.
Facebook Newsfeed Hack #4: Add Engaged Followers
Did you know there’s a way you can selectively invite people who have recently engaged with one of your Facebook posts to like your page? This is a valuable but little-known feature available to some (but not all) pages.
You want people who engage with you to become part of your Facebook fan base. You know these people like you and are more likely to engage with your content because they’ve done so in the past.
Here’s how you do it: Click on the names of the people who reacted to your post (liked, loved, etc.). You’ll see three types of buttons (Invite, Liked, Invited). Clicking on that Invite button will send an invitation to people who engaged with one of your Facebook posts to like your business page.
Does it work? Yep. Between 15 to 20 percent of the people I invite to like my page are doing so.
Oh, and did I mention it’s totally free? You can read more about the Facebook invite button here.
If you want to further increase your Facebook following, you could run a remarketing and list-based Facebook Fan / Page Promotion campaign, but I wouldn’t recommend it. I don’t think it’s a good investment unless you have a ridiculously low number of followers. You’re better off doing nothing.
Our goal is to increase engagement rates to increase earned organic engagement. Attracting the wrong types of fans could hurt, rather than help, your engagement rates.
Facebook Newsfeed Hack #5: Use Video Content
The decline of organic reach almost mirrors the rise of video on Facebook.
Users watch more than 8 billion videos every day on the social network. And these videos are generating lots of engagement.
Just look at this recent research from BuzzSumo, which examined the average total number of shares of Facebook videos:
Facebook is doing its best to try to kill YouTube as the top platform for video. If you haven’t yet, now is the time to jump on the bandwagon.
Stop sharing vanilla posts that get little to no engagement. Add some video into your marketing mix! That should help improve your organic engagement because engagement begets engagement.
Closing Thoughts on the Facebook Newsfeed Algorithm
Facebook organic reach is pretty terrible. That’s why you should start treating your organic Facebook posts more like a paid channel, where you have to be pickier and optimize to maximize engagement, in the hopes of getting more earned organic engagement.
We’ll never get back the Facebook organic reach we’ve lost over the past few years. However, these five hacks will help dramatically increase your organic engagement and mitigate your losses from the latest Facebook news feed change.
Be a Unicorn in a Sea of Donkeys
Get my very best Unicorn marketing & entrepreneurship growth hacks:
2. Sign up for occasional Facebook Messenger Marketing news & tips via Facebook Messenger.
About the Author
Larry Kim is the CEO of MobileMonkey — provider of the World’s Best Facebook Messenger Marketing Platform. He’s also the founder of WordStream.
You can connect with him on Facebook Messenger, Twitter, LinkedIn, Instagram.
Originally published on Wordstream.com | https://medium.com/marketing-and-entrepreneurship/facebook-newsfeed-algorithm-5-ways-to-recover-organic-reach-5925adcc009 | ['Larry Kim'] | 2019-07-16 10:11:01.091000+00:00 | ['Entrepreneurship', 'Facebook', 'Social Media', 'Algorithms', 'Marketing'] |
‘Turning up the Heat’ A Story Device for Addressing Climate Change | ‘Turning up the Heat’ A Story Device for Addressing Climate Change
A Conventional Symbol Applied to a Novel Problem
Photo by Aay Kay on Unsplash
Hot weather — and the many ways we’ve learned to communicate it — has come to play many important roles in our stories. We often use Heat to build a juxtaposition of worlds, communicate a character’s disorientation, or symbolize an unavoidable reality demanding our attention.
Juxtaposition:
Florida Project gives us an example of Heat’s visual juxtaposition turned towards the film’s larger class critique. In Florida Project, we spend the whole movie focused on a motel community caked in constant Florida-style sweat, all while living next to the dream-like Disney World.
The one time I distinctly remember characters without such apparent sweat is when a couple accidentally ends up at the motel during their honeymoon to Disney World. The couple was truly of some other world than our main characters, and it is self-evident that ending up in this motel was a big problem for them. So, as quickly as they enter the film they get the heck out of that motel.
However, no one does class critique better than Parasite and Heat plays an important role in communicating the juxtaposition of Parasite’s upper and lower class worlds. The rich are well-dressed and picturesque with AC shielding them from sweat, but the poor do not have such luxuries. Instead, they wear noticeably old sweat-stained clothes as they huddle next to loud ineffective fans, using pizza boxes to fan themselves off.
Disorientation:
I tend to focus heavily on movies (It’s a blessing and a curse), but Heat has also been used in written stories. One of my first and most memorable experiences thinking of Heat in a story comes from reading Albert Camus’s The Stranger in a high school English class.
Heat (and the baking sun) builds on the book's larger existential themes as it displays a disorienting reality for The Stranger’s main character Meursault:
“The heat was beginning to scorch my cheeks; beads of sweat were gathering in my eyebrows. It was just the same sort of heat as at my mother’s funeral, and I had the same disagreeable sensations — especially in my forehead, where all the veins seemed to be bursting through the skin.”
Unavoidable Reality:
One of the most compelling uses of Heat I’ve seen recently is to symbolize an unavoidable reality. In this way, the Heat runs parallel to something else in the narrative which demands the characters’ attention.
In a very physical sense, Heat is often a hard thing to avoid. I’ve experienced my fair share of scorching hot days, and if you don’t have AC, there’s just no avoiding the fact that you are going to be HOT when the planet says so. In this way, Heat serves as a very effective symbol to run parallel to all those other things our characters can’t escape either.
Sidney Lumet’s film 12 Angry Men is a prime example of this. The story follows 12 jurors stuck in a deliberation room as they come to a unanimous decision regarding the guilt or innocence of a young kid charged with the murder of his father. All the while, each character becomes caked in more and more sweat as New York City sees a truly scorching day.
Occasionally, their debate pauses as the characters are exhausted by the tension. Some jurors turn to small open windows for some breeze or go to the bathroom to wash off their face, but any attempt to find relief from the heat and tension of the deliberation room can only last for a short second.
There’s no escaping the deliberation room and its oppressive Heat until every member faces the conflicting facts of their case and comes to a unanimous decision. | https://medium.com/climate-conscious/turning-up-the-heat-a-story-device-for-addressing-climate-change-998522d0350e | ['Cameron Catanzano'] | 2020-11-22 17:51:11.928000+00:00 | ['Storytelling', 'Climate Change', 'Psychology', 'Film', 'Environment'] |
Books to Keep the BoogeyDudes Away: The October 2020 Brilliant Reading List (#5 Can Be Music to Your Soul) | The Princess Bride by William Goldman
Be not fooled by the saccharine title of this classic novel, which was subtitled “a classic” before it was even written, according to the “translator/abridger” of the “original story.”
And in case you haven’t figured it out by reading the book or from the sentence above, the “translator abridger of the original story” is none other than William Goldman himself, which right there already hints at what a rollicking, 4th-wall-breaking, semi-satirical fantasy-comedy The Princess Bride is. So if that sounds like the kind of novel that tickles your tailfeathers, then you have to give it a try! 😃
Quo Vadis by Henryk Sienkiewicz
I cracked open Quo Vadis on a Friday afternoon and some 8 hours later, realized that I had powered through the whole thing without stopping. Although it’s been a while since then, this historical novel set in Rome during the end of Emperor Nero’s reign shows the slow destruction of Rome, its persecution of early Christians, and the redemption of a young Roman soldier whose attraction to a young woman who is forbidden to him drives a deep, complex plot that is difficult to describe in a few simple lines.
Quo Vadis was written in 1896 by Nobel laureate Henryk Sienkiewicz, and became an international bestseller that was adapted to the screen multiple times. If you’re wondering what makes this novel so compelling, the best way to find out is to read it yourself. (Just make sure you have a lot of free time to read it all in a sitting or two, just in case 😉)
Out of a Far Country by Christopher and Angela Yuan
Out of a Far Country is a duo-autobiography (a duography?) written by a mother and son about a picture-perfect family on the verge of falling apart; a profligate son on the verge of destroying his life with parties, sex, and drugs; a hopeless mother on the verge of suicide…
And a series of small, unwanted miracles that turned it all around.
This is a true story that, to me, is one of the clearest pictures of what love really is: not mere indulgence, sexuality, emotion, feeling, or passion, but a genuine, self-sacrificing, truth-honoring, long-suffering attitude toward even undeserving “prodigals” who try to push it away.
To this day, Christopher and his mother Angela and father Leon continue to share their experiences to audiences around the world, bringing hope for healing relationships to families that are hurting, just as theirs was.
Sense and Sensibility by Jane Austen
If you loved the cheeky humor, the hilarious characterizations, and ingenious plot of Jane Austen’s most famous work, Pride and Prejudice, then you’ll enjoy her other alliteratively titled novel, Sense and Sensibility.
This story about two sisters, one logical and reserved, the other wild and passionate, and their challenging relationships with out-of-reach love interests includes such snarky, side-splitting lines as:
“Mrs. Ferrars’ family had of late been exceedingly fluctuating. For many years of her life she had had two sons; but the crime and annihilation of Edward a few weeks ago, had robbed her of one; the similar annihilation of Robert had left her for a fortnight without any; and now, by the resuscitation of Edward, she had one again.” In spite of his being allowed once more to live, however, [Edward] did not feel the continuance of his existence secure, till he had revealed his present engagement; for the publication of that circumstance, he feared, might give a sudden turn to his constitution, and carry him off as rapidly as before.
Need I say more? 😉
Then Sings My Soul by Robert Morgan
Then Sings My Soul is a collection of background stories about the great classic hymns of history.
Some stories include:
The story of a famous Baroque composer who wrote, in less than a month, a powerful, perennial composition still played hundreds of years after its creation
The tale of the other, less famous song written by former slave-ship owner John Newton
How a German nobleman with an odd-sounding name inspired a young woman to write and publish a poem which became a song, a hundred years later
If you, like me, love history, music, and stories (especially true stories), or know someone who does, then this book is a must for your (or their) personal library! | https://medium.com/be-a-brilliant-writer/books-to-keep-the-boogeydudes-away-the-october-2020-brilliant-reading-list-5-is-music-to-your-7539779c6dfe | ['Sarah Cy'] | 2020-10-29 18:57:32.899000+00:00 | ['Inspiration', 'Writing', 'Books', 'Reading', 'Music'] |
Addressing the Handover | I don’t like the word handover.
In design circles it implies the end of involvement for one group and the start for another. Designers will create a suite of assets, whether it’s a style guide or page designs, and then hand those over to a developer to be coded.
The designer dusts his hands and moves onto the next task, while the developer gets the design equivalent of a bucket of cold water to the face and is told to get cracking. For some reason, we expect developers to see all the countless micro-decisions the designer has made, understand the designer’s intent, and execute in accordance with it — even when the delivered files have unintentional discrepancies… (sorry designers, but you know it’s true).
This traditional waterfall process is still ever present, and common amongst a lot of digital agencies. It’s another hangover from the old creative agency model. But even the often lauded agile method has limitations, and doesn’t solve the issue of asset handover and collaboration. Not fully.
The main problem is that there is a gap between those who do the thinking and those who are responsible for the doing. We’re not really working as a team.
Typically, there might be a group of strategists, responsible for setting up the vision or proposition for a project or product. Then there are designers who ideate concepts for an interface or design aesthetic. Then we have the developers who are responsible for bringing all of the above to fruition. Projects will often work their way through teams in this manner and in this order. It’s rare that you find a developer involved in the ideation process, rarer still to find them included in the strategic thinking.
This is not how good teams work.
The most effective teams have a shared understanding, language and goal. Sports teams are a great example of this. They can only be successful when they work together.
They communicate often and openly, spending time getting to know how the other thinks and plays so that they can facilitate better for one another. They have a single purpose, and work together using each other’s individual skills to achieve a common goal — without a big handover. They are in-sync.
Project teams are no different. We all have a common goal of creating a successful, useable and beautiful product for our client and the real people that use it, in the least stressful way possible. To do this we need to be inclusive, contributing to the entire project cycle through the lens of our individual disciplines. Only when we collaborate in this way will we be truly efficient and successful. This collaborative culture spells the beginning of the end for the unhelpful handover.
The first step to greater collaboration across disciplines is understanding. It’s no longer viable for disciplines to be specialised in execution at the expense of understanding each other’s craft.
I’m not saying that teams should be multi-disciplined in execution, in fact people who heavily differentiate their skills usually end up with one of them suffering as a result (jack of all trades, master of none). But like any great sports team, we need to have a clear understanding of how each other works and what we need from one another in order to do the best work we can.
Fortunately, there are some simple steps that can help steer your project team in the right direction. | https://uxdesign.cc/addressing-the-handover-3f874e1e96d4 | ['Jonny Gibson'] | 2017-11-09 17:48:00.542000+00:00 | ['Development', 'Teamwork', 'Design', 'UX', 'Psychology'] |
Is geothermal energy everything that it is cracked up to be? | The principle of geothermal energy is very simple: hot water and steam from deep in the earth’s crust is used to drive turbines. It produces no harmful polluting gases and has the potential to become one of the main sources of renewable energy of the 21st century.
The natural heat energy produced from the earth is called geothermal heat energy. The source of geothermal energy is the continuous heat flux flowing from the interior of the earth towards its surface. Geothermal power plants pipe hot water or steam through wells that sometimes reach deep down to reservoirs underground. The thermal energy is then converted into electricity using different technologies:
Dry steam power plants extract very hot steam from reservoirs in the earth. The steam activates turbines that generate electricity.
Geothermal flash steam power plants use water temperatures of at least 182°C and convert the water to steam to drive generator turbines. When the steam cools, it condenses into water, which is injected back into the ground to be used again.
Geothermal binary cycle power plants can use water temperatures as low as 57°C. The thermal energy is used to heat a fluid that turns into steam at low temperatures. This steam is pushed through a turbine to generate electricity. The water never touches the fluid and is re-injected into the well, where it heats up again, closing the cycle.
Pros and cons
The geothermal resources of the earth are vast, clean and plentiful. Unlike most other renewable energy resources, geothermal energy is available throughout the year, has an inherent storage capability and is independent of weather conditions.
Its storage capability makes it an ideal stabilizing energy, which can compensate for the fluctuating nature of other forms of renewable energy, originating from the sun or the wind. Underground thermal energy storage (UTES) systems store energy by pumping heat into an underground space. Thermal energy can be stored in boreholes, aquifers and caverns or pits. The storage medium is water but can also be molten salts, soil and rocks. Boreholes are man-made vertical heat exchangers that work to transfer heat between the energy carrier and the ground layers.
Cost is one of the drawbacks of geothermal energy: plants are expensive to install and they are generally limited to locations where a combination of heat, permeability of the earth and flow make extraction economical for electricity generation. Geothermal energy resources differ from one geographic location to another, depending on depth, temperature and pressure, abundance of ground water and underground chemical composition. Geothermal energy resources typically vary in temperature from about 50 to 350°C. The high temperature geothermal resources (above 200°C) are generally found in volcanic regions and island chains. The medium temperature (between 150 and 200°C) and low temperature geothermal resources (under 150°C) exist in most continental regions and are fairly widespread. Low temperature resources are used directly for heating while the higher temperature ones are used for conversion into electricity.
Another issue is the relative abundance of greenhouse gases (GHGs) below the surface of the earth, which can be released into the atmosphere through geothermal activity. However, since geothermal power plants do not burn fuel to generate electricity, the levels of air pollutants they emit are low compared to fossil fuels, according to the International Energy Agency (IEA). Geothermal power plants emit 97% less acid rain-causing sulfur compounds and about 99% less carbon dioxide than fossil fuel power plants of a similar size. Most geothermal power plants use scrubbers to remove the hydrogen sulfide naturally found in geothermal reservoirs and inject the geothermal steam and water that they use back into the earth. This recycling helps to renew the geothermal resource.
From Germany and Turkey to Kenya
As highlighted in the most recent REN 21 report, Turkey and Indonesia remained in the lead for new geothermal installations in 2019, followed closely by Kenya. Other countries that added new geothermal power plants in 2019 (or added capacity at existing facilities) were Costa Rica, Japan, Mexico, the United States and Germany.
The top 10 countries with the largest stock of geothermal power capacity at the end of 2019 were the United States, Indonesia, the Philippines, Turkey, New Zealand, Mexico, Kenya, Italy, Iceland and Japan. Several amongst them see geothermal electrical energy as one of the ways to meet their renewable energies target, in an attempt to align with the Paris Agreement on Climate Change. For instance, the Indonesian government’s target for 23% renewables in the energy mix by 2025 assumes an installed geothermal power capacity of 7 GW (7% of the energy mix).
Marit Brommer is the Executive Director of the International Geothermal Association (IGA). In an interview she gave to REN 21, she explains that the geothermal industry has a lot in common with the fossil fuel extraction business: the technologies used for extracting energy are similar, even if fossil fuel is extremely polluting and not renewable. As the price of oil has come down to historically low levels during the COVID-19 pandemic, it is no longer covering the costs of drilling, etc. In her mind, this is an opportunity that oil companies, which are already investing in renewable energies, should be taking: use the technology they know and switch to producing clean and renewable geothermal energy.
“The overlap between geothermal and oil and gas is in exploring, drilling and production. With this comes expert understanding of the earth’s sub-surface. It takes expert knowledge to find the right spots to drill, how to drill, what equipment is needed, and how to use it. During the current crisis, many skilled workers in oil and gas drilling companies are on standby. These workers could be re-deployed to the geothermal sector,” she argues.
IEC expertise makes the difference
For the geothermal industry to continue expanding everywhere around the world, the technology used must meet proper safety and performance benchmarks. IEC International Standards ensure that systems and devices employed are tested and meet the appropriate standards of quality and efficiency. IEC Technical Committee 5 develops specifications and standards for the rating and testing of steam turbines. In 2020, it released the second edition of a key standard specifying the requirements for steam turbines: IEC 60045–1, which now includes automation safety specifications. The standard can be used for geothermal steam turbines but also for turbines employed in concentrated solar power plants, another form of renewable energy. | https://medium.com/e-tech/is-geothermal-energy-everything-that-it-is-cracked-up-to-be-b1876b2c4181 | [] | 2020-08-14 14:21:24.852000+00:00 | ['Environment', 'Renewable Energy', 'Safety', 'Geothermal Power', 'Sustainability'] |
Does code need to be perfect? | Last month I had a conversation with the CEO of one of our clients. Their CTO and Head of Engineering asked us to help rework a part of their codebase. It had become impossible to add new functionality without breaking anything else and no one really knew how everything worked. While running stable and fast, this highly successful startup’s code is a big mess from a technical point of view. The CEO asked me why we need to make this effort since, from his perspective, there was no real issue, development just needed to be faster at delivering new features.
In these cases I think there is a truth in both points of view. The engineers want to write perfect code using the latest techniques, make sure that the code is well documented so they can fully understand how everything works and that it has tests so they can easily update things later. Product owners on the other hand just want things to be done, fast and cheap, so they can ship new features or convince new clients.
How can you make these conflicting views work together?
Ignore the future, code for now
Most product companies go through a few phases. Each of these phases requires a different view on what “perfect” means. We could debate long and hard about which phases exist, but for the sake of this article, I will just make the distinction between proof-of-concept code, MVP code and long-term code. Some examples of each to clarify.
When fleshing out a new idea for a product, it doesn’t make sense to spend any time on writing code that is open for extension, fully tested and conforming to the latest coding standards. The goal is to make a proof of concept, for example by connecting a few APIs or trying out a new interface idea. It is very unlikely anyone will have to dive into this code again when the goal is achieved.
When building a minimum viable product, most people overestimate the need for good code. Every startup’s most important thing is to be out there with a nice looking, functional product. How it works under the hood doesn’t really matter. Until your MVP really gets traction, you can run on shitty code or even do things manually to prove you have a product/market fit. Only once you nail it and the customers start flowing in should you start caring about code, but up until then, you’re almost writing one-off code too.
As soon as those hard earned customers start flowing in, you are most likely generating some revenue or have attracted outside money. Now is the right time to start thinking about clean, long-term code. This is the situation our client from the example in the introduction was in. Since your audience is most likely to grow a lot, you need to start considering performance, stability and availability a lot more. Your engineering team is also going to scale up. This will force you to implement coding standards, documentation standards and a bunch of other procedures and practices. You start to need perfect code.
You can see in each of these examples a difference in the goal of the code and a difference in what “perfect” means in those situations.
Perfect code does not exist
Given these different phases a product can be in, a general definition of perfect code does not exist.
We work for a wide variety of clients with an even wider variety of codebases. Some of those we have started, others originated from the client or another development agency. In some cases it is even a mix of our start, handed over to a client’s own development team for some time, but ending up with us again later on.
This experience shows that each project is different, uses different technologies, has different coding styles or programming patterns, but also that most of these solutions may have been perfect at that time. Still, with these kinds of handovers, engineers often complain about the work the other team did: it’s not perfect.
In reality there is no such thing as the perfect way to do something. It might sound strange, but programming is not an exact science. There are multiple ways to do things, which might all be valid.
Dealing with non-perfect code
There is however a very big difference between not perfect and bad. Think about the Pareto principle and Sufficient Design.
Every programmer that is forced to work on a project with legacy code, an MVP or even an existing long-term product, will want to rewrite it. It puts them back in control and gives a feeling of security, working on something they understand instead of dealing with what they will most likely consider a big spaghetti with meatballs. Big rewrites from the ground up are, however, always a bad idea. You will lose a lot of business logic and knowledge while doing so. This is not necessary: things can be left untouched, considered not perfect but not bad either, if they match the following criteria (taken from this article):
Does the code do what it is supposed to do?
Is it correct, usable and efficient?
Can it handle errors and bad data without crashing — or at least fail safely?
Is it easy to debug? Is it easy and safe to change?
The last one is probably the hardest and the least likely. In those cases a developer can isolate parts of the code and make them abstract, then write a test to make sure everything works as expected. If any changes are needed from then on, the tests allow you to rewrite that specific part only, making it easier to debug and change code.
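To make that concrete, here is a minimal sketch in Python; the names and numbers are invented for illustration and are not taken from any real project. The idea is to hide the legacy routine behind a small abstraction and pin down its current behaviour with a characterization test before touching the internals.

import unittest

def legacy_invoice_total(line_items):
    # Stand-in for the real, messy legacy routine (imagine hundreds of
    # tangled lines here that nobody fully understands anymore).
    return sum(item["price"] * item["qty"] for item in line_items)

class InvoiceCalculator:
    # Thin abstraction: the rest of the application talks to this class,
    # never to the legacy routine directly.
    def total(self, line_items):
        return legacy_invoice_total(line_items)

class InvoiceCalculatorTest(unittest.TestCase):
    def test_total_matches_current_behaviour(self):
        # Characterization test: record what the code does today, so a later
        # rewrite of the internals can be checked against the same expectation.
        items = [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]
        self.assertEqual(InvoiceCalculator().total(items), 25.0)

if __name__ == "__main__":
    unittest.main()

Once a handful of tests like this pass against the current behaviour, the body of legacy_invoice_total can be rewritten or replaced piece by piece without betting the whole project on a big-bang rewrite.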
When starting from scratch, extra care is needed. Of course any new project (or refactor of an existing part of a product) should be written properly: clean and readable code that follows some coding standard. The danger here is premature optimisation. Think about the current goal, not things like caching or overly complex database structures, avoid expensive technology or caring too much about performance. The less complex the code, the easier it is for new developers to get started. This is important in early stage startups, but also when working for clients; someone might need to take over the code one day. | https://medium.com/we-are-madewithlove/does-code-need-to-be-perfect-a53f36ad7163 | ['Andreas Creten'] | 2016-11-10 15:34:06.691000+00:00 | ['Software Engineering', 'Software Development', 'Agile', 'Entrepreneurship', 'Startup'] |
Maintaining Your Streak is the Secret to Reaching Pretty Much Any Goal | My daughter, Ruby, hits up her three best friends in Reno on Snapchat before she goes to bed every night.
They’re all on Pacific time, three hours earlier than we are here in Pennsylvania. They’re still doing their homework or maybe watching TV when she’s falling asleep.
But she does…whatever fifteen year olds do on Snapchat. Sends them funny, filtered pictures. Makes jokes. Answers whatever they sent her last.
She does it, even if she’s exhausted. Even if she’s been up all night with basketball and homework. Even if she’s sick. Even if she’s miserable for some reason.
When her phone is out of commission, she scrambles to borrow mine or her dad’s.
No matter what’s going on in her life, she’s touched all three of these friends via Snapchat every night since November 2018, when we moved. Because they have a streak and she doesn’t want to be the one to break it.
Streaks are powerful motivators.
Ruby wouldn’t really be less of a friend if she skipped sending a meme one night. She knows that. Her friends, who have known her since she was six years old, know that.
Showing up every single day for each other, though, has kept them in touch with each other across 2500 miles for more than year.
As I’m writing this post it’s 10:45 p.m. on a Wednesday night. I spent the day in Erie, ninety minutes from home, because I had a doctor’s appointment. I worked from a Panera. I had no time to write a blog post.
But I made a commitment to a friend — we promised each other we’d write every day this week. And I’ve done it so far. I don’t want to be the one to break the streak.
I wouldn’t be less of a writer if I skipped today and picked it up again tomorrow. My friend would forgive me. In fact, I’ve already whined to her and she’s already forgiven me.
But I don’t want the blank space on my calendar.
We’re Motivated by Gold Stars
However metaphorical they may be. We want some proof we’re doing a good job. And we’ll work for it, if we’ve set ourselves up to get that proof.
I’m writing this post at a quarter of eleven at night, because I want to feel like I’m keeping my commitment to myself. And because I want four days in a row of writing blog posts this week. And because I want to be able to report to my friend that I did it.
That’s my gold star. But a literal star on my calendar helps. They’re not even gold. They’re just plain black ink. But they’re just as motivating as those little stickers were in kindergarten. I want to put one on my calendar today. And so here I am, writing.
My three day streak . . . (photo: Author)
What Will You Streak This Week?
Pick a thing. Maybe it’s writing related. Maybe it’s exercise or going to bed on time or . . . really, it doesn’t matter.
Just pick something.
Likely it’ll be some kind of habit you either want to create (like me this week — I want to kickstart my habit of blogging every day) or break (maybe you want a streak of non-smoking days or something along those lines.)
Stick a calendar on your wall where you’re going to see it often. And give yourself a star every day that you meet your goal. Aim for a two day streak first. That’s all, just two days.
You can do just about anything twice in a row, right?
Then aim for three days. Then four.
Don’t think about forever. Or even the end of the week. Just two days. Then three. Then four.
And when you hit a day like I did today — a day when it’s a quarter to eleven at night and you still haven’t earned your star — all you have to do is ask yourself if you’ve got it in you to do your thing today.
Not forever. Not all week. Just one more time. Today.
What’s the minimum viable iteration of your task? Some days, you might just be able to squeak that out.
The good news is, that counts. Give yourself a star.
Streak maintained for one more day. | https://medium.com/the-write-brain/maintaining-your-streak-is-the-secret-to-reaching-pretty-much-any-goal-8368d165ad64 | ['Shaunta Grimes'] | 2020-02-20 04:19:52.325000+00:00 | ['Self', 'Productivity', 'Creativity', 'Life Lessons', 'Writing'] |
IBM Watson Discovery wins a finalist spot in Fast Company’s 2020 Innovation by Design Awards | IBM Watson Discovery wins a finalist spot in Fast Company’s 2020 Innovation by Design Awards
This prestigious competition placed IBM Cloud & Data Platform alongside some of the most innovative design in our industry
Our designers take on ambitious projects in order to solve complex problems. I see up close the talent, grit, and humility it takes to do that well. I’m excited to share that Fast Company’s 2020 Innovation by Design Awards honored Watson Discovery as a finalist in their user experience category.
Innovation by Design embraces creative work at the intersection of design, business, and innovation. Several acclaimed designers, executives, and thought leaders sat on the jury this year to find and promote exceptional work in our industry. Entries are judged on the key ingredients of innovation: functionality, originality, beauty, sustainability, user insight, cultural impact, and business impact.
We need innovative design more than ever, and the 2020 honorees have brought creativity, inventiveness, and humanity to address some of the world’s most pressing problems. — Stephanie Mehta, editor-in-chief of Fast Company
Enterprise-scale findability
Structure your data with ease
IBM Watson Discovery is an AI search technology that retrieves specific answers to your questions while analyzing trends and relationships buried in enterprise data. Using cutting-edge natural language processing, allows employees to spend less time searching for information and more time acting on valuable insights.
As IBM helps usher its clients into the era of the cognitive enterprise, the Watson Discovery product team realized that data scientists and engineers can’t be the only ones who utilize AI.
Shifting users meant shifting mindsets
Learn more about how we help serve business users
To further democratize the use of Watson Discovery, the product team shifted their mindset to center the business users in a user experience revamp. The needs, wants, and expectations of a business user acted as their guiding light the entire way.
The US-based team visited clients in Japan in an effort to understand the needs of these business users, sitting side-by-side with them to see their current workflows firsthand. Through workshops, sponsor user feedback, and remote testing, they were able to piece together a clearer picture of “the business user” to see what Watson Discovery’s functionality could do for them.
With a strong focus set in place, leadership trusted the team to carry out the strategy, while providing feedback through frequent meetings. To pull this massive effort off, they found ways to work with different squads from around the world.
The end result brought a set of tools historically geared towards developers, data scientists, and AI engineers, right into the hands of the most underutilized user base: business users. To cater to their workflow, the out-of-box models retrieve pertinent information from the data and provide the user with a set of tools to customize their project. Business users can also react to changes and continuously iterate. This allows them to both meet their end goal and see small, valuable victories each step of the way.
I’m so proud of this team for their excellent work — and excited that this competition’s prestigious jury gave them the recognition they deserve.
Winning team
Design lead: Kim Callery
Design team: Adi Veerubholta, Farzana Sedillo, Mostyn Griffith, Becca Shuman, Jeremy Burton, Joanne Lo, Frances Kim, Nicole Black, Sam Pattnaik, Stephanie Brunner, and Zak Crapo | https://medium.com/design-ibm/ibm-watson-discovery-wins-a-finalist-spot-in-fast-companys-2020-innovation-by-design-awards-861e8a3408bf | ['Arin Bhowmick'] | 2020-10-16 19:07:30.299000+00:00 | ['Machine Learning', 'AI', 'Artificial Intelligence', 'Design', 'UX'] |
13 Games I Play to Continuously Crush My Writing Goals With Ease | 13 Games I Play to Continuously Crush My Writing Goals With Ease
Writing is hard, playing isn’t
Photo by Erik Mclean on Unsplash
Ugh. It happened again.
Just before you slept yesterday, you were excited to wake up, brew your favorite coffee, and start writing a new article today.
But today, you feel like sleeping. Writing doesn’t seem exciting anymore. It seems tedious.
Not finishing your writing goal today, or worse, not even opening the doc to start writing, is sucking the energy out of you. You feel like you are procrastinating. You feel like you are missing something. You feel like you are a loser who doesn’t get things done.
It just feels bad. You want to write, but you can’t.
You are tense. You’re frustrated. You’re losing hope.
But wait! This is actually a good thing — it’s a necessity for success. We can’t avoid it, nor skip it, and can’t even suppress it. It happens whenever it has to happen. I call this “The Writer’s Mind-Building Period.”
This actually comes with a lot of benefits. A few are:
You start to think more clearly when that period ends. Because you’re fresh, you’re fully energetic, and you’ve got a newer, better perspective now. You get more motivated to write and make up for your lost time.
However, it comes with some drawbacks too:
Laziness is addictive. You become addicted to that — not writing. You feel like giving up and you might, actually, give up even though you know that it’s a temporary period.
The good news? That period will end whether you write or not. It just happens. So when that period comes, it’s better to write than to waste time, right?
I play a few games and those help me get my writing tasks done really easily in those periods. It’s fun.
Wanna try those?
Cool. Let’s do it. Let’s make our writing journey fun.
Remember, every writer is different. You have the choice to choose your own game (I’m a weird kind of writer, every game works for me in different scenarios.) | https://shajedulkarim.medium.com/13-games-i-play-to-continuously-crush-my-writing-goals-with-ease-bfd5528ba538 | ['Shajedul Karim'] | 2020-09-09 08:22:24.801000+00:00 | ['Writing Prompts', 'Writing', 'Marketing', 'Creativity', 'Writing Tips'] |
Take Your Creativity to the Next Level | Take Your Creativity to the Next Level
Learn the power of observation, imagination, and being present
Photo by Kelly Sikkema on Unsplash
What do you do when you feel like your creative well runs dry? Where do you turn to? Do you have the tools in place to help kick-start your creativity again?
Learning to carefully observe the world around you, and to wield the power of your imagination — connecting dots, seeing patterns, linking ideas and more — while fully engaging in the moments around you can help you avoid that ugly feeling of coming up dry on ideas. More, it is a powerful way to reboot your creative energy and find new inspiration.
Let’s dive in and take a closer look.
Observation
It sounds like a bullshit cliche, and maybe it’s been said enough to count as one, but the world we live in is absolutely overflowing with ideas. It takes a bit of practice, effort, and sometimes some sweat equity to see and appreciate these ideas, but the fact is, they’re all around us.
Being able to pop the hood, take a closer look, tinker with the patterns and the rhythms of the world is our job as creatives. It’s also the starting place to discovering these ideas.
Don’t just take note of the crooked tree with those sharp, angular branches clawing at the sky…tell me what kind of tree it is, what time of day, and most importantly, how do you feel seeing this?
That’s the power of observation.
Don’t just casually acknowledge the things that snag your attention, give them their due. Pay attention fully. Break down what you’re seeing, how it makes you feel, how it connects to other things you’ve seen, felt, and understood.
Look for the details in the cracks.
Try asking yourself:
What’s the texture?
What color is it? What shade?
How does it make me feel?
What does it remind me of?
If you had to describe this to someone who had never seen this before, how would you do that?
Imagination
But it’s not enough to just observe. You have to dig at what you’re seeing, feeling, experiencing too. You have to get at the “why” behind it, why did this thing capture your attention? What’s the connection? What’s the link?
Use the power of your imagination to draw on other things you’ve seen, felt, and experienced that this reminds you of. Are there connections between these things? Is there a pattern, a series, some reason these repetitions are standing out to you?
But also…
Flip the script: what is this not at all like? What’s something this is starkly in contrast with that can help you describe it, sense it, experience it? What is its antithesis? And how does knowing its striking counterbalance help you see and experience this thing more closely?
Engaging with the present moment
And of course all of this requires you to be fully in the moment. But not just this moment, but also other moments where you’ve seen and experienced similar things (or their striking opposites). You have to ground yourself in these moments and become fully aware, fully connected, and fully immersed in the observation and its experience.
Dig deeper, find the roots, draw everything you can from this. This is the beginning of powerful ideas that you’re getting at.
Keep a journal
You will need somewhere to record all of these observations and their connections, and one of the best places for this is a journal. It’s a great place to write what you’re seeing, experiencing, feeling, connecting with and more, and a great place to come back to and reflect on, to draw out those connections mentioned earlier.
And remember, journals are more than their physical reality, or at least that has been my experience. A journal can feel like a trigger element; the scribblings in between its pages awaken that dreaming part of your mind and can stir old memories back to the surface. It’s a powerful, powerful tool and a very effective way to spark creativity.
Some things to write in your journal that will help you with creativity:
What you saw, heard, felt and experienced
Why you think it stood out to you
What it reminded you of
Connections, patterns, personal and universal symbols you sense in what you experienced
Your emotional responses to it
Wrapping things up
If you want to be more creative, you have to go deeper than just scratching the surface when you watch the world around you. You have to observe, and I mean really observe. It has to become a part of your autopilot as much as your conscious effort. You and your creativity are partners in this effort, and you have to develop a keen sense of partnership — synchronicity — with that inner self.
Trust me, your soul will stir, you’ll sense the goose-flesh crawling, and you’ll feel the chill when your creativity wants you to notice something. You just have to learn to be in tune, paying attention and ready to receive.
Ultimately, if you want to be more creative, practice creativity. It sounds painfully simple and obvious, and in so many ways it can be, but it’s also a complex effort that will require your continued pursuit. But you can get there, and with time and learning the power of observation, imagination, and engagement with your present moments, you will. | https://medium.com/swlh/take-your-creativity-to-the-next-level-71c5f02a51a3 | ['Gregory D. Welch'] | 2020-01-03 11:13:20.511000+00:00 | ['Self Improvement', 'Productivity', 'Writing', 'Inspiration', 'Creativity'] |
This Is How To Plan A Day. | I am going out of town with my family tomorrow.
I have a lot to get done before we step on the plane.
I woke up with my mind spinning on repeat: work, write, errands, kids, meals, cleaning, dog. All the things I want or need to do today. Over and over. I knew I was overwhelmed by this day.
So I planned it out.
And now I know, I have 1 1/2 hours to write this morning. Plus a cushion later tonight to re-read and edit. And I know exactly when I am doing all my other tasks. And which nagging little activities don’t really need to get done today.
The best part of planning the day is that I am not wasting time with anxiety about when I will get it all done. Because I already know.
Here is how I do it:
Make a List
I wrote down all the crazy things that were in my head. It actually wasn’t as much as I thought. When you think of the same things over and over, it feels like the tornado from the Wizard of Oz. When you write them down on a piece of paper, you realize the volume of your to-do’s amounts to a strong breeze.
Here’s the kicker: the list only took me a minute or two.
I did come back to it a few minutes later to make sure I didn’t miss anything. But making the list takes no time at all. Because it was right on the tip of my brain already.
Decide How Long Each Thing Takes
Some things on your list will take 15 minutes or less. I group those things together. In my case, it’s things like putting the mail on hold while we are out of town. Little odds and ends. I will group those things together.
Then there are longer things. Writing. Finishing up the last pieces of a consulting project. Those each need at least an hour if not several hours. Because I only have one work project left, I know that I can spend the whole “work” time slot on the one thing, which helps.
So, I wrote down on my list how long each thing will take. And grouped together the shorter items that I can do in the same location.
Again, this step only took me a few minutes.
Determine The Open Time Slots of The Day
Right now, I have a few hours that are open. At 4p, I have to pick my kids up from camp. We usually hang out after camp or school and then have dinner between 5:30–6p. My kids have been going to bed pretty late, around 9p. But after dinner my husband usually does something with them, like throw a baseball. So, I know that 4–7p will not be a good time for tasks, because I will be with my family. But after 7p, I have some “wiggle room” for things that are not quite done.
That means I need to get the bulk of my activities done between now and 3:45p (when I need to leave the house for camp pickup).
That still gives me 6 hours to complete everything.
Six hours sounds like a lot of time.
Once again, this step only took me a few minutes.
Schedule Each Activity
Finally, I figure out the best time slot for each activity.
For me, whatever feels most urgent or necessary, I schedule it first. Today, it’s all the little odds and ends that I need to do. Most of them involve computer tasks and phone calls. So, I will do those first. All together, they should take about an hour. So, I’ll plan for 1 hour and 15 minutes, just in case. You never know how long I might be on hold.
Next, I want to finish that work project. It isn’t actually due until next week, but I know I don’t want to work on it on vacation. So, I will finish it today and not have to worry about it. It should take about 2 hours. I will do that next.
I plan 20 minutes for lunch.
After that, I will do some writing. I want to be “in the moment” when on vacation with my family, so I am unsure about my writing frequency next week. I want to finish a piece today. I will spend 1.5 hours working on my writing. And then if needed, re-read, revise, and edit after 7p this evening. With the goal of publishing tonight or tomorrow.
For writing, I also need to set a timer. Writing puts me in a deep state of “flow”, where I am engaged and lose track of time. So, I set a timer on my phone and go ahead and write. I don’t have to keep looking at the clock. I know that my phone will tell me when I need to move to the next task.
Finally, it’s all the errands that take me out of the house. I have about an hour left for those, and can pick up my kids on the way home from them.
As I put the schedule together, I realize that an hour might not be enough time to complete all the errands. So I plan to first do the errands that I should do without my kids. For errands like a trip to Target, I can bring my kids along after I pick them up. We should still be home by 5p, plenty of time for dinner.
Which reminds me, there is no time for me today to cook dinner. So I need to plan to either pick something up or find a decent order-in option. But, I know that is the case, and dinner won’t surprise me at the end of the day.
This part took me a little longer. It’s a little bit like putting a puzzle together.
But it was still less than 15 minutes.
Take a Deep Breath. Then Execute.
And there is my day. From beginning to end, the entire day took about 20 minutes to schedule. And now I don’t have to spend time worrying about it.
I know exactly what I can and can’t do, and when. I know that there is time to get it all done. And anything that didn’t make this list can wait until after vacation.
Most important, this day no longer overwhelms me. My mind is not racing. I can focus on execution. | https://medium.com/swlh/this-is-how-to-plan-a-day-94dc24135e1f | ['Deb Knobelman'] | 2018-08-03 19:16:09.019000+00:00 | ['Mental Health', 'Productivity', 'Time Management', 'Entrepreneurship', 'Self Improvement'] |
Our FAQs | Writers
What happens when I submit my article to TDS?
Thank you so much for taking the time to submit your article to our team! We will review it as soon as we can.
If we believe that your article is excellent and ready to go, this is how you will be able to add your post to our publication. If “Towards Data Science” shows up after you click on “Add to publication” in the dropdown menu at the top of the page, that means we have added you as an author and are waiting for you to submit your article. Once you have submitted your article, it will be reviewed by an editor before a final decision is made.
If we think that your article is interesting but needs to be improved, someone from our team will provide you with feedback directly on your submitted Medium article.
Please note that we only respond to articles that were properly submitted using either our form or via an email that exactly follows the instructions listed here. We don’t respond to pitches or questions already answered in our FAQs or on our Contribute page. We also ignore articles that don’t comply with our rules.
If you haven’t heard from us within the next five working days, please carefully check the article you submitted to our team. See if you can now submit it directly to TDS and look for any private notes from us that you may have missed. You should also make sure to check your spam folder.
If you just can’t reach us, the best thing for you to do is submit your article to another publication. Although we’d love to, we can’t provide customized feedback to everyone because we simply receive too many submissions. You can learn more about our decision here and submit another post in a month. | https://medium.com/p/462571b65b35#2a0e | ['Tds Editors'] | 2020-11-19 01:16:58.476000+00:00 | ['Writers’ Guide', 'Tds Team', 'Writers Guide'] |
Depth-First Search vs. Breadth-First Search in Python | Let’s begin with tree traversal first.
What does it even mean to traverse a tree?
Since trees are a type of graph, tree traversal or tree search is a type of graph traversal. However, traversing through a tree is a little different from the broader process of traversing through a graph.
Traversing a tree is usually known as checking (visiting) or updating each node in the tree exactly once, without repeating any node. Because all nodes are connected via edges (links), we always start from the root (head) node. That is, we cannot randomly access a node in a tree. There are three ways which we use to traverse a tree:
Preorder traversal
Inorder traversal
Postorder traversal
Preorder traversal
In preorder traversal, we are reading the data at the node first, then moving on to the left subtree, and then to the right subtree. As such, the nodes that we visit (and as we print out their data), follow that pattern: first we print out the root node’s data, then the data from the left subtree, and then the data from the right subtree.
Algorithm:
Until all nodes are traversed
Step 1 − Visit the root node
Step 2 − Recursively traverse left subtree
Step 3 − Recursively traverse the right subtree.
We start from the root node, and following preorder traversal, we first visit node one itself and then move to its left subtree. The left subtree is also traversed preorder. The process goes on until all the nodes are visited. The output of the preorder traversal of this tree will be 1,2,3,4,5,6,7
Inorder traversal
In inorder traversal, we are following the path down to the leftmost leaf, and then making our way back to the root node, before following the path down to the rightmost leaf.
Algorithm
Until all nodes are traversed
Step 1 − Recursively traverse left subtree
Step 2 − Visit the root node
Step 3 − Recursively traverse the right subtree.
In-order Traversal
We start from the root node 4, and following inorder traversal, we move to its left subtree. The left subtree is also traversed inorder. The process goes on until all the nodes are visited.
Postorder traversal
Finally, in postorder traversal, we visit the left node reference first, then the right node, and then, if none exists, we read the data of the node we are currently on. We end up reading the root node at the end of the traversal (after visiting all the nodes in the left subtree and the right subtree).
Algorithm
Until all nodes are traversed
Step 1 − Recursively traverse left subtree.
Step 2 − Recursively traverse the right subtree.
Step 3 − Visit the root node.
Postorder Traversal
We start from the root node 7, and following postorder traversal, we first visit the left subtree. The left subtree is also traversed postorder. The process goes on until all the nodes are visited.
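To make the three orders concrete, here is a minimal sketch (my own illustration, not code from this note) of a binary-tree node and the three traversals; the only difference between them is where the node’s own value is printed relative to the two recursive calls:
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def preorder(node):
    if node:
        print(node.value)      # root first
        preorder(node.left)
        preorder(node.right)

def inorder(node):
    if node:
        inorder(node.left)
        print(node.value)      # root in the middle
        inorder(node.right)

def postorder(node):
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.value)      # root last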
We have learned that the order of the node in which we visit is essential. Based on the order traversal, we classify the different traversal algorithms. There are two main techniques that we can lean on to traverse and visit each node in the tree only once: we can go wide or go deep.
The more common terms to describe these two options are breadth-first search and depth-first search, and they are probably exactly what we would expect them to be.
Depth-First Search (DFS)
In a DFS, we always explore the deepest node; that is, we go one path as deep as possible, and if we hit the dead end, we back up and try a different path until we reach the end.
Note: The DFS uses a stack to remember where it should go when it reaches a dead end.
In DFS, we have to traverse a whole branch of the tree before moving on to the adjacent nodes. To keep track of the current node, we need a last-in-first-out approach, which can be implemented with a stack: once we reach the deepest point of a branch, its nodes are popped off the stack, and we then search for adjacent nodes that have not been visited yet.
If it were implemented with a queue, which is a first-in-first-out approach, we could not reach the depth of a branch before the current node was dequeued.
The depth-first search is like walking through a corn maze. You explore one path, hit a dead end, and go back and try a different one.
We use a simple binary tree here to illustrate that idea. Starting from the source node A, we keep moving to the adjacent nodes A to B to D, where we reach the farthest level. Then we backtrack to the previous node B and pick an adjacent node. Once again, we probe till the most distant level where we hit the desired node E.
Let’s break down those steps. We first initialize the stack and visited array.
Push node A (root node) to the stack
We mark node A as visited and explore any unvisited adjacent node from A. We have two nodes, and we can pick any of them. For this example, we shall take the node in alphabetical order.
We mark B as visited and explore any unvisited adjacent node from B. Both D and E are adjacent to B, we push them into the stack.
We visit D and mark it as visited. Here D does not have any unvisited adjacent node. So, no node is pushed into the stack.
We check the top of the stack to return to the most recently pushed node, E, and check whether it has any unvisited adjacent nodes.
As E does not have any unvisited adjacent node, we keep popping the stack until we find a node with an unvisited adjacent node. In this case, there’s none, and we keep popping until the stack is empty.
Advantages:
DFS on a binary tree generally requires less memory than breadth-first.
DFS can be easily implemented with recursion.
Disadvantages:
DFS doesn’t necessarily find the shortest path to a node, while the BFS does.
DFS in Python
We are representing the tree in code using an adjacency list via Python Dictionary. Each vertex has a list of its adjacent nodes stored.
graph = {
'A' : ['B','C'],
'B' : ['D', 'E'],
'C' : [],
'D' : [],
'E' : []
}
Next, we set visited = set() to keep track of visited nodes.
Given the adjacency list and a starting node A, we can find all the nodes in the tree using the following recursive depth-first search function in Python.
dfs function follows the algorithm:
1. We first check if the current node is unvisited — if yes, it is appended in the visited set.
2. Then for each neighbor of the current node, the dfs function is invoked again.
3. The base case is invoked when all the nodes are visited. The function then returns.
def dfs(visited, graph, node):
    if node not in visited:               # only process nodes we have not seen yet
        print(node)                       # "visit" the node
        visited.add(node)
        for neighbor in graph[node]:      # recurse into each neighbor
            dfs(visited, graph, neighbor)
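Calling the function on the starting node prints every node exactly once. With the graph dictionary and visited set defined above, a driver call looks like this:
visited = set()  # set to keep track of visited nodes
dfs(visited, graph, 'A')
# prints A, B, D, E, C (one node per line)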
Breadth-First Search
In BFS, we search through all the nodes in the tree by casting a wide net, that is, we traverse through one entire level of children nodes first, before moving on to traverse through the grandchildren nodes. And we traverse through an entire level of grandchildren nodes before going on to traverse through great-grandchildren nodes.
BFS explores the closest nodes first and then moves outwards away from the source. Given this, we want to use a data structure that, when queried, gives us the oldest element, based on the order they were inserted. A queue is what we need in this case since it is first-in-first-out(FIFO).
Let’s see if queues can help us out with our BFS implementation. We use a simple binary tree here to illustrate how the algorithm works. Starting from the source node A, we keep exploring down the branches in an ordered fashion, that is, from A to B to C, where that level completes. Then we go to the next level and explore D and E.
We first initialize the queue and a visited array.
We start with visiting A (root node).
We mark A as visited and explore unvisited adjacent nodes from A. In this example, we have two nodes, and we can pick any of them. We shall take the node in alphabetical order and enqueue them into the queue.
Next, we mark B as visited and enqueue D and E, which are unvisited adjacent node from B, into the queue.
Now, C is left with no unvisited adjacent nodes.
We mark D as visited and dequeue it. We keep on dequeuing to get all unvisited nodes. When the queue gets emptied, the program is over.
Advantages:
BFS is simple to implement.
BFS can be applied to any search problem.
BFS does not suffer from the potential infinite-loop problem that DFS can run into as it keeps searching deeper and deeper; such a loop may cause the program to hang or crash.
BFS will always find the shortest path if the weight on the links are uniform. So BFS is complete and optimal.
Disadvantages:
As discussed, memory utilization is poor in BFS, so we can say that BFS needs more memory than DFS.
BFS is a ‘blind’ search; that is, the search space is enormous. The search performance will be weak compared to other heuristic searches.
BFS in Python
We are representing the tree in code using an adjacency list via Python Dictionary. Each vertex has a list of its adjacent nodes stored.
graph = {
'A' : ['B','C'],
'B' : ['D', 'E'],
'C' : [],
'D' : [],
'E' : []
}
Next, we set visited = [] to keep track of visited nodes.
we set queue = [] to keep track of nodes currently in the queue
Given the adjacency list and a starting node A, we can find all the nodes in the tree using the following recursive breadth-first search function in Python.
bfs function follows the algorithm:
1. We first check and append the starting node to the visited list and the queue.
2. Then, while the queue contains elements, it keeps taking out nodes from the queue, appends the neighbors of that node to the queue if they are unvisited, and marks them as visited.
3. We continue until the queue is empty.
def bfs(visited, graph, node):
    visited.append(node)                 # mark the starting node as visited
    queue.append(node)                   # and put it in the queue
    while queue:
        s = queue.pop(0)                 # take the oldest node out of the queue (FIFO)
        print(s, end=" ")
        for neighbor in graph[s]:
            if neighbor not in visited:  # enqueue unvisited neighbors
                visited.append(neighbor)
                queue.append(neighbor)
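As with DFS, a short driver ties it together, using the graph, visited list, and queue described above:
visited = []  # list to keep track of visited nodes
queue = []    # FIFO queue of nodes waiting to be explored
bfs(visited, graph, 'A')
# prints: A B C D E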
The code in this note is available on Github.
BFS vs. DFS
So far, we understand the differences between DFS and BFS. It is interesting to ask when it’s more practical to use one over the other. At the early stage of taking an algorithm class, I faced this problem as well. Hopefully, this answer explains things well.
If we know a solution is not far from the root of the tree, BFS might be better.
If the tree is very deep and solutions are rare, DFS might take an extremely long time, but BFS could be faster.
If the tree is very wide, a BFS might need too much memory, making it completely impractical.
If solutions are frequent but located deep in the tree, BFS could be impractical.
In general, usually, we would want to use:
BFS — when we want to find the shortest path from a particular source node to a specific destination. (Or more generally, the smallest number of steps to reach the end state from a given initial state.)
DFS — when we want to exhaust all possibilities and check which one is the best/count the number of all possible ways.
either BFS or DFS — when we just want to check connectedness between two nodes on a given graph. (Or more generally, whether we could reach a given state to another.)
Quick summary
That’s it!
In this note, we learned all the theories and understand the two popular search algorithms — DFS, BFS down to the core. We also know how to implement them in Python. It’s time to see the information transfer from the note to the real world; you should start your first coding assignment immediately. It’s way more exciting than my note.
Never stop learning!
Resource:
The searching algorithms seem to come up quite often in coding interviews, and they can be hard to wrap your head around at first. Once you learn the fundamentals, you must practice your coding skills. If you are eager to learn more about how the algorithms work and the different search strategies, you can get started with the excellent links below. | https://medium.com/nothingaholic/depth-first-search-vs-breadth-first-search-in-python-81521caa8f44 | ['Xuankhanh Nguyen'] | 2020-08-06 15:58:44.993000+00:00 | ['Programming', 'Algorithms', 'Data Science', 'Artificial Intelligence', 'Python']
Noam Chomsky Has Weighed In On A.I. Where Do You stand? | Noam Chomsky on Artificial Intelligence| Towards AI
STRINGER Mexico / Reuters
There is a core A.I. question. Can we create cognition? Is there a path by which our machines can possess any true understanding? Our best A.I. efforts are still store mannequins. They create a lifelike illusion, but there is no life behind the blank stares. We have technology that looks like thought but, in actuality, is not. Are we on the right path? If not, what is the path?
Last May, Noam Chomsky spoke on this issue at an MIT symposium. It is highly critical of the current directions in A.I. It’s a terrific read in The Atlantic. In fairness, here is a rebuttal by Google’s director of research, Peter Norvig.
There are two basic schools of thought on A.I. and the road to cognition.
Statistical Big Data — This approach makes use of some intense mathematics to sift through large amounts of data to recognize complex patterns. It can look at a picture and say that it is more likely to be a cat than a goldfish. Combined with cloud infrastructures and lots of computers, you get Siri and Facebook can recognize your friends in that beach photo. This is a simplified summary of how this works.
Bio-Based — This approach says that all earth-based intelligence, from worm to man, has certain organizational and structural features that can be understood and machine implemented. It can only be understood by looking at how earth brains are wired. This understanding will lead to a mechanical recreation of cognition. Let’s use biology and neuroscience to crack this code.
The contrasting arguments look something like this.
Statistical Big Data
Our AI is inspired by nature and works much like it. It is a parallel set of nodes (neurons) arranged in a network, each processing small aspects of the larger whole.
True, it is not a model for cognition yet, but it’s early. Give the technology time to grow and mature. It provides the best path for the emergent behavior of cognition. We are doing things that seemed impossible a decade ago. Watch what happens in the next decade.
Bio-Based
Current commercial neural net technology does not, in any way, model cognition. It does not, cannot, and will not. While inspired by natural neural networks, these systems work nothing like the brains (including ours) that are found in nature. The path to cognition lies in structuring these networks to work more like real brains. There is a chasm that won’t be crossed.
We are being fooled by mathematical trickery and that Google server that is recognizing pictures of kittens has no idea what a kitten is, and never will. Nature provides excellent, fully functional models for cognition. Let’s get back to nature, do some more science and figure this out.
Where do you stand? | https://medium.com/towards-artificial-intelligence/noam-chomsky-has-weighed-in-on-a-i-where-do-you-stand-f478d1b0e0ea | ['Dan Lovy'] | 2019-05-07 21:15:45.676000+00:00 | ['Machine Learning', 'Neural Networks', 'Technology', 'Artificial Intelligence', 'AI'] |
Recognising Joy | Many years ago I watched a strange and beautiful Japanese movie ‘After Life’ in which the characters were asked to choose one moment from their lives, one memory, that they could choose to relive forever after their death. Their ‘afterlife’ would consist of just this one crystalline image, something that they could recall with the utmost clarity, and nothing else for the rest of time.
This choice — which you can imagine was the hardest one most of them had ever made — was their task over the course of a week. They had to recall one moment of perfect happiness, which the team helping them would then recreate on film, before they could take it and move on to eternity.
After I saw this movie it stayed with me for the longest time, because you see I couldn’t imagine what that moment would be for me. I was relatively young when I saw the movie and hadn’t yet had my daughter, so had no idea what it might feel like to hold my own child. My relationship with my husband was good, but it had not had the best of starts and I had few happy romantic memories from first meeting him. I was close to my family, but we were also kind of dysfunctional, so a lot of my early memories were tinged with stuff that happened later, and so spoiled what might have been warm rose-tinted recollections of childhood.
As time went on and I considered it again and again over the years, I realised that there had actually been very few times in my life that I experienced real happiness as the state of mind — free of worry, entirely present and full of joy — that I understood it to be.
So I began to try and figure out what the formula for happiness might be for me. I could think of a handful of times in my life where I’d felt something akin to joy, so I began to write about each of them. Describing in great detail all of the sounds, smells and texture of those moments I remembered, in the vague hope that by doing what the movie had suggested — recreating it creatively — I would discover the magical formula for what had evoked that very particular feeling.
There was a moment in the back of a jeep on a deserted country road in Israel, during the three months I was travelling there when I was 20.
It was just after nightfall, and I remembered so clearly staring out through the canvas flaps at the night sky full of stars, the voices of my friends who were sat with me in the back, the sound of the crickets in the leaves at the side of the road. We had been hitch-hiking back from the southern part of the country to our kibbutz in the north and been offered a lift by some friendly locals, and in that moment — just as the jeep pulled away to take us home — I felt the clearest most palpable sense of happiness I’d ever felt. It was so deep and profound in that second that I felt as if I’d been slapped off my feet by a gentle wave.
The next was from the year I was 30. I’d decided — on the spur of the moment around my birthday — to go on holiday alone to Italy for a week, with some vague plan of travelling in a triangle from Bergamot, across to Venice and then down to Bologna and back over the course of seven days.
I scared the shit out of myself on that trip several times, getting lost in the days before GPS, being scammed by unscrupulous taxi drivers, but then — in Venice — there was a moment that made the whole trip worth it. Early one morning (it may even have been my birthday morning) I got up early and walked along the Grand Canal with my sketch book, and on my route passed a small bakery that had just opened. I went inside and bought myself two apricot pastries and then walked on to a small piazza where the sun was just starting to flood the cobbles. I sat down with my sketchbook, opened the paper bag and sat and ate one of the pastries on the marble steps of a church.
As I watched the stones turn first pale cream and then gold and a soft warm breeze moved the leaves in the surrounding trees, I remember that my throat had suddenly clenched tight with tears. Not because I was overcome with beauty in that lonely place, but with the realisation that — even as I recognised it — the utter joy I’d felt for just a split second was already passing.
The next moment I recognised was two years later. I was in Kyoto in Japan, about halfway through a three-week honeymoon with my husband that we’d been planning for the last decade.
Kyoto was everything I’d imagined it would be, beautiful ancient buildings and winding streets juxtaposed with all the sleek modernity of Japan, and I was literally thrumming with excitement about being there at last. In a small backstreet market, wandering away from my husband, I found a store that sold nothing but gift wrapping. Every single wall and surface was covered with the most exquisite printed papers and fabrics, and the smell inside was delicious and intoxicating, a mixture of sandalwood and the street-food cooked in sesame oil drifting in from the alley.
I remember very specifically that, as I drew a sheet of red gilded paper off one of the shelves to look at, it was almost as if a tiny bell had been struck. Everything around me in that moment came suddenly and immediately into sharp focus, and I was filled with a sudden deep sense of calm. And the thought that accompanied the calm was such a simple one that, when I think back to it now, it almost seems ridiculous.
The thought I had in that moment was “everything is as it should be”, and even as I type that now I feel a tightness in my chest, tears in my eyes.
I have written about all these moments over the years — in my quest to find a common theme, my own personal formula for happiness — without a great deal of success. What I have noticed though is that that my recognising of them as they happen has become that much better as a result. And I had a thought the other day, that that feeling of sadness I had in Venice was a good example of how we spoil joy for ourselves by anticipating its loss at almost the same moment we begin to feel it.
We’re so busy trying to grab onto the snowflake we don’t even think about the fact that in the very act of grabbing and holding we’re already destroying: melting something that is — by its very nature — fragile, beautiful and transitory. I have a story that, by remembering and describing all these beautiful colourful snapshots of happiness, I will become better at just seeing them as they happen. Observing with wonder, rather than reeling with the thought that I’m experiencing something miraculous and — very possibly — melting the snowflake before it even lands. My hope is that, by noticing these moments of joy as they happen, allowing them more and more, I can string them together like beads. Make a whole necklace of moments so that, when the time comes and I’m asked to choose, instead of having to search for them I’ll be spoiled for choice.
This isn’t the usual kind of post I make, but I made it in the hope that someone out there will read it and sit down and describe in detail their own personal moment of joy for themselves. And if you feel so moved to send it to me afterwards, then I’d really love to hear it.
………………
Law Turley is a BACP Registered Integrative Therapist and Certified Radical Honesty Trainer living and working in the south west of the UK. | https://lawturley.medium.com/recognising-joy-ce048085a606 | ['Law Turley'] | 2020-08-11 09:03:36.794000+00:00 | ['Joy', 'Creativity', 'Happiness', 'Self-awareness', 'Memories'] |
How Neural Networks Work | How Neural Networks Work
Understand what’s happening inside a neural network
Image by the author
This article is part of a series that explains neural networks without the math. The first part is here. The next part is here. You can also get the whole series as a book.
The basic structure of a simulated neuron
In the first part of this introduction, we talked about what an artificial neuron is. Artificial neurons are inspired by biological nerve cells, and transmit a signal from their “input side” (dendrites) to their “output side” (the axon):
Image by the author
The axon at the end divides into terminal branches, which connect to other dendrites (of many other such neurons), creating a network of billions of connected neurons. What makes an artificial neuron more than just a connecting cable is the ability of the neuron to decide whether it should actually propagate a signal down its axon or not.
In artificial neurons, the “cell body” (usually just a function in a programming language) will first weigh and then add up its inputs and, if they add up to at least a set threshold value, fire a signal down its axon. If the sum of the weighted inputs stays below the threshold value, the neuron will stay silent and not fire a signal to the neurons that are connected at its terminal branches:
Image by the author
By connecting many such units in multiple layers with each other, we get an artificial neural network:
Image by the author
The weights between the inputs and the summing function can change, and represent the actual “learned” content of the network. All the information that the network has is stored in these weights. We will see in a moment how this works in detail. Read on!
A neuron for a logical ‘or’
Let us have a look at a few very simple neural networks so that we can see what happens inside them. These networks don’t do anything overly exciting, but they are easy to understand, and they do demonstrate the basic principles behind neural networks.
First, we want to have a look at a neuron that works as a logical or. The logical or operation has two inputs, A and B, and one output.
Consider a sentence like “If it is Saturday or very hot, I will go to the beach.” The two inputs are “it is Saturday” and “it is very hot”. The output is “I will go to the beach”. Each of the two input conditions can be true or false independently of the other. So it might be Saturday and very hot, or Saturday and not very hot, or not Saturday and not very hot. All in all, we have four possible combinations of input patterns, and for each, we have one desired output.
If it is not Saturday and not very hot, I will not go to the beach. If it is not Saturday but very hot, I will go to the beach. If it is Saturday but not very hot, I will go to the beach. If it is Saturday and very hot, I will go to the beach.
So in three of the four cases, I will go to the beach (so my output will be true). In one of the four, namely, if both conditions are false, then I will not go to the beach. We can express the truth of the conditions either with the symbols T and F (for “true” and “false”) or just with 1 (true) and 0 (false).
To summarise: if either A or B is true, then the output of this logical operation should also be true. The output will also be true if only one of A and B is true. The output will only be false if both A and B are false. The following table shows what we want to achieve.
Input A Input B Output
------- ------- ------
0 0 0
0 1 1
1 0 1
1 1 1
----------------------
In order to build this as a neural network, we will need just one neuron. This one neuron has two inputs: one for the value of A and one for the value of B. Remember that between each input and the neuron is also a synaptic weight, which is shown in the diagram below as a little red circle with a number in it. This number is the factor by which the synapse will multiply its input before it passes it on to the neuron.
The neuron itself will add up its two inputs, and it will fire if the sum of the inputs is equal to or greater than 1. We say that the neuron has a threshold of 1.0. You can see this on the right side of the neuron below, right under the arrow that represents the neuron’s “axon”.
Now, the question is: how can we set the values of the synaptic weights so that this neuron fulfils the function of a logical or as just described?
Image by the author
Obviously, if each synaptic weight has a value that is equal to or greater than 1, then each one of the inputs A and B will be able to make the neuron fire. Let’s say A is logically true, which we will express as an input of 1. B is false, which means that the input is 0. Now we have to multiply A (which is 1, because A is true) with the synaptic weight 1.1:
1 × 1.1 = 1.1
We have to multiply B (which is false, i.e. 0) with its synaptic weight (1.1) too, but because B is 0, the result of the operation will be 0. Now the neuron gets one input with the value 1.1, and one input with the value 0. Its threshold is 1; therefore, since one of the inputs is already greater than 1, the neuron will fire and will set its output to 1.
This behaviour is exactly what we wanted! This neuron behaves like a logical or. You can check the other combinations of input values yourself to verify that this neuron would indeed work correctly in all four cases.
By the way, the scary Greek ‘Σ’ letter (pronounced “sigma”) inside the neuron’s body just means “sum”. So the neuron is summing up its inputs at this point and checking whether the sum is greater than the threshold value or not.
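A neuron like this takes only a few lines of code. The sketch below is my own illustration (not part of the original article) of a threshold unit, checked against the or truth table with the 1.1 weights from the diagram:
def neuron(inputs, weights, threshold=1.0):
    total = sum(i * w for i, w in zip(inputs, weights))  # weighted sum of the inputs
    return 1 if total >= threshold else 0                # fire only at or above the threshold

for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], [1.1, 1.1]))  # reproduces the or truth table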
Calculating a logical ‘and’
Now let us consider another logical operation, the and. Take an example sentence like “If it is Saturday and very hot, then I will go to the beach.” This is different from before because now both conditions need to be fulfilled at the same time in order for me to go to the beach.
Here is a truth table for an and:
Input A Input B Output
------- ------- ------
0 0 0
0 1 0
1 0 0
1 1 1
----------------------
How do I have to set the synaptic weights so that our neuron now behaves as a logical and?
Clearly, I will need to set the weights so that each one by itself is unable to make the neuron fire. That is, each synaptic weight should be less than the threshold; but the two synaptic weights, when added together, should reach at least the threshold value of 1. This means that I can take any value between 0.5 and 0.9 for the synaptic weights. I get the following diagram:
Image by the author
You can easily verify that this will behave like a logical and.
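Using the same sketch of a neuron from above, the and just needs the smaller weights:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, neuron([a, b], [0.6, 0.6]))  # 0.6 + 0.6 reaches the threshold only when both inputs are 1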
Exclusive or not?
Things become slightly more complicated if I want to create a neuron that encodes a logical exclusive or, or xor. An exclusive or is true only if either A or B is true, but not both. Here is a truth table for the xor operation:
Input A Input B Output
------- ------- ------
0 0 0
0 1 1
1 0 1
1 1 0
----------------------
If you think about it for a moment, you will see that we cannot possibly achieve this result with a single neuron, because the weights would have to be greater than 1 so that each input can trigger the neuron alone and make it behave like an or. But if this was the case, then we could not get the right result for the last line where both inputs are 1, but the result is supposed to be 0.
So here we really need three neurons. One will act as an or and will fire if either one of its inputs is true. The second neuron will only have the job of stopping the output from becoming 1 in the case that both inputs are true. Therefore, one neuron needs to have both synapses set to 1 or 1.1, so that it will fire like an or. The second neuron is actually encoding an and, and if it fires, it will produce a negative output of −2. In this way, in the last line of the truth table, the sum of these two outputs will be less than 1. So we need a third neuron, which will be the output neuron, and this will fire only if the sum of its inputs is greater or equal to 1:
Image by the author
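As an illustrative sketch (the exact weights may differ slightly from the diagram), the three-neuron xor network can be wired up from the same threshold unit: an or neuron, an and neuron whose firing is weighted -2, and an output neuron that sums the two:
def neuron(inputs, weights, threshold=1.0):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def xor(a, b):
    or_out = neuron([a, b], [1.1, 1.1])    # fires if either input is 1
    and_out = neuron([a, b], [0.6, 0.6])   # fires only if both inputs are 1
    return neuron([or_out, and_out], [1.1, -2.0])  # the and-neuron's firing is weighted -2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # reproduces the xor truth table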
You can imagine that we can make the processing that happens between input and output as complex as we like by adding more and more neurons. Each new layer of neurons enables us to model more complex mappings between input and output.
Thanks for reading! In the next part, we will simulate neurons inside a spreadsheet, to see how exactly they work. Stay tuned. | https://medium.com/the-innovation/how-neural-networks-work-c34298a292df | ['Moral Robots'] | 2020-10-11 02:30:44.185000+00:00 | ['Neural Networks', 'Artificial Intelligence', 'AI', 'Computer Science', 'Programming'] |
Visualizing your home temperature with MCU8266, MQTT and AWS | Let’s build an IoT, cloud-connected thermostat! We will go through a step-by-step process of building a thermostat and connecting it over MQTT to AWS cloud for processing and visualization. We will cover all aspects of this project — the circuit design, the application running on the board, the MQTT protocol, as well as all the AWS cloud components. In the end, we will produce a graph of temperature over time like the one below. Can you tell when I took the device to my bedroom downstairs? (You might think the readings are off, as the low 50s is a little chilly, but that’s a story for a different occasion 😊).
Temperature graph in AWS
Solution Overview
There are two parts to this project— the device with the temperature sensor and the cloud platform processing the data. An application that runs on the device, periodically (every 5 minutes) sends temperature readings to the cloud, where it is stored, processed, and plotted on a diagram for visual analysis.
High level architecture
Device design
Parts
No soldering is required in this project and all the parts add up to ~30$ with plenty of spares for future projects.
ESP8266 DevKit board (example)
10k Ohm NTC Thermistor (example)
10k Ohm Resistor (example)
Breadboard. A mini 170 board works great (example)
1 x M-M jumper wire. They only sell packs, like this one. They’ll come in handy for future projects, though.
(optional) USB Battery Pack (example)
The centerpiece of the device is the ESP8266 NodeMCU DevKit board. ESP8266 is a low-cost (~$4), compact, WiFi-equipped microcontroller with low energy consumption and a mini USB port. It is perfect for simple, battery-powered applications like this one.
ESP8266 NodeMCU DevKit board
Circuit
Temperature calculation
The temperature measurement will rely on the Steinhart-Hart equation, which models the relationship between temperature and resistance in a semiconductor such as a thermistor. A thermistor is a resistor whose resistance changes strongly with temperature. To measure the thermistor’s resistance, we will combine it in series with a standard resistor between the ground and a reference voltage. We will then use ESP8266’s built-in analog-to-digital converter (ADC) to measure the voltage on the thermistor. The circuit below depicts this configuration.
Thermistor/resistor schematic
There are 3 equations describing this circuit.
Circuit equations
The first one defines the relationship between voltage and resistance of the resistors in series, based on Ohm’s Law. The second one reflects the total voltage on the resistors, given they’re configured between the ground and the reference voltage of 3.3V, based on Kirchhoff’s circuit low. The last one defines the relationship between the ADC reading A and the voltage on the pin, which is the voltage of the thermistor. You can read more about the ADC calculation here. Combining these 3 equations, we get the following formula for the thermistor’s resistance:
Thermistor’s resistance formula
We can apply the Steinhart-Hart equation to calculate the temperature, which is:
Steinhart-Hart equation for thermistor temperature
Where T₀ is a reference temperature (25 °C) at which the thermistor has a reference resistance of R₀ (10k Ohm). B is known as the thermistor’s coefficient and is a constant value of 3950.
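Put together, the conversion from a raw ADC reading to a temperature is only a few lines of MicroPython. The sketch below is an illustration under stated assumptions: readings of 0 to 1023 on the NodeMCU A0 pin, the thermistor on the ground side of the divider, and the constants listed above; it is not the project’s final firmware.
import math
from machine import ADC

adc = ADC(0)       # NodeMCU A0 pin, raw readings from 0 to 1023

R_FIXED = 10000.0  # 10k Ohm series resistor
R0 = 10000.0       # thermistor resistance at the reference temperature
T0 = 298.15        # reference temperature of 25 degrees C, in Kelvin
B = 3950.0         # thermistor coefficient

def read_temperature():
    a = adc.read()                                 # raw ADC value
    r_therm = R_FIXED / (1023.0 / a - 1.0)         # thermistor resistance from the voltage divider
    inv_t = 1.0 / T0 + math.log(r_therm / R0) / B  # B-parameter Steinhart-Hart equation
    return 1.0 / inv_t - 273.15                    # temperature in degrees Celsius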
Circuit build-out
We will replicate the schematic above on a breadboard. For now, don’t worry about the connection between the D0 and RST pins — we will cover this part later on.
A complete breadboard setup looks like this:
Let’s now work on the AWS side of the project, as some output parameters from this process will be necessary for configuring the application running on the device.
AWS Cloud services
The cloud architecture is composed of 3 main components: IoT Core, DynamoDB, and Sagemaker. The IoT Core service plays the role of a gateway receiving the stream of temperature readings from the device, DynamoDB provides a way to store them, and Sagemaker — to analyze them.
AWS cloud architecture
We will cover the important pieces of the AWS infrastructure and some of the configuration items, but don’t worry just yet about recreating them in the AWS console by hand, as we will later use a tool named terraform to spin it all up with a single command.
MQTT
Our device will connect to AWS over a protocol called MQTT. MQTT, a protocol originally developed by IBM in 1999 to monitor an oil pipeline in the desert, has been a go-to for IoT applications and is now a standard officially sanctioned by the Organization for the Advancement of Structured Information Standards (OASIS). This lightweight TCP/IP-based protocol intended for IoT communication is built on a pub/sub architecture and is characterized by a small client footprint and low bandwidth consumption, thanks to its very small packet overhead compared to HTTP.
MQTT Architecture (image from mqtt.org)
In the MQTT architecture, clients subscribe with the broker to a particular message topic that other clients can publish messages to. In our case, the device will be the client publishing readings to a topic and AWS will provide the broker.
AWS IoT Core
AWS IoT Core is a set of services dedicated to the integration of IoT devices to AWS cloud. The platform provides an MQTT broker, to which our device will connect and publish messages.
Setting up IoT Core involves creating an IoT Thing Type, an IoT Thing, and an IoT Rule. The Thing represents a single device, with a dedicated certificate that our client will use to connect to the broker. The certificate is attached to an AWS IoT policy (similar in syntax to an IAM, Identity and Access Management, policy), which defines the permissions the client will have, allowing it to connect and publish to specific topics. Here are the important parts of the policy:
{
"Action": [
"iot:Publish"
],
"Resource": "...:topic/tempReading/*"
},
{
"Action": "iot:Connect",
"Resource": "...:client/esp8266*"
}
In a nutshell, this policy allows any client starting with the name ‘esp8266’ to connect and publish messages to topics starting with the name ‘tempReading/’.
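Purely for illustration, a MicroPython client could publish to this broker with the built-in umqtt.simple module along the lines below. The endpoint, file names, topic, and payload are placeholders rather than values from this project, and on the ESP8266 the PEM certificate and key typically have to be converted to DER format before they can be loaded:
from umqtt.simple import MQTTClient

with open('private.der', 'rb') as f:
    key = f.read()     # private_key from the terraform output, converted to DER
with open('cert.der', 'rb') as f:
    cert = f.read()    # cert_pem from the terraform output, converted to DER

client = MQTTClient(
    client_id='esp8266-thermostat',  # must start with "esp8266" to satisfy the policy
    server='a2XXXXXXX.iot.us-east-1.amazonaws.com',  # the iot_endpoint output
    port=8883,
    ssl=True,
    ssl_params={'key': key, 'cert': cert},
)
client.connect()
client.publish('tempReading/livingroom', '{"temperature": 21.4}')
client.disconnect()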
AWS IoT Rule and DynamoDB
The next component in our architecture is the IoT Rule. This element defines actions for the incoming messages — effectively routing them to other AWS services. Our rule will forward the message for persistence in a DynamoDB table — a simple NoSQL document database by AWS.
IoT Rule configuration
Notice the timestamp() in the SQL statement — this will enrich all incoming messages with a field holding the current time.
DynamoDB table
AWS Sagemaker
The final component of the cloud infrastructure is an instance of AWS Sagemaker. Sagemaker is an advanced service intended for Machine Learning purposes. We will use the Jupyter Notebook functionality to execute code analyzing and plotting the temperature readings.
AWS Sagemaker Notebook instance
We will use this instance to create a visualization of the temperature later in this article.
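As a preview of that visualization, the notebook code boils down to scanning the DynamoDB table with boto3 and plotting the result with pandas, roughly along these lines (the table and attribute names here are assumptions, and a full scan is only reasonable for a small table):
import boto3
import pandas as pd

table = boto3.resource('dynamodb').Table('temperature-readings')  # assumed table name
items = table.scan()['Items']                                     # pull every stored reading

df = pd.DataFrame(items)
df['timestamp'] = pd.to_datetime(df['timestamp'].astype(float), unit='ms')  # timestamp() is epoch milliseconds
df = df.sort_values('timestamp').set_index('timestamp')
df['temperature'].astype(float).plot(figsize=(12, 4))  # temperature over time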
Terraform
We will use a configuration management tool called terraform to provision and configure all the above components and some miscellaneous elements like IAM roles and policies. Terraform allows for automated provisioning of resources across dozens of services like AWS through a practice called Infrastructure as Code, which enables configuration to be expressed through a high-level syntax. We will assume the reader is familiar with terraform basics. To get up to speed on those I recommend this quick tutorial or the official docs.
First, let’s clone the git repository holding all the configuration and code needed for this project.
Cloning into 'esp8266-mqtt-aws-iot-temperature'...
...
Resolving deltas: 100% (22/22), done. ~ git clone https://github.com/danielsiwiec/esp8266-mqtt-aws-iot-temperature Cloning into 'esp8266-mqtt-aws-iot-temperature'......Resolving deltas: 100% (22/22), done. ~
Next, make sure your AWS access key and secret key are configured and saved in the ~/.aws/credentials file, as terraform will need these. You can follow this guide to get it all set up.
It is also a good idea to tag the AWS resources you’re creating, for reference purposes. Edit the terraform/terraform.tfvars file with your custom tags, or remove the entries in it altogether leaving common_tag={} if you’re not interested in tagging your resources.
We can now start the provisioning. Open your terminal in the folder you cloned and change to the terraform folder. The first time you run terraform, you need to initialize it.
~ cd terraform
~ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.22.0...
- Installed hashicorp/aws v3.22.0 (signed by HashiCorp)
...
Terraform has been successfully initialized!
...
~
This will take a few seconds while terraform downloads the AWS plugin. All you have to do now is kick off the provisioning with the apply command. When prompted, answer yes to accept the resources being provisioned. This operation will take about 3 minutes. Do you think you could beat that with manual provisioning through the AWS console? 😎
~ terraform apply
...
Plan: 14 to add, 0 to change, 0 to destroy.
...
  Enter a value: yes
...
Apply complete! Resources: 7 added, 1 changed, 0 destroyed.

Outputs:

cert_pem = -----BEGIN CERTIFICATE-----
MIIDWTCCAkGgAwIBAgIUTHFMBieJZAGgCgrD5RGX/xyPIRkwDQYJKoZIhvcNAQEL
...
AcvMAdMIv56kfxLPQBSHG1i4NYX6+UmFZ+hZGfIUYRcDAxcdrSmK9+QOEOTD
-----END CERTIFICATE-----
iot_endpoint = a2XXXXXXX.iot.us-east-1.amazonaws.com
private_key = -----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAy2gRySw4BIpGGuKBkAJmlNpZ4s6lUEH/hWglu/0/RTQWYgIp
...
/lc1GMA+el2VqHSYu/ji8VLe/9NKP1jU+D7DBSmKeeEFY/vSiRdfRQ==
-----END RSA PRIVATE KEY-----
Let’s analyze the output of this command. 14 resources were provisioned (many of them IAM resources, like roles, policies, and attachments). Among them are the IoT Thing, DynamoDB, and Sagemaker. The creation of the last one was the longest and took about 3 minutes. We also have some outputs listed. These values are specifically requested in the terraform scripts to be printed out. Take note of their existence (iot_endpoint, cert_pem, and private_key) as we will use them soon, but don’t worry about saving them; we will always have access to them through the terraform output command.
Note: Some of the AWS services, most notably AWS Sagemaker, are not cheap (around 6¢/h, or about $45 monthly). Once you’re done with this project, destroy all your assets with terraform destroy. You can always spin them back up with terraform apply.
Device Application
Now that our device build-out and cloud services are complete, we’ll switch our focus to the application running on the device. The default ESP8266 firmware allows the device to be programmed with the Arduino language; however, we will pick a more interesting alternative: MicroPython. MicroPython is a slimmed-down implementation of the Python 3 interpreter and runtime, optimized to run on bare metal. To use it, however, we will need to flash our board with dedicated MicroPython firmware.
Flashing the board
The ESP8266 dev kit has a built-in micro USB port that we can use to connect the device to a computer. To flash the board we will use esptool.py, a Python tool by Espressif that you can install with pip install esptool. Before installing it, let’s create a Python virtual environment, using Python’s built-in venv package, to separate this and the other Python packages we will use from your system-wide packages. You will need Python 3.x for this.
~ python -m venv .venv
~ source .venv/bin/activate
~ python -m pip install --upgrade pip
~ pip install esptool
Collecting esptool
...
Successfully installed bitstring-3.1.7 cffi-1.14.4 cryptography-3.3.1 ecdsa-0.16.1 esptool-3.0 pycparser-2.20 pyserial-3.5 reedsolo-1.5.4 six-1.15.0
~ esptool.py version
esptool.py v3.0
3.0
If you can run the version command at the bottom, you’re good to go.
Now, on to flashing! First, download the MicroPython firmware for the board from this location. Unless you’re feeling adventurous, I suggest you pick the most recent stable version of the firmware, which at the time of writing was v1.13. Download the .bin file and save it in a known location. esptool will require two parameters: port and baud rate. The port parameter specifies where the device is mounted on your file system. In my case, it is /dev/cu.usbserial-0001. You can list the available serial ports in the following way:
~ ls -p /dev/cu*
/dev/cu.Bluetooth-Incoming-Port /dev/cu.DanSiwiecsAirPods-Wirel /dev/cu.DansQC35-SPPDev /dev/cu.usbserial-0001
The second parameter, baud rate, defines the data transfer rate. If the rate is too high, the device won’t be able to ‘keep up’ and the transfer will get (quietly) corrupted. For the ESP8266, 921600 is a safe value. Let’s now erase the current firmware on the board, to make room for the new one we just downloaded.
Note: For the remainder of this section, you will need to disconnect the D0 <-> RST jumper cable (red on the images above). Otherwise, esptool won’t be able to connect to the board.
~ esptool.py --port /dev/cu.usbserial-0001 erase_flash
esptool.py v3.0
Serial port /dev/cu.usbserial-0001
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 40:f5:20:2e:1e:5f
Uploading stub...
Running stub...
Stub running...
Erasing flash (this may take a while)...
Chip erase completed successfully in 12.5s
Hard resetting via RTS pin...
Next, we will write the new firmware into the board:
~ esptool.py --port /dev/cu.usbserial-0001 --baud 921600 write_flash --flash_size=detect 0 esp8266-20200911-v1.13.bin
esptool.py v3.0
Serial port /dev/cu.usbserial-0001
Connecting....
Detecting chip type... ESP8266
Chip is ESP8266EX
Features: WiFi
Crystal is 26MHz
MAC: 40:f5:20:2e:1e:5f
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 921600
Changed.
Configuring flash size...
Auto-detected Flash size: 4MB
Flash params set to 0x0040
Compressed 638928 bytes to 419659...
Wrote 638928 bytes (419659 compressed) at 0x00000000 in 6.1 seconds (effective 837.4 kbit/s)...
Hash of data verified. Leaving...
Hard resetting via RTS pin...
Done! We can confirm our installation was successful using a remote MicroPython shell called rshell, which we can install with pip install rshell.
~ rshell -p /dev/cu.usbserial-0001 repl "~ print(\"hello world\")~"
Using buffer-size of 32
Connecting to /dev/cu.usbserial-0001 (buffer-size 32)...
Trying to connect to REPL connected
Testing if ubinascii.unhexlify exists ... Y
Retrieving root directories ... /boot.py/
Setting time ... Dec 20, 2020 10:32:50
Evaluating board_name ... pyboard
Retrieving time epoch ... Jan 01, 2000
/Users/daniel.siwiec/workspace/private/projects/iot/esp8266-thermometer> Entering REPL. Use Control-X to exit.
>
MicroPython v1.13 on 2020-09-11; ESP module with ESP8266
Type "help()" for more information.
>>>
>>> print("hello world")
hello world ~
The command above opens a Python REPL on the board, runs a command, in our case a ‘hello world’ print, and exits. If you see the above output, your firmware has been successfully written to the board. If there are errors, it’s possible the baud rate was too high, in which case you need to erase the flash again and try writing the firmware with a lower baud rate, like 115200.
The application
Let’s now dive into the MicroPython application running on the device. The overall structure of the app is very simple: there is some setup code, which in our case mostly establishes the WiFi connection, and then there is the main loop, which executes the main functions of the application. We will walk through the important pieces of the code, which is available in full form here.
Device Application structure
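To make the structure easier to picture, here is a minimal sketch of the setup phase. It is not the repository’s exact code, and the function name is mine, but it shows the standard MicroPython pattern for joining a WiFi network:
import network

def connect_wifi(ssid, password):
    # Station interface: the board joins the existing WiFi network from the props file
    sta = network.WLAN(network.STA_IF)
    sta.active(True)
    if not sta.isconnected():
        print('Connecting to network...')
        sta.connect(ssid, password)
        while not sta.isconnected():
            pass  # wait until the connection is established
    print('Network config:', sta.ifconfig())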
Main Loop
The main loop in this app measures the temperature and sends it over MQTT to AWS IoT Core.
Main Loop code
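Roughly, the loop boils down to the sketch below. The helper names (connect_wifi, read_temperature, publish_reading, deep_sleep) and the props.json keys are illustrative rather than the repository’s exact identifiers; they simply tie together the snippets shown in the rest of this section:
import ujson

def main():
    with open('props.json') as f:        # assumed name of the rendered properties file
        props = ujson.load(f)
    connect_wifi(props['ssid'], props['password'])
    temperature = read_temperature()      # thermistor + Steinhart-Hart, sketched below
    publish_reading(props, temperature)   # MQTT publish to AWS IoT Core, sketched below
    deep_sleep(5 * 60 * 1000)             # sleep ~5 minutes; the D0->RST pulse reboots the board

main()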
Temperature measurement
As explained in the circuit section earlier, the temperature measurement is based on calculating the thermistor’s resistance and applying the Steinhart-Hart equation for a semiconductor’s temperature. The code below is a direct implementation of the equations explained in the Circuit section:
Thermistor temperature calculation
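A simplified version of that calculation could look like the sketch below; the divider resistor and the Steinhart-Hart coefficients here are only placeholders, since the real values depend on your thermistor and the circuit described earlier:
import math
from machine import ADC

SERIES_RESISTOR = 10000        # ohms, the fixed resistor in the voltage divider (assumption)
ADC_MAX = 1023                 # the ESP8266 ADC is 10-bit
A = 1.009249522e-03            # example Steinhart-Hart coefficients, not the article's
B = 2.378405444e-04
C = 2.019202697e-07

adc = ADC(0)                   # the ESP8266 exposes a single ADC channel, A0

def read_temperature():
    reading = adc.read()
    # Back out the thermistor resistance from the divider (assumes the thermistor is on the A0 side)
    resistance = SERIES_RESISTOR / (ADC_MAX / reading - 1)
    # Steinhart-Hart: 1/T = A + B*ln(R) + C*ln(R)^3, with T in kelvin
    ln_r = math.log(resistance)
    kelvin = 1.0 / (A + B * ln_r + C * ln_r ** 3)
    return kelvin - 273.15     # convert to Celsius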
Deep Sleep
Additionally, since this is a battery-powered device, it’s wise to use the ESP8266’s deep sleep feature, which drastically reduces power consumption by shutting down most of the device’s functions. In this mode, the ESP8266 draws around 20 uA, which would let it run for over 12 years on the 2200mAh battery I’m using. This is where the mysterious D0 <-> RST connection comes in: according to the ESP8266 pin-out, pin D0 carries the wake signal that fires at a predefined time, and connecting it to the RST pin allows that signal to reset the board, bringing it out of deep sleep mode.
ESP8266 Deep Sleep & wake-up code
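On the ESP8266 port of MicroPython, deep sleep is driven through the RTC alarm. A minimal version, following the standard pattern from the MicroPython docs, looks like this (the wrapper function is mine):
import machine

def deep_sleep(ms):
    # Configure RTC.ALARM0 so it is able to wake the device from deep sleep
    rtc = machine.RTC()
    rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP)
    rtc.alarm(rtc.ALARM0, ms)   # fire the alarm after `ms` milliseconds
    # Enter deep sleep; when the alarm fires, D0 pulses the RST pin and the board restarts
    machine.deepsleep()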
MQTT
MicroPython comes with a simple MQTT client. There are a few important parameters that need to be passed, so let’s walk through them. Below is a simplified code snippet that connects to the AWS MQTT broker and sends a message:
MQTT connection code
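Stripped down to its essentials, the connection code could look like the sketch below. The client id, topic, file names, and property key are placeholders; they have to match your IAM policy and the files produced by make props:
from umqtt.simple import MQTTClient

def publish_reading(props, value):
    # The certificate and key are the DER files produced by `make props`;
    # the file names here are illustrative.
    with open('certificate.der', 'rb') as f:
        cert = f.read()
    with open('private.key.der', 'rb') as f:
        key = f.read()

    client = MQTTClient(
        client_id='esp8266_bedroom',     # must match the iot:Connect policy (esp8266*)
        server=props['mqtt_host'],       # the iot_endpoint output from terraform
        port=8883,                       # MQTT over TLS
        keepalive=10000,
        ssl=True,
        ssl_params={'key': key, 'cert': cert, 'server_side': False})

    client.connect()
    client.publish(b'tempReading/bedroom', str(value).encode())  # topic must match the iot:Publish policy
    client.disconnect()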
The important parts are the connection parameters: client_id and mqtt_topic, which need to reflect the IAM policy we created before; mqtt_host, which AWS provisioned for us; and ssl_params, which configure the certificate-based authentication required by AWS IoT Core. These parameters will get automatically injected later on.
Let’s now get the code ready to be uploaded to the device. All the necessary files are in the same repository we used for the AWS terraform setup. There are a few steps in this process.
Fill out WiFi credentials
The board needs internet connectivity to send messages to AWS. Open the app/props.json.tmpl file and fill out the WiFi SSID (network name) and password fields.
Inject AWS IoT Core attributes
The application will authenticate with the AWS IoT MQTT broker using a certificate. Remember those terraform outputs? This is where they come in handy. The output contained the certificate and private key needed for authentication. Both the certificate and the private key need to be converted from PEM to DER format and saved to files. Run make props in the main folder to fetch these values, convert them to DER format, and save them to files. This command will also inject the MQTT broker host endpoint into the properties file.
Upload files to the board
We’re now ready to upload the files to the board. We will use rshell (the same tool we used earlier to run the REPL). Run this command: rshell -p /dev/cu.usbserial-0001 cp app/* /pyboard/. Replace the port with the one you used earlier.
Let’s now confirm the application is running correctly on the device. The app makes a few printouts throughout its lifecycle, and all of them can be read from the board’s serial port. There are a few ways to read this data; Mac and Linux come with a tool called screen that will let us do it. To connect to the serial port, execute screen /dev/cu.usbserial-0001 115200, substituting the port for yours, as earlier. Now, with the RST<->D0 wire still disconnected, press the RST button on the board, next to the USB port. You should see an output similar to the screen below. Don’t worry about the gibberish at the start of the output.
Note: The ESP8266 is a very simple device, with an 80MHz CPU and 128kBytes of RAM. The SSL handshake involved in the AWS connection pushes the device to its limits and takes about a minute to complete.
You can now reconnect the wire (it’ll be needed for the device wake-up) and exit screen by pressing Ctrl+A and typing :quit. It is important to exit screen explicitly, rather than closing the terminal window; otherwise it’ll keep the connection with the serial port open and prevent you from reconnecting to it.
Finally, let’s confirm the MQTT messages are landing in AWS. Go to the AWS console and open the IoT Core service. Select Test from the sidebar menu, enter the name of the topic (tempReading/bedroom) and click Subscribe to topic. Since our device sleeps for a few minutes after each message, you might need to wait a bit for the message to arrive. When it does, it should look like the screenshot below.
MQTT message in AWS IoT test client
You can also see the messages being persisted in the DynamoDB table:
DynamoDB table populated with temperature readings
When you get here, you have an assembled device with temperature-measuring code running on it and AWS infrastructure to receive and persist the messages. The last element of the puzzle is visualizing the data. You can unplug the device from your laptop and swap in a USB battery pack. The one I used is an old 2200mAh pack, and it lasted around 3 days.
Jupyter Notebook
Jupyter Notebook is an open-source project that provides a browser-based environment to run Python code, including visualizations. Since AWS provides a managed (created and hosted by AWS) Jupyter Notebook service, it is ideal for our use case: connecting to the database containing the temperature readings and visualizing them.
Go to the Sagemaker service in the AWS console and select Notebook Instances in the left pane. Your window should look similar to this:
AWS Sagemaker Notebook instance
Open the temp-vis instance and click on Open Jupyter in the upper right corner. This should bring you to this screen:
Now, click the upload button, select the assets/Temp Visualization.ipynb file and open it once it’s uploaded. Your screen should look like this:
Now, select Cell->Run All from the menu at the top, and you should see the temperature graph over time at the bottom of the screen. Of course, your graph will only start taking shape as more data points flow into the cloud.
Congratulations! You now have a cloud-integrated IoT thermostat.
Update: ESP32
I ended up upgrading the board from the ESP8266 to the ESP32. It’s a very similar chip with a more powerful processor, which helps reduce the SSL handshake from ~45 seconds down to about 2. Additionally, the ESP32 has an internal wake connection, which removes the need for the RST<->D0 jumper. The few code changes needed for the ESP32 are in the same repository, in the esp32 branch. The final, soldered device looks like this:
Front
Back
Note: All the code for this article can be found in my GitHub repository here.
Thanks to Clayton Gibb and Amber Houle for providing feedback on this article. | https://medium.com/dan-on-coding/building-an-iot-thermostat-with-esp8266-python-and-aws-6b0555326dbe | ['Dan Siwiec'] | 2020-12-27 23:21:39.123000+00:00 | ['Terraform', 'Python', 'AWS', 'IoT', 'Arduino'] |
8 Ways To Hack Your Nutrition. Research-based hacks and insights to… | There is a lot of information on food out there — like A LOT a lot. I know this because I write some of it, and certainly consume just as much, whether it’s through nutrition courses, diet and lifestyle books, or my favourite foodie recipe blogs.
The tricky thing is that it can be impossibly hard to sift through the pages upon pages of information and decipher exactly what is going to benefit you most in the long-term. I emphasize the phrase ‘long-term’ here because healthy living is not, and will never be an over-night process, a quick fix, or a fad diet.
“Of all the knowledge, that most worth having is knowledge about health! The first requisite of a good life is to be a healthy person.” — Herbert Spencer
So, instead of making recommendations on what you should or should not be doing in your life, this article is designed to teach you how to do exactly what you’re currently doing — but better! | https://medium.com/beingwell/8-foods-facts-to-help-you-get-better-nutrition-with-less-effort-fe705693ee33 | ['Alexandra Walker-Jones'] | 2020-12-28 23:20:43.123000+00:00 | ['Productivity', 'Food', 'Lifestyle', 'Health', 'Nutrition'] |
7 Business Lessons You Can Learn from Amazon Founder Jeff Bezos | 7 Business Lessons You Can Learn from Amazon Founder Jeff Bezos
Skill Development Expert Profile — Jeff Bezos
Los Angeles Air Force Base Space and Missile System Center, Public domain, via Wikimedia Commons
Jeff Bezos, the founder of Amazon, is now the world’s richest person. According to the Forbes real-time billionaires list, he is worth $185.2 billion at the time of writing, up $1.4 billion from yesterday. Whether or not you think that’s a worthwhile goal is a separate issue. One thing is clear, though: we can all learn a lot about business from someone who has achieved something that impressive.
Jeff Bezos has developed a unique ability to see the world clearly, and by navigating that world successfully, he has achieved things most people didn’t think possible. At any moment, Amazon has more than 600 million items for sale and more than three million vendors selling them. They control almost 40 per cent of all e-commerce in the United States, half of the cloud-computing industry, a third of the video-streaming market, and sell 42 per cent of all paper books. This piece will explore Bezos’ road to success through some of his most famous quotes and experiences.
1. Every business starts small
‘Big things start small.’
Bezos started Amazon by selling used books out of his garage. After receiving an order, Jeff would buy it in a thrift store and ship it to the customer. It was a cheap, low-risk model to find out if customers wanted their service.
At this point, Bezos was obsessed with the growth of the internet. He studied the 20 largest mail-order companies and predicted that there would be a tsunami of change in how people would buy things.
Jeff started with books, but his big goal was to become a store that would sell everything. Books were just a convenient product to test the model on. By combining the ideas of the internet and making the physical store digital, Amazon revolutionised how people bought goods and services. We no longer had to walk out our front door.
2. Always aim for excellence
‘Ideas are the easy part; execution is everything.’
One of Amazon’s secrets to success was their research on how people behave. They tracked every action. By understanding what people looked at, for how long and what we put in the basket but didn’t buy, they soon became experts at predicting what we hadn’t yet looked at but were most likely to buy.
They were equally obsessed with every detail. Bezos wanted everything to go a little bit faster and to be done a little bit better. Any sort of sloppiness was unacceptable. By raising the standards to the almost unachievable, Jeff pushed Amazon to improve every thinkable detail, which allowed them to offer the lowest prices and fastest delivery on their products.
3. The customer comes first
‘Think about the customer, and then work from there.’
Bezos’ main aim was never to become a billionaire, but to produce a service that made life easier for others. Whenever Amazon received a customer complaint, they would do anything possible to improve the experience. This obsession with what the customer wanted was crucial to why people liked them and became loyal.
Amazon created long-term loyalty by giving up short-term profitability. As an example, they started to give warnings when people put something in the shopping cart, that they had already purchased. People may have put it in by mistake, and Amazon allowed the customer to correct it. This option decreased the short-term sales, but increased customer satisfaction and the likelihood that someone would return and buy something in the future.
4. Focus on the things you can control
‘It’s impossible to imagine a future ten years from now where a customer comes up and says, ‘Jeff I love Amazon, I just wish the prices were a little higher; or I love Amazon, I just wish you’d deliver a little more slowly.’’
Some businesses try to predict the future and guess what the market wants five or ten years from now. This is risky, as the future can be very unpredictable.
Instead, Bezos tries to focus on the things that will not change. He knows that ten years from now, customers still want low prices, fast delivery and a vast selection. Putting in the effort to optimise these things will always be time well spent.
5. Think long-term
‘When we win a Golden Globe, it helps us sell more shoes.’
When making decisions, Bezos values long-term benefits over short-term costs. Amazon has several times launched efforts that have resulted in a short-term loss. They have, for example, guaranteed next-day delivery for any item over $25 and launched the Prime service, which guaranteed two-day delivery for any item. These sorts of services were expensive to provide, but Amazon bet that they would lead to more content customers who would spend more in the long run. Even though one service lost money, they earned it back somewhere else. By thinking long-term, they accomplished things that otherwise wouldn’t be possible.
Amazon only showed a profit nine years after it was established. Bezos still managed to convince share-holders of his long-term plan of taking market share. Their strategy was to lose money, to put other companies out of business who couldn’t afford to lose money. Their strategy was to dominate the market down the road.
6. Prioritise what’s important
‘I always make sure to get eight hours of sleep every night. I don’t want to risk making poor executive decisions because I am ‘tired or grouchy.’’
The world’s richest man sleeps eight hours a night and doesn’t schedule his first meeting until 10 am. This may sound lazy, but his priority is to eat breakfast with his kids and family and to be alert. In the early hours of the day, he likes to putter around the house while drinking his coffee, staying in a relaxed mindset. This type of short-term relaxation can actually be long-term productive. By taking some time in the morning to thrive, you will be more ready to perform when it matters.
At 10 am it’s all business, however. Bezos likes to make all the most important decisions before lunch, as the first half of the day is when we have the greatest capacity to make good decisions. By having a clear mind when the most critical decisions are made, you maximise your chances of daily success.
7. Have big goals
‘In space, there is enough room for a trillion humans. That could give us a thousand Mozarts and a thousand Einsteins.’
The next big goal for Bezos is to bring humans into space. He fears we will outgrow our planet and its limited resources, and that this will lead to rationing and starvation. Not a very encouraging future. Thus, he started the Blue Origin program to bring us into space. It may seem far-fetched, but until recently, our only method of buying a book was to walk into a store.
Having a big dream is the big motivator to keep improving Amazon. Bezos is way past the point where earning more money matters to his life-quality. By having success in one area, he can fulfil his life-long dream. If you ever get the feeling that you’re only working for the money, it can be worthwhile to think of your purpose in life. If the money you earn in your job can help you fund projects that secure clean water or bring better education to more people, you may find a newfound drive to keep going.
Take home message
Bezos started Amazon from his garage and grew it until he became the wealthiest person in the world.
He was obsessed with detail and tried to measure and improve every possible component.
The focus was on the customer experience, and he would go to great lengths to satisfy every customer.
Instead of predicting the future, Bezos focused on improving the things that he knew people would want 5–10 years from now.
Valuing long-term results over short-term profits is more likely to lead to success.
By prioritising what’s important, you maximise your chances of making good decisions where they matter most.
Finding your purpose is much more motivating than just having a goal to earn money.
If you enjoyed this article, you may also like: | https://medium.com/skilluped/7-business-lessons-you-can-learn-from-amazon-founder-jeff-bezos-86bf3d79e082 | ['Erik Hamre'] | 2020-11-19 01:12:04.697000+00:00 | ['Amazon', 'Business', 'Entrepreneurship', 'Life Lessons', 'Inspiration'] |
Laser Waveguide Technology Could Divert Lightning Strikes | Lightning strikes start over 9,000 US wildfires and cause $5 billion in damages every single year — but a new laser system for creating artificial lightning channels hopes to change that. We’re joined by Dr. Jean-Claude Diels, Professor of Physics, Astronomy and Electrical Engineering at the University of New Mexico, who’s spent over three decades studying laser-induced lightning discharges with the goal of safely redirecting this powerful force of nature.
Jean-Claude, welcome! You’ve done a lot of research into atmospheric lightning and the laser-stimulated conduction of air. Let me start by asking if you can give me an overview of your research, and I’d also like to ask what inspired you to focus on this area of study?
Dr. Jean-Claude Diels, University of New Mexico (UNM)
What inspired me? I used to fly small planes, and I got hit by lightning while flying, so I thought I should have my revenge. So far, I haven’t — and I’m up against stiff competition from European researchers with a level of funding in their project that’s considerably larger than what we have here.
To give you an idea, the European Laser Lightning Rod group is planning a field experiment in Switzerland next summer with a grant of about 20 million euros, or around $24 million.
Their laser alone will cost 2 million euros after it’s fully installed, more than my entire budget of $100,000 a year. They’re going to be difficult to compete with, but we have slightly different approaches, so I think my team has a shot.
I’ve read that each lightning bolt contains up to 1 billion joules of energy, which leads me to ask whether your focus is primarily on lightning safety or if you’re also considering power-generation applications?
The focus is on safety, which is a priority these days due to global climate change and the incredible number of forest fires that are occurring. There are also safety applications for protecting airports, launch pads, and maybe even golf courses. You might be surprised to learn that nearly 5% of all deaths by lightning happen on golf courses when electricity conducts down the club.
An artist’s concept of a laser lightning rod on a tall building (EuroNews)
Let’s talk about how this laser lightning rod works. In the past, your research involved using an ultraviolet laser to create a “wire in the sky” that safely directs lightning down an ionized air-channel to ground. Is that still the path you’re pursuing with your research?
Creating an ionization channel in the air was our initial intent, but nobody has been successful in getting electricity to conduct down it over distances. Yes, we can ionize the air with a laser, but the ionization dissipates before we can trigger the lightning. It goes feet — but we need miles of range.
So, how can we overcome this limitation? One solution appears to be using a laser to create ionization, which in turn creates a shockwave. Inside of that shockwave, you’ll have a column of rarified air, and because it’s at low pressure, you’ll have created an easier conduction path for the lightning.
So, in other words, the laser is superheating the air to rarefy it rather than relying on ultraviolet photoionization of a conductive air channel?
Yes, that is exactly the approach that the Europeans are trying to use. Now in my case, I’m trying to use the same rarefaction of air to create a waveguide for a multiple laser system that could still perhaps ionize a column of air.
A schematic of the waveguide approach being used by Diels team.
Would you envision the laser system being placed on tall buildings and other typical lightning targets to safely conduct this charge to ground?
That’s indeed the approach because our laser-based systems are stationary and cannot be moved. Before this, the most successful approach was rocket-triggered lightning discharges. However, what goes up must come down, and you don’t want to have the spectacle of having a spent rocket and spools of discharge wire falling back to Earth on a city.
I think this takes us into lasers: you’ve worked mostly with femtosecond-pulsed UV excimer lasers, right? Is that because these are the most efficient ionizers because each photon in the UV spectrum will ionize 1 air molecule?
Yes. I began my work using excimer-gas lasers, but since then I’ve changed to a type of solid-state laser that’s still in the ultraviolet range. The advantage of ultraviolet lasers isn’t only the single-photon ionization process, though.
Most researchers are using infrared lasers, but each pulse of light in that spectrum makes little filaments of only 1 millijoule, while in the ultraviolet, the filaments created can be up to 1 joule each.
The European experiment using infrared lasers was able to generate about 1 joule of energy altogether by producing thousands of tiny filaments, but it still creates the desired rarefaction in air, so their team is counting on that to direct the lightning.
A 1kHz filamenting laser installed by Clemens Herkommer for the European LLR project. (Twitter)
So, it comes back to rarefying the air then, and not simply ionizing it. It sounds like rarefaction is the key.
Yes. The past experiments we did showed that the delay between the laser and the lightning was considerably more than the time that it takes for the ionization to dissipate, making this approach unsuccessful.
Since single-beam ionization hasn’t worked, there are currently two other approaches being tried. The first involves creating a rarified air-channel, which is what the Europeans are attempting, and then there’s my approach, which combines what I call a “waveguide in the sky” with a multiple-beam laser assembly, which I believe will create a continuous conductor where a single beam would not.
The difference between the ultraviolet and infrared approach is that the infrared laser makes thousands of tiny millijoule-energy filaments, whereas the UV approach essentially produces fewer, larger filaments averaging around 0.2 Joule energy per filament.
After getting the laser in place, we need to project a focused, high-intensity beam over a long distance, perhaps around 10 kilometers. The classical solution is to use a huge aperture lens, but given the size required that isn’t very practical.
However, there are other ways to focus our beam. One of those involves dynamic focusing — or manipulating the laser so that the beam becomes shorter and shorter, and ultimately compresses to a high enough density to create an ionized air channel. Alternatively, there’s also something called an acoustic waveguide, in which the filament creates an acoustic wave that you can use as a waveguide up to one hundred microseconds.
An analysis of the ultrashort pulses used by Diels to generate filamentation.
In the case of the acoustic waveguide, you have to use a very high repetition rate laser to sustain a conductive air channel. My goal is to do this at a frequency of 50 kilohertz using a laser that is frequency triple-compressed, generating a beam that creates filaments of 300 millijoules at 107 picoseconds in the UV.
This is the system we have for our attempt at laser-induced discharge using a laboratory on the roof of our building. Our laser is inside the building, but we can send the beam up from the lab to the roof, and from there we can direct the beam to a rooftop lab, to the top of a nearby mountain, or to any other targets that are accessible in our line of sight.
With our experimental apparatus, we are creating a lightning channel by superheating the air inside the filament, which produces a 40-micron channel within 300 nanoseconds. This channel is essentially an acoustic shockwave that becomes permanent at a repetition-rate over 10 kilohertz, and our apparatus is designed to achieve a rate of 50 kilohertz.
For this test, we built a laser out of parts salvaged from the trashcan at the Air Force research lab in Albuquerque, where we found five laser skeletons that met our specifications. It was a challenging task to build one laser out of five skeletons, but it’s working now and produces an 80 Watt beam at 50 kilohertz that is pumping an amplifier for the filamenting laser.
Before we go further, could you describe for us what you mean by the term filament? Could we describe this as a channel in the air that the beam is traveling down as it propagates?
Yes, filamentation is the propagation of the beam through the air without diffraction. Now, during our early attempts to create a single-beam ionization channel, we found that sending the laser beam to a distance of even one kilometer leads it to spread out too much. For instance, if you start with a one-millimeter diameter beam, it will spread out to a few meters after one kilometer, and the energy is totally dissipated in the air.
However, if you have a high enough beam intensity, the air will begin to act as a lens, and the laser beam will self-focus. At that point, when the air starts to ionize, the electrons tend to counteract that focusing and this creates what is essentially a waveguide in the air.
This initial channel only lasts for a few microseconds, but it creates an acoustic shock wave when it dissipates that continues to act as a waveguide and lasts for a much longer period of time. This is the effect we’re trying to exploit by taking our repetition rate to 50 kilohertz.
So, what you’re describing sounds like a waveguide made out of shockwaves. In this case, is the filament itself a shockwave?
Yes, but keep in mind there are two filaments: the first is self-focusing weakly ionized air, and the second is a shockwave with a longer duration. These two continue in a cycle at 50 kilohertz, one creating the other, and the result is a stable conduction path through the air.
Now I haven’t touched much on your teaching work, but I understand that you’ve graduated over 50 students with specialization in various areas related to high-speed pulsed lasers. Are any of your students following in your footsteps in terms of your work with lightning and atmospheric conductivity?
One of my students went to work for a project in Romania, the Extreme Light Infrastructure, which boasts having the largest laser in the world. His budget just to build the laser was 310 million euros.
The US is trying to compete by giving $10 million to the University of Michigan, but this is a kind of ridiculous proportion. Altogether, the total budget for the Extreme Light Infrastructure project is 855 million euros, and we’re not going to effectively compete with that on a budget of $10 million.
The Extreme Light Infrastructure beamlines building, boasting the world’s most powerful laser. (ELI)
So, your student is working on a project in Romania, and then you also mentioned another European team. Can you tell me about them?
Yes. This team has a really huge facility, a gigantic building. It’s a French and Swiss team, and they’re building a laser that they plan to bring to a peak in the Alps. You can learn more about them on their website, which is online at http://llr-fet.eu/
You’re working with solid-state lasers now, right? Back in the old days, I understand this research was done with massive excimer gas lasers, but if I understand things correctly you’re working with something like a 355-nanometer solid-state laser, right?
Yes. Solid-state lasers can be considerably smaller than gas lasers, and if you want to build a device that you can move to the top of a mountain, it needs to be as small and compact as possible. However, excimer gas-lasers are still used in some cases.
For example, there’s an institute in Moscow doing similar research with a gigantic excimer laser that takes up a whole building, but the Russians are not applying this research to lightning like my team or the Europeans are.
How will you identify where the lightning is in order to point the laser at it? I understand that the laser creates a conduction path, but in order for conduction to happen you need to know where the lightning originates. Do you have a way to identify where to shoot the laser in the clouds?
Identifying where to send the beam is something that will require a large, multidisciplinary team. This is a resource that the European project has, and they’re well-positioned to design equipment to detect where the lightning is going to start.
However, this is an area that still requires a lot more research before we fully understand it. You can’t simply shoot a laser into the nearest cloud and expect results. Clouds are big, and we don’t know exactly where the field strength will be highest in them. We still need to find better ways to measure that.
About Our Guest
Jean-Claude Diels received a Ph.D. degree in 1973 from the University of Brussels, Belgium, for his research on coherent pulse propagation performed at the University of California, Berkeley, under advisement of Prof. E. L. Hahn.
He is currently Professor of physics and electrical engineering at the University of New Mexico, Albuquerque. He has graduated over 50 students in various areas including coherent interactions, ultrashort pulse generation and diagnostics, nonlinear propagation of intense pulses, and laser-induced discharges.
He co-authored with Wolfgang Rudolph the graduate textbook Ultrashort Laser Pulse Phenomena: Fundamentals, Techniques and Applications on a Femtosecond Time Scale and with Ladan Arrisian the book, Lasers: The Power and Precision of Light, celebrating the 50th anniversary of the laser, and published 5 book chapters.
Dr. Diels has been honored with a fellowship in the Optical Society of America, and is the recipient of the 51st Annual Research Lecturer Award (April 2006), and of the 2006 Engineering Excellence Award of the Optical Society of America. You can learn more about him online at his website. | https://medium.com/swlh/laser-waveguide-technology-could-divert-lightning-strikes-65d0238cff4c | ['Tim Ventura'] | 2020-10-05 10:20:37.277000+00:00 | ['Science', 'Technology', 'Physics', 'Weather', 'Lightning'] |
Allocate AWS Costs with Resource Tags | When you start having many projects — each with different environments — in your AWS account, it is very important to have an overview of the costs by project (and/or environment).
Under AWS Cost Management / Cost Explorer it is possible to view reports and aggregate costs by service, and you can set up many different filters to get a more granular view.
The one filter I find more useful is by Tag.
Assuming you are deploying your serverless infrastructure via CloudFormation (directly, or through AWS SAM or the Serverless framework), you are immediately able to filter costs by tag using the auto-generated Cloudformation:StackName (which is the name you assign to the service in the yml file).
Often though this is not enough, because maybe your infrastructure is split into multiple separate stacks.
In some of our recent projects for example we have :
a React frontend which we deploy to S3 and serve with a CloudFront Distribution (which handles caching for us) + Route53 for the domain name.
a RestAPI which relies on API Gateway and many different Lambdas which read and write to DynamoDB, and make use of SQS.
a Cognito User Pool to handle the Authentication to the FrontEnd and allow it to make calls to the RestAPI.
some additional CloudFront distributions (+ Route53) to expose some specific endpoints of the RestAPI to a 3rd party (not the Cognito Authorized frontend) and deal with caching based on query parameters.
That means that these 4 components of the architecture are described by different CloudFormation stacks (in some cases we don’t even have a stack, because we created the resources via the AWS CLI or CDK, or, forgive me, directly with the AWS UI Console).
The simple filter by stack name, therefore, is not enough.
This does not mean you can not monitor the costs of the entire Application/Project.
Many AWS services support user-generated tags, therefore it is possible to assign the same tag to resources from different services to indicate that the resources are related. For example, you could assign the same tag to an API Gateway stage that you assign to a CloudWatch Events rule.
User-generated tags are simple key-value pairs assigned to the resource you are creating ( or have already created ).
As usual, the quickest and easy approach could be adding them for each resource directly in the UIConsole:
but this is definitely something you don’t want to do for anything other than a quick test or prototype project.
Also adding tags with the AWS CLI is quick and easy, just check the CLI docs for your resource (unfortunately, the right way of passing the tags differs slightly from one resource to another).
For example to add tags for a Cognito User Pool is just:
aws cognito-idp tag-resource --resource-arn YOUR_USER_POOL_ARN --tags Project=PROJECT_NAME,Environment=YOUR_ENV
but for S3 you need to pass a TagSet:
# add cost allocation tags
export tagSet="TagSet=[{Key=Project,Value=$PROJECT},{Key=Environment,Value=$YOUR_ENV}]"
aws s3api put-bucket-tagging --bucket $S3_BUCKET_NAME --tagging=$tagSet
and for CloudFront the tags are defined in an Array called Items. You can, therefore, create a JSON file and use that.
# in tags.json
{
  "Items": [
    { "Key": "Project", "Value": "cap" },
    { "Key": "Environment", "Value": "dev" }
  ]
}

aws cloudfront tag-resource --resource arn:aws:cloudfront::Q1W2E3R4T5:distribution/Q1W2E3R4T5Y6 --tags file://tags.json
And the examples and differences could go on.
As you see, a better and more maintainable approach is defining your tags along with the resource in your Infrastructure As Code ( be it Terraform, SAM or Serverless).
With serverless framework it is as simple as adding this snippet to the provider section of your serverless.yml:
stackTags:
  Project: PROJECT_NAME
  Environment: YOUR_STAGE/ENV
Everything will work out of the box for your Lambdas and for DynamoDb, but again, you might need some adjustments for Cognito, SQS, S3 and API Gateway.
For example for Cognito you need to specify them in the Cognito User Pool properties:
UserPoolTags: {'Project': '${self:provider.stackTags.Project}', "Environment": '${self:provider.stackTags.Environment}'}
For SQS, instead, you have to specify key-value pairs under the Properties of your Queue.
Properties:
  Tags:
    - Key: Project
      Value: ${self:service}
    - Key: Environment
      Value: ${self:provider.stage}
If you have many queues in your serverless.yml it could make sense to create a Custom Variable and refer to it in each queue.
custom:
  sqsTags:
    - Key: Project
      Value: ${self:service}
    - Key: Environment
      Value: ${self:provider.stage}

# and then in the SQS block:
Properties:
  Tags: ${self:custom.sqsTags}
As soon as we did that, with just one more google search we found out that there is a plugin to simplify the script.
This brought us to another plugin needed to tag the ApiGateway (which, as you can see from the source code, is clearly forked from the SQS one).
In the serverless world things change fast, and it seems that, at least for the API Gateway, that plugin is not necessary anymore: we can just use the stackTags that come with the framework (alone or together with another node, tags, that allows customizing the API resources even more). Honestly, though, I haven’t yet figured out what the difference is between using stackTags and tags.
It seems that the tags added with the tags node are added to the APIS/Stages/Configure Tags (together with the stackTags), while the stackTags are added only to APIS/Settings/Configure Tags. If you are interested in more granularity in that tagging you might want to check out the API Gateway Resources That Can Be Tagged
AWS Resource Groups and Tag Editor
Something that I really found useful while working on Cost Allocation Tags is the Tag Editor and Resource Group section of the UI Console.
Tag Editor allows you to select all the resources you have in your account, filter them, and edit their tags directly.
Resource Groups allows you to have an aggregated view of all the resources under specific tags.
These 2 tools are very handy to have an overview of the tags that you have applied and make sure all the different resources from different stacks will belong nicely together when it comes to the Cost Reports.
Unfortunately, CloudFront distributions are not supported (yet) in Resource Groups (outside us-east-1), but you can still add tags directly to them. They will not be shown in the tagged Resource Groups, but they will appear in the Cost Reports.
Err on the side of using too many tags rather than too few.
As you saw you can play around a lot and customize the tagging for each service. You can set the Tags at the provider level, or at the resource level. And of course, you can do both.
One of the suggestions from AWS in regard to tagging is to be as precise and granular as you like: it is better to have too many tags than too few, and this is why they set a rather high limit of 50 tags for each resource. You will have plenty of room to customize and be specific with your tags!
As with many things AWS, as soon as you start working on something, you realize there is an enormous amount of stuff that you do NOT know (and that’s never good for your Imposter Syndrome..).
In the end, tagging resources for cost allocation is very simple, but it is only one aspect, because you could also use tags, for example, to grant access to resources (constraining IAM permissions by specific tags). If you want to know more you can start reading this page
Something else you might want to check out is the version of the AWS CLI installed on your machine. Recently I wasted a good 20 minutes trying to figure out why
aws cognito-identity tag-resource --resource-arn arn:aws:cognito-identity:my_region:my_account:identitypool/my_region:my_identity_pool_id --tags MyTagKey=MyTagValue
wasn't working: I was getting the error response Argument Operation: Invalid Choice despite apparently everything being exactly as described here
I then ran pip3 install --upgrade awscli and noticed that my AWS CLI was updated from 1.16.70 to 1.16.283 (quite a big jump of patches), and then magically list-tags-for-resource and tag-resource were finally available.
Photo by David Carboni on Unsplash | https://dvddpl.medium.com/allocate-aws-costs-with-resource-tags-277de240487f | ['Davide De Paolis'] | 2019-11-21 10:03:42.531000+00:00 | ['Serverless', 'Software Engineering', 'AWS', 'Infrastructure As Code'] |
Getting Started with Plot.ly | Getting Started with Plot.ly
A Guided Walkthrough for Powerful Visualizations in Python
Authors: Elyse Lee and Ishaan Dey
Matplotlib is alright, Seaborn is great, but Plot.ly? That’s on an entirely new level. Plot.ly offers more than your average graph by providing options for full interactivity and many editing tools. Differentiated from the others by having options to have graphs in offline and online mode, it is also equipped with a robust API that when set up will work seamlessly to have the graphs displayed in a web browser as well as the ability for saving a local copy. One of the only frustration you’ll come across is dealing with many options to figure out the tools you want to use for your plots.
Overview
We’ll start with the basics of setting up plot.ly in Python. After that, we’ll get started with some basic visualizations ranging from typical box & whisker plots to choropleth maps, with code breakdowns along the way. We’ve made all visualizations in this guide using the Zillow Economics Dataset, which contains time-series data from 1996 to 2017 on various housing metrics aggregated by location. If you’re interested in the full code for this post, check out the GitHub link below; otherwise, all the code used to create the visualizations will be included for each visualization.
Our hope is that by the end, you’ll have developed a basic intuition for how the plotly API works, as well as a feel for the generalizable framework you can apply towards your own projects. You can find a link to a plotly cheatsheet here, and if you’re interested in fine tuning any of the parameters used for the visualization, you can access the documentation using the help() function. For more details on all types of plots and parameters, here is a link to more information on Plotly’s Python open source graphing library.
Setting Up
Plotly is a platform that runs on JSON: parameters are passed to the plotly API as dictionaries. We can access this API in Python using the plotly package. To install the package, open up a terminal and type $ pip install plotly or $ sudo pip install plotly .
Plotly’s graphs are hosted using an online web service, so you’ll first have to set up a free account online to store your plots. To retrieve your personal API key, follow the link here: https://plot.ly/settings/api#/. Once you’ve done so, you can begin setting up plotly with the set_credentials_file() function, as shown below.
import plotly plotly.tools.set_credentials_file(username=’YourAccountName’, api_key=’YourAPIKey’)``
Plotting Online & Offline
When displaying visualizations on plotly, both the plot and the data are saved to your plotly account. Without paying for more cloud storage, you can keep a maximum of 25 plots in the cloud, but these images can easily be stored locally and deleted to make space for more.
There are two main ways to display plotly plots. If you’re using Jupyter Notebook or another interactive python environment (files with the .ipynb extension), the py.iplot() function displays the plots in the output below the cell. py.plot() , on the other hand, returns a url that can be saved, and also opens using the default web browser.
The Plotly offline mode also enables you to save graphs locally. To plot offline, you can use plotly.offline.plot() or plotly.offline.iplot() . Again, the iplot() function is used for Jupyter notebook, and will display the plots within the notebook. plot() creates an HTML page that is saved locally to be opened in a web browser.
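As a quick illustration of the difference, here is the same figure rendered both ways (the data and filenames are arbitrary placeholders):
import plotly.plotly as py        # online mode: the plot is uploaded to your plot.ly account
import plotly.offline             # offline mode: the plot is rendered locally
import plotly.graph_objs as go

fig = go.Figure(data=[go.Bar(x=['A', 'B', 'C'], y=[1, 3, 2])])

# Online, inside a Jupyter notebook: renders below the cell and saves to your account
py.iplot(fig, filename='online-example')

# Offline: writes a standalone HTML file and opens it in your default browser
plotly.offline.plot(fig, filename='offline-example.html')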
Basic Structure
As we mentioned before, all plot.ly visualizations are created using Json structure which are list of parameters to be modified using API, so essentially you’ll see the parameters and general structure to make each plot which if you learn one, you can make the rest.
import plotly.plotly as py
import plotly.graph_objs as go
import plotly.plotly as py : This has the functions for communicating with the plotly servers
import plotly.graph_objs as go : This has the functions for generating graph objects. This is a useful module for calling help on to see all the attributes taken as parameters of an object. There are also different useful methods of the object available such as the update method that can be used to update the plot object to add more information onto it.
Generalized Structure
The graph_objs module contains several structures that are consistent across visualizations made in plot.ly, regardless of type.
We begin with trace , which can be thought of as an individual layer that contains the data and specifications for how the data should be plotted (i.e. lines, markers, chart type). Here’s an example of the structure of trace:
trace1 = {
"x": ["2017-09-30", "2017-10-31", "2017-11-30", ...],
"y": [327900.0, 329100.0, 331300.0, ...],
"line": {
"color": "#385965",
"width": 1.5
},
"mode": "lines",
"name": "Hawaii",
"type": "scatter",
}
As you can see, trace is a dictionary of parameters of the data to be plotted, as well as information about the color and line types.
We can compile several traces by appending them to a list, which we’ll call data . The order of traces in the list determines the order in which they’re laid onto the final plot. Typically, data should look something like this:
data = [trace1, trace2, trace3, trace4]
layout = go.Layout() : This object is used for the layout of the data including how it looks and changeable features such as title, axis titles, font, and spacing. Just like trace , it is a dictionary of dictionaries.
layout = {
"showlegend": True,
"title": {"text": "Zillow Home Value Index for Top 5 States"},
"xaxis": {
"rangeslider": {"visible": True},
"title": {"text": "Year from 1996 to 2017"},
"zeroline": False
},
"yaxis": {
"title": {"text": "ZHVI BottomTier"},
"zeroline": False
}
}
We can finally compile the data and the layout using the go.Figure() function, which produces a figure object that eventually gets passed to the plotting function we choose.
fig = go.Figure(data = data, layout = layout)
Bar Chart
go.Bar() creates a bar chart trace. Within the go.Layout() function, we can specify important information such as barmode = “group”, which groups the bars for each year together, labels for the x and y axes, and a title for the full graph.
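The snippet below is a small, self-contained example of that pattern; the values are placeholders rather than the actual Zillow numbers:
import plotly.plotly as py
import plotly.graph_objs as go

years = ['2014', '2015', '2016', '2017']
zhvi_top = [520000, 545000, 570000, 601000]        # placeholder values
zhvi_bottom = [135000, 141000, 150000, 158000]     # placeholder values

trace1 = go.Bar(x=years, y=zhvi_top, name='Top Tier')
trace2 = go.Bar(x=years, y=zhvi_bottom, name='Bottom Tier')

layout = go.Layout(
    barmode='group',                               # draw the bars for each year side by side
    title='Zillow Home Value Index by Tier',
    xaxis=dict(title='Year'),
    yaxis=dict(title='ZHVI ($)'))

py.iplot(go.Figure(data=[trace1, trace2], layout=layout), filename='zhvi-bar')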
Line Plot
go.Scatter() instantiates a trace of scatter type, as opposed to a bar chart or other form.
We can change the mode of the trace using the mode parameter. Even though we are using a scatter trace, we can generate a plot that draws both lines and markers (points) on the lines.
mode = “lines+markers”
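For example (again with placeholder values standing in for the Zillow data):
import plotly.plotly as py
import plotly.graph_objs as go

dates = ['2015-01-31', '2016-01-31', '2017-01-31']
values = [305000, 317000, 331300]                  # placeholder ZHVI values

trace = go.Scatter(
    x=dates,
    y=values,
    mode='lines+markers',                          # draw the line plus a marker at each point
    name='Hawaii')

layout = go.Layout(title='ZHVI over time', xaxis=dict(title='Date'), yaxis=dict(title='ZHVI'))
py.iplot(go.Figure(data=[trace], layout=layout), filename='zhvi-line')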
Time Series Line Plot
Here, we’ve added a range slider that adjusts the domain of data that can be included in the main plot using the rangeslider parameter.
We’ve also passed a colors dictionary containing a unique color for each state. To build it, we used the seaborn color_palette() function, specifying the color range and the number of discrete values we need from the distribution. Because plot.ly will not accept RGB tuples, we convert the output to HEX codes using the as_hex() function.
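Putting the two together, a trimmed-down version of this plot might look like the following; the per-state series are toy data rather than the real dataset:
import seaborn as sns
import plotly.plotly as py
import plotly.graph_objs as go

# Toy stand-ins for the per-state ZHVI time series (values are placeholders)
series = {
    'Hawaii':     (['2015-01-31', '2016-01-31', '2017-01-31'], [305000, 317000, 331300]),
    'California': (['2015-01-31', '2016-01-31', '2017-01-31'], [255000, 271000, 289000]),
}

# One discrete HEX color per state, since plot.ly will not accept seaborn's RGB tuples
colors = dict(zip(series, sns.color_palette('viridis', len(series)).as_hex()))

data = [
    go.Scatter(x=dates, y=values, mode='lines', name=state,
               line=dict(color=colors[state], width=1.5))
    for state, (dates, values) in series.items()
]

layout = go.Layout(
    title='Zillow Home Value Index by State',
    xaxis=dict(title='Year', rangeslider=dict(visible=True)),   # adds the draggable range slider
    yaxis=dict(title='ZHVI BottomTier'))

py.iplot(go.Figure(data=data, layout=layout), filename='zhvi-rangeslider')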
Multiple Scatter Plots
To create this layout, instead of appending the traces to a single data list, we create subplots using the make_subplots() function and add each trace to a specific location on the grid using the append_trace() function.
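Here is a condensed sketch of that approach using the older plotly.tools API that matches the imports above; the state series are placeholders:
from plotly import tools
import plotly.plotly as py
import plotly.graph_objs as go

states = ['Hawaii', 'California', 'New York', 'Washington']
x = ['2015', '2016', '2017']
ys = [[305, 317, 331], [255, 271, 289], [280, 291, 302], [240, 256, 270]]  # placeholder ZHVI in $1000s

fig = tools.make_subplots(rows=2, cols=2, subplot_titles=states)
for i, (state, y) in enumerate(zip(states, ys)):
    trace = go.Scatter(x=x, y=y, mode='lines+markers', name=state)
    # append_trace places each trace on its own cell of the 2x2 grid
    fig.append_trace(trace, i // 2 + 1, i % 2 + 1)

fig['layout'].update(title='ZHVI by State', showlegend=False)
py.iplot(fig, filename='zhvi-subplots')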
Choropleth Map
With the choropleth, we can take a shortcut using the figure factory class, which contains a set of functions to easily plot more complex figures such as geographical maps.
import plotly.figure_factory as ff
To the ff.create_choropleth() function, we pass a set of FIPS values, geographical identification codes specific to each county, city, or state, where the values (ZHVI_BottomTier) correspond to the data to be assigned to each region.
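A minimal call might look like the following; the FIPS codes and values are placeholders, and note that create_choropleth() needs a few extra geo dependencies (e.g. geopandas, pyshp, shapely) to be installed:
import plotly.plotly as py
import plotly.figure_factory as ff

# A handful of county FIPS codes with placeholder values; in the real plot, one FIPS
# code and one ZHVI_BottomTier value is passed per county.
fips = ['06075', '06085', '36061', '53033']
values = [1250000, 1100000, 980000, 560000]

fig = ff.create_choropleth(
    fips=fips,
    values=values,
    binning_endpoints=[600000, 800000, 1000000, 1200000],   # bucket edges for the color scale
    legend_title='ZHVI BottomTier')

py.iplot(fig, filename='zhvi-choropleth')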
Final Thoughts
As the examples of different graph types above show, Plot.ly is a powerful tool for developing both visually pleasing and comprehensible plots for a wide range of audiences. It has many benefits, including being widely accessible with both offline and online modes, and containing functions that can display generated graphs in the notebook and in a web browser. With its extra advantages in interactivity, Plotly is a great alternative to Matplotlib and Seaborn and can boost the impact of your presentations.
Let us know if you have any questions! | https://towardsdatascience.com/getting-started-with-plot-ly-3c73706a837c | ['Ishaan Dey'] | 2019-06-14 19:04:29.685000+00:00 | ['Data Visualization', 'Plotly', 'Data Science', 'Python'] |
The Snowman | It was one of those magical days.
The kids had a snow day and I decided to blow off work.
So, we spent the day outside. There were snowball fights and sledding after I spent approximately four hours shoveling.
The showcase of course was the Snowman.
We spent the afternoon on him. Three nearly perfect spheres of snow in three different sizes stacked on top of each other.
My wife brought out a baby carrot for the nose. I had no idea where to get coal so the kids dug in the yard to find some frozen gravel which became the eyes and mouth.
Then, all it took was some buttons, an old hat and scarf, and some well placed sticks.
The youngest had just seen Frosty the Snowman and was half convinced that he would come to life when we put the hat on.
She was disappointed when he didn’t.
And now we were in the house. The kids were playing some video game and I was dicking around on my phone like a modern Norman Rockwell painting.
There was even hot cocoa.
I went to let the dog in and happened to pass by the front window that overlooked the front yard.
A flash of movement caught my eye.
“Honey did you see that?”
“See what?”
“The snowman. I swear its head turned,”
“You’re ridiculous.”
“No I swear.”
“Whatever you say.”
I stood there watching the snowman for 15 minutes, and I swear it was turning its head slightly to see if I was still watching him.
But soon it was time for bed, and I forgot about the weird feeling.
Until the next day I was walking out to work and as I opened my car door I was hit in the back with a snowball.
I spun around and the street was quiet. It was 5:00 am, and the street was dark.
I waited but still nothing.
Later that day, I picked the kids up from school and there was a puddle of water in the kitchen.
At first I thought the dog had an accident, but realized it was just…water.
“Where’d this come from?”
But the kids just shrugged.
“Maybe Muffin spilled her water again?” one of them said.
“From the other room?”
Again they just shrugged. At that age that was about the extent of their communication skills.
So I cleaned it up.
I looked back out front and the snowman was still there, but I swore he was in a slightly different spot.
Again, it slipped to the back of my mind as I had other things to do.
But that night, I felt cold air on my face.
I woke up and I heard the bedroom door shut.
I sprang up and when my feet hit the floor they were wet. I was standing in another puddle.
I sprinted downstairs to look out the window.
But the snowman was there. Though, I could swear there was the slightest trail in the snow.
I tried to wake my wife up but she was having none of it.
“Honey,” I whispered.
“I swear to god you say something else about the snowman I’m leaving you.”
“Ugh fine.”
“You’re crazy.”
“It’s alive babe.”
“Whatever go to sleep.”
The next day, nothing happened, and I was slightly relieved because there was supposed to be a warm spell. So my snowman worries were over.
I went to bed.
And I awoke to the same cold air I felt the other night.
And something being pushed on my face.
My eyes popped open to see the snowman trying to smother me with his scarf.
I tried to fight him off, but soon it went dark.
I woke up to more cold air.
So, so cold.
I realized I was standing in my front yard but when I tried to walk back in the house I couldn’t move.
I was able to see and turn my head slightly.
But the horror came when I saw that I was no longer in my body.
I was made of snow. My arms were sticks, and my nose a carrot.
It got worse as I saw my family leaving the house in the morning. I could see my old body moving just fine.
What happened?
But my body turned and I saw the same wicked grin that was on the snowman’s face the night before.
Oh god.
He switched our bodies.
I tried to move but nothing.
I tried to scream, to say anything, but there was only silence.
I was stuck in a frozen body as the sun rose and I could already feel my body starting to melt. | https://medium.com/the-inkwell/the-snowman-4ef92500f4b1 | ['Matthew Donnellon'] | 2020-11-28 04:17:24.882000+00:00 | ['Creativity', 'Short Story', 'Life', 'Books', 'Fiction'] |
Before It Goes | a part-time writer’s frustration
is the state of brimming with
ideas threatening to disappear
you
searching for nooks and crannies
of time in the workday’s drudgery
everything is drudgery, after all,
compared to writing
to jot it down, the brilliance
that has possessed you. | https://medium.com/meri-shayari/before-it-goes-1cd271aa22f9 | ['Rebeca Ansar'] | 2019-10-30 15:01:01.640000+00:00 | ['Writing', 'Creativity', 'Poetry', 'Ideas', 'Writer'] |
How the Kardashev Scale Will Determine Our Future | One of the most fascinating theories regarding space is that of the Kardashev scale. Meant to describe what alien civilizations would look like, it labels our own in comparison to what may be in the cosmos.
What is the Kardashev Scale?
Proposed originally by Russian astrophysicist, Nikolai Kardashev, this scale measures potential civilizations by energy output in 3 stages.
He believed that a civilization is measured on a cosmic level by its energy usage and the technology it uses. According to Kardashev, these two aspects run parallel to one another.
As more energy is produced, higher levels of technology are needed to produce it. Therefore, a society that has a high energy output must have matching technology.
In other words, more energy output = more technologically advanced.
Here’s the sad part…humans aren’t even on this scale yet.
We still harvest most of our energy from dead animals, plants, and the Earth itself. Society as we know it is a Type 0 civilization. Ouch.
Current estimates on when we may even be promoted onto the scale aren’t for another few hundred years. That being said, what does each stage of the Kardashev Scale mean and how is it applicable to our future?
Type I Civilization
This relatively low-level civilization, which we have almost achieved, is characterized as one that harnesses the complete energy of its neighboring star. Basically, solar power.
Essentially, natural disasters can be converted into energy rather than destruction by civilizations of this caliber. Our energy production would need to be 100,000 times what it is now to achieve this.
Yeah, that’s the low level of this scale…start imagining the rest.
A very important characteristic, and perhaps why we haven’t achieved it, is that a Type I civilization has the ability to store enough energy for its growing population.
In other words, overpopulation is no longer a problem.
The entire planet’s resources would be utilized for energy, as well as the light from our star.
While it may sound extremely far fetched, energy production is largely exponential and one or two large breakthroughs could help us achieve this stage.
Type II Civilization
This form of civilization is similar to Type I in the sense that a civilization of this rank controls the energy from its star. However, it goes beyond just converting solar energy.
Type II civilizations have the ability to directly manipulate their star and convert its energy into something more powerful than anything we’ve seen.
An idea would be a device that can capture the energy released from fusion which powers stars. This energy could then theoretically be transferred back to a home planet for use.
If gathered this way, the energy would be far more powerful than any form of solar energy we have seen.
Having the ability to actually manipulate a star means that all-natural universal disasters would no longer pose a threat to the planet.
The capability of celestial manipulation means that any asteroid on a collision path with us could be vaporized, for example. We are an estimated 1000–2000 years away from this level of technology, provided we don’t wipe ourselves out first.
Type III Civilization
Finally, we get to the big dogs of the universe. Advanced civilizations of this level have harnessed all the energy available from their galaxy in a sustainable way.
Complete colonization and the energy gathered from hundreds of millions of stars power a civilization of this magnitude. A society this advanced is at least a million years ahead of us in development.
One of the only ways society would successfully reach this point is by overcoming light-speed travel, which may entail using wormholes or some other form of travel.
The technology used by a society such as this would most likely appear as magic to us at our current level. Truthfully, if beings of this strength came across us they would probably terraform our world due to our low development.
It’s worth noting that in recent years, researchers have extended the scale by two notches: Type IV and Type V. Universal and multiversal civilizations, respectively, these groups would transcend time and space as we know it.
Given that civilizations of those types aren’t even comprehensible, scientists haven’t officially added them to the scale.
The truth of the matter is that we are a Type 0. We’ve wasted centuries fighting one another for resources, attempting to beat one another when we really should have been collaborating.
Any hope we have of advancing into the stars and onto the scale requires that we work together. Achieving a Type I civilization would put an end to our resource use and overpopulation problems, but only if we can put our differences aside.
The future of our society depends on it. We’ll see how it all plays out. | https://medium.com/predict/how-the-kardashev-scale-will-determine-our-future-723706cf33c1 | ['Trevor Mahoney'] | 2019-11-23 22:45:30.352000+00:00 | ['Astronomy', 'Space', 'Ideas', 'Science', 'Future'] |
I’m No Longer a Developer | I’m No Longer a Developer
A path to being a software engineer
Photo by Austin Distel on Unsplash
People, organizations, companies, and even nations are all dependent on software. Developing, deploying, upgrading, and even decommissioning software can be costly exercises that carry real, life-threatening consequences. Projects have budgets, users have requirements, requirements change, technical solutions are plentiful, and not all solutions are equal in outcome, and this is why software engineers are important. Simply being a good developer does not equate to being a great software engineer. If you have studied computer science or attended a developer boot camp, there are most likely more skills you must still obtain in order to truly fulfill the role of a software engineer. In this article, we will introduce additional reasons why software engineering is important, compare the scope of a developer with the scope of a software engineer to clarify the gaps that might exist between the two, and look at how developers unknowingly transition into being software engineers.
What costs more the hardware or the software?
Like all professions in this world, cost plays a big role in determining how we operate. The amount of economic impact we have in the daily functions we execute in our professions often determines the training we must have before being allowed to perform those functions and often determines the number of precautions we put in place to prevent us from making mistakes while executing those functions.
Software always dominates the cost of your average PC when compared to hardware. Software also has a longer life cycle: where the hardware a user buys is complete, software needs to be updated, both for applications and to improve the performance of the hardware itself. According to Bloomberg reports, the iPhone X with 64 gigabytes of storage costs about $370.25 to make and is sold for a little under $1,000. Although that figure does not include manufacturing and software, it gives us an idea of what software adds to the cost of the product.
Developer vs Software Engineer
Software engineering covers all of the methods, tools, and theoretical practices around the software development process, whereas developers are focused on implementing customer requirements by writing lines of code. In other words, software engineering includes all facets of software production. Both developers and software engineers should be well organized and use systematic approaches to delivering solutions, but the shortfalls or accomplishments of a software engineer will surely have a greater impact on the product and the organization it resides in.
It helps if both developers and software engineers have some form of computer science background, but it is by no means a requirement. For developers, a strong understanding of computer science fundamentals can be sufficient in some positions; for software engineers, computer science fundamentals alone are not enough to be successful. Software engineers also need stronger soft skills, similar to successful project managers and product owners. The cost of the development process, product specifications, validation, and evolution are all in scope for the software engineer. Although this can vary based on team or company size, in general, software engineers rarely write code, if any at all. To get a better understanding of the scope of software engineering, the ACM and IEEE-CS approved the following eight principles that software engineers shall adhere to:
1. PUBLIC — Software engineers shall act consistently with the public interest.
2. CLIENT AND EMPLOYER — Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.
3. PRODUCT — Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
4. JUDGMENT — Software engineers shall maintain integrity and independence in their professional judgment.
5. MANAGEMENT — Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
6. PROFESSION — Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
7. COLLEAGUES — Software engineers shall be fair to and supportive of their colleagues.
8. SELF — Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.
Sorry, You’re No Longer a Developer
Photo by Mason Kimbarovsky on Unsplash
Most computer science programs make sure to embed a solid theoretical understanding in their students before they graduate and move into the professional world. Depending on how many hands-on, practical implementations of those concepts you were able to perform, you may or may not have a steep learning curve in adjusting from school problems to the real problems of the world. This may include simply learning how to integrate functionality into a large code base that is not yours, or even the art of testing your code without being able to simply compile what you have just implemented to see if it works, like you would on a simple school CS assignment.
Soon after mastering the professional requirements of being a developer, you start to gain more responsibilities, attend more meetings, and eventually write less code as you find yourself in more meetings with customers, management, and business colleagues. You have proven that you can not only write code, but also clearly explain the pros and cons of different decisions to your non-technical leadership. You have officially outgrown the skills obtained through your computer science degree, boot camp training, and early career. You must now accept that, in reality, you have a decision to make.
Do you really want to ask for less responsibility so you can return to just being a developer?
Do you accept the challenge and write less code?
Do you find another job and start this process over just so you can be a developer once more?
Not all companies follow this same progression, depending on their size and philosophy, but this is a reality that many well-spoken developers may soon find themselves in. Many very talented software engineers would love to be just a developer. Is there anything wrong with that?
AWS Encryption SDK in Baby Steps | How to protect customer data from physical and applicative data breaches is a challenge that every developer will face sooner than later.
When developing a cloud application based on AWS, the responsibility of protecting customer data is shared between the AWS infrastructure and the application developer. The AWS infrastructure provides encryption at rest, protecting against physical data breaches, and data protection in transit, while the application developer implements client-side encryption.
In reality, designing a client side encryption solution is a challenging task with lots of considerations to take care of, like managing the encryption keys’ lifecycle, complying with encryption industry standards, choosing the encryption library, etc.
Luckily, AWS developed an open source solution — the AWS Encryption SDK that applies encryption industry best practices and innovations, while hiding most of the complexity in a simple set of APIs and configurations. In addition, the SDK integrates natively with the AWS Key management service. | https://medium.com/cyberark-engineering/aws-encryption-sdk-in-baby-steps-a2a5a99cea24 | ['Albert Niderhofer'] | 2020-10-07 07:33:17.907000+00:00 | ['Aws Sdk', 'Python', 'Encryption', 'AWS'] |
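To give a feel for how small that API surface is, here is a minimal, hypothetical sketch of encrypting and decrypting a payload with the SDK's Python implementation and a KMS key; the key ARN is a placeholder, and the exact client API differs slightly between SDK versions:
import aws_encryption_sdk

# Placeholder: replace with the ARN of a KMS key you own
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

client = aws_encryption_sdk.EncryptionSDKClient()
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[KMS_KEY_ARN])

secret = b"customer data that must never be stored in plaintext"

# Encrypt: the SDK asks KMS for a data key and returns a portable encrypted message
ciphertext, encrypt_header = client.encrypt(source=secret, key_provider=key_provider)

# Decrypt: the SDK calls KMS again to unwrap the data key
plaintext, decrypt_header = client.decrypt(source=ciphertext, key_provider=key_provider)

assert plaintext == secret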
4 Books to Help You Become a Seasoned Python Programmer | 4 Books to Help You Become a Seasoned Python Programmer
A review of some of the best Python books
Photo by Link Hoang on Unsplash.
I know, I know. The internet has been around for many years, so why on earth are there people who still want to read books? There are tons of free resources online. Who is “stupid” enough to pay $30-40 or even more to buy a book on programming?
I’m one of the “stupid” people who have bought many books on Python to improve my coding skills, and they have benefited my work as a scientist whose job has lots of coding requirements.
In this article, I’d like to share four Python books that I find to be more useful than others.
Disclaimer: I’m providing Amazon links to these books for your convenience. They’re not affiliate links and I have no conflicts of interest to declare. The discussion of these books is solely based on my personal learning experience. | https://medium.com/better-programming/4-books-to-help-you-become-a-seasoned-python-programmer-7dea2fade7ed | ['Yong Cui'] | 2020-10-14 15:05:30.663000+00:00 | ['Machine Learning', 'Python', 'Artificial Intelligence', 'Software Development', 'Programming'] |
Things to Consider When Choosing a Component Library | Things to Consider When Choosing a Component Library
A wrong choice can lead to bad, hacky code
Photo by Oliver Roos on Unsplash
Modern UIs are composed of multiple components. Popular component libraries include components like:
Buttons
Forms: input fields, select elements, checkboxes, radio buttons
Dialogs, modals, and popovers
Cards
Tabs
Some of the most important things to consider when picking a component library:
Maintainability: Who are the people behind the library? Is it a company, a group of passionate open-source contributors, or a one-man army?
Flexibility: To what degree can this library be customized to your needs?
Ease of use: How difficult is it to use and integrate this library into a project?
Popularity: Is this library used by millions of developers or just a few developers in one company?
Dependencies: How much does this library depend on other third-party libraries?
Documentation and resources: Where and how can I get tutorials and other information about this library?
In this post, I want to take a closer look at this with a real-life example. The project will be an Angular web application and I’ll use the following component libraries:
Angular Material: A framework that contains material design components for Angular.
Bootstrap 4: The most popular CSS framework, originally created by Twitter.
ngx-bootstrap: An implementation of the popular Bootstrap framework which is intended for Angular applications.
Nebular: A customizable Angular UI Library with multiple components, themes, and further feature modules like Auth and Security.
While we’re taking a closer look at component libraries, many of these aspects apply to libraries and frameworks in general.
For The Love Of A Cam Girl | For The Love Of A Cam Girl
Grant Amato stole hundreds of thousands of dollars from his family. They did everything possible to keep him out of jail and he thanked them with bullets.
Grant Amato | Source: publicpolicerecord.com
The Amato family was an American success story. Chad and Margaret had three boys; Jason, Cody, and Grant. They lived in a beautiful home in Chuluota, Florida, and were a tight familial unit.
Chad was employed as a pharmacist and also worked on computers. He strove to provide his family with a comfortable life. Margaret loved horses, especially ones that were abused or left for dead. She would go to the stable daily to work with a horse that she rescued and never gave up on it no matter how many times it bucked her.
The three boys got along with each other well, with Cody and Grant being each other’s best friend and the closest in age. The Amato’s loved football and often went to Florida Gators games together. From the outside looking in, they were the ideal happy family.
When Grant and Cody were in high school they became interested in health and wellness. They joined the school’s weightlifting team and would push each other in the gym.
After high school, Grant joined Cody at the University of Central Florida and this is when life began to drastically change for Grant and the whole Amato family.
Problems
Grant and Cody Amato were on the same career path. They both were going to go through nursing school then anesthesiology school. After they graduated, the plan was for Cody and Grant to buy matching BMWs and eventually their parent’s house. Chad and Margaret had bought a house in Tennessee that they were going to retire to. Cody stuck to the plan, Grant had his difficulties.
He was able to finish his nursing requirements but failed out of anesthesiology school. Still, he landed a nursing job. It didn’t last long.
Grant was suspended while an investigation began into him stealing and improperly administering medication. The company he worked for had him arrested and subsequently fired. Somehow the charges were dropped, possibly from Cody paying $8,000 for his brother’s lawyer.
Out of work, Grant retreated to his bedroom in his parent’s house. Initially, he would mostly play video games and live stream. He wanted to become popular on the gaming streaming site Twitch and use that for his source of income.
When Grant wasn’t gaming or live-streaming, he watched porn. Through his pornographic excursions, he met a Bulgarian cam girl named Silvie and became infatuated with her.
The Catalyst
Silvie was gorgeous, with her long dark hair, beautiful figure, and exotic Eastern European dialect. Grant was in love and obsessed.
Silvie | Source: screenshot Social Evil TV on YouTube
According to Michael Williams’ Orlando Sentinel article, “Obsession, money, lies tore Grant Amato’s family apart. A jury will decide whether he killed them,” Grant began to steal money from his father and Cody to pay for Silvie’s virtual company. He would spend up to four hours a night watching Silvie do what cam girls do. The site used a token system to buy time with their models.
Grant typically bought 5,000 tokens at a time which cost $600. Silvie’s shows cost 90 tokens per minute. If he watched for four hours, that added up to 21,600 tokens and roughly $2,500. His desperation to keep Silvie in his life made him resort to thieving from the people who loved him.
The young man also needed to keep up the facade he portrayed when interacting with Silvie as someone rich and successful. He would send her lingerie and sex toys to use during her performances as well as extra cash aside from the tokens.
Grant stole credit cards from his father and Cody that would quickly get maxed out. When these transactions were first discovered, Grant explained that he needed the money to promote himself on Twitch. The two of them expected that he was lying, but Grant felt no need to stop because he knew they would never press charges.
The Ultimatum
The amount of money that Grant stole from his family became astronomical. Between his father and brother, he had taken about $200,000 in a span of months.
Vulnerable Silvie | Source: screenshot from That Chapter on YouTube
But the family protected their youngest member and continued to live as his hostages. Cody went above and beyond normal brethren duties. He shelled out $10,000 so he and Grant could go on an already planned trip to Japan. He hoped the vacation would allow Grant to clear his head. After the 10-day trip, they could seek treatment and therapy for the troubled man.
When the brothers returned home, the tension between Grant and his father increased. Chad Amato was becoming impatient with Grant’s inability to find a job. The stress he experienced from seeing his retirement being handed over to a woman his son had never even met in person was too much to contain and he rode Grant hard to turn his life around.
After one argument, Grant walked out of the house. His family reported him missing and told the police that he was extremely depressed.
Grant went to the home of his aunt, Donna Amato, and was allowed to stay over. She noticed that something was wrong with her nephew. Then she started seeing bizarre charges in her bank account and thought that she had been hacked, but figured out that they were from Grant. Chad and Margaret begged Donna not to press charges and Cody even promised to cover all the money stolen by his brother.
In Williams’ article, he said that Chad broke down during one phone call, the only time Donna Amato had heard her brother-in-law cry in the 27 years she knew him. Chad explained that he had to remortgage his house to cover $150,000 of Grant’s debt.
One day the family surprised Grant and took him to a rehab facility. Cody footed the bill to the tune of $15,000. After a couple of weeks, Grant was allowed to return home, but there were strict rules with a zero-tolerance policy.
The requirements were: Grant had to get a job, he could not get online at night, he couldn’t have a cell phone, he could not contact Silvie, he had to go to therapy, and the family was done covering any of Grant’s debts. Chad had let another fraudulent line of credit go through and it was the last straw.
The End
The rules meant almost nothing to Grant as he never faced real consequences. He was able to con his mother into letting him use her phone to contact Silvie. According to Williams’ article, when Chad became aware of this, he grabbed his son by the shirt, told him to pack his things and get out of the house. He was no longer welcome.
On January 24, 2019, when Grant was supposed to be gathering his belongings to move, he decided on a different plan. While his mother worked on the computer, Grant walked up behind her and shot her in the head. He then waited for his father to get home.
Chad Amato carried a gun on him, so Grant knew he needed to sneak up on him. When his father arrived home and walked into the kitchen, Grant shot him twice in the head. Two down, one to go.
The finale was Cody. He was the key piece of Grant’s twisted plot. When Cody got out of his nursing shift and went home, he had no idea what he was walking into. Cody came inside the house and was immediately shot in the head.
Cody Amato’s lifeless body | Source: screenshot from That Chapter on YouTube
Grant then attempted to stage the scene to give the impression that Cody had killed their parents then turned the gun on himself. After a full day of murder, Grant went to a hotel.
The next day when Cody failed to show up for work, his coworkers became worried when they could not contact him, so they alerted the police. When the cops got to the home and could not get any response from banging on the door and blowing an air horn, they went inside to discover the gruesome scene. They tracked down Grant and took him in for questioning.
The Aftermath
Authorities found several pictures on Grant’s computer of credit cards belonging to his parents, Cody, and various members of the extended Amato family.
Grant Amato was questioned and interviewed. Three hours into the interrogation he hadn’t even asked what happened to his family or why he was there. When confronted with pictures of his dead family Grant teared up, but claimed he had nothing to do with it.
Towards the end of the long interrogation, the police had Grant’s oldest brother Jason come and talk to him. Grant didn’t budge. He wouldn’t come clean and was arrested on three charges of first-degree murder.
Grant Amato in court | Source: screenshot from That Chapter on YouTube
Grant lied during questioning and during the hearings. None of it worked and on July 31, 2019, jurors found Grant guilty of all three counts of first-degree murder. He received a life sentence for each.
The depths of loneliness can take people to a place they never imagined. Between pornography and virtual dating, Grant Amato finally found something and someone that made him feel good about himself. He knew the cam girl game was all about money, but still wanted to believe someone as beautiful as Silvie liked him for more than just his cash and gifts even though he could only talk to her by spending money.
Obsession is powerful. Grant’s infatuation made him throw away the only people who loved him. Within a matter of months, Silvie became the only thing that mattered to him. His parents gave him an amazing life full of privileges that many American kids don’t experience.
He took advantage of his brother and only friend’s generosity, then used him as a pawn in an attempt to implicate him in the murders. Cody spent countless days and tens of thousands of dollars to try and help his little brother and was discarded like trash for his 30 plus years of loyalty.
The Amato’s were incredibly compassionate people whose lives ended in heartbreaking fashion. Their unwavering love for their son couldn’t compete with his desire for a young vixen with a sexy accent. | https://medium.com/crimebeat/for-the-love-of-a-cam-girl-9e8e83805ead | ['Aj Wiseman'] | 2020-12-13 18:14:58.377000+00:00 | ['Social Media', 'True Crime', 'Psychology', 'Society', 'Addiction'] |
How to Configure AWS Route 53 | by Serguey Martinez, Staff Engineer
Photo by Procreator UX Design Studio
If your architecture is based on microservices it’s a really good idea to buy a domain in Route 53 and make the connections with API gateway so that you can enjoy URL’s like:
dev.mydomain.com/docs
dev.mydomain.com/microservice-basePath
dev.mydomain.com/another-microservice-basePath
prod.mydomain.com/docs
prod.mydomain.com/microservice-basePath
prod.mydomain.com/another-microservice-basePath
In this quick tutorial, I’m going to show you how to configure AWS Route 53. The first step is to buy a domain:
Next, go to AWS certificate manager and request certificates for the subdomains that you will need. Heads up that AWS requires you to have TLS encryption.
So, choose the email owner to verify the certs:
Then, wait for them to get issued…
Once you have the certificates issued let’s go to API gateway -> custom domains -> create .
The domain name here should be something like dev.mydomain.com or api.example.com depending on the certificates you requested.
Choose the appropriate certificate and create the custom domain:
You can map now which API rest will handle which basePath.
In the example below if we try to access the URL https://dev.mydomain.com/docs it will delegate the request to swagger-dev API:
In this case, swagger-dev has the following structure. Remember that if you are mapping basePath X (in this case, docs ), then you don't need to create another X basePath in the API where the requests are being delegated.
The last step consists of creating a record in Route 53 hosted zones. Whenever you buy a domain a hosted zone is created for you automatically. You can see your name servers (NS) and start of authority (SOA). On very rare occasions you have to touch those configurations — be careful not to do it now.
Go to Route53 -> hosted zones -> your domain -> create record -> simple routing . As the record name you should put the name of the subdomain: api , dev , etc.
In route traffic to , choose the API gateway alias you created in custom domains for a specific service:
Summary
We just created the recommended flow that will support microservices development with Route 53 and API gateway. Due to the CloudFormation limit of 200 resources per stack, we can split our services into microservices with dedicated business logic.
Building a Design System Package With Storybook, TypeScript, and React in 15 Minutes | Building a Design System Package With Storybook, TypeScript, and React in 15 Minutes
A design system will make you more productive and help you build new features and components faster
Background photo by Nils Johan Gabrielsen on Unsplash.
When building out a UI component library for my own work, I ran into a couple of pain points while searching for how to create a simple workflow that “just works.”
Most tutorials I saw for TypeScript component libraries made use of build tools (which at times caused headaches) and my current job took the opposite extreme of publishing the UI component library as TypeScript and relying on individual projects to transpile it to JavaScript directly from the library itself (if you are from my company… you didn’t read anything). | https://medium.com/better-programming/building-a-design-system-package-with-storybook-typescript-and-react-in-15-minutes-b5fd5711339e | ["Dennis O'Keeffe"] | 2020-12-07 18:17:52.020000+00:00 | ['Programming', 'React', 'Typescript', 'Design Systems', 'JavaScript'] |
Monitor your infrastructure with InfluxDB and Grafana on Kubernetes | Grafana in action — Learn how to set it up in your AWS cloud
Monitoring your infrastructure and applications is a must-have if you play your game seriously. Overseeing your entire landscape (running servers, cloud spend, VMs, containers, and the applications inside them) is extremely valuable for avoiding outages and fixing things quicker. We, at Starschema, rely on open source tools like InfluxDB, Telegraf, Grafana, and Slack to collect, analyze, and react to events. In this blog series, I will show you how we built our monitoring infra to monitor our cloud infrastructure, applications like Tableau Server and Deltek Maconomy, and data pipelines in Airflow, among others.
In this part, we will build up the basic infrastructure monitoring with InfluxDB, Telegraf and Grafana on Amazon’s managed Kubernetes service: AWS EKS.
Create a new EKS Kubernetes cluster
In case you have an EKS cluster already, just skip this part.
I assume you have a properly set up AWS CLI on your computer; if not, please do it, as it will be a life-changer. Anyway, first, install eksctl, which will help you manage your AWS Elastic Kubernetes Service clusters and will save tons of time by not requiring you to rely on the AWS Management Console. You will also need kubectl.
First, create a new Kubernetes cluster in AWS using eksctl without a nodegroup:
eksctl create cluster --name "StarKube" --version 1.18 --region=eu-central-1 --without-nodegroup
I used the eu-central-1 region, but you can pick another one that is closer to you. After the command completes, add a new nodegroup to the freshly created cluster that uses only one availability zone (AZ):
eksctl create nodegroup --cluster=StarKube --name=StarKube-default-ng --nodes-min 1 --nodes-max 4 --node-volume-size=20 --ssh-access --node-zones eu-central-1b --asg-access --tags "Maintainer=tfoldi" --node-labels "ngrole=default" --managed
The reason why I created a single AZ nodegroup is to be able to use EBS backed persistent volumes along with EC2 autoscaling groups. On multi-AZ node groups with autoscaling, newly created nodes can be in a different zone, without access to the existing persistent volumes (which are AZ specific). More info about this here.
TL;DR use single-zone nodegroups if you have EBS PersistentVolumeClaims.
If things are fine, you should see a node in your cluster:
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-36-245.eu-central-1.compute.internal Ready <none> 16s v1.18.9-eks-d1db3c
Create a namespace for monitoring apps
Kubernetes namespaces are isolated units inside the cluster. To create our own monitoring namespace we should simply execute:
kubectl create namespace monitoring
For our convenience, let’s use the monitoring namespace as the default one:
kubectl config set-context --current --namespace=monitoring
Install InfluxDB on Kubernetes
Influx is a time-series database with easy-to-use APIs and good performance. If you are not familiar with time-series databases, it is time to learn: they support special query languages designed to work with time-series data, and neat features like downsampling and retention.
To install an application to our Kubernetes system, usually we:
1. (Optional) Create the necessary secrets as an Opaque Secret (to store sensitive configurations)
2. (Optional) Create a ConfigMap to store non-sensitive configurations
3. (Optional) Create a PersistentVolumeClaim to store any persistent data (think of volumes for your containers)
4. Create a Deployment or DaemonSet file to specify the container-related stuff like what we are going to run.
5. (Optional) Create a Service file explaining how we are going to access the Deployment
As stated, the first thing we need to do is to define our Secrets : usernames and passwords we want to use for our database.
kubectl create secret generic influxdb-creds \
--from-literal=INFLUXDB_DB=monitoring \
--from-literal=INFLUXDB_USER=user \
--from-literal=INFLUXDB_USER_PASSWORD=<password> \
--from-literal=INFLUXDB_READ_USER=readonly \
--from-literal=INFLUXDB_READ_USER_PASSWORD=<password> \
--from-literal=INFLUXDB_ADMIN_USER=root \
--from-literal=INFLUXDB_ADMIN_PASSWORD=<password> \
--from-literal=INFLUXDB_HOST=influxdb \
--from-literal=INFLUXDB_HTTP_AUTH_ENABLED=true
Next, create some persistent storage to store the database itself:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
namespace: monitoring
labels:
app: influxdb
name: influxdb-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
If you are new to Kubernetes, the way to execute these files is to call kubectl apply -f <filename> , in our case kubectl apply -f influxdb-pvc.yml .
Now, let's create the Deployment, which defines what containers we need and how:
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: monitoring
labels:
app: influxdb
name: influxdb
spec:
replicas: 1
selector:
matchLabels:
app: influxdb
template:
metadata:
labels:
app: influxdb
spec:
containers:
- envFrom:
- secretRef:
name: influxdb-creds
image: docker.io/influxdb:1.8
name: influxdb
volumeMounts:
- mountPath: /var/lib/influxdb
name: var-lib-influxdb
volumes:
- name: var-lib-influxdb
persistentVolumeClaim:
claimName: influxdb-pvc
It will create a single pod (since replicas=1 ), passing our influxdb-creds as environment variables and using the influxdb-pvc PersistentVolumeClaim to obtain 5GB of storage for the database files. If all goes well, we should see something like:
[tfoldi@kompi]% kubectl get pods -l app=influxdb
NAME READY STATUS RESTARTS AGE
influxdb-7f694df996-rtdcz 1/1 Running 0 16m
After we defined what we want to run, it's time to define how to access it. This is where the Service definition comes into the picture. Let's start with a basic LoadBalancer service:
apiVersion: v1
kind: Service
metadata:
labels:
app: influxdb
name: influxdb
namespace: monitoring
spec:
ports:
- port: 8086
protocol: TCP
targetPort: 8086
selector:
app: influxdb
type: LoadBalancer
It declares that our pod's 8086 port should be available through an Elastic Load Balancer (ELB). With kubectl get service , we should see the external-facing host:port (assuming we want to monitor apps outside of our AWS internal network).
$ kubectl get service/influxdb
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
influxdb LoadBalancer 10.100.15.18 ade3d20c142394935a9dd33c336b3a0f-2034222208.eu-central-1.elb.amazonaws.com 8086:30651/TCP 18h

$ curl http://ade3d20c142394935a9dd33c336b3a0f-2034222208.eu-central-1.elb.amazonaws.com:8086/ping
This is great, but instead of HTTP , we might want to use HTTPS . To do that, we need our SSL certificate in ACM with the desired hostname. We can either generate a new certificate (requires Route 53 hosted zones) or upload our external SSL certificate.
Amazon Issued SSL Certs are great but require Route 53 hosted zones. Alternatively, you can import existing SSL certificates.
If we have our certificate in ACM, we should add it to the Service file:
apiVersion: v1
kind: Service
metadata:
annotations:
# Note that the backend talks over HTTP.
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
# TODO: Fill in with the ARN of your certificate.
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
# Only run SSL on the port named "https" below.
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
labels:
app: influxdb
name: influxdb
namespace: monitoring
spec:
ports:
- port: 8086
targetPort: 8086
name: http
- port: 443
name: https
targetPort: 8086
selector:
app: influxdb
type: LoadBalancer
After executing this file, we can see that our ELB listens on two ports:
[tfoldi@kompi]% kubectl get services/influxdb
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
influxdb LoadBalancer 10.100.15.18 ade3d20c142394935a9dd33c336b3a0f-2034222208.eu-central-1.elb.amazonaws.com 8086:30651/TCP,443:31445/TCP 18h
SSL is properly configured; the only thing missing is to add an A or CNAME record pointing to EXTERNAL-IP .
We’re all set: our database is running, and it is available on both HTTP and HTTPS protocols.
Installing Telegraf on Kubernetes
We need some data to validate our installation, and by the way, we already have a system to monitor: our very own Kube cluster and its containers. To do this, we will install Telegraf on all nodes and ingest CPU, I/O, and Docker metrics into our InfluxDB. Telegraf has tons of plugins to collect data from almost everything: infrastructure elements, log files, web apps, and so on.
The configuration will be stored as a ConfigMap; this is what we are going to pass to our containers:
apiVersion: v1
kind: ConfigMap
metadata:
  name: telegraf
  namespace: monitoring
  labels:
    k8s-app: telegraf
data:
  telegraf.conf: |+
    [global_tags]
      env = "EKS eu-central"
    [agent]
      hostname = "$HOSTNAME"
    [[outputs.influxdb]]
      urls = ["http://$INFLUXDB_HOST:8086/"] # required
      database = "$INFLUXDB_DB" # required
      timeout = "5s"
      username = "$INFLUXDB_USER"
      password = "$INFLUXDB_USER_PASSWORD"
    [[inputs.cpu]]
      percpu = true
      totalcpu = true
      collect_cpu_time = false
      report_active = false
    [[inputs.disk]]
      ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
    [[inputs.diskio]]
    [[inputs.kernel]]
    [[inputs.mem]]
    [[inputs.processes]]
    [[inputs.swap]]
    [[inputs.system]]
    [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"
To run our Telegraf data collector on all nodes of our Kubernetes cluster, we should use a DaemonSet instead of a Deployment.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: telegraf
namespace: monitoring
labels:
k8s-app: telegraf
spec:
selector:
matchLabels:
name: telegraf
template:
metadata:
labels:
name: telegraf
spec:
containers:
- name: telegraf
image: docker.io/telegraf:1.5.2
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: "HOST_PROC"
value: "/rootfs/proc"
- name: "HOST_SYS"
value: "/rootfs/sys"
- name: INFLUXDB_USER
valueFrom:
secretKeyRef:
name: influxdb-creds
key: INFLUXDB_USER
- name: INFLUXDB_USER_PASSWORD
valueFrom:
secretKeyRef:
name: influxdb-creds
key: INFLUXDB_USER_PASSWORD
- name: INFLUXDB_HOST
valueFrom:
secretKeyRef:
name: influxdb-creds
key: INFLUXDB_HOST
- name: INFLUXDB_DB
valueFrom:
secretKeyRef:
name: influxdb-creds
key: INFLUXDB_DB
volumeMounts:
- name: sys
mountPath: /rootfs/sys
readOnly: true
- name: proc
mountPath: /rootfs/proc
readOnly: true
- name: docker-socket
mountPath: /var/run/docker.sock
- name: utmp
mountPath: /var/run/utmp
readOnly: true
- name: config
mountPath: /etc/telegraf
terminationGracePeriodSeconds: 30
volumes:
- name: sys
hostPath:
path: /sys
- name: docker-socket
hostPath:
path: /var/run/docker.sock
- name: proc
hostPath:
path: /proc
- name: utmp
hostPath:
path: /var/run/utmp
- name: config
configMap:
name: telegraf
Please note that this will use the same influxdb-creds secret definition to connect to our database. If all goes well, we should see our telegraf agent running:
$ kubectl get pods -l name=telegraf
NAME READY STATUS RESTARTS AGE
telegraf-mrgrg 1/1 Running 0 18h
To check the log messages from the telegraf pod, simply execute kubectl logs <podname> . You should not see any error messages.
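Optionally, before moving on to Grafana, you can double-check that measurements are actually landing in InfluxDB, for example with the influxdb Python client pointed at the ELB endpoint we exposed earlier (hostname and credentials below are placeholders):
from influxdb import InfluxDBClient  # pip install influxdb

# Placeholders: use your own ELB hostname and the readonly user from influxdb-creds
client = InfluxDBClient(
    host="your-influxdb-elb-hostname.elb.amazonaws.com",
    port=8086,
    username="readonly",
    password="<password>",
    database="monitoring",
)

# List the measurements Telegraf created (cpu, mem, disk, docker, ...)
print(list(client.query("SHOW MEASUREMENTS").get_points()))

# Peek at the three latest CPU samples
print(list(client.query("SELECT * FROM cpu ORDER BY time DESC LIMIT 3").get_points()))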
Set up Grafana in Kubernetes
This will be the fun part: finally, we should be able to see some of the data we collected (and remember, we will add everything). Grafana is a cool, full-featured data visualization tool for time-series datasets.
Let’s start with the usual username and password combo as a secret.
kubectl create secret generic grafana-creds \
--from-literal=GF_SECURITY_ADMIN_USER=admin \
--from-literal=GF_SECURITY_ADMIN_PASSWORD=admin123
Add 1GB storage to store the dashboards:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graf-data-dir-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Define the deployment. As the Grafana Docker image runs with uid:gid 472, we have to mount the persistent volume with fsGroup: 472 .
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: monitoring
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - envFrom:
            - secretRef:
                name: grafana-creds
          image: docker.io/grafana/grafana:7.3.3
          name: grafana
          volumeMounts:
            - name: data-dir
              mountPath: /var/lib/grafana/
      securityContext:
        fsGroup: 472
      volumes:
        - name: data-dir
          persistentVolumeClaim:
            claimName: graf-data-dir-pvc
Finally, let’s expose it in the same way we did with InfluxDB:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:<account>:certificate/<certid>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  ports:
    - port: 443
      name: https
      targetPort: 3000
  selector:
    app: grafana
  type: LoadBalancer
Voila, we should have our Grafana up and running. Let's check the ELB address with kubectl get services , point a nice hostname to its hostname/IP, and we are good to go. If all is set, we should see something like:
I am glad that you made it here, now let’s log on!
Use the username/password combination you defined earlier, and see the magic.
Home screen for our empty Grafana
Define database connection to InfluxDB
While this can be done programmatically, to keep this post short (it's already way too long), let's do it from the UI.
You know where should you click
Select InfluxDB:
Add http://influxdb:8086/ as the URL, and set up your user or the read-only InfluxDB user.
Adding our first Grafana Dashboard
Our telegraf agent is loading some data, so there is no reason not to look at it. We can import existing, community-built dashboards such as this one: https://grafana.com/grafana/dashboards/928.
Click on the + sign in the sidebar, then Import. On the import screen, add the number of this dashboard (928).
After importing, we should immediately see our previously collected data, live:
This is really cool
Feel free to start building your own dashboards; it is way easier than you think.
Next steps
In the next blog, I will show how to monitor our (and our customers’) Tableau Servers, and how to set up data-driven email/Slack alerts in no time.
Making Grids in Python | Making Grids in Python
Hip to be square
Photo by the author.
At some point, you might need to make a grid or grid-like visual structure for a project or coding interview question (mazes, search). A grid is also the foundation for placing objects on a canvas surface in an orderly manner and more complex cases like isometric views and tiled games.
This article is meant to take us both from zero grids to Intermediate Gridology in a hopefully painless way.
Note: The code and concepts discussed here are somewhat interchangeable between languages, but some languages provide a better native experience and the primitives needed for grids. Python is notorious for not having a native/simple GUI solution, so I’ll use the next best thing: pygame (install it if you want to code along):
pip install pygame
Or:
python3 -m pip install pygame==2.0.0
Pygame is not really a GUI library but rather a simple game engine you can install and understand in a few minutes and then adapt to your GUI or language of choice. See the following section for a crash-course snippet… | https://medium.com/better-programming/making-grids-in-python-7cf62c95f413 | ['Keno Leon'] | 2020-11-16 16:55:25.220000+00:00 | ['Programming', 'Python', 'Coding', 'Data Science', 'Software Engineering'] |
Only God Is an Atheist | 1. The boy with horns
I’ve known about the two horns growing out of the top of my head since about middle school. The one on the right comes to a sharper point than the other. My pediatrician, as far as I could tell, didn’t seem too worried or repulsed. Just a misshapen skull. Nothing a good haircut can’t disguise. As long as you can hide your horns, you can blend right in with the gen pop.
Typically, they’re known as cutaneous horns, which can look like tusks, bones, or tree limbs sprouting from a human skull. Mine are subcutaneous, just below the skin. If you were to shave my head, you’d find two half-inch-wide by half-inch-tall nubs trying to push through. They’re located where my frontal lobe meets my parietal lobe, and I can trace a straight line directly up to them from the top of my ears. When I was a young hypochondriac, I thought they were cysts, but my doctor assured me they’re just abnormalities in my skull. Nothing to fear.
I think it’s safe to say that every kid, even those to whom popularity comes as if by divine right, will at some point feel misunderstood and extraterrestrial. As a kid, my horns, despite their literal location, were the furthest things from my mind. By the end of middle school, there were plenty of other physical features that made me feel like an outsider: buck teeth, braces, a unibrow, puberty, and a mess of freckles that refused to disappear no matter what old wives tales promised about smearing lemon juice on the face.
It wasn’t until a visit to the hair salon that I became acutely self-conscious of my horns.
Haircuts almost always go the same way for me: Hairdressers run their hands through my hair and inadvertently grope the horns. Some gasp, while others go wide-eyed in the mirror. When I was too young to drive myself to the salon, they’d also make sure my mother knew her son had the Mark of the Beast.
“Do they hurt?” they like to ask.
There’s always been something somewhat sinister in the image of two of me, the real me and my reflection, grinning and blushing in the chair with a black cape draped around my body, saying, “Yeah, I know about the horns. No, they don’t hurt.”
You might be wondering why I didn’t just stick to one hairdresser as soon as I found one who knew about my weird skull. The truth is that, as a child, I was on a quest to find a hairdresser who could duplicate a very specific haircut. Specifically, this haircut was inspired by the scene in Casper — the live-action film starring Christina Ricci, Bill Pullman, and Devin Sawa — when Casper is temporarily transformed into a flesh-and-blood boy with a most righteous haircut, sculpted with gel into the shape of an M that arches atop his forehead.
In 1998, I was gearing up to honor the Jewish ritual of one’s passage into adulthood: my bar mitzvah. For the event, I’d wear a suit, a yarmulke, and, for the first time, a tallit over my shoulders — a prayer shawl that’s essentially a sacred scarf only to be worn by those who complete this initiation.
Thus, my ghost haircut had to be impeccable. The immortality of photo albums was a real concern even in seventh grade, well before the age of the hashtag. Photos had a shelf life of at least a thousand years in the homes of Jewish grandparents.
2. Finding MTV in the synagogue
Casper, a dead boy stuck with his family in an old house, cut off from the world of the living, was not so dissimilar from me. In fact, I’d never related to a character more. I wasn’t dead, per se. But during those early years, we didn’t have a computer, the internet, or cable TV. So at school, I might as well have been dead. This wasn’t so much about my parents trying to cut my sisters and me off from the modern world — it was just our living situation on a farm in the middle of the woods. We didn’t even have an air conditioner until 10 years ago.
The library of the temple where I studied Hebrew was a dark room filled with relics: ancient scrolls, shofars, candlesticks. These objects had survived invasions, fires, and death, passed in secret between believers and survivors, and preserved in glass. I certainly recognized their significance. I stared at them for years in that synagogue, and I held them in my hands at Sunday school to feel the weight of time, the word of God, and a people destroyed and regenerated. But there was something far more intriguing to me hiding in the corner of the library, especially when I was 12 going on 13: a television with access to MTV.
By that time, MTV had been around for almost two decades. I’d caught clips here and there, and I knew what I was missing. I knew there were videos that went along with the songs I curated on my DIY mixtapes. But before I found that television, my only chance to hear any new music was on the radio — Hot 97, K104, and 92.3 K-ROCK. The disembodied voices of radio hosts offered my only connection to artists I became obsessed with — from Tupac, to TLC, to Ace of Base, to the Beastie Boys.
I’d soon start sneaking out of Shabbat services to catch music videos: Biggie, Third Eye Blind, the Spice Girls, Bone Thugs, Aaliyah, and the Cranberries. Seeing a music video would forever change the way I listened to a song. I studied music videos on an almost anthropological level. I, for sure, studied them closer than my Hebrew. The TV was my vessel to worlds I’d been so eager to explore, but my friends who had MTV at home seemed rather blasé about it. They could survive without it. For me, seeing the bright visuals that correlated with the songs I loved seemed as necessary as air and water.
If a house of worship is supposed to be a kind of bridge to God, and ancient Hebrew a holy connection to your ancestral lineage, the sanctuary in which I was supposed to discover myself as an adult in the eyes of the Lord had instead become my place to experiment with new ways of interpreting the world.
I studied MTV as if desperate to find something new to satisfy my hesitation about becoming a devoted member of a religion. There was a kid, just a few years older than me and born with a Biblical name, whose life I could clearly see mapped out from middle school to death. Law school, Volvo, and khakis… I needed otherwise. To my teenage self, any religion that seemed so sure of human existence likewise felt as sure about the trajectory of mankind. The idea that the past, present, and future could be mapped out by something, or someone, far greater than me felt boring and limited.
Back at school, I was in no way immune to that particular solipsism that tends to manifest itself in the tender hearts of 13-year-olds: the idea that the whole world is working to destroy you and only you. My close friends in seventh grade joked that my bar mitzvah would really be a public circumcision hosted by a DJ, which was better than hearing jokes about my Jewish ancestors killing Jesus. But nowhere was my Judaism more controversial than among the puritanical lunch monitors. These women prowled the cafeteria like the Gestapo in gaucho pants. The most wicked ones made me eat my lunch when I was supposed to be fasting for holidays like Yom Kippur; there was no arguing with them. I dreaded these women more than the God of the Old Testament who, in the book of Job, swiftly murdered Job’s entire family, covered his body in boils, and killed all his livestock just to prove how loyal man can be to the Lord even in the face of absolute despair.
After my mom had a talk with the school about not punishing me for fasting during certain holidays, I began to taunt the lunch monitors. I’d march into the cafeteria some days and make up my own Jewish holidays, just to tell those women I was fasting. There was nothing they could do about it. In a way, I welcomed their animosity. This was about the time I recognized in myself a fetish for rebellion. I was only eight when I sang, at the top of my lungs in the lunch line around Christmastime, “Joy to the world — your God is dead.” That one landed me in the principal’s office. I shudder to think what those women would’ve done if they ever found out about my horns.
3. Liberated by Marilyn Manson
In seventh grade, in the temple library, I got my first look at a Marilyn Manson music video: It was his cover of the Eurythmics’ “Sweet Dreams.” I’d already heard the original but his depraved, distorted version awoke something in me. Manson’s image alone was confirmation that there might be others out there who understood what it was like to feel isolated from the modern world. He seemed to rejoice in the fact that life was a mad and ridiculous series of events. Here was this ghoulish, skeletal man covered in mud, riding a pig, and wearing a tattered wedding dress. He was smiling so wide and distantly that it was almost intimidating.
I was hooked. MTV, through the grace of the temple, gave me a new icon. Self-righteous politicians and talking heads in the mainstream media told America to fear his music. Religious groups protested his concerts. (Shit, even a young Katy Perry picketed Manson concerts with her parents.) They said this man was an assault on the children of America; they warned that his music was causing widespread moral panic. It’s hard to imagine now, but this was back when the name Marilyn Manson had the power to shock people. How dare he sandwich the likeness of Marilyn Monroe with the savagery of Charles Manson?
Manson performed bare-assed, in a black corset and thigh-highs, at the 1997 MTV Video Music Awards. He screamed at the audience that he could see them out there “trying your hardest not to be ugly… trying your hardest to earn your way into heaven.” His second album, Antichrist Superstar, celebrated the discovery of the individual and gave me a newfound courage to be myself, whoever that happened to be.
I admired Manson because I believed he was exposing listeners to the idea that you should question what you believed and live your life accordingly. For example, “Get Your Gunn” is all about Dr. David Gunn, an OB-GYN in Atlanta who was murdered by an anti-abortion Christian fundamentalist. In 1999, Manson wrote in Rolling Stone that Gunn’s death “was the ultimate hypocrisy I witnessed growing up: that these people killed someone in the name of being ‘pro-life.’”
What I gathered was someone telling me to keep curious. Question everything. I appreciated someone drawing back the curtain. In a 1997 appearance on the TV show Politically Incorrect, Manson said, “I want people to think about what they believe. I want them to consider if everything they’ve been taught, if that’s what they want to believe or if that’s what they’ve been told that they have to believe.” (This episode, for those interested, also featured a wonderful interaction between Manson and Florence Henderson — which, in itself, is the exact kind of balancing act of the American grotesque and beauty that gave birth to Manson’s persona.)
It wasn’t as if Manson offered me some life-changing epiphany when it came to faith and belief, but he did offer me the chance to feel confident about questioning what I’d already begun to suspect wasn’t entirely for me. At school, my Jewishness became an act of rebellion against the lunch monitors. At temple, my reluctance to worship in the way that was expected of me also became an act of rebellion. Outside the temple, I was the only Jew in school (aside from my little sisters). Inside the temple, I was the only metalhead, rap-loving Jew who’d found solace in music videos.
So when I finally stood before my friends and family for my bar mitzvah on May 31, 1998, my father’s birthday, I felt like an impostor. I wore the suit. I recited the lines. I’d learned this new language to the best of my ability, and yet I was becoming the exact opposite of what I was supposed to be promising to God and family. Looking back, my bar mitzvah became more of a funeral for my faith than a promise to God. (Grandma, if you’re still reading this, please forgive me.)
In the bar mitzvah photograph that still hangs in my parents’ hallway, everyone is dressed up, smiling at the camera, proud, and joyful. I’ve got my ghost haircut but my jacket’s missing, my tie is gone, and I’m looking off camera, squinting at something in the distance, something that no one else seems troubled by. Perhaps it’s a wonderful, lurking darkness, and perhaps I’m wondering what I’ll do now that I’ve shed this skin.
Photos courtesy of the author
4. I’m not attached to your world
Thus began my morning routine: Wake up and spike hair. Paint fingernails black. Snap on faux-leather studded bracelets. Put on three to five fake-silver rings with skulls, anarchy signs, and pentagrams. Step into a pair of JNCO jeans so wide they looked like a denim dress. Throw on long-sleeved, fishnet shirt. Layer black Marilyn Manson tee over the fishnet. Attach wallet to chain and clip to belt loop. Add two more chains that’d hang from waist to knees for no purpose other than aesthetics.
The first goth I ever met was an Edgar Allan Poe impersonator. In fifth or sixth grade, we filed into the auditorium and watched a man in all black pretend to be a crazed-drunk, death-obsessed mad poet, flailing about the stage. This was where we typically sang Christmas carols for winter recitals.
It’s possible that the goth aesthetic appealed to me because I felt like a teenage monstrosity; being goth gave me the ability to become the very monster I believed others saw in me. (Read: an embarrassing amount of teenage angst.) Being goth, if you ask me, isn’t about how you dress or what you listen to. It’s more about the willingness to accept the macabre, the unknown, and all the frights that come with being a human. It’s a way of becoming your very own memento mori.
My parents were teenagers in 1969 when the Manson Family murders capped off what was already a violent decade. Three decades later, my parents had a teenage son who looked up to a man who took on Manson as a stage name, and whose followers adorned themselves with dark, occultist supplies. For their patience, my parents deserve a Nobel.
High school yearbook photo of the author
My father was disappointed, but also encouraging. This is a man who liked to remind us of his time at Catholic school, where punishment was doled out by ruthless nuns. Even though I was aware that he and my mom agreed to raise us in her faith, he’d share with me his own idea of a comprehensive, kind of multifaceted God — a powerful being, sure, but one that shifted shape from person to person depending on age, time, and background.
Regardless of what I looked like, I still went to temple. There were endless amounts of blessed wine to sneak sips of behind the rabbi’s back. Plus, I had a crush on his daughter. The temple, maybe because it was fairly empty, felt like the place where I was most free of judgement. I also wasn’t going to give up my dose of MTV; after all, Marilyn Manson was speaking directly to me in that dark library.
Only three months after my bar mitzvah, the very first episode of Total Request Live (TRL), hosted by Carson Daly, aired on September 14, 1998. It was the day before the release of Manson’s follow-up to Antichrist Superstar, Mechanical Animals. Manson would be live on air to debate with youth pastors about the nature of his image, the meaning of his music, and how he may or may not have been a bad influence on listeners.
The streets were mobbed with fans in black when Manson went live. He had a new look: He’d mutated into this androgynous, Bowie-esque glam alien. Meanwhile, I’d become a vacuum of color: a metallic, sputnik-looking mall-goth (as, I’m sure, older, truer goths might’ve called me behind my back if they ever caught me walking out of Hot Topic). But that day I witnessed something far different from what I expected: an honest and open conversation between different types of people from various backgrounds and belief systems. Here were Christians sitting with the “antichrist,” having a genuine and respectful dialogue about art. As someone who’d become so eager to lob insults at anyone who was going to hate on Manson, my rage was deflated by everyone’s eloquence and respect.
Manson might’ve performed beneath the banner of the ultimate freak of nature, but, after this TRL episode, it seemed clear to me that anyone was welcome under that banner, no matter what your affiliations were, so long as you could keep an open mind about people who differed from you. And, above all, you had to show some respect. Manson said himself that day on TRL, “As long as someone’s expressing themselves, I can’t hate them for it.”
In a way, Manson became my new androgynous rabbi during this time. This isn’t to say I wasn’t already learning some morality at temple. The Ten Commandments have some obvious and valid core principles — don’t murder, don’t cheat, and love yourself and those around you in a way that will allow reciprocal love. I think I needed my time at the temple to truly appreciate this conversation between Manson and the pastors. I needed to know there was a choice, and that if you choose to believe in something you’re not obligated to obey it in strict accordance. It was okay to hold alternating, if not conflicting, views of the world. The temple helped me better understand both believers and non-believers. I’d opened both doors.
5. Manson’s downfall (and my own)
I was 13 in 1998. Despite the Lewinsky scandal, the birth of Viagra, the United States Embassy bombings in Kenya, the popularity of the Furby, El Niño, the bomb that went off in an Alabama abortion clinic, and the murder of James Byrd, it felt like an okay time to be a teenager cut off from the news. I had friends, not all of whom were goth. But we were all, in some way, outsiders. As it tends to go, by virtue of becoming a cast of outsiders, we formed a tight group of curious kids who spent nights jumping in and out of shadows in our small town. We were the kind of kids who hung out in the cemetery after school, French kissing beneath the Virgin Mary on some poor soul’s headstone.
But we didn’t know then what devastation waited just around the corner: the Columbine High School massacre was less than a year away. Among all the horrors that spilled out of that nightmare, we also did not yet know how the mass media would falsely accuse Marilyn Manson of inspiring the massacre.
Misinformation spread fast. The news was quick to claim the attackers were fans of Manson. Almost immediately, the news spiraled out of control; it seemed everyone who’d never listened to him, outside the short clips news anchors played for shock value, found it pretty easy to blame him. It was almost as if Manson had predicted this type of smear campaign with the stories he’d been telling in his songs, about how the culture and the media exploit fame, TV, drugs, and violence. Everyone who liked Manson, or who looked like Manson, became a kind of pariah. A new Manson Family had emerged, a loose network of fans in black.
Two months after Columbine, Manson wrote for Rolling Stone: “It is no wonder that kids are growing up more cynical; they have a lot of information in front of them. They can see that they are living in a world that’s made of bullshit. In the past, there was always the idea that you could turn and run and start something better. But now America has become one big mall, and because the Internet and all of the technology we have, there’s nowhere to run.”
6. A message in blood
Sometime around the start of high school, a year after Columbine, I completely shaved off my eyebrows. It was an accident. I was trying to get rid of my unibrow with a razor and I did not yet know what tweezers could do. The razor slipped and took off half an eyebrow so I figured the best I could do to fix it was to shave them both off. Without eyebrows, I looked like I was in constant shock. The look, however, made sense with my whole goth aesthetic. I was working at the local hardware store, so I’d also been cutting all the chain I needed to wrap around my waist and hang from my wallet. When I walked through school, I must’ve looked like Marley’s ghost from A Christmas Carol whose heavy chains were “long and wound about him like a tail.”
One day, in the great, big, white hallway right outside the high school gym, I found a pane of shattered glass. To this day, I do not know who shattered that glass. It was the kind of pebbled glass with chicken wire inside, so although it was shattered, the shards remained suspended in the air.
When I saw the glass, I was curious. I decided to touch it. I wanted to see what would happen if I just pushed on it a little. Would it come falling down? It didn’t. But I must’ve cut my finger ever so slightly in the process. A very small bit of blood appeared. For some reason, I thought it was a good idea to wipe it on the white walls. I wish I could say I stopped there but I can’t. I decided to write words like FUCK and SHIT and the letter X and an inverted cross or two. And, thanks to a true lack of imagination, I combined the words “die” and “evil” to form “dievil.” I hate to say that in the moment, I was very proud of this.
The bleeding stopped and I walked to global studies class. I forgot all about my ghastly painting.
The next morning when I got to school, the hallway was behind police tape. An officer took photographs of the wall. How long would it be until they found me? Any time the phone rang in a classroom, I jumped.
If the police found my blood on a Tuesday morning, it took until Friday afternoon for the call to finally come. Some seventh graders came forward and pointed me out to the principal: freckles, spiked hair, and chains. Easy to identify.
There was no use in denying it. It’d only been a little over a year or so since Columbine. I understand now how proximity in time to that massacre only added to the authorities’ dread. What they saw was a threat. What I saw, and knew to be true, was a really, absurdly, empty-headed and spontaneous act of stupidity.
But I wasn’t done.
Afraid of my parents, the police, and the principal, I blamed my actions on the music. They were so worried about “dievil.” What did it mean? I lied and said it was a Manson song. Of all the things, I blamed it on what I knew they expected me to blame it on. They already had a certain idea about me and I fed them exactly what they wanted to hear. I figured the punishment would be swift, but lean, if I just caved.
The principal swore that I broke the glass to cut myself on purpose, as if I was performing some kind of satanic ritual outside the gymnasium after lunch. They believed I was in that hallway signing a death warrant in blood.
The principal handed me 10 days of out of school suspension and the police walked me out of the school. Not long after, I received word that the superintendent wanted me expelled.
I think she’d already written me off before she ever let me speak. I think her swift decision to rid the school of me was a knee-jerk reaction. My appearance, coupled with my actions, was alarming and, I get it: What I did was not only embarrassing but it scared a lot of people. Violence is everywhere and in one form or another, it always will be. I just didn’t know how to prove to the school, or my parents, that what I’d done was not a threat, only a perverse stunt.
7. Violence and a box-cutter
I often think about what violence awaits me, those I love, or the world at large. It seems like you can’t even go outside anymore. How long have our flags been at half-mast?
Three years ago, the summer my son was born, a young man from my town was in the produce aisle at a nearby grocery store when a stranger came up from behind him and, for no reason other than unhinged meanness, opened his throat with a box cutter. I did not know the victim personally but my proximity to this senselessness shook me. I remember sitting up in bed beside my sleeping and very pregnant wife, trying to make sense of such cruel violence. That next morning, June 12, we learned that 49 people were shot to death at the Pulse nightclub in Orlando, Florida. I remember looking at my wife and wondering how the hell children could even be allowed outside on this maniacal planet. I marveled at how my grandparents raised children in the aftermath of world wars and atom bombs, and throughout the string of assassinations in the ’60s.
I have no clue what the answers are. What I do know is that we can promote a better future by encouraging the next generations to really listen to each other, to be empathetic, to speak the truth, and to be open-minded, free-thinking people who cherish the planet and themselves.
We’re quick to blame Islam, Christianity, video games, rap, heavy metal, sexual preferences, immigration, Twitter, or, well, you get the point. We’re quick to point the finger in the aftermath of violence.
Being a fan of Marilyn Manson’s music has taught me to always distrust the media, or politicians, when it comes to blaming anybody, or any thing, other than the perpetrators of a violent act — unless, of course, it’s been explicitly incited by a specific person or group. I can’t tell you how many times, as a teen, I had to tell people much older than me that Manson had nothing to do with Columbine.
I don’t remember what I wore when the superintendent beckoned me to her office for one last chance to say my piece before expulsion, but I imagine it was something like what a vampire might wear to a wedding. She said I should stop wearing black, that just the sight of me promoted violence. Her apathetic tone struck something open in me.
“Speaking of violence,” I said, doing my best Manson. I felt like I had to redeem myself in the eyes of my friends who knew I’d blamed my actions on the music.
I reminded the superintendent about “the mouth,” a Rolling Stones-style pair of big red lips that formed a window into the kitchen of the cafeteria. This was where you could order slushies. One popular order had a special name: It was every available flavor combined and the cafeteria called it “the suicide.”
The suicide wasn’t a unique flavor to our school; kids liked to buy “suicides” at 7-Elevens, too. But I evoked it in the superintendent’s office to try to cancel out her argument that I was the only seemingly violent entity in the school. I reminded her that our school was selling suicides as much as she thought my appearance promoted it.
I was back in school within a week.
8. My shadow, myself
Four years after the incident outside the gymnasium, I was hired by the same principal who suspended me to be a substitute teacher at my old high school. Now I wore slacks, boat shoes, and colorful sweaters. Outwardly, I looked like the inverse of my previous high school incarnation — but this was only a costume. Mostly, I was an impostor trying to blend in. I walked with a cane because I’d recently broken my knee while singing on stage in my metal band. I still spent most of my weekends performing in dive bars and college towns, banging my head until it practically fell off and rolled into the crowd.
My first job at school was to “shadow” a kid the school had deemed dangerous; they worried that he was a threat to himself and others. The student had been out of school for a few weeks due to the fact that he was caught trying to mutilate himself in class with a blade he’d picked up from the art supply room. Supposedly, he’d tried to castrate himself.
Perhaps this was the school’s way of dealing revenge, but I felt rather at home with that student. Some teachers and students were openly repulsed by him. He’d sit alone at lunch. The kid couldn’t sleep due to night terrors, so he’d doze off during class. He didn’t talk much. And here I was following him around my old school, sitting in my old teachers’ classrooms. After a few weeks of silence, I learned he was a drummer. So, in a move to open some dialogue, I gave him a copy of Marilyn Manson’s “Sweet Dreams” — the version from the Last Tour on Earth live album. That particular rendition ends with a drum solo that’s listed on the album as the “Hell Outro.”
The kid dug it. He started speaking and he even kind of half-smiled. We sat together in the cafeteria for lunch, the mouth smiling back at us. I think he felt less alone. I wanted him to know it was okay to be himself — even if it felt impossible. Music has a way of helping us put into words whatever darkness has latched onto us. That semester, I watched as the kid became friends with other musicians. They talked about jamming and playing in the talent show.
My first band started in that school. It was, go figure, a Marilyn Manson cover band. I sang. When we played at the talent show, we covered “The Nobodies” from Manson’s Holy Wood. I screamed and threw myself across the stage like that crazed Poe-impersonator.
Someone’s mother shot up from her seat in the audience and started yelling that I was the devil. I was flattered. It’s kind of a family tradition, being called the devil. My family has been passing down a story since the 1940s: When my poppop, my mother’s father, was a teen, this little kid, a neighbor, ran up to him on the street and asked to see his horns. The kid had been told that all Jewish people had horns.
Maimonides, the 12th century Jewish philosopher, wrote that the word “Satan” derives from the Hebrew root “to turn away.” To be honest, I turned away from my faith at 13. I’ve attempted prayer since then, sure, in the same way I’ve experimented with drugs. It’d be disingenuous to tell you that in times of profound grief, I haven’t revisited some of those passages I once studied, speaking those ancient words in an attempt to connect myself with the dead.
I wouldn’t discover Flannery O’Connor until college, but I’d find a kindred spirit in her when I did. She and Marilyn Manson have more in common than you might first believe. O’Connor was a devout Catholic and Manson has said he’s not atheist but spiritual. I, at the time of this writing, am a meandering agnostic. But I think we could all find common ground on the idea that you aren’t truly your best self, spiritual or not, unless you are relentlessly investigating your thoughts and beliefs and testing them within yourself and against the world, building a kind of immune system for everything you thought was true and good and just. Not unlike Marilyn Manson, O’Connor didn’t shy away from showing gruesome acts of violence in her art in an attempt, I think, to remind people we should be consistent in our pursuit to better ourselves and others. Both Manson and O’Connor promote the ideas of curiosity and introspection. I believe these are paramount to how one might orient themselves against their world and, hopefully, find even a little meaning.
In her prayer journal, O’Connor wrote: “No one can be an atheist who does not know all things. Only God is an atheist. The devil is the greatest believer and he has his reasons.”
My horns act as a physical reminder that I will never know all things. I find comfort in this, as not-knowing is an engine for unending curiosity. My horns are also a reminder that we’re all capable of varying degrees of bad behavior, that we all live with some amount of secret shame and confusion. They remind me to embrace the strangest, most freakish parts of humanity. My mom thinks my poppop willed these very real horns onto me. The horns, she says, weren’t noticeable when I was an infant. I’m told there was no sense of malevolence in the delivery room when I was born, either. The hospital staff did not freak as if they’d witnessed the birth of a human baby with, like, the head of a goat.
If I arrived on Earth with horns, they went unnoticed for a few years. My guess is the small crests of bone rising up out of my skull — exactly where you’d expect devil horns to be — grew as I grew. In fact, they might still be growing. | https://humanparts.medium.com/onlygodisanatheist-70dbe5f2a6de | ['Shane Cashman'] | 2019-10-03 15:59:49.844000+00:00 | ['Music', 'Religion', 'Culture', 'Identity', 'Writing'] |
Was an Iranian Scientist Really Assassinated With an A.I. Weapon? | Was an Iranian Scientist Really Assassinated With an A.I. Weapon?
A.I.-assisted weapons are proliferating quickly
A funeral ceremony for Mohsen Fakhrizadeh in Tehran, Iran, on November 30, 2020. Photo: Anadolu Agency/Getty Images
OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.
In late November, Iran’s top nuclear scientist, Mohsen Fakhrizadeh, was assassinated on a highway outside of Tehran.
Iranian military and state-owned news outlets blame Israel for the attack but also claim that Fakhrizadeh was killed by an A.I.-controlled machine gun mounted to a Nissan truck. A deputy commander of Iran’s Revolutionary Guards described the machine gun as “equipped with an intelligent satellite system which zoomed in on martyr Fakhrizadeh.” Little other information is known.
Eyewitnesses and the scientist’s family contest claims that A.I. technology had anything to do with the assassination, according to the New York Times. Instead, they say, the story of an A.I.-powered boogeyman is an attempt to save face after Iran’s failure to protect one of its top scientists.
Surprising as it may be, this internet column about A.I. research doesn’t have the inside scoop as to whether international assassins used a robot. But we can shed light on how far-fetched this claim really is based on what we know about military robots.
Unlike most of the A.I. research community, eager to post its latest work in conferences and public-facing repositories like arXiv, defense contractors are notoriously secretive about their R&D projects. In the United States, these projects can be branded as national security secrets to shield themselves from public records laws. This handy loophole is used to hide how sophisticated our military systems have become — in fact, $76 billion was spent on classified defense projects in 2020 alone.
But we do know that hundreds of autonomous military systems already exist. A 2017 report from the Stockholm International Peace Research Institute (SIPRI) surveyed publicly available information to catalog 381 autonomous military systems, 175 of which were armed.
“Autonomous military system” is a vague term that encompasses everything from a self-flying drone that records intelligence footage to a robotic gun. Self-guided missiles, autonomous submarines, and automatic missile defense systems all fall into this category. The word “autonomous” is also a gray area. For instance, the U.S. military is making an “optionally manned” turret that can allegedly identify and aim at an enemy while a human pulls the trigger. On the other end of the spectrum, suicide drones armed with explosives can be equipped to find their own target.
Some of these weapons are used to guard military bases. The SIPRI report identified three stationary autonomous weapons used to guard tactical positions: a Samsung device called the SGR-A1, another device made by Israeli defense contractor Rafael called the Sentry Tech, and a third made by South Korean company DoDaam called the Super aEgis II. These automated turrets are equipped with cameras and infrared sensors that allow them to see and recognize the heat of human bodies. The Super aEgis II can allegedly detect and track human-sized targets from nearly two miles away, according to the SIPRI report.
Other autonomous military systems are being deployed as mobile weapons, as detailed in a 2019 report from Pax, a Dutch humanitarian organization. An Estonian company called Milrem has been building a kind of autonomous mini-tank called THeMIS since 2014. The THeMIS isn’t a weapon itself but a mobile robot like Boston Dynamics’ robot dog except with tank treads instead of legs. Other companies like Raytheon, Lockheed Martin, and ST Engineering build autonomous and remotely operated weapons made to be carried into battle on top of a THeMIS robot.
One of the first automated machine guns for the THeMIS was made in 2016 by ST Engineering, formerly known as Singapore Technologies. The company has expanded its “remote weapon stations” to include seven kinds of weapons.
All publicly available information says that A.I.-enhanced turrets for both defense and attack can be sold with some semblance of human control, whether that be literally controlling the machine from afar or simply designating what to attack. But much less is known about which weapons have full autonomous capabilities and how those systems function.
In July 2020, Israeli company Smart Shooter unveiled a portable and autonomous weapon mounting system called Smash Hopper, which can aim and fire a gun either autonomously or controlled from a distant tablet computer. The whole thing weighs around 50 pounds, and there’s even a smaller version that folds up like a camera tripod.
The existence of these kinds of autonomous weapons doesn’t mean they were involved in the killing of Fakhrizadeh, and there’s no hard evidence to support the Iranian military’s claims. The truck that allegedly held the automated weapon exploded at the scene.
But the era of autonomous weapons has already begun, and countries like Britain are already sketching what an army of robots might look like in the future. | https://onezero.medium.com/was-an-iranian-scientist-assassinated-with-an-a-i-weapon-50ec9d5b1206 | ['Dave Gershgorn'] | 2020-12-11 14:52:37.512000+00:00 | ['General Intelligence', 'AI', 'Artificial Intelligence', 'Military', 'Machine Leraning'] |
Manipulating File Paths with Python | Photo by Viktor Talashuk on Unsplash
Python is a convenient language that’s often used for scripting, data science, and web development.
In this article, we’ll look at how to read and write files with Python.
Files and File Paths
A file has a filename that identifies it, and a path that specifies its location on disk.
The path consists of folders, which can be nested; together they form the path to the file.
Backslash on Windows and Forward Slash on macOS and Linux
In Windows, the path consists of backslashes. In many other operating systems like macOS and Linux, the path consists of forward slashes.
Python’s standard pathlib library knows the difference and can sort them out accordingly. Therefore, we should use it to construct paths so that our program will run everywhere.
For instance, we can import pathlib as follows and create a Path object as follows:
from pathlib import Path
path = Path('foo', 'bar', 'foo.txt')
After running the code, path should be a Path object like the following if we’re running the program above on Linux or macOS:
PosixPath('foo/bar/foo.txt')
If we’re running the code above on Windows, we’ll get a WindowsPath object instead of a PosixPath object.
Using the / Operator to Join Paths
We can use the / operator to join paths. For instance, we can rewrite the path we had into the following code:
from pathlib import Path
path = Path('foo')/'bar'/'foo.txt'
Then we get the same result as before.
This will also work on Windows, macOS, and Linux since Python will sort out the path accordingly.
What we shouldn’t use is the string’s join method because the path separator is different between Windows and other operating systems.
For instance:
path = '/'.join(['foo', 'bar', 'foo.txt'])
isn’t guaranteed to work on Windows, since the resulting path uses forward slashes instead of backslashes.
The Current Working Directory
We can get the current working directory (CWD), which is the directory the program is running in.
We can change the CWD with the os.chdir function and get the current CWD with the Path.cwd function.
For instance, we can write:
from pathlib import Path
import os
print(Path.cwd())
os.chdir(Path('foo')/'bar')
print(Path.cwd())
Then we get:
/home/runner/AgonizingBasicSpecialist
/home/runner/AgonizingBasicSpecialist/foo/bar
as the output.
As we can see, chdir changed the current working directory, so we can manipulate files in directories other than the one the program is running in.
The Home Directory
The home directory is the root folder of the current user’s profile.
For instance, we can write the following:
from pathlib import Path
path = Path.home()
Then the value of path is something like PosixPath(‘/home/runner’) .
Absolute vs. Relative Paths
An absolute path is a path that always begins with the root folder. A relative path is one that’s interpreted relative to the program’s current working directory.
For example, on Windows, C:\Windows is an absolute path. A relative path is something like .\foo\bar . It starts with a dot and foo is inside the current working directory.
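As a quick sketch (the file names here are arbitrary and only for illustration), we can turn a relative path into an absolute one with the resolve method and check the result with is_absolute:
from pathlib import Path
relative = Path('foo') / 'bar' / 'foo.txt'
print(relative.is_absolute())  # False, it doesn't start at the root folder
absolute = relative.resolve()  # resolves against the current working directory
print(absolute.is_absolute())  # True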
Creating New Folders Using the mkdir() Method
We can make a new folder with the Path.mkdir method (os.makedirs from the os module does the same job and can create nested folders in one call).
For instance, we can write:
from pathlib import Path
Path(Path.cwd()/'foo').mkdir()
Then we make a foo directory inside our current working directory.
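If the parent folders might not exist yet, or the folder may already be there, mkdir accepts parents and exist_ok flags. A minimal sketch (the folder names are made up for the example):
from pathlib import Path
# create foo/bar/baz in one call, without raising an error if it already exists
(Path.cwd() / 'foo' / 'bar' / 'baz').mkdir(parents=True, exist_ok=True)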
Photo by Lili Popper on Unsplash
Handling Absolute and Relative Paths
We can check if a path is an absolute path with the is_absolute method.
For instance, we can write:
from pathlib import Path
is_absolute = Path.cwd().is_absolute()
Then we should see is_absolute being True since Path.cwd() returns an absolute path.
We can call os.path.abspath to return a string with the absolute path of the path argument that we pass in.
For instance, given that we have the directory foo in the current working directory, we can write:
from pathlib import Path
import os
path = os.path.abspath(Path('./foo'))
to get the absolute path of the foo folder.
We then should get something like:
'/home/runner/AgonizingBasicSpecialist/foo'
as the value of path .
os.path.isabs(path) is a function that returns True if the path is absolute.
The os.path.relpath(path, start) method will return a string of the relative path from the start path to path .
If start isn’t provided, then the current working directory is used as the start path.
For instance, if we have the folder ./foo/bar in our current working directory, then we can get the path of the home directory relative to ./foo/bar by writing:
from pathlib import Path
import os
path = os.path.relpath(Path.home(), Path('./foo')/'bar')
Then the path has the value ‘../../..’ .
Conclusion
We can use the pathlib and os modules to construct and manipulate paths.
We can also use the / operator with Path objects to create a path that works on all operating systems.
We can also pass path components to the Path constructor to build paths.
Python also has methods to check for relative and absolute paths and the os module can construct relative paths from 2 absolute paths.
A note from Python In Plain English
We are always interested in helping to promote quality content. If you have an article that you would like to submit to any of our publications, send us an email at submissions@plainenglish.io with your Medium username and we will get you added as a writer. | https://medium.com/python-in-plain-english/manipulating-file-paths-with-python-72a76952b832 | ['John Au-Yeung'] | 2020-05-04 14:50:44.216000+00:00 | ['Programming', 'Technology', 'Python', 'Software Development', 'Software Engineering'] |
How to Set Up Your Own PaaS Within Hours | How to Set Up Your Own PaaS Within Hours
Five simple steps to quickly set up your very own private PaaS environment using available free open-source technologies
Photo by Fotis Fotopoulos on Unsplash
There are many useful and free open-source software on the Internet, we just need to know where to look.
The PaaS setup that I’m going to recommend works well for private/on-premise setup as well. There’s no coding involved, just some CLI configurations.
Having your own PaaS is useful if you’re running a team of engineers requiring flexibility, privacy, and data ownership. I mainly use it for rapid prototyping purposes and hosting my own suite of web applications with minimal traffic.
I would not recommend this setup for production purposes with high load unless you really know what you’re doing.
I’ll keep the article short and list down the high-level steps as the instructions on the websites are very easy to follow. I’ve done a couple of setups with minimal issues and they are usually done within a few hours.
Assuming you’re starting from scratch with the intent to deploy in the cloud, here are the five simple steps:
1. Set up a cloud account and rent a VM (e.g. AWS/EC2)
2. Register a domain from a provider (e.g. AWS Route53)
3. Set up the PaaS by following simple steps (CapRover)
4. Create and deploy open-source applications out of the box (WordPress, Jenkins, GitLab, and many more)
5. Configure backups just in case
Please note that I’m not paid in any way by any of the companies listed in this article. I’m recommending them solely based on positive experiences.
#1: Set up a cloud account and rent a VM
You’ll need a virtual machine to host the applications. There are a variety of cloud providers that you can choose from. Here are some of the popular ones with 1-year free-tier option(s)/credits:
You can click on any of the above links to set up a new free account.
After creating the account, provision a virtual machine; the recommended server setup is Ubuntu 18.04 with at least 1GB of RAM.
#2: Register a domain name
Applications will be deployed under a sub-domain, e.g. appone.apps.domain.com, apptwo.apps.domain.com, etc. so it’s essential to have your own domain.
Here are some websites which I’ve used for my domains:
If you’re using a cloud provider, e.g. AWS/Azure, it may be more convenient to register your domain with them to have everything managed centrally.
#3: Set up the PaaS
This section forms the main bulk of the setup. Although there are a number of available open-source PaaS available, e.g. Dokku, CapRover, Flynn, etc. I’ll be using CapRover as an example.
I’ll further break down this portion into four sub-steps. In broad strokes (this summary follows CapRover’s public getting-started guide): install Docker and launch the CapRover container on your VM; point a wildcard DNS record such as *.apps.domain.com (matching the sub-domain pattern mentioned earlier) at the VM’s public IP; install the CapRover CLI with npm and run caprover serversetup; and finally log in to the web dashboard and enable HTTPS.
The steps above should get your PaaS up and running.
#4: Create and deploy applications
Here comes the fun part. After logging into the CapRover dashboard, navigate to the Apps screen via “Apps” at the left sidebar.
Click on the “One-Click Apps/Databases” button. | https://medium.com/the-internal-startup/how-to-set-up-your-own-paas-within-hours-83356523413d | ['Jimmy Soh'] | 2020-06-24 00:58:38.258000+00:00 | ['Programming', 'Technology', 'Software Development', 'Startup', 'Software Engineering'] |
Interesting AI/ML Articles On Medium This Week (Dec 5) | Interesting AI/ML Articles On Medium This Week (Dec 5)
Artificial Intelligence and Machine Learning articles that might have flown under your radar.
We are officially in the last month of 2020, and what a crazy ride it has been.
I have to say that amidst lockdown and a year of limited to no social activity, Medium has been, and continues to be, a platform where you feel connected to different parts of the world.
Medium is one of my sources of connection to the world of Machine Learning. There’s no shortage of interesting AI/ML/DS articles written by machine learning practitioners and AI enthusiasts.
Below are four articles that stuck out to me, either for the high quality of the information provided or for the relevance of their content to ML practitioners. There is definitely an article or two with information that is of value to ML practitioners of different levels.
Happy reading. | https://towardsdatascience.com/interesting-ai-ml-articles-on-medium-this-week-dec-5-a1ac1b8bad8c | ['Richmond Alake'] | 2020-12-05 04:55:37.605000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'Technology', 'AI', 'Data Science'] |
Handling Outliers in Clusters using Silhouette Analysis | Handling Outliers in Clusters using Silhouette Analysis
Identify and remove outliers in each cluster from K-Means clustering
Image by Gerd Altmann from Pixabay
Real-world data often has a lot of outlier values. The cause of outliers can be data corruption or a failure to record the data. Handling outliers is very important in the data preprocessing pipeline, as their presence can prevent the model from performing at its best.
There are various strategies to handle outliers in the dataset. This article will cover how to handle outliers after clustering data into several clusters using Silhouette Analysis.
Silhouette Analysis:
The silhouette method is a way to find the optimal number of clusters and to interpret and validate the consistency of clusters within data. It computes a silhouette coefficient for each point that measures how similar the point is to its own cluster compared to other clusters, providing a succinct graphical representation of how well each object has been classified. The analysis of these graphical representations is called Silhouette Analysis.
The silhouette value is a measure of how similar an object is to its own cluster (cohesion) compared to other clusters (separation). The value of the silhouette ranges between [-1, 1].
Important Points:
The Silhouette coefficient of +1 indicates that the sample is far away from the neighboring clusters.
The Silhouette coefficient of 0 indicates that the sample is on or very close to the decision boundary between two neighboring clusters.
Silhouette coefficient <0 indicates that those samples might have been assigned to the wrong cluster or are outliers.
Computing Silhouette Coefficient:
Steps to find the silhouette coefficient of an i’th point:
1. Compute a(i): the average distance of that point to all other points in the same cluster.
2. Compute b(i): the average distance of that point to all the points in the closest neighboring cluster.
3. Compute s(i), the silhouette coefficient of the i’th point, using the following formula: s(i) = (b(i) - a(i)) / max(a(i), b(i)).
(Image by Author), Diagramatic representation of a(i) and b(i) from the above-mentioned formula to compute silhouette coefficient — s(i)
Find the optimal value of ‘k’ using Silhouette Analysis:
Use the Silhouette Method to find the optimal number of clusters. The optimal number can also be found using the Elbow method, but the Silhouette Method is considered a better approach than the Elbow method.
(Image by Author), Left: Avg distance vs the number of clusters, Right: Silhouette score vs the number of clusters
The silhouette plot displays a measure of how close each point in one cluster is to points in the neighboring clusters and thus provides a way to assess parameters like the number of clusters visually.
Key Takeaways from Silhouette Analysis:
1. From the silhouette line plot and the silhouette analysis for different values of n_clusters, it is observed that n_clusters=3 is the best value for the number of clusters (k).
2. In the above image for “silhouette analysis for KMeans clustering on sample data with n_clusters=3”, it is observed that in each of the clusters [0, 1, 2] most points have silhouette coefficients greater than the average silhouette score.
3. In the same image, it is observed that cluster_label 2 has a few points with negative silhouette coefficients, which can be considered outliers.
4. Also, for cluster_label 1, some points have silhouette coefficients less than the average silhouette score; these are points on the cluster boundary, away from the cluster center.
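As a minimal sketch of how these per-point coefficients can be computed before removing outliers as described next (it assumes scikit-learn, a numeric feature matrix X, and k=3, none of which come from the original article’s code):
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
kmeans = KMeans(n_clusters=3, random_state=42)
labels = kmeans.fit_predict(X)  # X is assumed to be a numeric feature matrix
print(silhouette_score(X, labels))  # average silhouette score across all points
coefficients = silhouette_samples(X, labels)  # one silhouette coefficient per point
outlier_mask = coefficients < 0  # points likely assigned to the wrong cluster
X_clean = X[~outlier_mask]  # drop the suspected outliers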
To find the outliers, find the points that have a negative silhouette coefficient and remove them. Points lying on cluster boundaries, away from their cluster centers, can also be removed to create a more robust model, but it depends on the case study. | https://towardsdatascience.com/handling-outliers-in-clusters-using-silhouette-analysis-5a7d51118dac | ['Satyam Kumar'] | 2020-10-21 02:23:47.081000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Data Science', 'Education', 'Clustering'] |
Interactive Data Visualization became much easier with help of Plotly-express. | Difference between Plotly and Plotly-express (in terms of plotting).
Plotly
Please note: Plotly has been updated recently. Plotly, as well as the resources written about it, updates frequently (these are newer libraries compared to most other libraries in Python).
Up to Plotly 3, there were two plotting modes in plotly (online and offline).
Plotly online
When plotting online, the plot and data will be saved to your plotly cloud account. There are two methods to plot online. plotly.plot() is used to return the unique URL and optionally open that URL. plotly.iplot() is used when working in a Jupyter notebook to display the plot within the notebook. Both methods create a unique URL for the plot and save it in your plotly account, and an internet connection is required to use plotly online.
Plotly offline
Plotly offline allows you to create plots offline and save them locally (which doesn’t require any internet connection). There are two methods to plot offline. plotly.offline.plot() is used to create a standalone HTML file that is saved locally and opened inside your web browser. plotly.offline.iplot() is used when working offline in a Jupyter notebook to display the plot in the notebook. When we intend to use plotly.offline.iplot(), we need to run an additional step, plotly.offline.init_notebook_mode(), at the start of each session.
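A minimal sketch of the offline notebook workflow described above (the trace values are arbitrary, and this assumes you are inside a Jupyter session):
import plotly.offline as pyo
import plotly.graph_objs as go
pyo.init_notebook_mode(connected=False)  # the extra step, run once per notebook session
pyo.iplot([go.Scatter(x=[1, 2, 3], y=[4, 2, 5], mode='markers')])  # renders inline in the notebook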
From Plotly 4, (which is updated and recent version of Plotly)
Fig-1 : plotly3_vs_4
Plotly 4 made life much easier as it is completely offline (So, there is NO plotly online from plotly 4).
Whoever loves to work with plotly.offline in a Jupyter notebook can now skip the connection statement in their code (the step that connects plotly in offline mode to the notebook); they can directly import plotly rather than importing plotly.offline.
plotly.graph_objs: this has several functions that are useful for generating graph objects. graph_objs contains several structures that are consistent across visualizations made in Python, regardless of type.
From plotly 3 to plotly 4, plotly.graph_objs package has been aliased as plotly.graph_objects “because the latter is much easier to communicate verbally “— according to official documentation.
Plotly Express
Fig-2:Importing plotly express (careful with versions)
Plotly Express was previously installed separately via the plotly_express package, but it is now part of plotly. Plotly should be updated to Plotly 4 before using it, or you will encounter an error as shown in Fig-2.
Comparing Scatter plot with plotly and plotly express
A scatter plot allows the comparison of two variables for a set of data. Depending on the trend in the scatter plot, we can interpret a correlation.
With Plotly:
Fig-3:plotting between sepal_length and sepal_width
Plotly follows a particular syntax, as seen in Fig-3. Initially, a variable is created to hold the plot data (note: the plot type should be given in the form of a list, as shown in the first line of Fig-3). In this case, we named it “data” (the most common convention), but the variable name can be anything you like. This “data” variable contains a plot type call. go.Scatter is one among many graph objects; each plot type has its own graph object. These objects typically accept a few parameters. For instance, the scatter graph object takes two main parameters (assigning the x-axis and y-axis). go.Layout (also one of the graph objects) is used to define the layout for the plot. Then a Figure object from graph objects is created, which uses both the data and layout variables to produce the plot.
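As a minimal runnable sketch of that pattern (an illustration rather than the exact code behind Fig-3; the built-in px.data.iris() sample is used as a stand-in for the article’s dataset):
import plotly.express as px
import plotly.graph_objects as go
df = px.data.iris()  # built-in copy of the iris dataset
data = [go.Scatter(x=df['sepal_length'], y=df['sepal_width'], mode='markers')]  # plot type given as a list
layout = go.Layout(title='Sepal length vs sepal width')
fig = go.Figure(data=data, layout=layout)  # combines the data and layout variables
fig.show()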
The dataset I have used is very famous and among the best-known databases in the pattern recognition literature. This dataset contains 3 classes (Setosa, Versicolour, Virginica) of 50 instances each, where each class refers to a type of iris plant. In Fig-3, we have plotted all classes to find the relation between sepal_length and sepal_width. But, as all the data points are represented in the same color, we are unable to draw any conclusions from the plot, because plotly doesn’t give you hue in a plot (which is a parameter in seaborn). So the alternative is either to plot the data using a group-by method or to create individual traces for each class variable.
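A rough sketch of the “individual traces” workaround mentioned above, followed by the plotly express one-liner that achieves the same result (again using the built-in iris sample, whose class column is named species; this is not the article’s exact code):
import plotly.express as px
import plotly.graph_objects as go
df = px.data.iris()  # stand-in for the article's iris data
fig = go.Figure()
for name, group in df.groupby('species'):  # one trace per class, so each class gets its own color
    fig.add_trace(go.Scatter(x=group['sepal_length'], y=group['sepal_width'], mode='markers', name=name))
fig.show()
px.scatter(df, x='sepal_length', y='sepal_width', color='species').show()  # plotly express equivalent in one call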
Grouping by data to plot with variation in each class | https://medium.com/analytics-vidhya/interactive-data-visualization-became-much-easier-with-help-of-plotly-express-64c56e781b53 | ['Chamanth Mvs'] | 2020-10-06 02:07:34.860000+00:00 | ['Plotly Express', 'Plotly', 'Data Visualization', 'Python'] |
IBM-Oxford Team Uses Supercomputers to Design New Drugs Against COVID-19 | By Katia Moskvitch
With the second wave of COVID-19 gaining strength, researchers are in a race against time to find a treatment or a vaccine.
One international team of scientists from IBM Research and Oxford University is trying to design molecules that would interfere with the molecular machinery of coronavirus, the virus that triggers the disease. If successful, such molecules could become the basis of a new drug to treat or slow COVID-19 infections.
“We are blending techniques such as advanced machine learning, computer modelling and experimental measurements to accelerate the discovery of these new molecules,” says the lead researcher Jason Crain, IBM Research physicist and visiting professor at the University of Oxford. He details his team’s work in a recent COVID-19 High-Performance Computing Consortium’s webinar.
It’s still early days — the team is only four months into the project — but the researchers have already identified several compounds that look promising based on the computational modelling. The scientists now have to test them in a lab, says Crain, and the experiments will take several weeks.
While the ongoing COVID-19-related work is new, Crain’s team has been for many years working on drug discovery, most recently in the area of antibiotic resistance. “We pivoted this earlier work, quickly adapting some of the fundamental methods we had previously developed, to address COVID-19,” Crain says.
The biggest challenge for the team, just like for any other team searching for a new drug to halt the pandemic, is dealing with an immensely vast chemical space within which to identify new functional compounds. To address it, the researchers are combining cutting-edge AI methods with modelling on two supercomputers offered by the COVID-19 HPC Consortium — IBM Summit at Oak Ridge National Laboratory and Frontera at the Texas Advanced Computing Center.
Without these extra computing resources, Crain says, “the throughput of the computational screening stages would have been prohibitively slow.” After all, the computational modelling of a myriad of AI-generated candidate compounds is among the most demanding and time-consuming steps in the discovery pathway.
Computer modelling on Summit and Frontera has allowed the team to screen compounds and reveal their mode of action at the molecular scale, so that they have to synthesize and test experimentally only the most promising ones. “Summit and Frontera allow us to perform calculations of how candidate drug molecules bind to viral proteins much faster than would have been possible otherwise,” says Crain. “The Consortium resources have allowed us to incorporate very HPC-intensive steps into the screening protocol, which is a very powerful approach but rarely possible to do.”
The Consortium has also helped, says Crain, to bring together an international team of experts. “Some of the Oxford team, for example, have extensive experience in the structure of viral proteins, and techniques related to screening of candidate drugs,” he says. “The AI teams at IBM in New York and in the UK have been working on developing new methods that can ‘discover’ functional molecules — which may or may not have been made previously — very efficiently.”
This article first appeared on the COVID-19 HPC Consortium blog | https://ibm-research.medium.com/global-ibm-oxford-team-uses-supercomputers-to-design-new-drugs-against-covid-19-6293ced5720a | ['Inside Ibm Research'] | 2020-11-27 13:06:05.842000+00:00 | ['IBM', 'Technology', 'Covid 19', 'AI', 'Coronavirus'] |
A Next Level Webpack Dashboard | A Next Level Webpack Dashboard
Level up your webpack using webpack-dashboard
Beautiful webpack dashboard inside the terminal
Webpack-dashboard has over 13,000 stars on GitHub, yet I almost never encounter developer teams making use of the plugin.
Why not take advantage of this great plugin?
Note: Don’t forget you can pretend you work for NASA if someone shoulder peeks!
When using webpack, especially for a dev server, you are probably used to seeing something like this, according to the webpack-dashboard GitHub page.
We’re all familiar with the webpack log above. Although sometimes I wonder — who exactly is this screen for? A vastly superior alien race?
This is why the webpack log desperately needs a new friendlier UI, and webpack-dashboard does that.
Webpack-Dashboard to the Rescue!
This plugin improves your visuals for the webpack logs. Now when you run your dev server, you basically work at NASA. | https://medium.com/better-programming/webpack-dashboard-with-create-react-app-vue-cli-and-custom-configs-49166e1a69de | ['Indrek Lasn'] | 2020-11-11 22:21:40.947000+00:00 | ['Programming', 'JavaScript', 'Vuejs', 'React', 'Webpack'] |
9 Healthy Foods You Should Eat Every Week | 9 Healthy Foods You Should Eat Every Week
And why they are so incredibly beneficial.
Photo by engin akyurt on Unsplash
I’m sure everyone has heard the saying:
“An apple a day keeps the doctor away.”
While that rhyme might roll off the tongue with ease and get children more excited to eat their fruit, there isn’t a whole lot of truth to the saying.
If you consume an apple every day, you will certainly provide your body with nutrients, but you won’t magically become immune to all illnesses and diseases like the phrase implies.
Besides, if your diet consisted entirely of apples, you wouldn’t have any substantial sources of protein and healthy fat, and your diet would be severely unbalanced.
If you want to boost your overall health, you don’t need to search for a single miracle superfood. In fact, there are many powerful foods that you should try to eat as often as possible. They will nourish your body correctly and provide you with ample energy. The following are nine foods you should attempt to eat each week (at a minimum). Some of them are foods I enjoy almost every single day. | https://medium.com/in-fitness-and-in-health/9-healthy-foods-you-should-eat-every-week-467b32e61acc | ['Alyssa Atkinson'] | 2020-12-17 15:37:20.694000+00:00 | ['Health', 'Food', 'Fitness', 'Science', 'Lifestyle'] |
Cheap Renewable Energy Has Arrived | Cheap Renewable Energy Has Arrived
The cost of renewable energy is now on par with natural gas.
Photo by Science in HD on Unsplash
At a time when the only climate-related news that gets relayed paints a bleak picture of our future on this planet, it feels good to share a positive story about how far science has come in renewable energy.
According to research conducted by the University of Calgary, the cost of renewable energy has dropped massively, such that it can now compete with natural gas. The age of affordable renewable energy has arrived.
Over the last ten years, wind power costs have dropped by 70% and solar power costs have dropped by 90%. This decline in cost is dramatic and relates to how the levelized cost of wind and solar power is now similar to that of the marginal cost to run an efficient natural gas plant. Levelized cost is a measure that includes the cost of building and running power plants. Therefore, not only have renewable resources begun to match natural gas in price, they have actually become cheaper to operate than existing fossil fuel power plants.
A report conducted a year ago by the Pembina Institute supports these claims. According to the study, renewable energy (including solar, wind, and battery storage) provided the same services to consumers as new fossil fuel power plants even during peak demand scenarios.
This study was conducted in Alberta, Canada, which already has some of the lowest natural gas costs in the world. For renewable energy to compete with an already cheap competitor is incredible, and speaks to the innovation that has occurred in the last ten years in the renewable energy industry.
Furthermore, the report discusses case studies from the United States that discovered how investment in renewable energy portfolios would save consumers over $29 billion a year and would cut greenhouse gas emissions by 100 million tons.
The Pembina report goes on to describe how an analysis conducted by the Rocky Mountain Institute found that when compared to the energy generated at fossil fuel power plants, the cost of renewable energy was $9 to $24 less per megawatt-hour. After reviewing these findings, Pembina announced that these costs would drop further as the technology advanced, something that was proven by the report conducted by the University of Calgary back in November of this year.
The University of Calgary found that the leading causes for the reduction in solar energy costs included “improvements in PV (photovoltaic) module prices, advancements in solar technology and an increase in global average capacity factor (actual energy production relative to potential).” Wind energy saw similar reductions in cost attributed to “lower turbine prices, more efficient operations, and maintenance, and a better global average capacity factor.”
However, despite these cost reductions, renewable energy sources still account for only 8.5% of the total global energy supply. That share is projected to increase as renewable energy investment, growing at 7.6% per year, outpaces investment in any other energy source. Furthermore, with countries like China leading the world in solar panel production, the cost of manufacturing solar panels is falling quickly as production scales up to meet demand.
Some hurdles remain in the way of renewable energy taking center stage though. First, renewable energy is notoriously intermittent, so to many consumers, it seems like lower-quality energy. To mitigate this issue, the University of Calgary suggests implementing improved storage technology in the form of batteries, compressed air, and pumped hydro for times of high energy demand or when the sun isn’t shining and the wind isn’t blowing.
Second, a renewable energy-based power grid needs transmission capacity to send energy from where it is generated to the locations where it is lacking.
Finally, the University of Calgary report suggests that other low-carbon sources of energy will be required to support renewables. Backup generation will be vital both during the transition and for supplying reliable electricity day to day.
While the report does not specify whether this supporting energy will come in the form of biomass, new nuclear reactors, or hydrogen-peaking plants, it concludes that reliable low-emission sources will be needed within the next decade to support the switch to renewable energy.
How to Improve Your Postpartum Anxiety with an App. Today.
Postpartum Anxiety
What I didn’t know at the time was there is a completely different set of screening questions - with completely different symptoms - for postpartum anxiety (PPA) vs. postpartum depression.
Symptoms like racing thoughts, irrational fear of harm coming to your child — with kitchen knives (I had that one!), constant worrying, OCD tendencies, feeling restless, trouble sleeping and eating. The kind of thoughts you don’t want to admit to yourself — let alone another human.
And, after turning to Google, I discovered I had all twelve of those PPA symptoms. And that 10% of postpartum women experience anxiety.
That finally explained why all I could think about, all day long, was, “When was my daughter going to breastfeed next?” and “When did that mean she would nap?” and “What time would I finally get to sleep?”
It was the most debilitating and annoying tape that played in my head. All. Day. Long.
In fact, I was so incapacitated, not having a history of anxiety or depression, I hired another doula to teach me how to get out of the house with my baby. Sounds like a first world problem, I know. But, without her help, I might not be here to share this story with you.
She came every morning for a week and we went on small outings — to Target, the park, on a hike and, for my last day, on a boat ride in the marina. That felt like mommy graduation.
You see, I did not have a village where I live. You know, the village that it takes to raise a child. My family is all far away. My best friends aren’t close by. And most of my friends either don’t have kids or they already had them a decade ago.
That’s what happens when you wait until you’re in your 40s to have a baby. Although it’s super common these days. So, I needed to create my own village. I put a super honest post on Facebook asking for company. I said, “I don’t want to isolate. Please come visit me!”
And I had friends, acquaintances, moms I knew, moms I didn’t know, and even perfect strangers coming over to help me feel less lonely.
Don’t get me wrong. I did get help. In many ways. I got diagnosed by a psychiatrist. I got on Lexapro. I went to a PPA support group. But I needed more to get me through the day.
(If you are experiencing PPA or any type of anxiety, please get professional help. Call your doctor.)
Theories of Aging
Simple single-celled organisms called prokaryotes, such as bacteria, are the earliest forms of life on earth, and they are still abundant today. Much later evolved the more complex, but still single-celled, organisms called eukaryotes. From those humble beginnings came the multi-cellular life forms called metazoans. All animal cells, including human cells, are eukaryotic cells. Since they share a common origin, they bear a resemblance to each other. Many molecular mechanisms (genes, enzymes, etc.) and biochemical pathways are conserved throughout the evolution towards more complex organisms.
Humans share approximately 98.8% of their genes with chimpanzees. This 1.2% genetic difference is enough to account for the differences between the two species. It may be even more surprising, however, to learn that organisms as far apart as yeast and humans have many genes in common. At least 20% of genes in humans that play a role in causing disease have counterparts in yeast. When scientists spliced over 400 different human genes into the yeast Saccharomyces cerevisiae, they found that a full 47% functionally replaced the yeast’s own genes.
With more complex organisms, such as the mouse, we find even greater similarities. Of over 4,000 genes studied, less than ten were found to be different between humans and mice. Of all protein-coding genes — excluding the so-called “junk” DNA — the genes of mice and humans are 85% identical. Mice and humans are highly similar at the genetic level.
Many aging-related genes are conserved across species, enabling scientists to study yeast and mice to learn important lessons for human biology. Many of the studies cited in this book involve organisms as diverse as yeast, rats, and rhesus monkeys, all of which vary in their degree of similarity to humans. Not every result necessarily applies to humans, but in most cases the results will be close enough that you can learn a great deal about aging from them. While it is ideal to have human studies, in many cases these simply do not exist, forcing us to rely on animal studies.
Theories of aging
Disposable Soma
The disposable soma theory of aging, proposed originally by University of Newcastle professor Thomas Kirkwood, holds that organisms have a finite amount of energy that may be used either in maintenance and repair of the body (soma) or in reproduction. As with antagonistic pleiotropy, there is a trade-off: if you allocate energy to maintenance and repair, then you have fewer resources for reproduction. Since evolution directs more energy towards reproduction, which helps propagate an organism’s genes to the next generation, the soma after reproduction is largely disposable. Why devote precious resources to living longer, which doesn’t help pass on the genes? In some cases, the best strategy may be to have as many offspring as possible, and then for the individual to die.
The Pacific salmon is one such example, as it reproduces once in its life and then dies. The salmon expends all of its resources for reproduction, after which it tends “simply to fall apart”. If there’s little chance that a salmon would survive predators and other hazards to complete another round of reproduction, then evolution will not have shaped it to age more slowly. Mice reproduce quite prodigiously, reaching sexual maturity by two months of age. Subject to heavy predation, mice allocate more energy to reproduction than to fighting the deterioration of their bodies.
On the other hand, a longer lifespan may allow the development of better repair mechanisms. A 2-year-old mouse is elderly, while a 2-year-old elephant is just starting its life. In elephants, more energy is devoted to growth, and they produce far fewer offspring. The gestation period of an elephant is 18–22 months, after which only 1 living offspring is produced. Mice produce up to 14 young in a litter, and can have 5 to 10 litters per year.
While a useful framework, the disposable soma theory has problems. It would predict that deliberate calorie restriction, by limiting overall resources, would result in less reproduction or a shorter life span. But calorie-restricted animals, even to the point of near starvation, do not die younger — they live much longer. This effect is seen consistently in many different types of animals. In effect, depriving animals of food causes them to allocate more resources to fighting aging.
Further, the females of most species live longer than the males. Disposable soma would predict the opposite, since females are forced to devote much more energy to reproduction, and so would have less energy and fewer resources to allocate to maintenance.
Verdict: it fits some of the facts, but has some definite problems. It is either incomplete or incorrect.
Free Radical Theory
Biological processes generate free radicals, which are molecules that can damage surrounding tissues. Cells neutralize them with things like anti-oxidants, but this process is imperfect so damage accumulates over time, causing the effects of aging.
Yet large-scale clinical research trials show that antioxidant vitamins like vitamin C or vitamin E may paradoxically increase death rates or result in worse health [13]. Some factors known to improve health or increase lifespan, such as calorie restriction and exercise, increase the production of free radicals, which act as signals for the cell to upgrade its defenses and energy-generating mitochondria. Antioxidants can abolish the health-promoting effects of exercise.
Verdict on the free radical theory: unfortunately, a number of facts contradict it. It too is either incomplete or incorrect.
Mitochondrial Theory of Aging
Mitochondria are the parts of the cell (organelles) that generate energy, so they are often called the powerhouses of the cell. They are subject to lots of damage, so they must be recycled and replaced periodically to maintain peak efficiency. Cells undergo autophagy, and mitochondria have a similar process of culling defective organelles for replacement called mitophagy. Mitochondria contain their own DNA, which accumulates damage over time. This leads to less efficient mitochondria, which in turn produce more damage in a vicious cycle. Without adequate energy, cells may die, a manifestation of aging.
Muscle atrophy is related to high levels of mitochondrial damage. But in comparing energy production in mitochondria in young and old people, little difference was found. In mice, very high rates of mutation in mitochondrial DNA did not result in accelerated aging.
Verdict: Interesting but research is very preliminary and ongoing. Arguments can be made both for and against it.
Hormesis
In 120 BC, Mithridates VI was heir to Pontus, a region in Asia Minor, now modern-day Turkey. During a banquet, his mother poisoned his father to ascend to the throne. Mithridates ran away and spent seven years in the wilderness. Paranoid about poisons, he chronically took small doses of poison to make himself immune. He returned as a man to overthrow his mother to claim his throne and became a very powerful king. During his reign, he opposed the Roman Empire, but was unable to hold them back. Prior to his capture, Mithridates decided to commit suicide by drinking poison. Despite large doses, he failed to die and the exact cause of his death is still unknown to this day. What doesn’t kill you, may make you stronger.
Hormesis is the phenomenon in which low doses of stressors that are normally toxic instead strengthen the organism, and make it more resistant to higher doses of toxins or stressors. Hormesis itself is not a theory of aging, but has huge implications for other theories. The basic tenet of toxicology is ‘The dose makes the poison’. Low doses of ‘toxin’ may make you healthier.
Exercise and calorie restriction are examples of hormesis. Exercise, for example, puts stress on muscles, causing the body to react by increasing their strength. Weight-bearing exercise puts stress on bones, which causes the body to react by increasing the strength of those bones. Being bedridden, or going into zero gravity as astronauts do, causes rapid weakening of the bones.
Calorie restriction can be considered a stressor and causes a rise in cortisol, commonly known as the stress hormone. This lowers inflammation and increases the production of heat shock proteins. Low levels of stress increase resistance to subsequent stressors. So calorie restriction satisfies the requirements of hormesis. Because both exercise and calorie restriction are forms of stress, they involve the production of free radicals.
Hormesis is not a rare phenomenon. Alcohol, for example, acts via hormesis. Moderate alcohol use is consistently associated with better health than complete abstention. But heavier drinkers have worse health, often developing liver disease. Exercise is well known to have beneficial health effects, but extreme exercise can worsen health by causing stress fractures. Even small doses of radiation can improve health where large doses will kill you.
Some of the beneficial effects of certain foods may be due to hormesis. Polyphenols are compounds in fruits and vegetables, as well as coffee, chocolate, and red wine, and they improve health, possibly in part by acting as low-dose toxins.
Why is hormesis important for aging?
Other theories of aging presuppose that all damage is bad, and accumulates over time. But the phenomenon of hormesis shows the body has potent damage-repair capabilities that can be beneficial when activated. Take exercise as an example. Weight lifting causes microscopic tears in our muscles. That sounds pretty bad. But in the process of repair, our muscles become stronger. Gravity puts stress on our bones. Weight bearing exercise, such as running causes micro-fractures of our bones. In the process of repair, our bones become stronger. The opposite situation exists in the zero gravity of outer space. Without the stress of gravity, our bones become osteoporotic and weak. Not all damage is bad — small doses of damage are in fact good. What we are describing is a cycle of renewal. Hormesis allows breakdown of tissue like muscles or bones that are then rebuilt to better withstand the stress placed upon them. Muscles and bones grow stronger. But without breakdown and repair, you cannot get stronger.
Growth vs. Longevity
Hormesis, like the disposable soma theory, suggests that there exists a fundamental trade-off between growth and longevity. The larger and faster an organism grows, the faster it ages. Antagonistic pleiotropy may play a role, in that some genes that are beneficial in early life may be detrimental later. When you compare lifespans within the same species, such as mice [18], and dogs, smaller animals (less growth) live longer. Women, on average smaller than men, also live longer. Among men, shorter men live longer. Think about a person who is aged 100. Do you imagine a 6’6’’ man with 250 pounds of muscle, or a small woman? Obesity, caused by excessive growth of fat cells, is clearly correlated with poor health.
Comparing across different species, however, larger animals live longer. Elephants, for example, live longer than mice. But this can be explained by the slower development of larger animals. [21] The relative lack of predators for large animals has meant that evolution has favored slower growth and slower aging. Small animals that have fewer predators than other animals of the same size, such as bats, also live longer.
Aging isn’t deliberately programmed, but the same physiological mechanisms that drive growth also drive aging. Aging is simply the continuation of the same growth program and is driven by the same growth factors and nutrients. If you rev a car’s engine hard, you can reach high speeds, but continuing to rev the engine will also result in burnout. It’s the same essential program on different timescales (short-term performance versus long-term longevity). All the theories of aging point out this essential tradeoff. This is powerful information because certain programs may be beneficial at certain times of our lives. During youth, for example, we need to grow. During middle and older age, however, this high-growth program may cause premature aging, and it would be more beneficial to slow growth. Since the foods we eat play a large role in this programming, we can make deliberate adjustments to our diet to preserve our lifespan as well as our ‘healthspan’. For more about healthy aging, check out my new book, The Longevity Solution.
Is Six Feet of Social Distancing Always Necessary?
6 Feet Apart Is the Gold Standard, but Should It Be?
Exploring the origins, and difficulties, of the 6-foot rule
It may have been the most bizarre card game in history.
Groups of men — some sick with the common cold, some healthy — sat around card tables for 12 hours, playing poker. The healthy men wore specially designed arm braces or plastic “collars” that allowed them to handle the cards and chips but made it impossible for them to touch their faces. The sick men were unencumbered and could freely touch the cards, the chips, or their own runny noses. The men were seated about 4.5 feet from one another.
The gonzo poker game was organized by researchers at the University of Wisconsin Medical School for a 1987 study that sought to measure how viral pathogens pass among people via different routes of transmission. Since the healthy men couldn’t touch their faces, the only way they could get sick was by breathing in airborne virus particles expelled by their unwell poker buddies.
Once this first part of the experiment was over, the presumably cold virus–infested playing cards and chips that the sick men had handled were immediately transferred to a new lab room, where a fresh batch of healthy volunteers was waiting. These men played poker with the cards and chips for 12 hours and were directed to touch their faces every 15 minutes.
So who got sick? Among the healthy men in the first part of the experiment — the ones who couldn’t touch their faces but were sitting close to ill people — more than half ended up coming down with the common cold. Among the men who had to play with the germ-ridden cards, none got sick. “These results point to aerosol transmission as the most important mechanism of natural spread,” the study authors wrote.
That study is one of several older research efforts that — coupled with more recent work — have helped the Centers for Disease Control and Prevention (CDC) create guidelines designed to stop the spread of viruses and other pathogens. Those guidelines form the foundation of the government’s current SARS-CoV-2 recommendations, including its advice to stay at least six feet away from other people.
Three feet is the “area of defined risk” for health care workers exposed to patients who may carry an infectious disease.
“These studies looked at how likely it was that someone infected would communicate [that infection] to others in a shared environment, and then how far apart people were who became infected,” says Julie Fischer, PhD, an adjunct professor of microbiology and immunology at the Georgetown University Center for Global Health Science and Security. The results of these sorts of experiments are not always perfectly consistent, and most of the data is not specific to SARS-CoV-2. But Fischer says that the CDC’s guidelines are based on the best evidence to date and are designed to afford the public the greatest level of protection.
But guidelines are not laws. And some organizations that are planning to reopen this fall — in particular, some schools — are discussing whether a full six feet of physical distance is necessary to keep people safe from Covid-19. Some experts also say that the question of “what is a safe distance?” and “what distance is safest?” may have two different answers.
Balancing risks and benefits
Back in June, the University of North Carolina at Chapel Hill announced that when classes resume this fall, student desks and seating would be spaced a minimum of three feet apart — not the six feet recommended by the CDC.
The university said that its decision was based on input from infectious disease and public health experts. But the move triggered an immediate backlash among concerned students, their parents, and some faculty members. The school soon revised its policy to conform with the government’s six-foot guideline.
While many observers may have wondered just what the university’s administrators were thinking, some who were close to the controversy say that the debate isn’t as clear-cut as one might assume. “The three-foot rule — that’s classically what’s considered the safe distance,” says Efraín Rivera-Serrano, PhD, a molecular virologist at UNC-Chapel Hill. He’s referring to long-standing infectious disease guidelines, produced by the CDC, that say three feet is the “area of defined risk” for health care workers exposed to patients who may carry an infectious disease.
Rivera-Serrano says there’s no question that maintaining at least six feet of physical distance is optimal. But he points out that the World Health Organization, along with countries such as France and Denmark, have adhered to a one-meter (3.2 feet) physical distancing guideline throughout the pandemic. “Three feet should be enough, especially if [everyone is] wearing a mask,” he says.
The question of “how much distance is enough?” is a challenging one for schools and other institutions or businesses that are attempting to balance public safety with reopening imperatives. Everyone agrees that more distance is better when it comes to reducing exposure risks. But classrooms do not have unlimited space, and as the distance between two people increases, the amount of added risk reduction falls.
If people are wearing masks, it’s unclear whether there’s a large amount of additional risk reduction when people move from three feet to six feet.
To illustrate this point, imagine that someone has tossed a water balloon up in the air. If the balloon lands and explodes within three feet of your legs, you’ll probably get hit with some droplets. The farther away you move, the lower your risk falls of being splashed. But at a certain point, the odds of any water hitting you become so small that putting more distance between yourself and the balloon doesn’t do much to lower your risk.
The same basic rules apply to virus transmission. Rivera-Serrano says that when an infected person talks, sneezes, coughs, or even just breathes, that person expels droplets of saliva. The closer someone is to that person, the more likely they are to inhale one or more of those droplets. But if people are wearing masks, it’s unclear whether there’s a large amount of additional risk reduction when people move from three feet to six feet.
To his point, a July study published in the Lancet looked at data related to SARS, MERS, and Covid-19. It found that people’s risk of infection dropped from 13% to 3% when they maintained at least three feet of physical distance. “[P]rotection was increased as distance was lengthened,” the authors of that study concluded. But graphs included in the study suggest that the magnitude of the risk reduction beyond three feet may be quite small.
Rivera-Serrano says that whenever six feet or more of physical distancing is possible, people should follow that guideline. But he also says that if people maintain at least three feet of distance and are also wearing masks, it’s not yet clear whether the additional three feet of distance provides a significant added layer of protection — or, at least, one that is significant enough to keep a large percentage of U.S. students out of the classroom this fall.
The origins of the six-foot rule
As recently as the 1950s, health officials didn’t have a solid understanding of the ways in which common respiratory illnesses spread from person to person. That changed during the 1960s and 1970s when some pioneering research at the Common Cold Unit — a former initiative of the British Medical Research Council — revealed that close proximity to infected individuals, more so than touching infected surfaces, seemed to carry the greatest transmission risk.
A later study of English school children, published in 1982, suggested that virus transmission was elevated when students sat three feet or less from one another. “These studies formed the foundation of the standards developed by CDC and [the National Institutes of Health] and used in health care facilities, which are that anything closer than three feet carries the most risk,” Georgetown’s Fischer explains.
Before Covid-19, the three-foot guideline was still widely used in health care settings. So how did U.S. health authorities come up with the current six-foot recommendation?
“During the SARS epidemic, epidemiologists realized that three feet might not be enough to prevent droplet transmission,” Fischer says. “There was evidence that health workers who had moved through spaces within about two meters, or six feet, of SARS patients had become infected.”
SARS and Covid-19 are transmitted by related coronaviruses. Assuming that they are passed from person to person in similar ways, it follows that three feet might not be enough space to effectively lower the spread of SARS-CoV-2. But if everyone’s wearing masks, that could change the math. It’s uncertain how much added Covid-19 protection a person enjoys if they’re wearing a mask and they maintain six feet of distance from others, as opposed to three feet.
“People love to think about things in absolutes, but in biology there are always exceptions. With distance and risk, it’s a continuum.”
Experts say the answer likely depends on dozens of different variables. “We understand that unique air patterns, like the way air-conditioning flows, can make a big difference in how far droplets can move,” Fischer says. (At a restaurant in China, a person infected with Covid-19 sat close to an air-conditioning unit, which was believed to have carried the virus to diners sitting “downstream” in the path of the unit’s air flow.)
Also, a person who is sneezing, as opposed to talking or breathing, may expel droplets much farther. When people are outdoors, Fischer says that several different factors — such as UV light or humidity — may cause virus droplets to degrade or fall to the ground more quickly than they would indoors, and so transmission may be less likely. But the risks associated with all these scenarios are “hard to quantify” and highly situation-dependent, she says.
To sum all this up, public health authorities are doing their best to provide people easy-to-follow, evidence-supported guidelines that minimize the risk of virus spread. The best evidence to date suggests that maintaining six feet of physical distance is likely to be a highly effective way to reduce the odds of SARS-CoV-2 transmission. But some open questions remain, and debates about what distance is appropriate — especially when coupled with masks and instituted among low-risk groups — are sure to continue.
“People love to think about things in absolutes, but in biology there are always exceptions,” Rivera-Serrano says. “With distance and risk, it’s a continuum.”
Grandma Wants Revenge on Reindeer Who Ran Her Over
Grandma Wants Revenge on Reindeer Who Ran Her Over
After a heated Twitter feud, Santa, the reindeer, and Grandma have agreed to duke it out in a WWE match
Photo made by author on Canva Pro
Twitter has exploded over the chaos of Grandma’s legal battle against Dusty, the reindeer who ran her over. #TeamSanta, #TeamDusty, and #TeamGrandma became the top three trending hashtags on Christmas, in what has become the most bitter and divisive Twitter topic since the 2020 election.
The reason? Grandma wants revenge. About 50% of Twitter stands with Grandma, while 40% of Twitter thinks Dusty is the real victim. Santa, in his attempt to avoid all liability, blamed the accident all on Dusty. After the accident with Grandma, Dusty has had a broken leg which has twisted almost 90 degrees, and Dusty has had no funding from his abusive boss in paying for his medical bills. The National Reindeer Union has thrown their hat behind Dusty and started a GoFundMe to pay for his surgery.
10% of people on Twitter are still with Santa, equating him with God since he gives them whatever they want when they wish for it. Santa, a government employee, has been charged by the U.S. Department of Justice for corruption and embezzling funds. When Santa was confronted by the media, however, he doubled down on defending himself and equating himself with Jesus:
“I give people their dreams, their presents, and all of a sudden they want to crucify me for not carrying a receipt? I may as well nail myself on the cross right now — a real ‘Christian’ wouldn’t turn his back on me.”
Meanwhile, Grandma has banded with Cousin Mel to find justice and demand accountability for the reindeer’s hit and run. Cousin Mel said that if Grandma was hit by a car, it would have automatically involved the police and insurance companies. However, there’s all of a sudden an exception when it comes to reindeer.
“A reindeer is actually more dangerous than a car,” Cousin Mel said. “They have antlers that, you know, can impale you.”
Politicians across the board have been divided on the issue. Overwhelmingly, Donald Trump has thrown his hat behind Santa, tweeting that he deserved to be “PARDONED” for not doing anything wrong. Trump has also thrown his hat behind building a Santa brand of hotels and casinos to help raise money for his legal battles against Grandma and Dusty. While Santa’s support only comprises 10% of Twitter, his supporters are the most vocal, accusing the elite media of distorting the truth and claiming Santa is the only one who can tell the truth.
Twitter has stepped up its moderation due to the viciousness of the Twitter battles. The company didn’t see anything wrong when Santa made death threats against Dusty, especially since Dusty doesn’t constitute an “individual or a group of people” in its guidelines and policies. Death threats against Santa and Grandma, however, have been banned and deleted off the platform, and Dusty’s attorney, Rudolph, has pledged to file suit against Twitter for selective enforcement.
Outside of the courts, Twitter wanted to see more action. Since Cousin Mel’s house is undergoing the foreclosure process, and all parties have lost significant income from legal fees and litigation, all have agreed to extrajudicial justice. Vince McMahon at the WWE has announced a Triple Threat match between Grandma, Santa, and Dusty that will finally settle the argument between the three, with $2,000,000 in prize money. The event at Wrestlemania sold out within minutes. Vegas has betting lines for all three, with Santa coming in at -200, Dusty coming in at +50, and Grandma coming in at +150.
When Grandma heard she was the favorite to lose the match, she felt offended. Behind the scenes, she’d been working on her wrestling moves with Grandpa, diving from the roof to elbow him in the chest, putting him in a chokehold that sends him into an asthma attack, and twisting his arm until his shoulder tears from his socket. She even has threatened to take away his medication if he reports her to authorities. Although she never expressed such violence earlier in her life, Grandma wants revenge against both Santa and Dusty, and she has never felt so alive. Grandpa, from his stretcher, has put all of his retirement funds on Grandma.
“Once my wife has three cups of eggnog, you do not want to be in the same room as her,” he said.
Exploring the question of whether submarines can swim
Exploring the question of whether submarines can swim
An Argument for Verified Humans (Part I)
As I read, I often get lost in the text, my thoughts diverging from the author’s words toward something related yet different. This process seems almost like a conversation between the author and me, with a person’s words provoking my response. At times, I enjoy this aspect of reading, but more often, it is quite distracting.
Still, if I find the thoughts especially compelling (and if a pen is within reach), then I write them down in a place where they can be occasionally reviewed until I find them either ruinously obvious or oblivious to accepted fictions; less often, I build and branch until the original note takes on enough of its own life to become an article, a project, or a poem.
Such was the case when I read the following paragraphs from Deep Learning with PyTorch¹ about a text response produced by OpenAI’s GPT-2 model (which has since been bested by GPT-3):
That’s remarkably coherent for a machine, even if there isn’t a well-defined thesis behind the rambling. Even more impressively, the ability to perform these formerly human-only tasks is acquired through examples, rather than encoded by a human as a set of handcrafted rules. In a way, we’re learning that intelligence is a notion we often conflate with self-awareness, and self-awareness is definitely not required to successfully carry out these kinds of tasks. In the end, the question of computer intelligence might not even be important. Edsger W. Dijkstra found that the question of whether machines could think was “about as relevant as the question of whether Submarines Can Swim.”
These few sentences have inspired in me a handful of tangents that I want to follow here to support propositions for (1) human verification on social media platforms and (2) transparency regarding the use of artificial intelligence in communication tasks. To do so, I will go over each bolded bit from above one by one, and to keep things interesting, I will not do so in order.
Rather, can submarines think?
I admit that I do not know the original context of Edsger Dijkstra’s quote (having only seen it applied in myriad secondhand contexts), and thus my understanding of its intentions may be limited. However, it seems as if the example aims to highlight the innate irrationality of any questions into machine thinking by drawing a comparison to the question of submarine swimming, which somehow seems more obviously unquestionable.
I have two issues with Dijkstra’s quote: (1) it is entirely correct, as there is no true difference between a machine thinking and a submarine swimming; and (2) it stealthily cows the interpreter of the quote — perhaps unintentionally — into accepting the premise that submarines cannot swim, which, as I will argue, is not necessarily true.
…suppose we build a submarine with arms to paddle and legs to kick, one that could navigate the murkiest depths without intervention and respond to danger in real time
On the surface, there seems to be a fundamental difference between asking if a machine thinks and asking if a submarine swims, as swimming is a physical action undertaken by all sorts of creatures to maneuver through liquids, while thinking is a fairly mysterious cognitive experience that we assume is shared by beings who are similar to ourselves (i.e., other humans). However, upon diving deeper, one finds that these questions are essentially equivalent.
Suppose I replaced “to maneuver through liquids” in the paragraph above with “who want to maneuver through liquids.” For all intents and purposes, these two phrases impart the same meaning; however, the second highlights a hidden cognitive component of swimming that suggests that the action is done with purpose. For example, in contrast to an actor swimming, an actor maneuvering through water without intention may be said to drift or to drown, depending primarily on the outcome, and an object controlled by a separate actor may be said to be driven or to be controlled (i.e., to be passive in its action). Therefore, a question of swimming is essentially equivalent to a question of thinking.
Furthermore, and interestingly, it seems as if this hidden component of intention is more important than cognitive sophistication when it comes to classifying a movement through water as swimming. For example, a person may be comfortable claiming that a fish swims, as even the most feebleminded of fish seems to have a certain amount of control over its cold-blooded body as it maneuvers through water. However, would it be correct to say that a eukaryote swims by the flick of a tiny flagellum? Perhaps yes, even still, as the thing, when viewed under the microscope, seems to just go without external manipulation.
But a submarine? Well, we know that humans make submarines go, whether directly through steering or indirectly through programming, and so it does not seem as if a submarine swims. But suppose we build a submarine with arms to paddle and legs to kick, one that could navigate the murkiest depths without human intervention and respond to danger in real time? My guess: Not even then.
The pointlessness of semantic arguments
As discussed above, both questions referred to in the original comparison ask about the unseen processes undertaken by an actor; however, there is no reason why we cannot say that a submarine swims or that a machine thinks, other than a certain linguistic reluctance to do so. Specifically, whether a machine thinks depends on whether we find it appropriate to refer to the actions undertaken by a computer as “thinking,” and likewise for a submarine swimming. Thus, the questions are not worth asking — not because they are either true or false, but because they are entirely dependent on semantic interpretation and are thus neither true nor false. However, taken out of context, as the quote often is, it may seem as if, to be on the side of reason, one must agree with the premise that a submarine cannot swim, which could lead one to conclude — without doubt or ambivalence — that machines cannot think.
Taken out of context…it may seem as if…a person must agree with the premise that a submarine cannot swim, which could cause the person to conclude that machines cannot think, which…is neither true nor false.
Note that I am not claiming that Dijkstra wanted to bully anyone into admitting that a submarine cannot swim, nor am I disagreeing with his statement, as I believe his intention was rather to state the pointlessness of such semantic arguments when there are more interesting subjects on which to expend breath (or keystrokes). Even still, as I will discuss in the next part, machines have a certain non-negligible potential for deceptiveness when they are programmed to perform tasks that are inextricably linked to human cognition, such as those requiring human language to be wielded, as such tasks were (correctly) thought to be performed only by humans until recently.
Predicting American ICU Saturation During COVID-19
Like everybody else in the free world, I’m obsessed with coronavirus. You are too, that’s why you’re reading this. It conjures a lot of very interesting thoughts in my mind, like this one:
Everyone’s yelling at everyone else about what the correct policy is, but the yelling isn’t ever going to stop because of the curious predicament in which public policy makers find themselves.
If they institute Policy Measure X, and it still gets bad, then they will be publicly thrashed for not doing enough.
If they institute Policy Measure X, and it turns out to be no big deal, then they will be publicly thrashed for overreacting.
If they do nothing, and it’s bad, then they will be publicly thrashed for doing nothing.
If they do nothing, and it’s no big deal, then they get to claim “I told you so.”
The only choice they have of the four possible options that doesn’t get them thrashed is to stick their fingers in their ears and do nothing, and see what happens. This is obviously the wrong thing to do, but this sort of analysis is probably weighing on every policy maker’s minds right now, from the world leaders to the school boards. It’s a real pickle. I have no answers for that. It may in fact have been exactly what happened in Italy.
I sat down at my desk on the morning of Saturday March 14th, 2020, and started digging into the numbers, to try and project when the United States medical system was going to reach the breaking point like Italy. I did it so I could tell my family members what to do. What I discovered was curious, and somewhat non-narrative, and has a little bit of hope buried in the fear, so I felt I’d share it. I did this because this graphic flipped past my feed:
Reddit, the bastion gatekeeper of all academic sciences, propagated this very wrong chart. No, we are absolutely NOT “11 days behind Italy.”
This is a neat chart, that shows some interesting stuff, but it’s misleading in several ways and brings us to inaccurate conclusions. And since I’ve seen it being shared by a lot of pretty smart people, I figured I’d tear it apart and rebuild it properly.
The implication of this graph is that we are going to be in the state that Italy’s in currently on March 22nd, with our ICU wards full and in triage, deciding who lives and who dies. That’s a bad conclusion because there’s a buried assumption in the graph that the United States has the same raw treatment capacity as Italy, without accounting for the relative population difference. This graph is not per capita.
It also doesn’t account for the true treatment limits. We shouldn’t be looking at total number of ICU beds, even on a per capita basis. We should be looking at the total number of available ventilators per capita with which the very sick can be treated. That’s the hard deck for when triage begins, and the very hard decisions start to be made, about who to just let die.
Gathering the Data
For all the wailing and gnashing of teeth over the United States healthcare system, our core infrastructure for critical care is world class.
What matters in this case, though, isn’t beds, it is hardware. Treatment for the worst cases of COVID-19 requires ventilation. Everything coming out of Italy right now in the media is one resonant cry, “we need ventilators.” They have mobilized their army to try and build more ventilators.
The United States has approximately 170,000 available ventilators with which to treat COVID-19 extreme cases. That’s a ratio of about 52 ventilators per 100,000 population. Not a lot, but it’s significantly more than any country in Europe. Germany is one of the most well equipped countries in the EU in terms of critical care infrastructure, and it has 25,000 ventilators. That works out to be about 30 per 100,000 population. The United States has almost twice as many as Germany, and probably the highest number per capita in the world.
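As a quick sanity check on those per capita figures (the population numbers below are my own rounded assumptions, not from the sources above):

```python
# Ventilators per 100,000 people, using rough 2020 population estimates.
countries = {
    "United States": (170_000, 330_000_000),
    "Germany": (25_000, 83_000_000),
}
for name, (ventilators, population) in countries.items():
    per_100k = ventilators / population * 100_000
    print(f"{name}: ~{per_100k:.0f} ventilators per 100,000 people")
# United States: ~52 per 100,000; Germany: ~30 per 100,000
```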
How many does Italy have?
From the same link, Italy has about half the critical care beds per citizen that Germany has. It’s hard to know if Italy’s per capita number of ventilators scales with beds the way it does for their German neighbors, but if so, then they likely have somewhere around 15 per 100,000. If that estimate were true, then Italy would have around 9,000 ventilators, yet they only had 10,149 confirmed cases of any severity on March 11, when the media reports about Italian triage started filtering in. That estimate must be far too high, unless they are either over-ventilating everyone or their infection rate is being misreported by a factor of 20.
If we were to presume that their confirmed case number reflected the actual infection rate, and they were only ventilating 5% of cases as China and other countries have done, then they would only need about 500 ventilators. That can’t be right. There are likely 3000 ventilators in Atlanta Georgia alone, based on US national averages.
Of all the media sources I could find on the number of Italian ventilators, the only one that gave a number was this one, which quoted around 3,000. That’s not very many at all. That would be about 5 per 100,000, about a tenth what we have in the USA. Maybe that’s the right number.
When we replicate the Reddit table above against World Health Organization (WHO) data, the data for Italy matches, but the data for the United States does not. It’s similar in some spots, but not in others, and doesn’t accurately represent the growth curves here. To be sure, some of our depressed numbers are absolutely related to poor testing procedures, but I’m not convinced Italy’s numbers don’t represent poor testing as well, given their reportedly early aloof attitude towards the disease. Let’s proceed presuming Italy early testing and USA early testing have the same sort of underlying failures, and do a real per capita, and per ventilator, comparison.
here’s the reddit chart, fixed
Presuming Italy met their triage boundary around the 11th of March, that would mean about 3.38 confirmed cases per ventilator if their ventilator supply was around 3000, and 1.11 cases per ventilator if their ventilator supply was around 9132. The 3.38 number makes more sense, because we know from Chinese experiences that not very many cases require ventilation. If true, that indicates that there was probably a tremendous per capita ventilator shortage in Italy prior to the outbreak, which is exacerbating their situation.
Extrapolating from these numbers, and further presuming that Italian screening is equally bad to US screening, we should expect to get into the “triage apocalypse” here around when we have 3.38 confirmed cases per ventilator, which is 575 thousand cases, or a confirmed infection rate of 0.17%.
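Spelled out as a calculation (the US population figure is my rounded assumption):

```python
# Italy appeared to hit triage at roughly 3.38 confirmed cases per ventilator.
italy_confirmed_march_11 = 10_149
italy_ventilators = 3_000                  # the ~3,000 estimate discussed above
cases_per_ventilator = italy_confirmed_march_11 / italy_ventilators   # ~3.38

# Apply that same ratio to the US ventilator supply.
us_ventilators = 170_000
us_population = 330_000_000                # rounded assumption
us_triage_threshold = cases_per_ventilator * us_ventilators

print(f"US triage threshold: ~{us_triage_threshold:,.0f} confirmed cases")
print(f"As a share of the population: {us_triage_threshold / us_population:.2%}")
# ~575,000 confirmed cases, or roughly 0.17% of the population
```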
That’s not very much.
We are definitely going to hit the triage limit if we use Italy as a modeling template, just not this month.
What Date is the Ventilator Apocalypse in the USA?
This is very difficult to answer, because Italy’s numbers on which we’re building our model aren’t real infections, they’re just the number of confirmed infections. They’re bounded by the availability of their testing, as are ours.
If we presume that the current cases aren’t testing-limited, or that the ramp-up in testing here so far parallels the actual spread of the virus in an appropriate way, then we can use the last week’s rate of US propagation to project forward. That would be an increase of 1.33 times per day, or a doubling of cases every 2.4 days. We hit the limit somewhere around 20 days from now instead of 11, as the Reddit graph implies.
First or second week of April.
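Here is the same projection as a short script. The starting case count is an assumption on my part (roughly the confirmed US total in mid-March 2020), so treat the output as illustrative rather than exact.

```python
import math

daily_growth = 1.33                              # cases multiply ~1.33x per day
doubling_time = math.log(2) / math.log(daily_growth)   # ~2.4 days

starting_cases = 2_800      # assumed confirmed US cases around March 14, 2020
threshold = 575_000         # triage threshold estimated above

days_to_threshold = math.log(threshold / starting_cases) / math.log(daily_growth)
print(f"Doubling time: {doubling_time:.1f} days")
print(f"Days to reach the threshold at this pace: ~{days_to_threshold:.0f}")
# Roughly 19-20 days out from mid-March, i.e. early April
```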
Our cases may climb more rapidly than that, though, because our testing may ramp up faster than the disease ramps. And would this testing ramp be climbing faster, or less fast, than that in Italy, for our comparison to be useful? Once you throw testing uncertainty into the mix, the mathematics become basically undoable. If you throw in the fact that people in Italy literally kiss each other on the face when they meet, and are statistically the second worst nation in the EU at washing their hands, and are a more dense population than we are, the comparison becomes regressively undoable.
We also just implemented some very significant social distancing protocols last week, once we started freaking out about Italy, that are hopefully going to change our infection curve away from a doubling every 2.4 days to a doubling across a much wider time frame.
Geographical Variation of Critical Cases
Different areas of the country have higher population densities than others, different age profiles, and different health characteristics. A GIS nerd with a statistics degree could punch out a map of that, especially if they’re at Harvard. Several days ago this study did exactly that, in attempting to map likely COVID-19 hot spots in the US.
from the study, go read the study, it’s neat
The study basically uses the Wuhan contagion curve and maps it over to our “old people ratio” (top map) and our “high blood pressure ratio” (bottom map) to see how bad it might get here, in terms of infections, if we end up progressing like Wuhan did. Their worst-case mapping is 5 per ten thousand population, or 50 per 100,000, at the peak. That’s not too far off our ventilator capacity, if every ventilator was dedicated to handling coronavirus. But maybe we have that many ventilators because we need that many ventilators for other reasons.
Why was Wuhan so bad?
from the study, but the little red line there is mine
If the contagion curve does not get any worse than what China saw in Wuhan, we don’t exceed our national per capita ventilator capacity. But Wuhan also went into extreme lock down to manage that curve.
It was bad in Wuhan in part because Wuhan’s critical care infrastructure was a lot worse than ours is. Like Italy, their per capita resource allocation (left side Y axis) towards treatment of things like this started very low, and all those instant “just add water” hospitals we watched them build on the news were not China exceeding US capacity, they were China climbing up to meet our capacity. But again, maybe we need the high capacity we have because we’re already using it?
Perhaps our bloated healthcare infrastructure where everything is always over-treated to fleece the insurance companies for more revenue has inadvertently built in a buffer to where we can handle this? Many of the things that drive our healthcare costs up — over-treatment, treating everything as critical care, fear of being sued, cashing in on end of life care, and a fetish-like fixation on medical machinery, have pre-built an infrastructure unlike anywhere else in the world to handle COVID-19.
The numbers seem to indicate that while this is very likely going to be bad, we have more critical care infrastructure than anyone else on the planet does. This does not mean ours won’t get overburdened, but it does mean our overburden limit is significantly higher than anywhere in Europe, especially Italy. That brings us to our final question — how much burden can we take?
“Flattening the Curve,” with Numbers Finally
Social distancing. Flattening the curve. These are the buzzwords. A few days ago HWFO published a piece about how best to visualize “flattening the curve,” by equating it to the rain.
Summarized, that corollary goes like this. If you own twenty acres of undeveloped forest land, the water runs off it into the creek next door at a rate during a storm, which is mitigated by the trees, infiltration into the soil, and such. If you pave that land, none of the water makes it into the ground, it all flows off very quickly, and the creek that used to be able to convey the flow can no longer convey it. It floods, and flooding is bad, so you mitigate this somehow. You could either widen the creek, which isn’t often done, or you could hold all the flow back in a detention pond, which releases the water slowly so the creek can convey the water without flooding.
Graphically, this:
to discover who Big Joe is, read the prior article
Turns into this:
to discover why we’re talking about Dr. Seuss, you’ll need to ask my therapist if I had one
…so you build a detention pond to prevent the flooding, and if you build it properly it turns your problem into this:
much math, very engineer
It’s a cool article. Share it with anyone who doesn’t understand “flattening the curve.”
The problem with COVID-19 is a flooding problem. Coronavirus is going to pave the land and increase the number of rain drops (infected people) who end up in the creek (medical system), and you have capacity problems. “Flattening the curve” is just doing the same thing with infected people that we did with the rain with our detention pond. Like this:
this is still unitless!
The problem, as stated prior, was that the “flattening the curve” graphs didn’t have numbers on them. None of the ones in the media do either. To understand what needs to happen, we’d have to put numbers on them. Further, we don’t know that our curve and Italy’s curve are going to be comparable, nor that Italy’s dashed line is the same as our dashed line.
A twitter thread expounding on this:
Looking closely at Italy as we did above, we discover their dashed line is basically tied to their number of ventilators. They’re in triage at 3.38 confirmed cases (maybe) per ventilator, which would lead us to believe 30% of their (identified) cases need ventilation. This is far higher than the recent JAMA Clinical Update based on Chinese numbers, which indicated approximately 5% of proven infections require critical care. This could be for several reasons, but the two most likely are that either
Italy is underreporting actual cases by a factor of six as compared to China, due to worse testing, or…
Italy is having a hard time matching the ventilators they have with the cases that need them
Our task in the USA is to make sure that we’re matching ventilators up with cases which need them as efficiently as possible, expand our number of ventilators, and figure out other ways we might be able to maximize ventilation, like kids or skinny people sharing a tube. (Maybe that works, maybe it doesn’t, I’m not a ventilation doctor) Raise our dashed line as high as we can, while squeezing as many people under it as possible.
We know what our dashed line looks like today. 170,000 ventilators is the line. We know what our estimated cases look like. Between 70 million and 150 million cases is the estimate. Based on JAMA estimates, five percent of these will need ventilation, which is 3.5 million to 7.5 million Americans. These are the “area under the curve” in the prior graph. We have the treatment capacity to treat between 2.2% and 4.8% of the total number of people who will need critical care at any given time.
How long someone must be ventilated is sketchy. The best information I can get currently comes from notes that MDs took in conferences that they propagated anonymously on the internet. These lead me to believe ten days is a good mean number, although I think if they had no triage-like resource restrictions they’d probably choose to do two weeks. If we take the low number, 70 million cases, and presume 5% need ventilation, that’s 3.5 million cases. At ten days each, with 170,000 ventilators, it would take 205 days to perfectly squeeze a perfectly flattened, perfectly square “curve” through that ventilation pipeline. For the 150 million cases scenario, it takes 441 days.
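The pipeline arithmetic is easy to check directly:

```python
# Ventilator-days of demand vs. the capacity of 170,000 ventilators.
ventilators = 170_000
days_per_patient = 10            # assumed average time on a ventilator
critical_fraction = 0.05         # ~5% of cases need ventilation

for total_cases in (70_000_000, 150_000_000):
    critical_patients = total_cases * critical_fraction
    ventilator_days_needed = critical_patients * days_per_patient
    days_to_clear = ventilator_days_needed / ventilators
    print(f"{total_cases / 1e6:.0f}M cases -> {critical_patients / 1e6:.1f}M ventilated, "
          f"~{days_to_clear:.0f} days at full capacity")
# About 206 and 441 days, matching the ~205- and 441-day figures above.
```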
A perfectly efficient “flattened curve,” like a big rectangle that starts tomorrow and rides the system’s capacity until everyone is treated, would probably treat everyone in around a year, depending on the breaks. That’s not going to happen though. We have no idea what our infection curve is going to look like, but if we take Wuhan’s infection curve, and scale it up to the “70 million cases” estimate, this is what we get:
yay we finally put numbers on Greta Thunberg’s (and everyone else’s) graph
How did I draw this? First, I picked scales for each axis and drew out a “perfect (rectangular) curve” to treat all patients from a 70-million-case outbreak in the USA, in which 3.5 million need ventilators. That’s in blue. Then I traced the general shape of the Wuhan outbreak and scaled it vertically until the area under it (which represents cases, in red) matched the blue area. If our outbreak follows a profile like Wuhan’s, we are likely to see 70% of critically infected people go untreated.
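For anyone who wants to reproduce that scaling step, here is a rough sketch. It uses a generic bell-shaped curve as a stand-in for the traced Wuhan profile, so the exact untreated percentage it prints will differ from the 70% figure above; the point is the method of scaling the curve until its area matches the caseload, then measuring how much demand pokes above the capacity line.

```python
import numpy as np

# Stand-in epidemic curve: a generic bell shape over one year (the article
# traced the published Wuhan chart instead; this is only an assumed shape).
days = np.arange(365)
shape = np.exp(-0.5 * ((days - 120) / 30) ** 2)

# y-axis: people needing a ventilator on a given day. Scale the curve so the
# area under it equals total ventilator-days of demand: 3.5M critical
# patients times ~10 days on a ventilator each.
total_ventilator_days = 3_500_000 * 10
demand = shape * (total_ventilator_days / shape.sum())

capacity = 170_000                      # ventilators available at any one time
unmet = np.clip(demand - capacity, 0, None).sum()
print(f"Demand above the capacity line: {unmet / demand.sum():.0%} of ventilator-days")
```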
Maybe Wuhan isn’t the right curve to use. Here’s a Gaussian distribution, a “bell curve,” for comparison, with a longer time to completion.
this still sucks
There’s no good way to know whether our infection curve is going to be steeper and uglier than Wuhan, or flatter and nicer, but the chances it gets under the 170,000 ventilator threshold are probably close to zero. It seems very likely to me that the United States will be in triage mode like Italy is now, if not in the late spring then certainly the summer, depending on how well our “social distancing” and voluntary quarantine measures work. And this is likely to happen no matter what sorts of governmental policies are laid out today, at any level. Social distancing and voluntary quarantine/isolation will help the problem, but will not solve it.
Conclusion
My father told me something on the phone today. He said that responsibility is a function of authority, not a consequence of blame. In his words, “if the bulldozer breaks down because the operator doesn’t change the oil, the dozer doesn’t get fixed by blaming the operator.”
If things proceed like I think they will, and the country is as divided as I see it, lots of very angry people are going to be blaming each other for something that is probably impossible to stop. Every new ventilator that gets built in the next two months might save a dozen people. Every other improvement to the critical care infrastructure that happens in the next two months might have a similar effect. Every attempt to squash this curve might mean someone catches coronavirus in November instead of June, which might mean they get treated when they might not have otherwise. But it’s highly unlikely that anything can prevent people we know from dying in triage. People are not only going to die from this thing, they’re going to die because a doctor chose to save someone else. Probably with good reason.
I’m telling the older, more infirm family members of mine to voluntarily quarantine themselves when ICUs are a week away from being full, and stay that way until treatment capacity opens back up. | https://medium.com/handwaving-freakoutery/predicting-american-icu-saturation-during-covid-19-f45ec1672571 | ['Bj Campbell'] | 2020-04-14 14:08:34.169000+00:00 | ['Coronavirus', 'Covid 19', 'Health', 'Media Criticism', 'Random'] |
Purge your Followers, Bring an Instagram Account Back to Life | Purge your Followers, Bring an Instagram Account Back to Life
You don't have to start from 0, but reviving an old Instagram account will take time
Photo by Pagie Page on Unsplash
Recently, someone asked me what would be the best way to grow an old, inactive account.
As you already know, Instagram rewards consistency. This makes growing an inactive account harder. The followers you already have on this account might never see your content again, as the algorithm will push the other, more regular and consistent accounts that they follow.
Growing an inactive account is a lot different than growing a new one.
And for this reason, many people get discouraged after they try reposting on an older account.
That’s partly why you will hear people say that they don’t like Instagram anymore, and that the free organic reach you get on TikTok is better.
But here are some truths that are worth keeping in mind:
Instagram still converts a lot more than TikTok.
Growing an inactive account requires some prep work.
You won’t get incredible engagement overnight.
So pay attention to what follows if you haven’t posted in a long time, or if you’re considering posting on an old account that you have.
The first step would be to get rid of your inactive followers. Go through your existing followers and remove all the inactive, spammy, or fake accounts.
You will probably end up purging 20 to 40% of your followers, but this will help you start with a cleaner account and a higher engagement rate.
Then, go through your DMs and start engaging if you can. If you have conversations that you left pending, or if you can find any way to engage with the existing messages you have, do it.
Another good practice would be to notify your followers that you will start being active again. You can manually DM some of your followers and let them know. This will help the algorithm see some activity again, and you may even show up in their feeds.
If you don’t know whom to DM, start with your existing DMs. Reaching out to someone with whom you already had some contact will be easier.
Finally, stick to a schedule. Be consistent and regular. Post regularly, even if it’s hard, even if you’re not getting the results you were expecting. Instagram is a patience game, and too many people give up before it pays off.
Be gentle with yourself. Instagram is now a lot harder than it used to be when you had an account and were actively posting.
There are other ways to bring an account back to life, and if you know some, I would love for you to share them with me so we can talk about it. | https://medium.com/digital-diplomacy/purge-your-followers-bring-an-instagram-account-back-to-life-7d0732521438 | ['Charles Tumiotto Jackson'] | 2020-11-06 19:09:10.661000+00:00 | ['Marketing', 'Business', 'Startup', 'Social Media', 'Instagram'] |
How Unsplash Went From a Tumblr Page To Fully-Fledged Platform | How Unsplash Went From a Tumblr Page To Fully-Fledged Platform
And how we can apply it to our projects.
Photo by Rubén García on Unsplash
You’ve heard of Unsplash, no doubt. The thumbnail of this very article is integrated straight into this post from the service. I searched it within the text editor and picked one I liked. Boom, my article now has a header.
It wasn’t always this way. I couldn’t always type away and select an extremely high-quality photo straight from my text editor to be used completely royalty-free. It took a lot of work in the right places, over a lot of time.
But before putting in all that work to turn this into what it is today, it had to start somewhere. And starting is where most founders screw up. Starting has the most friction. And taking that first step to publish a piece of work might take a year, several developers, and a lot of anxiety.
Unsplash, however, published its first version in 3 hours with $38. There was no umming and ahhing over the design. There was no unnecessary complexity. The first version was so basic that a teenager posting edgy content on the internet used the same technology — it was on Tumblr.
The Original Unsplash on Tumblr | GIF adapted from Source
This is the story of how Unsplash started and how you can use the same principles to launch practically any idea that comes to mind. | https://sahkilic.medium.com/how-unsplash-went-from-a-tumblr-page-to-fully-fledged-platform-a65e13169e27 | ['Sah Kilic'] | 2020-11-01 08:58:10.791000+00:00 | ['Entrepreneurship', 'Business', 'Startup', 'Technology', 'Advice'] |
Definitive Guide To Creating Profitable Online Courses | 1. Pick A Topic Worth Paying For
You don’t want a course that’s “nice to have.” If there’s a real demand for this knowledge or skill, people would be willing to pay good money for it.
Nowadays, there’s a course for almost everything; cooking, dancing, even playing popular video games. You can create a course about anything. But I would stick to topics that people are used to paying for. Not many people will pay for a course that only gives inspiration. There are a million free YouTube videos for that.
Here’s a simple rule of thumb: Don’t create something people can easily get for free.
What Topics Should You Consider?
I’ve seen courses on creating morning rituals. Maybe some people buy it, but it’s not something I would ever pay for myself. Why? You can learn about morning rituals from a simple 10-minute YouTube video. I actually created one:
But that doesn’t mean it can’t be part of a course about a bigger topic. The video above is part of my course, Procrastinate Zero 2. Another example is note-taking. I wouldn’t create a course about that. Instead, it could be part of a course on improving your focus.
Ask yourself this, when planning a topic: “Would I be willing to pay for this?” And more importantly, have you spent money on a course that’s similar?
Compare note-taking to coding or cryptocurrencies. There are YouTube videos and free blogs about those topics. But it’s pretty much impossible to learn those skills in a short video or article. The best courses are about topics that require expertise and time to learn. That should narrow down the list of courses you should focus on.
The next step is to apply marketing before you actually create your course. You want to build marketing into the creation process.
Specify Your Course’s Focus
Marketing is not only about creating awareness for your product or service. It’s mostly about getting in front of the right audience. Try to think ahead here. If someone comes to your website, they should know immediately if the course is for them, or not. Let’s say you’re a fitness coach.
Do you teach functional strength training? Do you focus on losing weight? Or on bodybuilding? Etc.
Who do you focus on? Athletes training for a competition? Moms who want to get in shape after pregnancy? Seniors who want to regulate their breathing?
It all depends on your focus. And that focus makes your marketing strategy a lot easier. To do this, you’ll need to answer these two questions:
1. What does my course do?
How can your target students use your course in their lives? What will this knowledge help your students get?
This is trickier than it appears. A member of my online community recently mentioned he wanted to create a course on blockchain technology. He had a great presentation of his idea. But the missing aspect was a crystal-clear answer to the question, “Why would I want to learn this? What can I specifically do with the knowledge you’re teaching me? What actionable benefits can I get from this course?”
It’s not about the knowledge you’re teaching. It’s all about what you get from applying that knowledge.
It’s the benefits that make the course worth its price. If I don’t immediately see the course’s direct benefits (other than the awareness that I might find it useful in the future), it’s difficult to be convinced.
Don’t continue with creating a course unless you have a clear and compelling answer to this question. Here’s an example from fitness YouTuber Mike Thurston.
His course is primarily focused on building muscle mass. His promise is that you will look good.
2. Who is my course for?
One of the best books on marketing I’ve read is The 22 Immutable Laws of Marketing by Al Ries and Jack Trout. I highly recommend reading that book if you want to offer any type of product or service — especially online courses.
One of their laws is “Focus.” It comes down to this: You’re better off creating a product for a specific audience. You should avoid creating a product that’s for everyone.
So if someone asks you, “Who is your course for?” You should never say something like, “It’s for every entrepreneur.” Replace entrepreneur with any target audience.
Here’s something that could make it easier for you. Target people who are a previous version of yourself: people struggling with the same things you struggled with before you overcame them.
Let’s say you want to create a course on Mindfulness. Maybe you were a corporate lawyer and you worked 12 hour days. You could create a course for people who have that same lifestyle right now. Show them how you overcame the challenges you faced in the past.
All of this should help with picking a topic for a course that has the potential to sell. Once you’ve done this, it’s time to create the outline for your course.
2. Create an Online Course Outline
Your course has a main goal. Maybe it’s to help people become more confident, or stop procrastinating, or create iOS apps. Your course should be a framework to achieve this main goal.
A course outline helps organize your thoughts and planning. It’s your bird’s eye view. How do you create this outline?
Break Down Your Course In Modules and Lessons
To give you an example, I’ll be using one of my latest courses, digitalbusiness.school. The main goal of this course is to teach students how to create a sustainable and profitable online business. That’s what it does. Who’s it for? Entrepreneurs who are currently struggling to go full-time and make a good living.
Once I had a good idea of those things, I listed all the steps prospective students needed to complete. Those steps are your modules. Then, I broke down the modules into specific lessons.
How many modules do you need? It depends on your course and what you’re teaching. This course has 6 modules because they represent the 6 major steps entrepreneurs need to create a profitable online business.
My writing course has only 3 modules, for example. But every module has more lessons compared to digitalbusiness.school. I wouldn’t get hung up on this topic. When you create the structure of your course, always think about the students — not what other courses do.
Now, you want to break down your modules into lessons. Here’s what it looked like for module 1 of digitalbusiness.school. It’s just a Word or Google doc at this stage. But note that I also wrote a two-to-three-line description of each lesson. That helps you to get clear on what every lesson is about.
MODULE 1: How to Build a Brand — a practical, hands-on approach to creating an effective and iconic personal brand
Develop your competitive edge — Stand out in the market, and pick the right audience.
Test your business idea — Determine an idea’s business potential, even without an existing audience.
Craft your unique brand — Project instant authority and create a compelling story that will make people buy.
Design your brand identity like a pro — Concepts to remember and tools to use for a professionally appealing brand, even without a designer background.
Build a world-class website — The best website building platforms you can use, and which among them fits your business. Also includes a checklist of important website elements.
Here’s what the final version of that looks like:
We’ll get to the platform I use for online courses later. But for now, you want to focus on creating the best structure for your online course. I basically broke down the whole course before I started creating any material. Here’s the rest of the structure of digitalbusiness.school:
MODULE 2: How to Position Products/Services — etc.
MODULE 3: How to Build an Email List — etc.
MODULE 4: How to Grow an Audience Without Ads — etc.
MODULE 5: How to Launch a Product — etc.
MODULE 6: How to Increase Profits Over Time — etc.
Add Tools, Exercise, and Case Studies
I always like to include a practical exercise for every lesson. I have to stress the idea of “practical” here. Remember: Your courses are only as good as how your students do. As the instructor, your main goal is to ensure students absorb your lessons and implement them easily. Even if you built your course well, if students fail to absorb and practice it, there’s no point.
This is where exercises, tools, and case studies come in. They make learning more in-depth and practical.
Why You Need Exercises
Students should immediately practice what they learned. Learning is better when practiced. For digitalbusiness.school, I made sure that every lesson has an “action” that students can do at the end.
Notice that the actions can be applied both by students who are currently building their online businesses, and those still in the planning stage. It depends on your topic. But it should quickly and conveniently help students apply their knowledge in the real world.
Why You Need Tools
What kind of tools do you need to apply the skill you’re teaching? For instance, my productivity course shares several apps and tools I use. In my writing course, I share my writing tools and apps.
If you’re teaching website creation for beginners, you can direct your students to free or paid website and online course builders for professional-looking designs.
In digitalbusiness.school, I showed how Kajabi hosts my online courses, ConvertKit manages my email-list, Ubersuggest analyses competition and SEO, and many more. You can also recommend helpful books and media.
You can use affiliate links here, for additional income. But I recommend using free tools as much as possible. Only provide paid tools when they’re truly the best platform for the purpose. You don’t want to appear like you’re trying to sell at every turn.
Why You Need A Case Study
Can you include a real-life example in your course? If so, I highly recommend it.
You can use social proof, testimonials, and other means to show credibility and authority. But the best way to prove that your course works is through a case study. There’s no better proof than that.
A fitness coach can use “before” and “after” photos of people they’ve coached. A piano instructor can feature videos of her students’ successful recitals.
For digitalbusiness.school, I used a case study around a community I created called The Sounding Board.
To show that my course worked, I created The Sounding Board from scratch. I also documented my steps and results.
Using all the lessons from my course, it took me roughly 3 weeks to create TSB from idea to launch. After the launch, TSB was generating $1,220 a month. Case studies like these will help your students to connect all the dots.
3. Test The Outline
Okay, so you’ve finished the outline. You included all your ideas for exercises, tools, and potentially a case study. Do you immediately start creating your course?
I’ve done that in the past and it was a huge mistake. You always run into certain issues you didn’t consider. The last thing you want is to reshoot your whole course after getting feedback from students.
So get feedback on your course structure. Do this very early on, before you have any material. You can simply ask people whether your course structure covers all the main challenges of the topic you’re teaching.
Ask anyone in your network who might be a good fit (avoid asking people who have no clue about what you’re teaching). You can also reach out to an online community of like-minded people who exchange business ideas regularly.
It saves time and effort when you revise your course outline based on people’s feedback, before shooting your videos or making your module material.
Who Should You Ask?
I usually reach out to my readers and people in my network to schedule a video call. You can also ask acquaintances who fit your target audience. Then I ask them questions like:
What’s your biggest challenge (regarding the course topic)?
What things/actions/products have you tried to overcome that challenge in the past?
What worked well? What didn’t work well?
Have you bought an online course about this topic before?
What do you think of my course structure?
Are you missing anything?
Would you add anything?
Here’s something important to remember: Ask more about what they’ve done in the past, compared to what they think they’ll do. Actions speak much louder than words.
Don’t go asking things like, “Do you think you’ll buy something like this?” It’s easy to be an imaginary buyer. “Sure, I’d buy that!” Well, when it’s crunch time, not everyone pulls out their card.
Once you’ve tweaked your course outline, it’s time to start creating. But give this step enough time. I took about a month to tweak the outline for digitalbusiness.school.
I listened to everyone I talked to and used that input to create something I thought would be the perfect outline. After that, it’s time to record your lessons.
4. Start Creating Your Online Course Content
Here’s something I learned after making 6 profitable online courses: No matter how good you think your course is, there will STILL be things you’ve missed or need to improve, that only testers can point out. So you’re not done with testing yet!
If you don’t address these concerns during course production, you’ll risk getting more refunds later. You want to create the best course you possibly can. Which leads us to the first point.
Record One Module First
I know it’s tempting to start creating, but don’t record the whole course at once. That’s a mistake I made with my first online course. I recorded the full course before asking for feedback.
Turns out, I was missing exercises throughout the course. It was a lot of content, with no way for students to practice what they learned. So, I had to re-record all the videos to include exercises.
Testers can tell you, in one module, what improvements your course needs. It could be your course content or your delivery. Just be open to feedback and try to improve.
Have A High Production Value
Look, you don’t have to hire a videographer to do this. In today’s age, you can produce your own high quality videos. Here are some guidelines that will help.
Video
I use a Canon 80D for recording video because it’s affordable and has great autofocus. There’s a 90D now, and there will probably be newer versions by the time you read this. I’m still sticking with 1080p videos for online courses, especially if this is your first online course.
These are my camera settings:
30 frames per second
1/50 shutter speed for Europe and 1/60 for the USA (this depends on local lighting frequencies and avoids flickering)
Automatic ISO
Lowest possible Aperture setting if you want to blur the background
Aperture higher than f/4 if you want the whole shot to be in focus
But honestly, there’s no need to be super fussy about your camera. Keep it simple. Even an iPhone would do. Sound is more important.
Sound
Bad quality audio makes videos tough to consume. So make sure there’s no background noise, distracting sounds, and echoes in your videos.
I recommend getting a simple lavalier microphone like Rode SmartLav. That’s the base level. Anything below that quality is not acceptable. Never use the on-board microphone of cameras.
If you want to upgrade your mic, I recommend the Rode Link Filmmaker Kit. That’s the mic I currently use.
Lighting
I recommend investing in some decent studio lighting equipment as well. I’ve been using the same two softboxes for years now and they still work fine. But I would get LED lights now: they’re a bit more expensive but more practical. You can also use a ring light to keep things even simpler.
Video Editing
I use Final Cut Pro X on Mac for video editing, which I mostly do myself. Editing an online course is very simple. You only cut out the sections you’ve messed up. That’s all. If you have the time, just do it yourself. Any software will do.
Online Course Platform
I’ll cover this topic more in-depth later. But at this stage, you already want to pick an online course platform so you can upload your videos to it. That way you can easily share it with people who will test it for you.
I use Kajabi for all my courses and the platform works very well. As for which course platform would suit your business and style best, you can read my Marketplace Vs Own Site pros and cons analysis later in the article.
When You’re Done With Testing, Record The Rest
I recommend keeping your videos relatively short and to the point. I would avoid one or two hour lessons. If it’s a simple topic, under 15 minutes is fine. Shorter videos also force you to be briefer and more effective with your points.
Shooting videos can be draining, so take breaks in between. You want to look fresh and engaged in every lesson!
When you’re done recording, it doesn’t make sense to ask for critical feedback. While feedback is here to help you, it’s the last thing you want to think about when you’re done. At this point, you’ve already done all the testing and you’ve made the best online course you could.
5. Design A Logo and Landing Page
When you’ve actually created the course, it’s time to offer it online. When it comes to design, again, I like to keep things simple, but also consistent. Let’s start with the logo.
If you’re selling online courses, you’re probably doing it under your own name. So I recommend one logo for your personal brand and one for your online course. But make sure they are the same style. Here’s an example of my course about digital business.
It fits with the logo of my personal brand. I always want to keep things clean and if possible, I add an artistic touch to it — something I draw.
It’s all about the feeling that the logo gives you. This is very impractical advice, I know. But when I was creating the digitalbusiness.school logo, I tried a lot of colors. I initially went with a bright blue color, but it didn’t fit with my overall design. I hardly ever use bright colors. So I went with this warmer blue tint.
If you have an existing brand logo, then you can create your course logo based on it.
If you’re just starting, or you want to re-brand, then the main question is, “What does [Your Brand] stand for?” If you’ve picked your topic right (see the first step in this article), and can identify what your course does and who it’s for, then branding becomes easier.
Also, you don’t need to be a designer to create professional-looking logos. You can use Canva and other free platforms. You can also hire artists from Fiverr or Upwork.
Create A Landing Page That Converts
I must admit, a logo is nice. But it’s not the most important thing when it comes to selling your course. The landing page is more important.
The landing page is where you’ll convince site visitors to buy your course, so it’s crucial to get this right. If you’re unsure how to start, you can use paid tools like LeadPages to make things easier. They offer a free trial you can use.
To get some ideas and examples, you can also check out my landing pages. Just choose one of the courses. Here are the essential elements of a sales page that convert:
Your logo
A headline that captures the BIG idea
An appealing image/video
Body copy
This section is critical. If visitors don’t like what they’re seeing, they’ll bounce. Here’s an example of the header section of digitalbusiness.school that performed well:
Here’s what else you want to include on your landing page:
Testimonials
Client logos
What the prospect gets (your offer)
Guarantee (i.e. 30-day Money Back Guarantee, etc.)
FAQs
Contact Details/Chat
Buy now section
Your landing page can either be created within your website’s builder (like Kajabi, WordPress, etc.) or with your Marketplace platform. Which leads us to the next point.
6. Use The Right Online Course Platform
Once your course videos are ready, you have two options to host your online course:
Marketplaces like Udemy, Skillshare, Coursera, etc.
Your Own Site
Here’s a short table to help you decide the best online course platforms.
Marketplaces (Udemy, Skillshare, Coursera, etc.) | https://dariusforoux.medium.com/definitive-guide-to-creating-profitable-online-courses-1a68786caaa3 | ['Darius Foroux'] | 2020-12-10 15:53:40.973000+00:00 | ['Business', 'Money', 'Marketing', 'Education', 'Startup'] |
This Mental Exercise Can Actually Make You Smarter | Examples — How to Think From First Principles
Imagine your problem is to make an omelette. Using first-principles, you can break down your problem to its basic facts.
Fact #1: I need eggs.
Fact #2: I need a pan.
Fact #3: I need oil.
Fact #4: I need a heat source
Just by listing the facts, you’re already halfway through solving your problem. Even if you don’t know how to cook, you made a serious step in the right direction.
Now consider a more complicated challenge to illustrate the power of first-principles: lowering battery production cost.
To make electric cars affordable, Elon Musk had to find ways to lower the cost of making batteries; the most expensive part of an electric vehicle. He reflects on this problem in an interview, wonderfully showing how to solve the issue using first-principles. Please pay close attention to what he says.
First-principles is a physics way of looking at the world. You boil things down to the most fundamental truths and say, "What are we sure is true?" and then reason up from there. Somebody can say "Battery packs are really expensive and that's just the way they will always be. Historically, it has cost $600 per kilowatt-hour, and so it’s not going to be much better than that in the future." With first principles, you block the noise and look for the absolute facts about this problem. Fact #1: A battery is made up of cobalt, nickel, aluminium, carbon, and some polymers for separation. Fact #2: If we buy every material on the London Metal Exchange, it’s going to cost us $80 per kilowatt-hour more or less. Now you just need to think of clever ways to take those materials and combine them into the shape of a battery cell. As a result you can have batteries that are much cheaper than anyone realises.
Notice how Elon starts by boiling down the problem to its bare facts — principles first. Doing so, he’s immediately able to pinpoint an area in which his team can adjust to make cheaper batteries.
That’s it. That’s how first-principles can solve tiny to huge problems: from prepping a dish to scaling a multibillion-dollar company.
I’ll cover more real-world examples — from areas that might interest you — So you can clearly understand how to use this exercise in your life. | https://medium.com/age-of-awareness/this-mental-exercise-can-actually-make-you-smarter-8997f2d7f5f1 | ['Younes Henni'] | 2020-12-29 20:32:40.080000+00:00 | ['Education', 'Learning', 'Productivity', 'Creativity', 'Self Improvement'] |
Infrastructure as Code only works as Code… | The popular Serverless framework allows you to use YAML or JSON to describe and version your infrastructure configuration. The serverless.yml or serverless.json file stored at the project root is used by default to provision any project using this framework.
This is no revolution in the IaC frameworks world, where most of them rely on declarative file syntaxes to describe infrastructure.
However, a lesser known and recently introduced feature allows you to use a serverless.js or serverless.ts file as the default configuration file. In this article, I’ll describe the advantages of using such a format to build serverless applications faster and with a better developer experience.
TL;DR
You can use a serverless.ts service file in the Serverless framework. You can benefit from:
Types: you can use service file definition types to get quick feedback on available properties for each block. However, the framework is not written in TypeScript, and so the definitions can sometimes be outdated. Regular community maintenance is required.
Imports: JavaScript file imports allow you to split the definition file into multiple files. This means you can have fine-grained function block definitions right next to your handler’s codebase.
: JavaScript file imports allow you to split the definition file into multiple files. This means you can have fine-grained function block definitions right next to your handler’s codebase. References: you can write custom-made functions to build AWS intrinsic syntax and navigate easily through infrastructure definitions, dependency by dependency, using the native click and follow features of your IDE.
TYPING
The TypeScript and Serverless communities joined forces to lay the groundwork for Serverless framework types, which allow direct feedback when writing a Serverless service file.
If you want to give it a try, just run serverless create --template aws-nodejs-typescript in a new directory (and make sure you’re using at least version 1.75 of Serverless framework to benefit from serverless.ts service file definition). The new Serverless type encloses all available configuration keys for the framework — which means you don’t need to go through the full serverless.yml configuration example to get the right syntax.
VSCode type suggestion for Serverless service file
As the framework evolves, all the definitions are maintained by the community within the DefinitelyTyped repository. Unlike the recently-added JSON schema validation, which was made directly within the framework source code, the definitions require constant improvement to follow the evolution of the service file definitions. Please report any issues you encounter, to make sure this definition stays up to date.
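To make this concrete, here is a minimal sketch of what a typed service file can look like. The serverless/aws import path comes from those community-maintained definitions and may change between versions, so treat the exact shape as illustrative rather than canonical.

```ts
// serverless.ts, a minimal sketch. The `serverless/aws` module is provided by the
// community-maintained @types/serverless definitions and may differ between versions.
import type { Serverless } from 'serverless/aws';

const serverlessConfiguration: Serverless = {
  service: { name: 'my-typed-service' }, // illustrative service name
  frameworkVersion: '>=1.75.0',
  provider: {
    name: 'aws',
    runtime: 'nodejs12.x',
  },
  functions: {
    hello: {
      handler: 'handler.hello',
      events: [{ http: { method: 'get', path: 'hello' } }],
    },
  },
};

module.exports = serverlessConfiguration;
```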
IMPORTS
JSON and YAML file formats cannot be split into multiple files. In order to avoid large service definition files when using those formats, Serverless came up with a dedicated variable resolver, allowing the use of the ${file(filepath)} function to import content from other files.
This can be leveraged, for example, for function definitions. An all-in-one serverless.yml file containing the entire service definition:
can be split into multiple files:
The problem with this approach is that you rely on the string definition of the other files to be accurate. This can result in bugs when a file path contains a typo, or when the source file is moved. It also impacts developer experience, because those links cannot be resolved by your IDE of preference.
Switching to JavaScript and TypeScript removes this issue. Dependent files are imported, and their links are usually dynamically updated whenever you change your project directory structure, thanks to your IDE. Any developer can click and follow referenced files to dive into specific service configuration blocks.
Using the power of Javascript imports, we can now keep function definition much closer to function handler code for complex Serverless applications :
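The original embedded example is not reproduced here, but a minimal sketch of the pattern could look like the following; the file path, event, and names are illustrative.

```ts
// functions/hello.ts, an illustrative sketch: the Lambda handler and its Serverless
// configuration live side by side in the same file.
import type { APIGatewayProxyHandler } from 'aws-lambda';

// Handler code
export const handler: APIGatewayProxyHandler = async () => ({
  statusCode: 200,
  body: JSON.stringify({ message: 'hello' }),
});

// Function configuration, imported by serverless.ts and placed in its `functions` block
export const helloConfig = {
  handler: 'functions/hello.handler',
  events: [{ http: { method: 'get', path: 'hello' } }],
};
```

In serverless.ts, the functions block then just becomes functions: { hello: helloConfig }, so the service file stays small while each feature folder owns its own configuration.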
As you see above, both function configuration and handler are within the same file. It considerably speeds up development, as no navigation is required between files to develop an end-to-end feature. Both execution context and instructions are located within the same easy-to-access file.
REFERENCES
You often need to inject specific attributes of provisioned infrastructure into other pieces of infrastructure within your application. For example, your Lambda handler’s code may rely on the DynamoDB table name to do the required feature.
The usual way of doing this is to use AWS native intrinsic functions throughout your serverless.ts configuration file. You inject AWS CloudFormation native syntax blocks into your service file’s resources property, and those services are provisioned together with your functions. You can then use the Ref intrinsic function to inject the generated DynamoDB table as one of your lambda’s environment variables in create.ts .
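To make the problem concrete, here is an illustrative excerpt of that usual approach; the table definition and names are assumptions rather than the article’s original example.

```ts
// serverless.ts (excerpt), sketching the "usual way": a DynamoDB table declared under
// `resources`, then referenced by a bare logical-id string from a function's environment.
const serverlessConfiguration = {
  // ...service, provider, plugins omitted for brevity
  resources: {
    Resources: {
      MyTable: {
        Type: 'AWS::DynamoDB::Table',
        Properties: {
          BillingMode: 'PAY_PER_REQUEST',
          AttributeDefinitions: [{ AttributeName: 'id', AttributeType: 'S' }],
          KeySchema: [{ AttributeName: 'id', KeyType: 'HASH' }],
        },
      },
    },
  },
  functions: {
    create: {
      handler: 'functions/create.handler',
      environment: {
        // 'MyTable' is just a string here: nothing ties it back to the declaration above
        TABLE_NAME: { Ref: 'MyTable' },
      },
    },
  },
};

module.exports = serverlessConfiguration;
```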
The problem with this syntax is the use of Ref with a string representing the provisioned DynamoDB table. There is no way for a developer going through the create function to easily trace back which resource is referenced in the handler’s environment. It is also quite easy to accidentally change this value without noticing its impact. You will not get any feedback saying you referenced a non-existing resource until you actually deploy to AWS.
To make it more developer friendly, you can write a small service that handles the AWS intrinsic function syntax for you. This service uses both your whole resources value and the single AWS CloudFormation resource itself to generate the correct Ref output syntax:
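A possible sketch of such a helper follows. It is not part of the framework; the name getRef and its error handling are choices made here for illustration.

```ts
// resourceReference.ts, an illustrative helper: it looks the resource up in the
// resources object by value and returns the matching { Ref } block.
export const getRef = (
  resources: Record<string, unknown>,
  resource: unknown,
): { Ref: string } => {
  const match = Object.entries(resources).find(([, value]) => value === resource);
  if (!match) {
    throw new Error('Resource not found in the provided resources object');
  }
  return { Ref: match[0] };
};
```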
You can then use a much more natural syntax in your function configuration to let everyone know which definition you’re actually using the name of. It is also much easier to know which functions actually depends on the specified AWS resource, since your IDE can actively tell you all usage of the variable representing the MyTable resource.
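And here is how a function definition could consume that helper; the module paths and export names are hypothetical.

```ts
// Illustrative usage: the function configuration references the table by value.
import { getRef } from './resourceReference';              // the helper sketched above
import { myTable, dynamoDbResources } from './resources/dynamodb'; // hypothetical module exporting the table

export const create = {
  handler: 'functions/create.handler',
  environment: {
    // Click-to-follow now works: your IDE jumps straight to the myTable declaration
    TABLE_NAME: getRef(dynamoDbResources, myTable),
  },
};
```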
CONCLUSION
Other IaC frameworks, such as the AWS CDK, have started using configuration syntax which allows a more functional definition of infrastructure.
The use of JavaScript and TypeScript objects to power definitions of the Serverless framework opens up a new world of possibilities — with fewer text-defined options, fewer errors, and making it easier for developers to know the extent of usage of a specific infrastructure block. | https://medium.com/serverless-transformation/infrastructure-as-code-only-works-as-code-a8f0072b29cf | ['Frédéric Barthelet'] | 2020-10-15 11:50:57.594000+00:00 | ['Infrastructure As Code', 'Serverless', 'AWS', 'Typescript', 'Serverless Architecture'] |
How Technology is Hijacking Your Mind — from a Magician and Google Design Ethicist | Estimated reading time: 15 minutes.
“It’s easier to fool people than to convince them that they’ve been fooled.” — Unknown.
I’m an expert on how technology hijacks our psychological vulnerabilities. That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked.
When using technology, we often focus optimistically on all the things it does for us. But I want to show you where it might do the opposite.
Where does technology exploit our minds’ weaknesses?
I learned to think this way when I was a magician. Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it. Once you know how to push people’s buttons, you can play them like a piano.
That’s me performing sleight of hand magic at my mother’s birthday party
And this is exactly what product designers do to your mind. They play your psychological vulnerabilities (consciously and unconsciously) against you in the race to grab your attention.
I want to show you how they do it.
Hijack #1: If You Control the Menu, You Control the Choices
Western Culture is built around ideals of individual choice and freedom. Millions of us fiercely defend our right to make “free” choices, while we ignore how those choices are manipulated upstream by menus we didn’t choose in the first place.
This is exactly what magicians do. They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. I can’t emphasize enough how deep this insight is.
When people are given a menu of choices, they rarely ask:
“what’s not on the menu?”
“why am I being given these options and not others?”
“do I know the menu provider’s goals?”
“is this menu empowering for my original need, or are the choices actually a distraction?” (e.g. an overwhelming array of toothpastes)
How empowering is this menu of choices for the need, “I ran out of toothpaste”?
For example, imagine you’re out with friends on a Tuesday night and want to keep the conversation going. You open Yelp to find nearby recommendations and see a list of bars. The group turns into a huddle of faces staring down at their phones comparing bars. They scrutinize the photos of each, comparing cocktail drinks. Is this menu still relevant to the original desire of the group?
It’s not that bars aren’t a good choice, it’s that Yelp substituted the group’s original question (“where can we go to keep talking?”) with a different question (“what’s a bar with good photos of cocktails?”) all by shaping the menu.
Moreover, the group falls for the illusion that Yelp’s menu represents a complete set of choices for where to go. While looking down at their phones, they don’t see the park across the street with a band playing live music. They miss the pop-up gallery on the other side of the street serving crepes and coffee. Neither of those show up on Yelp’s menu.
Yelp subtly reframes the group’s need “where can we go to keep talking?” in terms of photos of cocktails served.
The more choices technology gives us in nearly every domain of our lives (information, events, places to go, friends, dating, jobs) — the more we assume that our phone is always the most empowering and useful menu to pick from. Is it?
The “most empowering” menu is different than the menu that has the most choices. But when we blindly surrender to the menus we’re given, it’s easy to lose track of the difference:
“Who’s free tonight to hang out?” becomes a menu of most recent people who texted us (who we could ping).
“What’s happening in the world?” becomes a menu of news feed stories.
“Who’s single to go on a date?” becomes a menu of faces to swipe on Tinder (instead of local events with friends, or urban adventures nearby).
“I have to respond to this email.” becomes a menu of keys to type a response (instead of empowering ways to communicate with a person).
All user interfaces are menus. What if your email client gave you empowering choices of ways to respond, instead of “what message do you want to type back?” (Design by Tristan Harris)
When we wake up in the morning and turn our phone over to see a list of notifications — it frames the experience of “waking up in the morning” around a menu of “all the things I’ve missed since yesterday.” (for more examples, see Joe Edelman’s Empowering Design talk)
A list of notifications when we wake up in the morning — how empowering is this menu of choices when we wake up? Does it reflect what we care about? (from Joe Edelman’s Empowering Design Talk)
By shaping the menus we pick from, technology hijacks the way we perceive our choices and replaces them with new ones. But the closer we pay attention to the options we’re given, the more we’ll notice when they don’t actually align with our true needs.
Hijack #2: Put a Slot Machine In a Billion Pockets
If you’re an app, how do you keep people hooked? Turn yourself into a slot machine.
The average person checks their phone 150 times a day. Why do we do this? Are we making 150 conscious choices?
How often do you check your email per day?
One major reason why is the #1 psychological ingredient in slot machines: intermittent variable rewards.
If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.
Does this effect really work on people? Yes. Slot machines make more money in the United States than baseball, movies, and theme parks combined. Relative to other kinds of gambling, people get ‘problematically involved’ with slot machines 3–4x faster according to NYU professor Natasha Dow Schull, author of Addiction by Design.
Image courtesy of Jopwell
But here’s the unfortunate truth — several billion people have a slot machine their pocket:
When we pull our phone out of our pocket, we’re playing a slot machine to see what notifications we got.
When we pull to refresh our email, we’re playing a slot machine to see what new email we got.
When we swipe down our finger to scroll the Instagram feed, we’re playing a slot machine to see what photo comes next.
When we swipe faces left/right on dating apps like Tinder, we’re playing a slot machine to see if we got a match.
When we tap the # of red notifications, we’re playing a slot machine to see what’s underneath.
Apps and websites sprinkle intermittent variable rewards all over their products because it’s good for business.
But in other cases, slot machines emerge by accident. For example, there is no malicious corporation behind all of email who consciously chose to make it a slot machine. No one profits when millions check their email and nothing’s there. Neither did Apple and Google’s designers want phones to work like slot machines. It emerged by accident.
But now companies like Apple and Google have a responsibility to reduce these effects by converting intermittent variable rewards into less addictive, more predictable ones with better design. For example, they could empower people to set predictable times during the day or week for when they want to check “slot machine” apps, and correspondingly adjust when new messages are delivered to align with those times.
Hijack #3: Fear of Missing Something Important (FOMSI)
Another way apps and websites hijack people’s minds is by inducing a “1% chance you could be missing something important.”
If I convince you that I’m a channel for important information, messages, friendships, or potential sexual opportunities — it will be hard for you to turn me off, unsubscribe, or remove your account — because (aha, I win) you might miss something important:
This keeps us subscribed to newsletters even after they haven’t delivered recent benefits (“what if I miss a future announcement?”)
This keeps us “friended” to people with whom we haven’t spoke in ages (“what if I miss something important from them?”)
This keeps us swiping faces on dating apps, even when we haven’t even met up with anyone in a while (“what if I miss that one hot match who likes me?”)
This keeps us using social media (“what if I miss that important news story or fall behind what my friends are talking about?”)
But if we zoom into that fear, we’ll discover that it’s unbounded: we’ll always miss something important at any point when we stop using something.
There are magic moments on Facebook we’ll miss by not using it for the 6th hour (e.g. an old friend who’s visiting town right now).
There are magic moments we’ll miss on Tinder (e.g. our dream romantic partner) by not swiping our 700th match.
There are emergency phone calls we’ll miss if we’re not connected 24/7.
But living moment to moment with the fear of missing something isn’t how we’re built to live.
And it’s amazing how quickly, once we let go of that fear, we wake up from the illusion. When we unplug for more than a day, unsubscribe from those notifications, or go to Camp Grounded — the concerns we thought we’d have don’t actually happen.
We don’t miss what we don’t see.
The thought, “what if I miss something important?” is generated in advance of unplugging, unsubscribing, or turning off — not after. Imagine if tech companies recognized that, and helped us proactively tune our relationships with friends and businesses in terms of what we define as “time well spent” for our lives, instead of in terms of what we might miss.
Hijack #4: Social Approval
Easily one of the most persuasive things a human being can receive.
We’re all vulnerable to social approval. The need to belong, to be approved or appreciated by our peers is among the highest human motivations. But now our social approval is in the hands of tech companies.
When I get tagged by my friend Marc, I imagine him making a conscious choice to tag me. But I don’t see how a company like Facebook orchestrated his doing that in the first place.
Facebook, Instagram or SnapChat can manipulate how often people get tagged in photos by automatically suggesting all the faces people should tag (e.g. by showing a box with a 1-click confirmation, “Tag Tristan in this photo?”).
So when Marc tags me, he’s actually responding to Facebook’s suggestion, not making an independent choice. But through design choices like this, Facebook controls the multiplier for how often millions of people experience their social approval on the line.
Facebook uses automatic suggestions like this to get people to tag more people, creating more social externalities and interruptions.
The same happens when we change our main profile photo — Facebook knows that’s a moment when we’re vulnerable to social approval: “what do my friends think of my new pic?” Facebook can rank this higher in the news feed, so it sticks around for longer and more friends will like or comment on it. Each time they like or comment on it, we’ll get pulled right back.
Everyone innately responds to social approval, but some demographics (teenagers) are more vulnerable to it than others. That’s why it’s so important to recognize how powerful designers are when they exploit this vulnerability.
Hijack #5: Social Reciprocity (Tit-for-tat)
You do me a favor — I owe you one next time.
You say, “thank you”— I have to say “you’re welcome.”
You send me an email— it’s rude not to get back to you.
You follow me — it’s rude not to follow you back. (especially for teenagers)
We are vulnerable to needing to reciprocate others’ gestures. But as with Social Approval, tech companies now manipulate how often we experience it.
In some cases, it’s by accident. Email, texting and messaging apps are social reciprocity factories. But in other cases, companies exploit this vulnerability on purpose.
LinkedIn is the most obvious offender. LinkedIn wants as many people creating social obligations for each other as possible, because each time they reciprocate (by accepting a connection, responding to a message, or endorsing someone back for a skill) they have to come back to linkedin.com where they can get people to spend more time.
Like Facebook, LinkedIn exploits an asymmetry in perception. When you receive an invitation from someone to connect, you imagine that person making a conscious choice to invite you, when in reality, they likely unconsciously responded to LinkedIn’s list of suggested contacts. In other words, LinkedIn turns your unconscious impulses (to “add” a person) into new social obligations that millions of people feel obligated to repay. All while they profit from the time people spend doing it.
Imagine millions of people getting interrupted like this throughout their day, running around like chickens with their heads cut off, reciprocating each other — all designed by companies who profit from it.
Welcome to social media.
After accepting an endorsement, LinkedIn takes advantage of your bias to reciprocate by offering *four* additional people for you to endorse in return.
Imagine if technology companies had a responsibility to minimize social reciprocity. Or if there was an independent organization that represented the public’s interests — an industry consortium or an FDA for tech — that monitored when technology companies abused these biases?
Hijack #6: Bottomless bowls, Infinite Feeds, and Autoplay
YouTube autoplays the next video after a countdown
Another way to hijack people is to keep them consuming things, even when they aren’t hungry anymore.
How? Easy. Take an experience that was bounded and finite, and turn it into a bottomless flow that keeps going.
Cornell professor Brian Wansink demonstrated this in his study showing you can trick people into keep eating soup by giving them a bottomless bowl that automatically refills as they eat. With bottomless bowls, people eat 73% more calories than those with normal bowls and underestimate how many calories they ate by 140 calories.
Tech companies exploit the same principle. News feeds are purposely designed to auto-refill with reasons to keep you scrolling, and purposely eliminate any reason for you to pause, reconsider or leave.
It’s also why video and social media sites like Netflix, YouTube or Facebook autoplay the next video after a countdown instead of waiting for you to make a conscious choice (in case you won’t). A huge portion of traffic on these websites is driven by autoplaying the next thing.
Facebook autoplays the next video after a countdown
Tech companies often claim that “we’re just making it easier for users to see the video they want to watch” when they are actually serving their business interests. And you can’t blame them, because increasing “time spent” is the currency they compete for.
Instead, imagine if technology companies empowered you to consciously bound your experience to align with what would be “time well spent” for you. Not just bounding the quantity of time you spend, but the qualities of what would be “time well spent.”
Hijack #7: Instant Interruption vs. “Respectful” Delivery
Companies know that messages that interrupt people immediately are more persuasive at getting people to respond than messages delivered asynchronously (like email or any deferred inbox).
Given the choice, Facebook Messenger (or WhatsApp, WeChat or SnapChat for that matter) would prefer to design their messaging system to interrupt recipients immediately (and show a chat box) instead of helping users respect each other’s attention.
In other words, interruption is good for business.
It’s also in their interest to heighten the feeling of urgency and social reciprocity. For example, Facebook automatically tells the sender when you “saw” their message, instead of letting you avoid disclosing whether you read it (“now that you know I’ve seen the message, I feel even more obligated to respond.”)
By contrast, Apple more respectfully lets users toggle “Read Receipts” on or off.
The problem is, maximizing interruptions in the name of business creates a tragedy of the commons, ruining global attention spans and causing billions of unnecessary interruptions each day. This is a huge problem we need to fix with shared design standards (potentially, as part of Time Well Spent).
Hijack #8: Bundling Your Reasons with Their Reasons
Another way apps hijack you is by taking your reasons for visiting the app (to perform a task) and making them inseparable from the app’s business reasons (maximizing how much we consume once we’re there).
For example, in the physical world of grocery stores, the #1 and #2 most popular reasons to visit are pharmacy refills and buying milk. But grocery stores want to maximize how much people buy, so they put the pharmacy and the milk at the back of the store.
In other words, they make the thing customers want (milk, pharmacy) inseparable from what the business wants. If stores were truly organized to support people, they would put the most popular items in the front.
Tech companies design their websites the same way. For example, when you want to look up a Facebook event happening tonight (your reason), the Facebook app doesn’t allow you to access it without first landing on the news feed (their reasons), and that’s on purpose. Facebook wants to convert every reason you have for using Facebook into their reason, which is to maximize the time you spend consuming things.
Instead, imagine if …
Twitter gave you a separate way to post a tweet than having to see their news feed.
Facebook gave a separate way to look up Facebook Events going on tonight, without being forced to use their news feed.
Facebook gave you a separate way to use Facebook Connect as a passport for creating new accounts on 3rd party apps and websites, without being forced to install Facebook’s entire app, news feed and notifications.
In a Time Well Spent world, there is always a direct way to get what you want separately from what businesses want. Imagine a digital “bill of rights” outlining design standards that forced the products used by billions of people to let them navigate directly to what they want without needing to go through intentionally placed distractions.
Imagine if web browsers empowered you to navigate directly to what you want — especially for sites that intentionally detour you toward their reasons.
Hijack #9: Inconvenient Choices
We’re told that it’s enough for businesses to “make choices available.”
“If you don’t like it you can always use a different product.”
“If you don’t like it, you can always unsubscribe.”
“If you’re addicted to our app, you can always uninstall it from your phone.”
Businesses naturally want to make the choices they want you to make easier, and the choices they don’t want you to make harder. Magicians do the same thing. You make it easier for a spectator to pick the thing you want them to pick, and harder to pick the thing you don’t.
For example, NYTimes.com lets you “make a free choice” to cancel your digital subscription. But instead of just doing it when you hit “Cancel Subscription,” they send you an email with information on how to cancel your account by calling a phone number that’s only open at certain times.
NYTimes claims it’s giving a free choice to cancel your account
Instead of viewing the world in terms of availability of choices, we should view the world in terms of friction required to enact choices. Imagine a world where choices were labeled with how difficult they were to fulfill (like coefficients of friction) and there was an independent entity — an industry consortium or non-profit — that labeled these difficulties and set standards for how easy navigation should be.
Hijack #10: Forecasting Errors, “Foot in the Door” strategies
Facebook promises an easy choice to “See Photo.” Would we still click if it gave the true price tag?
Lastly, apps can exploit people’s inability to forecast the consequences of a click.
People don’t intuitively forecast the true cost of a click when it’s presented to them. Sales people use “foot in the door” techniques by asking for a small innocuous request to begin with (“just one click to see which tweet got retweeted”) and escalate from there (“why don’t you stay awhile?”). Virtually all engagement websites use this trick.
Imagine if web browsers and smartphones, the gateways through which people make these choices, were truly watching out for people and helped them forecast the consequences of clicks (based on real data about what benefits and costs those clicks actually had)?
That’s why I add “Estimated reading time” to the top of my posts. When you put the “true cost” of a choice in front of people, you’re treating your users or audience with dignity and respect. In a Time Well Spent internet, choices could be framed in terms of projected cost and benefit, so people were empowered to make informed choices by default, not by doing extra work.
TripAdvisor uses a “foot in the door” technique by asking for a single click review (“How many stars?”) while hiding the three page survey of questions behind the click.
Summary And How We Can Fix This
Are you upset that technology hijacks your agency? I am too. I’ve listed a few techniques but there are literally thousands. Imagine whole bookshelves, seminars, workshops and trainings that teach aspiring tech entrepreneurs techniques like these. Imagine hundreds of engineers whose job every day is to invent new ways to keep you hooked.
The ultimate freedom is a free mind, and we need technology that’s on our team to help us live, feel, think and act freely.
We need our smartphones, notifications screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first. People’s time is valuable. And we should protect it with the same rigor as privacy and other digital rights.
Tristan Harris was a Design Ethicist at Google until 2016 where he studied how technology restructures two billion people’s attention, wellbeing and behavior. For more resources on Time Well Spent and the Center for Humane Technology, see http://humanetech.com.
UPDATE: The first version of this post lacked acknowledgements to those who inspired my thinking over many years including Joe Edelman, Aza Raskin, Raph D’Amico, Shaun Martin, Jonathan Harris and Damon Horowitz.
My thinking on menus and choicemaking is deeply rooted in Joe Edelman’s work on Human Values and Choicemaking. | https://medium.com/thrive-global/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3 | ['Tristan Harris'] | 2019-10-16 01:30:35.132000+00:00 | ['Business', 'Startup', 'Time Well Spent', 'Psychology', 'Tech'] |
What is this Quantum Ledger Database That We Keep Hearing About? | Image Source: Aneesh Nair
Quantum Ledger Database (QLDB) is an append-only database (part SQL, part NoSQL) that provides an immutable, transparent, and cryptographically verifiable transaction log owned by a central authority. Because it follows a document-oriented data model, it can store large amounts of semi-structured data. At the same time, it uses SQL-like structures (tables and rows) and a SQL-compatible query language (PartiQL), so existing SQL developers can query and manage data in familiar ways.
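To make the PartiQL point concrete, here is a minimal sketch using pyqldb, the Amazon QLDB driver for Python. The ledger name ("demo-ledger") and the Vehicles table are placeholders invented for illustration; treat the snippet as a sketch rather than a recipe.

from pyqldb.driver.qldb_driver import QldbDriver

# Connect to an existing ledger (assumed here to be named "demo-ledger").
driver = QldbDriver(ledger_name="demo-ledger")

# DDL looks like SQL...
driver.execute_lambda(lambda txn: txn.execute_statement("CREATE TABLE Vehicles"))

# ...but rows are schemaless documents, passed as ordinary Python dicts.
vehicle = {"VIN": "1N4AL11D75C109151", "Make": "Audi", "Model": "A5", "Year": 2021}
driver.execute_lambda(
    lambda txn: txn.execute_statement("INSERT INTO Vehicles ?", vehicle)
)

Each execute_lambda call runs as one ACID transaction against the ledger, which is what feeds the journal described in the sections below.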
QLDB vs. SQL and NoSQL Databases:
Traditional Databases:
Traditional databases (SQL and NoSQL) store data as tables or JSON documents. Within these databases, data can be modified by anyone who controls the database itself, which leads to data conflicts and manipulation. These databases don't keep track of document history and only store the current state of a document, so they cannot provide audit trails out of the box. Most of them don't encrypt data automatically, so the data files are at risk of being read or modified directly by attackers. And since they aren't built around cryptography, the data cannot be made immutable and there is no way to prove who modified what.
Immutability&Transparency ✘ Audit Logs ✘ Verifiable ✘ Data History ✘
Amazon’s QLDB:
Amazon QLDB is a fully managed ledger database that provides a transparent, immutable, and cryptographically verifiable transaction log owned by a central trusted authority. Amazon QLDB tracks each and every application data change and maintains a complete and verifiable history of changes over time. How did you write this definition so beautifully and clearly? Because I copied it from the AWS documentation. That is the best definition I've come across for QLDB.
AWS is always at the front when it comes to blockchain technology, or any technology in general. Amazon, being a giant itself, knows the importance of data to businesses. It introduced QLDB in 2018 along with Amazon Managed Blockchain.
Features of QLDB:
So, you may ask, why Amazon QLDB?
1. Immutability & Transparency ✓
Amazon QLDB has a built-in immutable journal that stores an accurate and sequenced entry of every data change. The journal is append-only, meaning that data can only be added to a journal and it cannot be overwritten or deleted. This ensures that your stored change history cannot be deleted or modified. Even if you delete the data from your ledger, the change history of that data can still be accessed by reading from the immutable journal.
2. Audit Logs & History ✓
With Amazon QLDB, you can access the entire change history of your application's data. You can query a summary of historical changes as well as the specific details of each transaction. So QLDB provides audit trails out of the box, without any further implementation.
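As a sketch of what that looks like in practice, PartiQL exposes a built-in history() function. Reusing the hypothetical demo-ledger and Vehicles table from earlier, the query below returns every revision of a matching document, with both the user data and QLDB's own metadata (version number, transaction ID, timestamps).

from pyqldb.driver.qldb_driver import QldbDriver

driver = QldbDriver(ledger_name="demo-ledger")

def read_history(txn):
    # history() yields one row per revision: blockAddress, hash, data and metadata.
    cursor = txn.execute_statement(
        "SELECT * FROM history(Vehicles) AS h WHERE h.data.VIN = ?",
        "1N4AL11D75C109151",
    )
    for revision in cursor:
        print(revision)

driver.execute_lambda(read_history)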
3. Verifiable ✓
Amazon QLDB uses cryptography to create a concise summary of your change history. This secure summary, commonly known as a digest, is generated using a cryptographic hash function (SHA-256). The digest acts as proof of your data's change history, allowing you to look back and verify the integrity of your data changes.
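Requesting that digest is a single API call. Here is a hedged boto3 sketch (the ledger name is a placeholder); the returned digest and block address can then be fed into QLDB's verification APIs to prove that a given revision has not been tampered with.

import boto3

qldb = boto3.client("qldb")

# Returns the ledger-wide SHA-256 digest plus the journal block it covers.
response = qldb.get_digest(Name="demo-ledger")
print(response["Digest"])            # 256-bit digest (bytes)
print(response["DigestTipAddress"])  # address of the covering journal block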
4. Highly Scalable ✓
With Amazon QLDB, you don't have to worry about provisioning capacity or configuring read and write limits. You create a ledger, define your tables, and QLDB automatically scales to support the demands of your application. QLDB also lets you monitor operational metrics for your read and write I/Os with CloudWatch.
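Creating the ledger itself reflects that hands-off model: there are no instance sizes or capacity units to pick. A minimal boto3 sketch, again with a placeholder name:

import boto3

qldb = boto3.client("qldb")

# No capacity planning: just a name, a permissions mode and optional deletion protection.
qldb.create_ledger(
    Name="demo-ledger",
    PermissionsMode="STANDARD",  # fine-grained IAM control; "ALLOW_ALL" is the legacy mode
    DeletionProtection=True,
)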
Architecture of QLDB Application:
QLDB Application Architecture
QLDB vs Blockchain:
If you're already familiar with blockchain, you might conclude from the definition that QLDB is somewhat related to it. Yes, it is. It offers the key features of a blockchain ledger database, including immutability, transparency, and a cryptographically verifiable transaction log. However, the most important difference between QLDB and blockchain is that QLDB is a centralized ledger whereas blockchain is a distributed ledger.
Blockchain has shown the potential to change every sector out there, and it is equally useful when it comes to storing data. After all, it is a distributed ledger. It is tempting to assume that traditional databases will eventually make way for more robust blockchain-based ones. However, some businesses simply cannot use a decentralized database: they would have to run a number of blockchain nodes, take on all the complexity involved in building a blockchain network, and get the other organizations in the business consortium to participate in it.
There are certain use cases where an organization (like a bank) doesn't want to share its ledger with any other party and wants to keep track of data on a centralized ledger, while still requiring that data to be immutable, verifiable, and secure. These use cases don't need the complexity of a blockchain network architecture, and QLDB is a perfect fit.
Is QLDB going to kill blockchain? The answer is no. Blockchains have their own unique features — for example, smart contracts that run on the blockchain network take application logic to the next level by running separately from your server. With QLDB (or any traditional database), you will end up writing that application logic in your server code.
Image Source: Chirag dhul
“QLDB is a centralized ledger database powered by blockchain-style cryptographic verification”
With the arrival of QLDB and ProvenDB (a story for another day), I believe we are on the right path to offering enterprises a way to take advantage of ledger-like technology without the complexity.
Selection of Database Technology:
Applications of QLDB:
QLDB is a perfect fit for applications that need a scalable, centralized ledger database to record the full transaction history over time with added cryptographic security.
Blockchain can be applied to many challenges in supply chains, finance, and healthcare, such as complicated record-keeping and product tracking, as a less corruptible and better-automated alternative to centralized databases. However, many organizations don't want to share their ledger with other participants (as a blockchain requires), and they don't want a complex network architecture with channels for privacy; instead, they can use QLDB.
Use cases:
1. Banking and Finance:
Banks often need a centralized ledger-like application to keep track of critical data, such as credit and debit transactions across customer bank accounts. Instead of building a custom database that has complex auditing functionality, or using blockchain, banks can use QLDB to easily store an accurate and complete record of all financial transactions.
2. Supply chains:
Manufacturing companies often need to track the full manufacturing history of a product as well as records of its movements throughout the supply chain. A ledger database can be used to record the history of each transaction and provide details of every individual batch of the product manufactured at a facility. In the case of a product recall, manufacturers can use QLDB to easily trace the entire production and distribution lifecycle of a product.
3. Insurance:
Insurance applications often need a way to better track the history of claim transactions. Instead of building complex auditing functionality using relational databases, insurance companies can use QLDB to accurately maintain the history of claims over their entire lifetime, and whenever a potential conflict arises, QLDB can also help cryptographically verify the integrity of the claims data, making the application resilient against data entry errors and manipulation.
In the next article, let's dive deep into QLDB setup and test out a sample application on top of QLDB. Are you interested? Follow us!
Work with us!
Impressed with the features and use cases of QLDB? Want to power your application or project with QLDB? Let us know. We at Devopsinternational (an emerging-technologies company focused on machine learning, blockchain, and AWS) deliver value to customers and society with software craftsmanship. We have been helping social and environmental startups validate their ideas through our product development and technology expertise. You can find more about our services on our website. | https://medium.com/devopsinternationalbv/what-is-this-quantum-ledger-database-that-im-keep-hearing-about-a3c16c35c799 | ['Salman Dabbakuti'] | 2020-06-23 15:04:24.870000+00:00 | ['Cloud Computing', 'Blockchain', 'AWS', 'Programming', 'Database'] |
How ‘& Sons’ Became a Retail Trope — and the Cruelest Lie in Branding | How ‘& Sons’ Became a Retail Trope — and the Cruelest Lie in Branding
Family businesses are dying. Don’t let this trend fool you.
In 2015, the British government was forced to pay £8.8 million to Taylor & Sons when the Welsh engineering firm went into liquidation, and in so doing, laid off its 250-person team. On the surface, that might sound weird. After all, why would a government pay out a seven-figure sum to a midsize firm that went bankrupt?
In short, it all came down to a typo, a complex business bureaucracy and the odd practice of naming businesses.
The story of Taylor & Sons’ big payout began in 2009, after the government’s official registrar for all companies in the U.K., Companies House, mistook Taylor & Sons — at that point, a flourishing firm that had been operating for 134 years — for Taylor & Son Limited, a body shop based in Manchester, which had reported plans to shut down. The mix-up meant the British government had shut down all of Taylor & Sons’ accounts, bank transactions and U.K.-based trusts. Contracts were lost, staff didn’t get paid and the firm’s credit agreements with suppliers were no longer valid. And so, by the end of 2009, Taylor & Sons (the engineering firm) was forced to shut down for good.
After a six-year court battle, Taylor & Sons owner Philip Davison-Sebry won the payout for irreparable damages in London’s high court. (He’s currently using that money to start a new firm.) For its part, the British government still blames the incident on a clerical error, part of which it attributes to the large number of businesses in the U.K. that have some variation of the “& Son(s)” suffix in their names.
While there’s no exact figure on the number of businesses with “& Son(s)” or “and Son(s)” currently operating in the U.K. — let alone worldwide — a basic search on Companies House yields more than 200,000 current results. Some of these businesses are expected — e.g., established names like James Smith & Sons, a famous London umbrella maker that’s been in operation since the 1830s and is still run by members of the James family, or White and Sons, a real-estate firm first established in 1817 that specializes in luxury country property.
Newer businesses, though — including streetwear brands like ONLY & SONS (est. 2014) or coffee shops like Parker & Sons (est. 2015) — have also used the suffix as part of their branding, despite being new and, most importantly, not being set up as family-owned businesses. Moreover, the resonance of “& Son(s)” isn’t just a clever form of linguistic branding; it’s an aesthetic one too, with labels like the U.K.-based “& Sons” clothing brand building ad campaigns around sons emulating their fathers in “dapper” work clothes, and old industrial tool-shops being used as the backdrop for photoshoots of handmade leather belts and heavy-duty denim shirts — all, of course, shot on 35-millimeter film.
Obviously, the “& Son(s)” suffix isn’t exclusively used for businesses. In fact, the folk-indie band Mumford & Sons probably represents its most successful contemporary usage. In 2016, the band’s founder and lead singer, Marcus Mumford, told NME that in retrospect, he believed the name of his band was “rubbish” and that it had been formed in a hurry. At the same time, he said that the name initially resonated because it felt like an antidote to a changing London. More specifically, he explained that “there’s a bit of trying to stop the demise of London venues,” and so, he wanted to create an “antiquated family business name” that was rare to see in the city.
Mumford might now see his band’s name as slightly ridiculous, but the thinking behind it is still fairly common among new businesses in the U.K.’s main cities like London, Leeds, Birmingham and Manchester. “We wanted a name that sounded like it could stand the test of time,” says Mark Smith, 32, the manager of Hunter & Sons coffee in Bath. The shop was founded in 2012 by Smith and two friends, who felt there was an absence of good, local coffee shops in Bath. Like Mumford, Smith liked the idea of having a place that sounded as though it was an established family name and gave the impression that this small store was part of the city’s history — the shop itself is small and intimate with a rustic-themed decor. “We didn’t want to give it a weird modern name,” Mark adds, saying that while most of his customers are young, either university students or young professionals, the city itself has a sizeable number of people around retirement age who might have been put off by a modern-sounding, hipster name.
Smith isn’t sure whether the shop will last generations though. He and his business partners are unmarried, and none of them have children, let alone sons. But to him, “Hunter & Sons” isn’t about their legacy, it’s more about creating a place where residents can feel comfortable. “We want to make a place where you can come and work, meet your friends, or if you’re young parents, where you can take your kids,” he says. “So the name is reflective of the values we want to embody and represents a family-orientated place that gives back to the community. After all, isn’t that what all the old family businesses did?”
Curiously, this attitude isn’t shared by many men who run and work in businesses that were originally set up as father-son enterprises. Outside of a residential building site in South East London, I meet one of them — 50-year-old Phil Jenkins — in a local cafe for a cup of tea. Jenkins, wearing a dusty, torn grey T-shirt underneath a fluorescent yellow jacket and black hard-hat, works as a construction manager for Jenkins & Sons, a firm his father set up shortly after World War II.
Jenkins is proud of keeping his family’s company afloat — and of continuing the legacy of his father, Michael, who passed away in 2015. He worries, though, that he’ll be the last member of his family to continue the business. His son plans to study law at university, with his eyes set on a corporate job in the city. More existentially still, high operational costs and a decreasing workload have forced Jenkins to downsize. His business’ story is a familiar one, especially in major cities like London.
When I ask Jenkins about the name “Jenkins & Sons,” he smiles, saying that when the name was chosen, there was a clear plan in mind. Its logo hasn’t changed since 1960, a simple pattern of a large triangle with a smaller triangle inside. It has no website or social media presence, let alone any kind of vintage chic-aesthetic. “We’re a family of laborers and hard grafters,” he says. “We worked hard to set up this business. It took a lot of savings. My father had to work factory jobs and then work in his free time to save money, all while looking after my mum, me and my sister.” To Jenkins, the existence of Jenkins & Sons is an embodiment of the struggle that working-class families like his have gone through — as well as a statement that the values of “honest, hard work” will continue to future generations.
It’s also a relic of an older time, he says. “Back in the day, joining the family business wasn’t a punishment, but something you should look forward to. I worked with my old man every day after I left school, and he taught me everything I know — how to use my hands, how to fix anything. So [the firm] also shows how strong our relationship was, and how much he trusted me to continue the legacy he made.”
Right now, Jenkins believes the firm has a good five years left, but he emphasises that it’s very much an exception. Friends of his who inherited firms set up by their fathers have “crashed and burned. They’ve gone bankrupt — a lot of them took a hit after the recession when they weren’t able to finance themselves and the banks cut them off.” Maybe more fatally, he believes that moments like the 2008 financial crash probably did a considerable amount of damage to the concept of family-owned business for younger generations. “I know my son saw me struggling during that period,” he says. “I’d come home stressed and angry. I’d shout at him over silly things because I wasn’t sure if the business would last. Maybe he saw all that and thought, I don’t want that stress. I want something more secure. I don’t blame him!”
As Jenkins leaves to get back to work, he says that even if Jenkins & Sons doesn’t last beyond him, he’ll be proud of what it will have left behind, which will at least outlive him — and possibly his son, too. “Foundations, frames, structuring — whatever happens, I can say we were part of that and that our family has helped people have their own familial communities here.”
Needless to say, that legacy will long outlast the antique coffee-shop aesthetic.
Hussein Kesvani is MEL’s U.K./Europe editor. He last wrote about the male business thots of Instagram.
More Hussein: | https://medium.com/mel-magazine/how-sons-became-a-retail-trope-and-the-cruelest-lie-in-branding-aca2e5934611 | ['Hussein Kesvani'] | 2018-07-25 20:11:17.836000+00:00 | ['Marketing', 'Business', 'Branding', 'Startup', 'Parenting'] |
Why Psychopaths Outperform Us | Why Psychopaths Outperform Us
Lessons from The Wisdom of Psychopaths by Kevin Dutton
Too much psychopathy can lock you up behind bars, but a little bit can do wonders.
Author Kevin Dutton actually argues that “regulated psychopathy can have a positive impact on well-being and quality of life” — a life filled with accomplishments and fulfillment.
What Makes a Person a Psychopath?
Contrary to common belief, psychopathy lies on a spectrum, ranging from low to high levels of displayed traits. It's not a black-or-white matter.
In fact, we may all be a little psychopathic.
Psychopathy is commonly described as a personality disorder with a boosted sense of egotism and narcissism. Other traits include “superficial charm, manipulation, fabrication of intricate stories, impulsivity, and emotional poverty.”
On a positive note, psychopaths are goal-oriented, driven, charming, and unbothered by emotional hangovers.
These traits are beneficial to have in many professions, which is probably why psychopaths are more prevalent among business leaders, lawyers, and surgeons compared to those who are criminals.
Psychopaths Are Productivity Experts
Without the burden of emotions, psychopaths are able to strive towards their goal in the most efficient and effective way.
They have an “emotional tunnel vision” that facilitates the discarding of irrelevant and unbeneficial information and instead focusing on the tasks that lead to success.
People with low levels of psychopathy often get caught up in the what ifs. We procrastinate tasks that we don’t want to do. We feel emotionally damaged after traumatic events.
These thoughts cause us to step on the brakes.
In a society where productivity is rewarded, these “normal people” tendencies don’t help; they actually hinder us from achieving our goals, deeming us useless for being sensitive. We are treated as damaged goods.
Spiritually Psychopathic
Further analyzing how psychopaths view the tasks at hand, we can find some overlapping traits between psychopathy and spirituality.
Such traits are: stoicism, mindfulness, fearlessness, mental toughness, creativity, and energy.
Psychopaths are similar to monks, focused on chasing gratification. They see the goal ahead of them and run towards it.
They don’t step on the brake to dwell on all the distractions in life — they have the ruthlessness to focus on the outcomes that fulfill them. | https://medium.com/indian-thoughts/why-psychopaths-outperform-us-24993120d9a8 | ['Project Hbe'] | 2020-11-03 22:27:44.616000+00:00 | ['Productivity', 'Psychology', 'Spirituality', 'Culture', 'Business'] |
The Limits to a Facebook Ad Budget | How Ad Limits Will Work on Facebook
The ad limit consists of tiers based upon the spending from each page: the highest spend in any single month within a rolling 12-month period is weighed against the tier criteria.
Facebook is implementing these tiers to tailor its ad optimization. The machine learning algorithm that serves ads on the platform works out the best placement each time an ad appears in a Facebook user's news feed. Running too many ads at once can hinder that assessment. When marketers run hundreds of campaigns linked to the pages under their administration, each campaign's ads run the risk of competing against one another.
This leads to ads being delivered fewer times, reducing impressions and the opportunity to connect with the intended customer. This reduction of impressions gives Facebook fewer opportunities to learn how to best position the ads. It can cost marketers more of their budget since the ads are run without optimization initially.
The four ad-limit tiers are based on the ad spend for each business page.
The tier for ad limits is expected to impact campaign budgets from large enterprises rather than small businesses.
When the limit is reached, the page won’t be able to run more ads or publish edits to existing ads until the ad volume falls below the limit. Marketers will be able to turn off the ad campaigns to help bring their accounts within the intended tier.
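For teams that manage campaigns programmatically, a rough way to watch ad volume is to count how many ads are live or in review through the Marketing API's Python SDK. Treat the snippet below as a sketch: the access token and ad account ID are placeholders, and the official limit is counted per page rather than per ad account, so the number is only an approximation.

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

# Placeholders: a real access token and the ad account under management.
FacebookAdsApi.init(access_token="EAAB...placeholder-token")
account = AdAccount("act_1234567890")

ads = account.get_ads(fields=["name", "effective_status"])

# Ads that are running or in review are the ones that count toward the cap.
in_flight = [
    ad for ad in ads
    if ad["effective_status"] in ("ACTIVE", "IN_PROCESS", "PENDING_REVIEW")
]
print(f"{len(in_flight)} ads currently counting toward the limit (approximate)")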
Facebook offers a few ways to monitor how campaigns are approaching the ad threshold. Marketers can inspect the ad limit for each Facebook page being managed with the Ad Limits Per Page tool, which is accessed through the Business Manager menu. It displays a table of pages under administration, the number of ads being run by each page, and the ad limit for each page. | https://medium.com/better-marketing/the-limits-to-a-facebook-ad-budget-c70ab1038579 | ['Pierre Debois'] | 2020-11-23 14:57:32.144000+00:00 | ['Marketing', 'Facebook', 'Social Media', 'Ads', 'Anaytics'] |
Managed Kubernetes Services Compared: GKE vs. EKS vs. AKS | Summary
Google Kubernetes Engine (GKE)
By far the easiest-to-use and most feature-rich managed Kubernetes solution. If you have no particular allegiance to any cloud platform and you just want the best Kubernetes experience, look no further.
GKE workloads view in the Google Cloud Console.
The fact that GKE sets the bar for managed Kubernetes is no surprise considering Kubernetes was designed by Google. GKE also had a nearly three-year head-start on its competitors — ample time to mature and gain features.
GKE offers a rich out-of-the-box experience that gives you integrated logging and monitoring, with Google's excellent Stackdriver ops tooling and full visibility into your workloads and resource usage from the GCP web console. GKE's CLI experience also offers you full control over your cluster configuration, making cluster creation and management remarkably simple. Simply put, GKE clusters are production-ready out of the box, with everything you need to immediately start deploying workloads.
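That simplicity carries over when you script cluster creation instead of using the console or gcloud. Below is a hedged sketch with the google-cloud-container client library; the project, zone and cluster name are placeholders, and everything not specified falls back to GKE's defaults.

from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# A handful of fields is enough; GKE fills in sensible defaults for the rest.
operation = client.create_cluster(
    parent="projects/my-project/locations/us-central1-a",  # placeholder project/zone
    cluster=container_v1.Cluster(
        name="demo-cluster",
        initial_node_count=3,
    ),
)
print(operation.status)  # long-running operation; poll until it reports done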
With Google managing so much for you, you lose a little bit of control if you wish to fully customize your cluster. Still, beyond this, GKE is hard to fault. It’s the best managed Kubernetes experience, bar none.
Azure Kubernetes Service (AKS)
A great out-of-the-box experience with powerful development tools and quick Kubernetes updates. It’s the obvious choice for those already in the Microsoft/Azure ecosystem and a strong alternative to GKE for everyone else.
AKS cluster insights view in the Azure Portal.
Although it doesn’t quite reach the heights of GKE, AKS has a great out-of-the-box experience with features like logging, monitoring, and metrics. A new Azure portal feature now gives you full visibility into your cluster workloads, although GKE still offers more comprehensive metrics and functionality. After a series of redesigns, Azure’s portal has gone from being a cluttered and confusing mess to a genuinely pleasant experience. Beyond the much-improved portal, AKS also has a strong CLI experience that gives you comprehensive control over your cluster. Clusters are easy to create and manage and are production-ready.
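Scripted cluster creation is similarly compact. The sketch below uses the azure-mgmt-containerservice package and should be read as an assumption-laden illustration rather than a recipe: the subscription ID, resource group, region and node pool settings are all placeholders, and the exact model fields can vary between SDK versions.

from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

client = ContainerServiceClient(DefaultAzureCredential(), subscription_id="<subscription-id>")

# A managed identity, a DNS prefix and one system node pool are enough for a basic cluster.
poller = client.managed_clusters.begin_create_or_update(
    resource_group_name="demo-rg",
    resource_name="demo-aks",
    parameters={
        "location": "eastus",
        "dns_prefix": "demo-aks",
        "identity": {"type": "SystemAssigned"},
        "agent_pool_profiles": [
            {"name": "nodepool1", "count": 3, "vm_size": "Standard_DS2_v2", "mode": "System"}
        ],
    },
)
print(poller.result().provisioning_state)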
A downside with Azure as a platform is that it's the least reliable of the three primary cloud providers. In terms of percentage uptime and sheer number of outages, Azure lags behind AWS and GCP. This doesn't mean it's unusable — plenty of big companies continue to rely on Azure — but it is something to keep in mind.
Besides some potential downsides, AKS remains a fantastic managed Kubernetes service. Although GKE is the better option for most, Azure has a few primary benefits. If you have an existing presence on Azure or use existing Microsoft tools like 365 or Active Directory, AKS is a natural fit. For everyone else, cheaper pricing with free control planes, fast Kubernetes updates, useful development tooling with VS Code and a seamless serverless compute option all mean that AKS is a strong offering worth considering.
Amazon Elastic Kubernetes Service (EKS)
The weakest Kubernetes offering in terms of feature support, ease-of-use and out-of-the-box experience. Pick it if you must be on AWS or if you want the ability to fully control your Kubernetes cluster.
EKS cluster view in AWS Management Console.
In contrast to the easy-to-use, heavily managed approach GKE and AKS take, Amazon Elastic Kubernetes Service (EKS) leaves you to manage a lot of configuration yourself. You'll spend a lot of time manually configuring IAM roles and policies, as well as installing various pieces of functionality yourself. You don't get any visibility into your cluster or workloads by default. The EKS web interface and CLI are sparse and limited to a handful of operations.
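To make that concrete, here is a hedged boto3 sketch of creating just the EKS control plane. The IAM role ARN and subnet IDs are placeholders you must create and supply yourself, which is exactly the wiring GKE and AKS mostly handle for you.

import boto3

eks = boto3.client("eks")

# You bring your own IAM role and VPC/subnet configuration; EKS will not create them for you.
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",  # placeholder role
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnets
    },
)

# This only provisions the control plane; worker nodes are a separate step,
# e.g. eks.create_nodegroup(...) or a self-managed Auto Scaling group.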
Third-party tools like the terraform-aws-eks Terraform module and the eksctl command-line tool fill in a lot of the frustrating gaps with EKS’s management experience. Both automate and abstract a fair bit of cluster creation and management complexity. eksctl , specifically, provides the fully-featured CLI experience that AWS does not give you. However, even these tools can only go so far. Because this functionality isn’t native to EKS, even if these tools abstract some complexity, you’re still ultimately responsible for maintaining it.
EKS does work very well if you want more control over your cluster. By being relatively hands-off on the management front, you also get a clean slate to fully customize whatever you need to, if you’re so inclined. EKS is also the only service here with bare metal node support and support for bringing your own machine images.
Ultimately, EKS’s greatest strength is that it’s an AWS service. It lives within one of the strongest and most mature cloud platforms with rock-solid reliability, a wide range of very popular services and a massive developer community. | https://medium.com/better-programming/managed-kubernetes-services-compared-gke-vs-eks-vs-aks-df1ecb22bba0 | ['Bharat Arimilli'] | 2020-12-11 03:59:59.589000+00:00 | ['Programming', 'Kubernetes', 'Azure', 'AWS', 'Google Cloud Platform'] |
The Scam Catching LinkedIn Users Off Guard — The Fast Path to 1 Million Followers Which Exploits Jobseekers | Here’s How the LinkedIn Scam Works
When a person is desperate to find a job they’re willing to try anything. How do I know? I was that guy last year. I thought it would be easy to find my next job. I wasn’t prepared for the overwhelming rejection and ghosting I faced.
After a while, your brain becomes a mashed potato. You start seeing jobs where jobs don’t exist.
LinkedIn wannabe influencers take advantage of these insecurities.
In order to build a big following on LinkedIn you need three things:
1) Likes 2) Comments 3) Followers.
It doesn’t matter where these metrics come from. All that matters is you get as much engagement as possible. Social media engagement has become a form of slave labor.
Wannabe influencers get you to like or comment on their post by promising the following:
You’ll get a job by leaving a “like.” (Or even a promotion.)
You’ll meet a recruiter if you leave a comment.
You’ll find more customers for your pandemic-stricken business by sharing an unrelated social media post you didn’t create.
If people think they can get something for free by engaging with your content then they’ll do anything to participate. It takes half a second to like a post. It takes 30 seconds to leave a comment.
It’s like the LinkedIn lottery.
You might get a six-figure job or you might get absolutely nothing. But the price to enter the competition is so small it feels silly not to do it. So, millions of people every month on LinkedIn engage with these scam posts.
1st sign of a scam
The original poster never engages with a single comment.
If the wannabe influencer really wanted to help you find a job they would at least engage with a few comments. They never do because they don’t need to. Their goal is followers and influence — not genuine help.
2nd sign of a scam
The number of comments is almost equal to the number of likes. These posts attract tonnes of comments, which is extremely powerful for the LinkedIn algorithm.
3rd sign of a scam
What you’re offered isn’t tangible. To say you will get a job by leaving a comment is an invisible promise. They don’t tell you how. There is no strategy. They don’t explain what success looks like or what the steps are.
4th sign of a scam
They never give examples of success stories.
You never see these wannabe influencers giving examples of real humans who won the LinkedIn lottery and got a job by liking their post. That’s because the winners don’t exist. We know deep down that finding a job goes well beyond the comments section of a LinkedIn post.
Genuine connection leads to job opportunities, not a like or comment. | https://medium.com/better-marketing/the-scam-catching-linkedin-users-off-guard-the-fast-path-to-1-million-followers-which-exploits-fa2ca34e3134 | ['Tim Denning'] | 2020-12-28 13:02:35.757000+00:00 | ['Social Media', 'Writing', 'Business', 'LinkedIn', 'Marketing'] |
It’s Time to Stop Charging Children as Adults | It’s Time to Stop Charging Children as Adults
Every child, no matter the crime, should be eligible for redemption and grace
Photo by David Veksler on Unsplash
The other day, a former student of mine, only 14 years old, was charged with first degree murder and is set to be tried as an adult. The kid is only 14 — so one of the first questions I gravitated to was why was he being tried as an adult?
To contextualize children being charged as adults, it’s important to understand the justice system. According to the New York Times Editorial Board, under New York law, children as young as 13 can be tried as adults if they’re charged with second-degree murder or anything worse. The same applies to some felony sex crimes — and all states have differing laws allowing minors to be charged as adults.
Of course, the practice intensified during the "tough on crime" era of the 1970s, 1980s, and early 1990s. A widely publicized case of a 15-year-old Black boy who killed two men on the subway and wounded a third led to sweeping tough-on-crime reforms throughout New York. One of the most famous children prosecuted as adults was Korey Wise, one of the teenagers wrongfully convicted as part of the Central Park Five.
It hurt me to think about my student potentially suffering such a vicious fate. And it isn’t that I don’t believe in accountability and consequences — but by all measures, being 14 still means you’re a child. Of course, my student’s story is less nuanced than that of the unequivocal innocence of Korey Wise and the Central Park Five — his attorney said he was there. His attorney said he initiated the confrontation — but simply said he didn’t shoot the gun. I wasn’t there, so I can’t act like I know all the facts.
But again, to be 14 still means you’re a child. Where is God’s gift of mercy and grace here? According to Casey et al. at the Journal of Adolescent Research, adolescent brains are associated with impulsive and risky choices. They are less likely to exercise impulse control and assess risk and long-term consequences. According to Robertson et al. at Child Psychiatry and Human Development, being a younger offender is directly associated with mental illness. And putting aside morality for a moment — there’s also the effectiveness of trying children as adults. Teens arrested as adults are 34 percent more likely to be rearrested than teens tried through the juvenile justice system.
Many countries in Europe ban adult prosecutions for children. The U.S. stands as an exception — a country that had over 50,000 teens under 18 incarcerated in 2014, that disproportionately targets Black children compared to children of other races.
And then there is the question of justice — which is undeniably important. But how is strict justice going to bring a life back? The occasion for the New York Times Editorial Board piece was the murder of Tessa Majors, a Barnard student, by three teenagers, two of whom were charged with second-degree murder after Majors was stabbed to death.
Without a doubt, the murder of Majors is tragic. But I can’t help but feel like fewer people would care if Majors wasn’t a white woman. It’s an anecdotal observation from living in Baltimore, a majority Black city, that fewer people care when the victim of a murder or violent crime is Black — a quintessential example of white privilege. The city I live in has more murders per capita than El Salvador, Honduras, and Guatemala, and it seems like no one cares.
In the closing words of the New York Times Editorial Board, “charging adolescents as adults makes the state crueler, not safer.” I will hold those words close as I follow my student’s trial. But it’s just a wider question for now — when should children be charged as adults?
According to Nicole Scialabba at the American Bar Association, a juvenile is tried as an adult only under certain circumstances defined by law. Scialabba notes that juveniles were not always charged separately from adults in American history, but progressive reformers of the penal system in the early 1800s pushed it to educate and rehabilitate juveniles, and in 1899, the first juvenile court was created.
It was still an imperfect system, but in the 1970s and 1980s, a rise in violent crime got reforms passed that made it easier to try juveniles in criminal courts, leading to an era of punitive juvenile justice laws. As violent crime has stabilized since that time, the Supreme Court banned the execution of people who committed murder as juveniles in Roper v. Simmons in 2005. The Supreme Court later banned life without parole sentences for non-homicide offenses committed by people 17 or younger in Graham v. Florida in 2010.
“But charging adolescents as adults makes the state crueler, not safer,” The New York Times Editorial Board says.
Even children charged with and convicted of murder and other violent crimes should be eligible for redemption and grace — and that includes kids like my old student. We have to stop charging kids as adults, no matter the crime. Juvenile courts have been associated with greater access to rehabilitation and counseling, and with better outcomes for teens. It's time to stop charging children as adults.
Takeaways
I wrote this piece because of my student, who is being charged as an adult, and I know he is not an adult. I worry about his future, and yes — I worry that my student's life is effectively ruined. He made a mistake that ended up costing another man his life, but if a child under 18 cannot buy alcohol, enlist in the military, or vote, they also shouldn't be sent to adult prisons.
Following the decisions in Roper and Graham, the Supreme Court held in Miller v. Alabama that mandatory life-without-parole sentences are unconstitutional even for juveniles convicted of homicide offenses. It's not like I don't believe in consequences and accountability. But other options should be pursued — and hopefully state legislatures will stop letting prosecutors charge children as adults, or the Supreme Court will make another landmark decision. | https://medium.com/an-injustice/its-time-to-stop-charging-children-as-adults-d98f06aa12dc | ['Ryan Fan'] | 2020-10-24 12:34:57.346000+00:00 | ['Justice', 'Society', 'Equality', 'Nonfiction', 'Race']