Dataset columns: title (string, 1–200 characters), text (string, 10–100k characters), url (string, 32–885 characters), authors (string, 2–392 characters), timestamp (string, 19–32 characters), tags (string, 6–263 characters)
The Question All Writers Should Ask Themselves
The Question All Writers Should Ask Themselves How asking yourself a very important question will help you become a better artist who connects with others Image Source: Pixabay

"Niche down" is the number one piece of advice amongst creators who are trying to make a brand out of their name and curate a social media presence. You should present yourself as the guru of one particular topic. What's your special talent? What's your niche? I don't think I've followed this advice very well. I've enjoyed exploring different avenues way too much and get bored easily if I feel restricted and can only write about one topic. I need scope. I need broad horizons. I feel that I need to reach as many people as possible by covering all the things that matter to me.

It turns out I do have a "niche"; I just didn't see it before. I realise now the importance of knowing who you are as a writer. I like to help people. When I write, I like my takeaway to be something that helps readers feel better. Whether it's to help them to improve their writing skills, feel confident as parents, or know that they are not alone in struggling with their mental health, I like to leave readers with something to think about that helps them to feel happier, comforted, or inspires them to help others. I know I want to help others. I want to inspire change.

The way I discovered this was through the books that made me stop, think, and feel. I recently read I Stop Somewhere by T.E. Carter, a heart-breaking story that takes an in-depth look at rape culture and what it's like to be a young girl. As a woman who has been through similar experiences to T.E. Carter's protagonist, I wanted to write something about what life is like for young girls, something where the takeaway for people would be realising that girls matter, and how dangerous the world can be for them. It is books like Carter's that always resonate with me the most. Her book made me realise that I want to protect young people and make a difference in their lives. It's been there all this time and I didn't know.

I love to write children's fantasy stories. I tend to write things that younger me would have found comfort in. The quote "Be who you needed when you were younger" is something I always have in mind when I write. Even when I write articles on parenting or mental health, I think of teenage me who suffered with her mental health, and I think of the young frightened mother I was five years ago, alone with a newborn baby, no support, and no idea how to get better. I don't just want to help people who have suffered. I need to help people. I think that is my "niche" or "brand": to be the writer who uses her words to heal young girls and women who were just like me. A writer who wants to educate the next generation and make the world a better place for her son. That's who I want to be.

The question will come up one day, and it's OK if you're not ready to answer it

Read a book. Play music. Write a poem. Find something that will trigger the thing you're looking for. Reach within for something deep, something you really don't want to face but need to get out of your system if you want to reach the truth of who you are meant to be.

Brand matters. You want people to associate you with good things. For example, wouldn't you much rather be the writer who inspired a generation of young people to be empathetic, avid readers than the one associated with transphobia? You want your words to resonate with readers. You want to make a difference.
But you’ll find it difficult to find your purpose as a writer if you are all over the place. It’s OK to create whatever you want to create. I think it’s OK to have broad horizons and want to explore everything. But you need a common thread amongst the work you create. Is it therapeutic? Do you write to educate? Help? Inspire? Advocate? Who are you? This is the question that one day you will have to ask. If you want to have a social media presence that connects with others, and have your words make a difference in this world, then you need to make people listen. The only way you’ll make their heads turn is if they think they know you. But first, you have to know yourself. You need to decide what sort of writer you want to be. What matters to you the most? What are you passionate about? How can you make the world a better place? I want to write for children. I want my books to be a safe space for them. I want to help parents be good parents to their children so that they don’t struggle with their mental health in later life. I want to be considered a mental health advocate that spreads the message to everyone of any age that it’s OK not to be OK. I want to be the person I needed when I was younger. When you know the answer, you can keep it in mind when you write. It will help you to keep going. If you have a purpose, you are less likely to give up. What’s your purpose? Who do you want to be? Your turn now. Tell me, writer — who are you?
https://medium.com/the-partnered-pen/tell-me-writer-who-are-you-7f863842a5e1
['Kat Morris']
2020-10-11 18:11:21.754000+00:00
['Writing', 'Self', 'Creativity', 'Personal Growth', 'Philosophy']
Haar Cascades, Explained
A general representation of training a Haar classifier. (Image Source) You see facial recognition everywhere, from the security camera on the front porch of your house to the sensor on your iPhone X. But how exactly does facial recognition work to classify faces, considering the large number of features as input and the striking similarities between humans? Facial recognition on an iPhone X. (Image Source) Enter Haar classifiers, the classifiers that were used in the first real-time face detector. A Haar classifier, or a Haar cascade classifier, is a machine learning object detection program that identifies objects in images and video. A detailed description of Haar classifiers can be seen in Paul Viola and Michael Jones's paper "Rapid Object Detection using a Boosted Cascade of Simple Features", linked over here. Note that the paper goes into some mathematics and assumes knowledge of machine learning terminology. If you want a summarized, high-level overview, make sure to keep reading!

Making a Haar Cascade Classifier

Note: This discussion will assume basic knowledge of boosting algorithms and weak vs. strong learners with regard to machine learning. Click here for a quick Adaboost tutorial. The algorithm can be explained in four stages: calculating Haar features, creating integral images, Adaboost training, and implementing cascading classifiers. It's important to remember that this algorithm requires a lot of positive images of faces and negative images of non-faces to train the classifier, similar to other machine learning models.

Calculating Haar Features

The first step is to collect the Haar features. A Haar feature is essentially a calculation performed on adjacent rectangular regions at a specific location in a detection window. The calculation involves summing the pixel intensities in each region and taking the difference between the sums. Here are some examples of Haar features below. Types of Haar features. (Image Source) These features can be difficult to determine for a large image. This is where integral images come into play.

Creating Integral Images

Without going into too much of the mathematics behind it (check out the paper if you're interested in that), integral images essentially speed up the calculation of these Haar features. Instead of computing sums at every pixel, the algorithm creates sub-rectangles and array references for each of those sub-rectangles. These are then used to compute the Haar features. Illustration of how an integral image works. (Image Source) It's important to note that nearly all of the Haar features will be irrelevant when doing object detection, because the only features that are important are those of the object. However, how do we determine the best features that represent an object from the hundreds of thousands of Haar features? This is where Adaboost comes into play.

Adaboost Training

Adaboost essentially chooses the best features and trains the classifiers to use them. It uses a combination of "weak classifiers" to create a "strong classifier" that the algorithm can use to detect objects. Weak learners are created by moving a window over the input image and computing a Haar feature for each subsection of the image. This difference is compared to a learned threshold that separates non-objects from objects. Because these are "weak classifiers," a large number of Haar features is needed for accuracy to form a strong classifier. Representation of a boosting algorithm. (Image Source)
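To make the feature and integral-image ideas above concrete, here is a minimal NumPy sketch (not from the original article) that builds an integral image, evaluates one two-rectangle Haar feature with four corner lookups, and compares it to a weak-classifier threshold. The patch values, rectangle layout, and threshold are all made up for illustration.

import numpy as np

# Toy 6x6 grayscale patch (hypothetical pixel intensities).
patch = np.arange(36, dtype=np.float64).reshape(6, 6)

# Integral image: entry (r, c) holds the sum of all pixels above and to the
# left of (r, c). A leading row/column of zeros simplifies the corner lookups.
integral = np.zeros((7, 7))
integral[1:, 1:] = patch.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    # Sum of any rectangle recovered from just four lookups on the integral image.
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

# Two-rectangle ("edge") Haar feature: top half of the patch minus bottom half.
feature_value = rect_sum(integral, 0, 0, 3, 6) - rect_sum(integral, 3, 0, 3, 6)

# A single weak classifier just compares the feature value to a learned
# threshold; the threshold and polarity here are invented for illustration.
threshold = -100.0
print(feature_value, feature_value < threshold)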
The last step combines these weak learners into a strong learner using cascading classifiers.

Implementing Cascading Classifiers

A flowchart of cascade classifiers. (Image Source) The cascade classifier is made up of a series of stages, where each stage is a collection of weak learners. Weak learners are trained using boosting, which allows for a highly accurate classifier from the mean prediction of all weak learners. Based on this prediction, the classifier either decides to indicate an object was found (positive) or move on to the next region (negative). Stages are designed to reject negative samples as fast as possible, because a majority of the windows do not contain anything of interest. It's important to keep the false negative rate low, because classifying an object as a non-object will severely impair your object detection algorithm. A video below shows Haar cascades in action. The red boxes denote "positives" from the weak learners. Haar cascades are one of many algorithms that are currently being used for object detection. One thing to note about Haar cascades is that it is very important to reduce the false negative rate, so make sure to tune hyperparameters accordingly when training your model.

Haar Cascades in Code

Implementing this in code is surprisingly easy using OpenCV's CascadeClassifier function.

import numpy as np
import cv2

# Load the pre-trained face and eye cascade files
f_cascade = cv2.CascadeClassifier("face.xml")
e_cascade = cv2.CascadeClassifier("eye.xml")

# Read the image and convert it to grayscale for detection
img = cv2.imread("actor.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces, then search for eyes inside each detected face region
faces = f_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    img = cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    eyes = e_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

Output of the above code. As you can see here, this model is surprisingly accurate in detecting both eyes and faces. What's even more incredible is that the Haar classifier can be used to identify any object given enough training images to train on. A different type of approach using OpenCV's detectMultiScale function can be seen in GeeksForGeeks's article here. For live streaming face detection, check out this article for code in Java, C++, and Python.

Applications of Haar Cascades

A representation of computer vision being used in autonomous vehicles. (Image Source) The applications of this technology are enormous in a variety of different fields. A few of the most important applications are listed below: Facial Recognition: Similar to how the iPhone X uses facial recognition, other electronic devices and security protocols can use Haar cascades to determine the validity of the user for secure login. Robotics: Robotic machines can "see" their surroundings to perform tasks using object recognition. For instance, this can be used to automate manufacturing tasks.
Autonomous Vehicles: Autonomous vehicles require knowledge about their surroundings, and Haar cascades can help identify objects, such as pedestrians, traffic lights, and sidewalks, to produce more informed decisions and increase safety. Image Search and Object Recognition: Expanding on facial recognition, any variety of objects can be searched for by using a computer vision algorithm, such as Haar cascades. Agriculture: Haar classifiers can be used to determine whether harmful bugs are flying onto plants, reducing food shortages caused by pests. Industrial Use: Haar classifiers can be used to allow machines to pick up and recognize certain objects, automating many of the tasks that previously only humans could do. A representation of computer vision being used in agriculture. (Image Source) As shown above, Haar cascades and related computer vision technology will undoubtedly have a huge impact on the economy and the ML world. Because of the versatility of Haar cascades, they can be applied virtually anywhere.

TL;DR

Haar cascades are machine learning object detection algorithms. They use Haar features to determine the likelihood of a certain point being part of an object. Boosting algorithms are used to produce a strong prediction out of a combination of "weak" learners. Cascading classifiers are used to run boosting algorithms on different subsections of the input image. Make sure to optimize against false negatives for Haar cascades. Use OpenCV for implementing a Haar cascade model yourself.

Further Reading

If you want to talk more about Haar cascades or anything else, schedule a meeting: Calendly! For information about projects that I am currently working on, consider subscribing to my newsletter! Here's the link to subscribe. If you're interested in connecting, follow me on Linkedin, Github, and Medium.
https://medium.com/analytics-vidhya/haar-cascades-explained-38210e57970d
['Aditya Mittal']
2020-12-21 12:27:22.784000+00:00
['Image Classification', 'Haar Cascades', 'Artificial Intelligence', 'Computer Vision', 'Machine Learning']
Should You Buy Clothes Second Hand To Reduce Your Environmental Impact?
Photo by Hannah Morgan on Unsplash Long gone are the days of darning socks, siblings wearing the same hand-me-down clothes during childhood, and jeans that last a lifetime. Over the past decade fashion has gone fast. Shopping is a hobby, with constant discounts and sales, plus new 'seasons' of clothing hitting the stores every week. The amount of clothes that we buy has increased vastly. Take the four years from 2012 to 2016, for instance. The total amount of clothing bought in the UK increased from 200,000 tonnes in 2012 to a huge 1,130,000 tonnes in 2016 [1]. That's a staggering increase. And surely that can't be good for the environment? Clothes don't just miraculously appear in our shops. They are produced using materials, energy, and water. And, of course, this production produces greenhouse gas emissions too. In 2016 the UK's clothing alone was responsible for a huge 26.2 million tonnes of greenhouse gas emissions. So if you want to reduce your personal environmental impact, should you be shopping for clothes second hand?

The environmental impact of new clothes

Most clothes today are made from polyester or acrylic, both forms of plastic. Polyester is a by-product of the oil and gas industry, and it's estimated that it takes about 70 million barrels of oil to produce the polyester used in fabrics each year. At the same time, producing polyester is incredibly heat intensive, meaning that it needs a lot of energy as well as a lot of water for cooling. Polyester is also dyed before it becomes your clothes, using dye which is toxic to humans and animals. Waste water from the dyeing in textile factories ends up in our water system, polluting rivers in areas of the world reliant on this industry. Even if you're buying new clothes made from natural materials, they're going to have an environmental impact. Cotton is commonly used in clothing production, either on its own or mixed with polyester. Cotton is an incredibly thirsty plant. It takes 1800 gallons of water to make just one pair of jeans. It also requires a lot of pesticides to grow, with cotton consuming 10% of all agricultural chemicals and 25% of pesticides — for just 2.4% of land. On top of this, there are the emissions from transportation, with most of the world's clothes produced in Asia and sold in the USA. There's also a lot of waste within the fast fashion industry, with some clothes never making it to the shop due to overproduction. These clothes end up in landfill, where they cannot biodegrade.

So should you buy second hand?

Photo by Artificial Photography on Unsplash Research by WRAP found that extending the average life of clothes by just three months per item, from 2 years and 2 months to 2 years and 5 months, would lead to a 5–10% reduction in each of the carbon, water and waste footprints. It keeps clothes out of landfill, and prevents the production of new clothing items. "Re-use and recycling offer some carbon savings because the lifetime of clothing that is re-used or recycled is extended. Where this displaces a sale of a new garment, the effects on the environment from fibre extraction and processing are avoided." — WRAP report So, in this case, the answer seems simple. Yes, if you want to reduce your environmental impact, you should definitely shop for clothes second hand. At the same time, you can also extend the lifespan of your clothes in other ways. Avoid fashion 'trends' and opt instead for timeless classics.
Look after the clothes you have, treating them well and fixing them if they do start to get worn — take a leaf out of the 'make do and mend' book. It's also important to acknowledge that even second hand clothes aren't perfect when it comes to environmental impact. Washing clothes is a major part of the environmental impact of any item of clothing, because of the energy that it uses. If your clothes are made of synthetic materials they'll also emit microplastics into the water system during washing. So it's also worth trying to minimise your washing, only washing your clothes when they actually need it. Plus, opt for natural fibres such as flax, linen, or wool where possible (second hand or not).

Where can you buy clothes second hand?

If you want to give clothes shopping second hand a go, there are lots of ways to do it, both in person and from the comfort of your laptop. Charity shops: most charity shops have a clothing section where you can find second hand clothing, with the added benefit of supporting a worthwhile cause whilst you reduce your environmental impact too. Vintage or thrift shops: vintage clothes are essentially second hand, just with an element of trendiness sprinkled on top. Swap shops: these are becoming more common again, where you take items of clothing from your wardrobe and swap them for 'new' items from other people's wardrobes. It's worth having a look and seeing if there's one in your local area — or you could have an evening swap with your friends! Ebay: if you have something specific in mind then Ebay is a great option for shopping online second hand, with some brilliant bargains to be had. Vinted, Depop, thredup: these are all online marketplaces for second hand clothes, where you'll buy direct from another user. If you liked this post, you might also like my posts on other common decisions which influence your environmental impact: References [1] All statistics in this article are taken from WRAP's 2017 report 'Valuing Our Clothes', http://www.wrap.org.uk/sites/files/wrap/valuing-our-clothes-the-cost-of-uk-fashion_WRAP.pdf#page=4
https://tabitha-whiting.medium.com/should-you-buy-clothes-second-hand-to-reduce-your-environmental-impact-1ef1cabee982
['Tabitha Whiting']
2019-09-02 14:50:14.903000+00:00
['Fashion', 'Sustainability', 'Clothes', 'Environment', 'Climate Change']
The One Good Thing About The Worst Christmas Song Ever
Single cover for the 20th anniversary version of "Do They Know It's Christmas?" Credit: Amazon.com The One Good Thing About The Worst Christmas Song Ever For all its problems, Band Aid's 1984 hit "Do They Know It's Christmas?" reminds me of a spirited internationalism I find missing amidst the global pandemic. Having spent the past five years working part-time retail jobs during the holiday shopping season, I have encountered a number of Christmas songs that I would have never bothered to listen to if they weren't in constant rotation over the store's speaker system during my daylong shifts (I'm not a Christmas music hater or anything, but I rarely stray from the old school R&B renditions of Christmas songs I grew up hearing in my parents' house). Earlier this week, one of those songs was the 1984 hit "Do They Know It's Christmas?", released by the UK group Band Aid to raise money for famine relief in Ethiopia. The dreamy, choir-like chant of "Feed the world" at the song's conclusion instantly reminded me of a song I had heard plenty of times before this week — the famous "We Are The World" single released by a group of American artists in 1985 under the name USA For Africa. But the parts of "Do They Know It's Christmas?" that weren't muffled by the steady foot traffic of holiday shoppers during my work shifts struck me as aloof and condescending toward the people the song intended to help. After doing a bit of research on the song, I learned that its commercial success (it sold over 2 million copies around the world and raised tens of millions of dollars for famine relief in Ethiopia) was not without controversy, and I'll direct readers to Lisa Utzig's excellent breakdown of how Band Aid may have exacerbated Ethiopia's food crisis with the money it raised from the single. Here's a snippet from Utzig's piece: Unfortunately, "Do They Know It's Christmas?" and its lousy lyrics ended up having much more sinister repercussions than the average soft rock Christmas song would have… Although droughts and other natural disasters can cause food shortages in east Africa, the true cause of the famine in 1984 was the corrupt government. Ethiopia's genocidal dictator Mengistu Haile Mariam was systematically napalming his own country's crops to prevent the distribution of food, and murdering innocent civilians in the process. With the money raised from "Do They Know It's Christmas?" and Live Aid, he only grew more powerful. Today, it's not hard to see why a song whose whole premise seems to be that people in Africa can't fully appreciate Christmas without all the Western trappings and traditions would never make it past the recording stage (notably, the 30th anniversary version of the song abandons some of the cringiest lyrics and replaces them with an extended "Feed the world" outro). Both the U.S. invasion of Iraq in 2003 and the broader war on terror have instilled distrust among the public when it comes to work or fundraising done under the moniker of U.S. humanitarian relief. And much like our domestic politics, the language of long-term structural change has started to crowd out talk of "relief" when it comes to U.S. foreign aid. There is also greater emphasis on changing the power dynamics around foreign aid — for instance, letting communities on the ground direct the flow of resources rather than have outside agencies do the bulk of the work.
But I can't stop thinking about what Band Aid got right that seems at odds with the current political climate — the idea that we should care about another country's problems and they should care about ours. Nine months into the covid-19 pandemic, it seems that most media coverage of the virus is framed solely in terms of its national impact, even though nearly every corner of the globe has been hit by this virus. Oftentimes Western democracies come up in conversations that are primarily intended to compare and contrast the financial relief offered to citizens here versus citizens of other countries. Meanwhile, covid-19's roots in China do not elicit sympathy for Chinese families who have lost loved ones to the virus; instead, by calling it the "China virus", President Trump aims to sidestep responsibility for the hundreds of thousands of American lives lost to the pandemic this year. Trump's racist attempt to recast covid-19 as a "Chinese problem" tracks with his policy of retreat when it comes to U.S. leadership on global affairs. Certainly in any other administration, the fight against covid-19 would be understood as a global health crisis in which the U.S. would lean on its international relationships to help control the spread of the virus. In other words, even as physical borders were closing earlier in the pandemic, there was still plenty to learn from what other countries were doing right or doing wrong to contain the virus. With multiple covid-19 vaccines becoming available to the public next spring, the daily trauma of knowing thousands of people are dying from this virus will, hopefully, come to a close. But equally important is that in the new year, the U.S. will finally have leadership that understands that when it comes to public health, there is no such thing as retreating from other people's problems. They quickly and very easily become our own. Still, I would like to believe that the spirited internationalism that drew musicians and fans to "Do They Know It's Christmas?" is self-reflective enough to move beyond imperialist mythologies of the Global South. So, I'm not asking for "Do They Know It's Christmas?" to get cancelled. Instead I choose to see recognition of its flaws and the real harms it caused as the starting point for a different kind of internationalism — one foremost committed to solidarity with the suffering and trust in their capacity to decide what justice looks like for them.
https://kimberlyjoyner.medium.com/the-one-good-thing-about-the-worst-christmas-song-ever-3fdf5c80d8b4
['Kimberly Joyner']
2020-12-28 13:50:10.669000+00:00
['Pop Culture', 'International', 'Christmas', 'Coronavirus', 'Music']
A Template for Writing Blog Posts That Earn Passive Income
A Template for Writing Blog Posts That Earn Passive Income Steal my formula to sell online courses, books, or services with your writing on autopilot. Photo by RF._.studio from Pexels There’s a reason why some bloggers make multiple six-figure incomes and others struggle to earn pennies from their writing. People who earn life-changing amounts of money from their content usually follow a strategic formula. The step-by-step formula that I have laid out below is a repeatable recipe that you can use to create profitable blog posts, Medium articles, or YouTube videos. This is the exact “behind the scenes” process that I used to achieve the results outlined in this case study that I wrote a few months back. I’ve been using this exact formula to reproduce these results on several of my websites covering different niches. The formula itself is simple. However, it does require some planning, effort, and practice to see maximum results.
https://medium.com/create-online-courses/my-exact-process-for-writing-blog-posts-that-earn-passive-income-85a26a1e54b4
['Krystal Wascher']
2020-12-26 19:55:22.915000+00:00
['Business', 'Writing', 'Marketing', 'Blogging', 'Money']
Google Sign-In Integration in iOS
Creating OAuth Client ID for Google Sign-In After performing the SDK installation, you will need an OAuth client ID and an iOS URL scheme so that you can continue with the configuration in Xcode. Head over to the Google APIs Console and create a project for your sample app. If you have already created a project, you can also select it from the project list. Create project in Google APIs console After you have created/selected a project, you also need to configure the OAuth consent screen. Follow the steps as shown in the image below to choose your app user type and then create the OAuth consent screen. Create OAuth consent screen in Google APIs console On the OAuth consent screen information page, fill in the application name and then click "Save". This application name will be the name shown in the Google Sign-In form when a user tries to sign in using your app. Application name in OAuth consent screen information page Once you have finished configuring the OAuth consent screen, it is time to create the OAuth client ID. You can follow the steps shown in the image below to navigate to the OAuth client ID creation page. Create OAuth client ID in Google APIs console Once you reach the OAuth client ID creation page, go ahead and select "iOS" as the application type, fill in the name and your sample app's bundle ID, and then click "Create". Create OAuth client ID Copy the OAuth client ID and iOS URL scheme you just created and keep them somewhere easily reachable; you will need both of them in just a bit. Copy OAuth client ID and iOS URL scheme With that, you have completed the OAuth client ID creation process. We can now head back to Xcode and proceed with the configuration.
https://medium.com/swlh/google-sign-in-integration-in-ios-90cdd5cb5967
['Lee Kah Seng']
2020-06-16 14:11:49.641000+00:00
['Mobile App Development', 'Software Engineering', 'Software Development', 'iOS App Development', 'Programming']
November Issue: The Current Corona-virology Research
November Issue: The Current Corona-virology Research A newsletter covering the most recent developments in Covid-19 research. Another month has passed, and I hope life has been treating you well. As usual, here’s a brief description of the 10 articles (friend linked) Microbial Instincts or I covered in November in chronological order: Many thanks for your support. Kindly subscribe here or reach out to me for any questions or suggestions at shinjieyong@gmail.com.
https://medium.com/microbial-instincts/november-issue-the-current-corona-virology-research-9a3a0f22e55d
['Shin Jie Yong']
2020-12-01 08:35:02.036000+00:00
['Covid 19', 'Health', 'Research', 'Science', 'Education']
PAKISTAN’S COVID-19 VISUALIZATION USING POWER BI
PAKISTAN'S COVID-19 VISUALIZATION USING POWER BI June 5, 2020 by Kinza Saeed

What is Covid-19? Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Common symptoms include fever, cough, fatigue, shortness of breath, and loss of smell and taste. While the majority of cases result in mild symptoms, some progress to acute respiratory distress syndrome (ARDS), likely precipitated by a cytokine storm, multi-organ failure, septic shock, and blood clots. The time from exposure to onset of symptoms is typically around five days but may range from two to fourteen days.

Data Sources The dataset is mainly taken from Kaggle. Additional resources of data are covid.gov.pk and corona.help.

Software Tools Power BI is used for Pakistan's Covid-19 visualization. It helps us better understand the data through visualizations. Power BI is a business analytics service that enables you to see all of your data through a single pane of glass.

Data Cleaning Power BI is used for the cleaning of data. This is done by removing null values, renaming data fields, deleting blank rows, replacing values, dropping extra columns, changing data types, and modifying the data using other valid resources. Covid-19 is a new virus and its dataset is still being processed, which is why cleaning it is a big deal.

Useful Power BI insights into the Covid-19 outbreak We can analyze and visualize the Covid-19 dataset to show the country-, provincial-, and city-level situation of Covid-19 using Power BI.

Visualizations 1. Introductory Dashboard Introductory Dashboard Observations This is an introductory dashboard that has links to the Covid-19 Information Portal Pakistan and Covid-19 Help. These links have information about the Covid-19 situation in Pakistan and the whole world respectively. There is also a link to Covid-19 symptoms and advice that helps to avoid Covid-19. 2. Main Menu of Visualizations The main menu of visualizations Main Menu This is the main menu of the visualization. It has all the contents of the visualization. Any content can be selected to go to the respective page. 3. Main Summary of Visualizations Main Summary of COVID-19 Outbreak in Pakistan Observations This is the main summary of the COVID-19 outbreak in Pakistan. The cards show overall confirmed and active cases, deaths, recoveries, and critical cases, as well as today's infections, deaths, recoveries, the average infected population, the death rate, and the recovery rate. These figures are taken from the web. 4. Original Data Summary Original Data Summary Observations This is the original data summary that is taken from Kaggle. This dashboard shows the confirmed cases, deaths, recoveries, cumulative tests, positive tests, and infection rate with respect to tests, by date. 5. Infection Spread with respect to population Infection Spread with respect to population Observations This dashboard shows the population, confirmed cases, deaths, and recoveries with respect to provinces, as well as the average infected population and the average death rate in all provinces of Pakistan. 6. Cumulative Cases, Deaths and Recoveries Cumulative cases, deaths, and recoveries by date and in Punjab and Sindh Cumulative cases, deaths, and recoveries in KPK, Balochistan, and Islamabad Observations It represents the cumulative confirmed cases, deaths, and recoveries by date and province, including Punjab and Sindh. It also gives an overview of the COVID-19 situation in the provinces of Khyber Pakhtunkhwa and Balochistan, and the capital territory of Pakistan. 7.
Patients Records, Recovery Prospects, and NIH Response and Preparation against the COVID-19 Outbreak Patients Records and Recovery Prospects Observations This dashboard shows records of total admitted patients, stable patients, critical patients, and patients on a ventilator by date. It also presents the confirmed cases and recovery prospects of COVID-19 patients by date. NIH Response and Preparation against the COVID-19 Outbreak This dashboard shows the average death and recovery rates with respect to provinces, and the response and preparation of the National Institute of Health, Pakistan against the outbreak of Covid-19 in the country. 8. Covid-19 condition in capital cities of Pakistan Covid-19 condition in Islamabad Covid-19 condition in Karachi and Lahore Covid-19 condition in Peshawar and Quetta Observations It represents the COVID-19 situation in the Islamabad Capital Territory. This dashboard shows confirmed cases, active cases, the recovery rate, and the virus reduction rate in the top two big cities of Pakistan. It also represents the situation of COVID-19 in the other two big provincial capital cities of Pakistan. 9. Covid-19 Transmission in provinces of Pakistan Covid-19 Transmission in Punjab and Sindh Covid-19 Transmission in KPK and Balochistan Covid-19 Transmission in Islamabad Observations This dashboard represents the COVID-19 transmission in all provinces of Pakistan, including Islamabad. It also shows the COVID-19 condition of major cities of the respective provinces. Pie charts are used for this purpose because they are very suitable for data that has 3 to 5 categories. 10. Confirmed cases, Deaths and Recoveries by Date Confirmed cases, Deaths, and Recoveries by Date Observations It represents the confirmed cases, deaths, recoveries, population immunity, and risk to the population by date in a table. The map presents the above-mentioned figures by province of Pakistan. 11. Facilities for Covid-19 patients in Pakistan Facilities for Covid-19 patients in Pakistan Observations The bar chart represents the number of hospitals and beds for COVID-19 patients by province, while the map shows the temporary isolation centers (schools, colleges, or hostels that have been converted into quarantine centers), the capacity of these centers, and their locations across the whole country. 12. Predictions and Forecasting of new cases, deaths, & recoveries in Pakistan Forecasting of new cases Forecasting of new Deaths and Recoveries Observations These line charts are used to predict the situation of COVID-19 in Pakistan for the next 15 days. The first chart forecasts the new Covid-19 cases for the next two weeks; the shaded area represents the forecast. The second chart predicts the deaths and the third one predicts the patient recoveries over the next two weeks. 13. Disclaimer Disclaimer Observations This is the disclaimer of the visualizations, which lists the data resources and a helpline to help COVID-19 patients. It shows sites including kaggle.com, covid.gov.pk, and corona.help. It has the link of the national health advisory and the helpline 1126 for COVID-19 patients.

How does Power BI help us solve problems? One of the most useful Power BI capabilities is the ability to easily search for data and datasets. Power BI allows IT members to publish data catalogs for others to view. This makes it easier for you to find the data sets needed to perform an analysis.
Power BI is Microsoft's very own on-premise, cloud, and mobile offering, easily connecting to files, databases, and online services for self-managed business intelligence. Power BI is a suite of business analytics tools to analyze data and share insights. Centered in the secure, streamlined power of cloud computing, Power BI allows you to access, interpret, implement, and communicate your company's crucial information. This advanced software system can make your data instantly accessible, prepare you for high-stakes presentations, and measure your success.

My favorite features in Power BI The interface of Power BI is very similar to that of Microsoft PowerPoint, so it is very easy to use and understand. With some other visualization tools, we have to use separate software for data cleaning, but Power BI itself gives us the facility to do data cleaning, so we don't need any other software for it. It also provides filter options that help us make the desired visualization. Drag and drop in Power BI is very easy. It allows us to transform data at every stage; we can transform data at any time and change data types using Power BI. The ability to make format changes to individual visuals, like turning markers on for line charts and modifying the placement of data labels, allows people to create custom formatting so that visuals are more appealing and easier for dashboard viewers to follow. It also lets us change the size of the page, which helps us place a number of visualizations on a single page.

Conclusion: Covid-19 is seriously affecting Pakistan. Pakistan is in the third phase of the Coronavirus outbreak, but it has limited facilities to tackle the virus. Power BI is a powerful visualization tool that helps us understand the whole situation. Pakistan's COVID-19 visualizations try to cover all the important aspects of the dataset, using dashboards to represent the situation of Covid-19 in the overall country, its provinces, and its cities. Kinza Saeed GitHub repository
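For readers who prefer to see the cleaning steps described above as code, here is a minimal pandas sketch of the same kind of operations (dropping nulls and blank rows, renaming fields, replacing values, dropping extra columns, and changing data types). This is only an illustrative analogue of the Power Query work, not part of the original workflow, and the file name and column names are hypothetical.

import pandas as pd

# Hypothetical export of the Covid-19 dataset used in the dashboards
df = pd.read_csv("pakistan_covid19.csv")

# Remove completely blank rows and rows missing key fields
df = df.dropna(how="all")
df = df.dropna(subset=["Date", "Province"])

# Rename fields and replace inconsistent values
df = df.rename(columns={"Cases": "ConfirmedCases", "Expired": "Deaths"})
df["Province"] = df["Province"].replace({"KP": "Khyber Pakhtunkhwa"})

# Drop extra columns and change data types
df = df.drop(columns=["Notes"], errors="ignore")
df["Date"] = pd.to_datetime(df["Date"])
df[["ConfirmedCases", "Deaths"]] = df[["ConfirmedCases", "Deaths"]].astype(int)

print(df.dtypes)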
https://medium.com/pakistans-covid-19-visualization-using-power-bi/pakistans-covid-19-visualization-using-power-bi-fe13bd0a519e
['Kinza Saeed']
2020-06-12 14:53:29.487000+00:00
['Power Bi', 'Visualization', 'Pakistan', 'Data Science', 'Data Visualization']
Princess
amid the pompous pain of pinpricks the princesses made sleepless by peas who know nothing of a steely gaze the corroded heart punched down by even the most simple dreams who desire the depth of an ocean as if it comes easy, as if the waters remain subordinate this armor of faith has a heavy weight and a hefty price we are not the same dragon ladies skin split by the forest on the outskirts of their stories better a shrew than what they want better a beast than what they own only a ravaged spirit knows this art of survival.
https://medium.com/meri-shayari/princess-10f8f741d8ef
['Rebeca Ansar']
2020-12-17 19:20:36.872000+00:00
['Storytelling', 'Poetry', 'Poet', 'Poem', 'Creativity']
The Perfect Christmas Gift: This Is The $1,299 New 13" MacBook Pro
Welcome December, welcome Christmas season, and welcome Apple M1 chip. Yes, this is the perfect Christmas gift by far. Apple has finally transitioned Macs from Intel processors to ARM chips of their own design. The first generation, called M1, comes in the new 13" MacBook Pro. I have tested it for a week and I have been pleasantly surprised: it is ultra-powerful, light, small, and has the same battery life as an iPhone. What else can we ask for? Photo by Alisa Anton on Unsplash This is the second processor transition in Macs. The first was when Apple went from PowerPC — which they designed in conjunction with IBM and Motorola — to processors designed and manufactured by Intel in 2005. At that time it was Steve Jobs who announced it, during Apple's famous presentation known as WWDC. Fifteen years later, Apple has announced that Mac processors are once again being designed in-house. This time with many years of experience with all of the other devices that they make and sell: iPhone, iPad, Apple Watch, Apple TV, and HomePods. They all have an SoC (System on a Chip) designed by the company, which is also one of its biggest innovation engines. Apple New M1 Chip. Source: Apple They begin the transition with three products: the MacBook Air, the 13" MacBook Pro, and the Mac Mini. The two best-selling laptops and the cheapest desktop. It's a statement of intent, a way of telling the world how confident they are in the overall performance of their processors. They have called it M1. Apple has decided not to change the exterior design of Macs with the transition to its M1 processors. Nor have the names changed. Recall that PowerPC laptops were called iBook and PowerBook. Once the change was made, they were called MacBook and MacBook Pro. From the outside, the MacBook Pro is exactly the same. It is the same design, same dimensions, screen size, panel, keyboard, and trackpad size. The weight doesn't change either. Well, slightly, depending on the unit of measurement that Apple uses to indicate it on its product pages. In the metric system, both products, according to the company, weigh 1.4 kilograms. In imperial, the 13" MacBook Pro with Intel weighs 3.1 pounds, but the 13" MacBook Pro with M1 weighs 3 pounds. If we take the imperial system as valid and make the conversion, the version with Intel has a weight of 1.4 kilograms and the version with M1, therefore, weighs 1.36 kilograms.

Performance

It is a bit unsettling to realize that you are holding a $1,300 or $1,450 laptop with a processor that, in single-core tests using Geekbench 5, is faster than the higher-spec 16" MacBook Pro, which exceeds $6,000. In addition, it does so while delivering double and sometimes triple the battery life without any kind of effort. Therefore, when considering the performance improvements of M1 notebooks, we should not limit ourselves solely to their higher performance. We have to consider it alongside the fact that they achieve it using less battery. Any Universal application, that is, one also compiled for the M1 processor, works significantly better than on Intel. Apple has made the transition so simple for developers that the day I received the laptop for this review there was already quite a lot of software available, much of which I use in my day-to-day life. In fact, all Apple apps have been updated to work in native mode on Apple-powered Macs. From the most basic ones like Notes, TextEdit, or Reminders, through the company's office suite — Keynote, Pages, Numbers — to Safari and Mail.app.
Also professional software: Final Cut Pro X or Logic. Everything is ready to go and show the potential of the M1. I insist, in all these cases, with the native apps, the user experience on the 13" MacBook Pro has been extremely positive. It is remarkable, since we are talking about a $1,300 machine that delivers performance you previously expected from machines three to four times its price. Photoshop has a beta version optimized for M1, but some features are disabled. In that case, opening "normal" Photoshop works fine, barring the extra boot time (when Rosetta 2 works its magic). It's a testament to the power of Apple's processor, even on its first try. Remember: this will be "the worst" Apple processor for its Macs. In other words, everything that comes out in the future will work better, faster, and with an even higher level of efficiency. Geekbench 5 multi-core tests are also surprising. The base 13" MacBook Pro ($1,300 or €1,400) outperforms the 16" MacBook Pro with the fastest Intel processor available in the custom configuration (a Core i9). It's practically as powerful as a 2017 iMac Pro (approximately $5,000) with an Intel Xeon and a bit slower than the 2019 Mac Pro, also with an Intel Xeon.

Rosetta 2 & Applications Built for Intel Processors

Rosetta 2. Source: Apple Let's start with the positive: during the transition of Macs to Intel, apps compiled for PowerPC could be run using a dynamic binary translator called Rosetta. Apple did this as a workaround while developers fixed, optimized, and recompiled their apps in "Universal" mode, that is, running natively on both the older PowerPC and the Intel processors. But when you opened an app with Rosetta, you definitely noticed: everything was slower. With professional apps, they bordered on useless. This has not been my experience with Rosetta 2. The first time you open an app compiled for Intel, macOS prompts you that it will have to download a component that allows you to open software not optimized for M1. After that, you don't see that message anymore. It is a significantly more refined experience. All apps work properly, in some cases as fast as if they were run on an Intel processor. Let's talk about the downsides. Not all apps work as well as we would like. Funnily enough, it happens with the simplest ones, because most of them use "Electron", also known as the new Flash. The problem here is that we're translating an app that in turn runs on a framework that is not native either, since it adapts web technologies to make graphical interfaces typical of desktop software. It's really curious that apps like Spotify, Notion, Discord, Atom, Slack, or WhatsApp Web, all classifiable as non-production software that could well be lightweight, work badly, while "heavy" apps such as Premiere Pro work better. Electron already has support for compiling apps to run natively on Apple Silicon, released on November 17, so it is a matter of time until the apps are updated with the new version of the framework. The main reason why there are so many apps made on Electron is that it significantly reduces the cost of development for various platforms, as it works equally well on Mac and Windows. Apple wants developers to use native technologies for their applications and avoid frameworks of this type. Since many developers have native apps for the iPad, in theory converting them for the Mac (especially now) would be very easy. But in reality, that is not quite happening.
Amazing Battery Life Source: Apple With the launch of the M1 processors comes a better energy management system and a chip that is much more efficient in the performance it gets out of every watt. Apple has learned so much about energy efficiency with devices like the iPhone or the Apple Watch, and all that knowledge has been poured into the design of the M1. For the first time, we have Macs with battery life similar to the iPad or the iPhone. The 13" MacBook Pro is able to last between 18 and 20 hours of normal use. It is a huge difference compared to similar devices with Intel processors, which at best reach half of that. For one of my tests, I unplugged the laptop at 7:00 p.m., worked for a long time in a café, then continued working at home, left the laptop open while I ate dinner, and continued answering emails until after midnight, and the battery remained at 67%. Amazing.

Let's Talk About The Keyboard

macOS Big Sur. Source: Apple The 13" MacBook Pro with an M1 processor, like the latest generation of the Intel notebook, has the new "scissor mechanism" keyboard that was released with the 16" MacBook Pro. It replaces the troublesome butterfly mechanism that has given Apple and its users so many headaches over the years. Apple recognized this problem some time ago and offers free (and usually quick) replacements with an appointment made at an Apple Store or an authorized service provider.

Is It The Perfect Christmas Gift For You?

Photo by Artem Kniaz on Unsplash Absolutely. It's fast, it's efficient, and the battery lasts longer than you need. The absence of physical changes contrasts with the internal changes. But in true Apple style, the improvements must be invisible to the user. The important thing is that they are there. The how, although it is relevant now, should at some point cease to be. Apple wants the focus to be on the fact that everything will be significantly faster and more efficient from now on. May this "new generation", as they call it, be just that: a new generation. It's not the how, it's the what. And what it is, simply put, is a 13" MacBook Pro faster than ever, matching and beating professional desktops three to four times as expensive. Read more Medium Stories.
https://medium.com/macoclock/the-perfect-christmas-gift-this-is-the-1-299-new-13-macbook-pro-b4c27a57628d
[]
2020-12-01 07:29:17.989000+00:00
['Mac', 'Future', 'Tech', 'Apple', 'SEO']
6 easy ways to find a UX side project and strengthen your portfolio
What is a side project (in this context)? When I talk about side projects in this context, I'm talking about the UX and UI projects you initiate and work on in your spare time. The type that doesn't involve a company, a client, or a team. The type you do "for fun", for practice, and for building your portfolio — especially if you're new in the field and have little or no experience! I recently wrote another article about how side projects benefit you and your Design career. This naturally raised an important question: How do you find a good idea for your next side project? The 6 approaches described below are based on my own experience, advice from people close to me, and inspiration from multiple case studies online. I'm sure the list isn't exhaustive, but it should be enough for you to get started. The 6 ways to find a side project 1. Solve a problem in your everyday life This is probably the most common approach to starting a new side project. Designing for a real-life user need is the most realistic approach to a side project, even if you're your own target user in this case. We all experience plenty of small problems, frustrations, and inconveniences in our everyday lives. The key is to notice them, which isn't necessarily as straightforward as you might think. We're great at becoming efficient with suboptimal tools, learning to work around the obstacles we're faced with, or simply accepting and no longer noticing things that at first made us frustrated. Make it a habit to reflect on your habits and routines, and notice what bothers you in your everyday life. Question everything, even commonly accepted truths and facts about how things work. Strengthening your ability to spot these things and ask these questions is an added benefit of this approach to finding your next side project. While the solution will ideally be a digital one, the problem may be found in the analog world. Don't limit yourself to finding usability issues in the apps you use. Just train yourself to spot problems, and you'll soon have more side project ideas than you know what to do with! 2. Solve someone else's problem Although very similar to the approach described above, this is arguably an even better one for strengthening your skills and portfolio, for one key reason: You have a target group. Instead of designing for yourself, you design for someone else. This enables you to design your solution based on User Research and Usability Testing. It forces you to carefully consider the needs of someone other than yourself, and makes your whole project more realistic. 3. Get inspired by new and curated apps Another good practice to incorporate in your weekly routine is staying up-to-date on what's new and popular in the world of digital products. Whether you're planning your next side project or not, this practice can help you stay inspired and motivated. My two favorite sources are Product Hunt and the Apple App Store or Google Play. Check out the top lists, most upvoted, most downloaded, featured, and other highlighted apps. They may spark an original idea, or an idea for a redesign. From the home page of Product Hunt (May 17, 2020) 4. Redesign an existing app or website Doing a redesign of an existing app or website, a so-called unsolicited redesign, is one of the most popular pastimes among new and aspiring Designers. While I'm personally a big proponent of side projects and practice in any form, your approach matters when it comes to unsolicited redesigns.
I recently wrote an article on the topic that I recommend you give a read (link below). To find the subject of your redesign, begin with the apps you have on your devices. Maybe the design of your favorite app hasn't been updated in years — how about taking it upon yourself? Even if it has, could you imagine a completely different UX and UI? 5. Turn a feature into a standalone app Most apps and online platforms become more and more complex and bloated with features as they age. As a consequence of this, some of them turn features into standalone apps. Take Facebook as an example. It's perhaps the most well-known and extreme example of this. The Facebook platform is extremely complex, packed with a range of features that differ in importance for each user. You might only use Facebook for messaging your friends. A friend of yours primarily uses the platform to find and manage events. Yet another is using it professionally to promote her business through a Page. It's no surprise that Facebook has created separate apps for each of these features (Messenger, Local, and Pages), and I expect to see more standalone apps from Facebook. Maybe you have already noticed one that's missing from the Facebook app ecosystem? Turning a feature into a standalone app is also one of my tips for avoiding the potential problems with unsolicited redesigns. Instead of attempting a full redesign of LinkedIn, or Airbnb, or Twitter, how about using the same approach as Facebook and turning a feature into a standalone app? 6. Adapt an existing concept to a new domain The "Uber for X…" pitch has caught some flak and ridicule in the startup community. And maybe that's fair. However, when it comes to finding a fun little side project to practice your design skills, you don't need to worry about your pitch, the business viability, or the perception among potential investors. This approach usually revolves around a new and innovative concept. Taking Uber as an example, plenty of businesses in various industries have been created (or redefined) based on the peer-to-peer marketplace model. You can also adapt an existing concept to a certain niche. Many of the most popular digital products and services grow and scale to cover mass markets. This leaves room for similar concepts perfectly tailored to a specific niche. Think about the core concept and business model behind a product or service you use. How could this idea be applied to another industry? Or to a specific subsegment of users?
https://medium.com/swlh/6-easy-ways-to-find-a-ux-side-project-and-strengthen-your-portfolio-c1c6f89b81c0
['Christian Jensen']
2020-05-17 18:30:59.682000+00:00
['Careers', 'Design', 'Portfolio', 'Creativity', 'UX']
How to Start a Machine Learning Career
How to Start Your Machine Learning Career By Tommy, Dec 9 Machine learning is a broad term in the current Industry 4.0 era, where factories are trying to automate every menial task to reduce costs, improve product quality, boost productivity, minimise workplace accidents, and enhance efficiency in materials usage. However, machine learning is not only implemented in industrial environments, but also in household equipment. For instance, you probably own an Amazon Alexa to control your household items simply by using voice commands, whether it is to play music, turn the volume up or down, read out your reminders, or even turn your TV on or off simply by saying "Alexa, turn on TV". But how does this system actually work? In short, by processing your command through a Machine Learning algorithm. In more detail, the algorithm breaks your command down into several pieces of words, which it then checks against Amazon's database of pronunciation patterns to figure out which words are closest to the command. It then links them to several keywords to make sense of the task and executes the function based on the closest keywords. The thing that sets Machine Learning apart from other algorithms is that it recognises your voice, pronunciation, intonation, and words, and is able to train itself to understand you better every day by using the data fed to it every time you use its services. This gives Machine Learning such huge potential that many startups are leveraging its techniques to create sophisticated products that ease our daily lives. Why is Machine Learning Popular Nowadays? This question might go through your mind: if it is such a strong algorithm that can change people's lives, why is it only becoming popular now? The answer is the internet. As we understand from the Alexa case earlier, the system requires access to its own database to learn our command and translate it into a specific task. For humans, data is the equivalent of experience. Therefore, without the availability of data, machine learning will not be able to recognise our commands, or it will not perform as well and as accurately as it does with a massive amount of available data. With access to the internet, the task of data collection has become efficient, reliable, and accessible. There are several methods used to collect the data. For instance, Google has recorded all the information that we, as users, have given through its search engine, location searches and tracking through Google Maps, YouTube video searches, the applications we have downloaded on our Android mobiles, and so on. Facebook stores all the messages and voice mails that you have sent and received, your contacts, and also every third-party application that has connected to your Facebook account. The other, more conventional data collection method is through sensors, such as IoT (Internet of Things) applications that collect data through smart sensors and store it in cloud-based systems. Regardless of the method, the internet helps us collect reliable data efficiently to improve our Machine Learning algorithms' performance. How Do I Start My Machine Learning Career? With this emerging abundance of data, many companies are starting to harness the technology, aiming to build billion-dollar businesses by hiring new talent in Machine Learning.
According to Indeed.com, the average Machine Learning Engineer salary in Australia is A$94,351 per year, with hiring companies including Facebook, Adobe, Qualcomm, Apple, and other companies that have built outstanding, long-lasting products at global scale. This makes Machine Learning one of the most sought-after skills for new talent. Just as Machine Learning needs an abundance of data to perform better than it would with a limited amount, we humans need experience and knowledge to get better at producing high-performing Machine Learning algorithms that are efficient and have a minimal error rate. Hence, the following are the preparations I recommend to kick-start your Machine Learning career. Python, Python, Python We, as Machine Learning Engineers, are unable to live without Python. It is the equivalent of oil for cars to move, water for plants to grow, and electricity for lamps to shine. It is the programming language most widely used by Machine Learning Engineers for all kinds of tasks, whether it is just performing data cleansing or data mining, or eventually producing a great Machine Learning algorithm. You might wonder: why Python? The answer is that it is easier compared to other languages! We love easy tasks, which is why we have produced so many inventions that use Machine Learning to make our tasks simpler. If you have learned C, C++, or Java, the good news is that Python is much simpler than those languages, and you might love it immediately. Secondly, it contains a massive number of libraries, which makes producing Machine Learning algorithms much simpler than building everything from scratch. You might be wondering what a library is in Python. If you think of a library, probably the only thing that comes to mind is a building full of books, so you do not need to do your own research to get data and information. Similarly, in Python, libraries are sets of algorithms that have already been written, so you do not need to write them from scratch to perform a specific task. For instance, the NumPy library helps you perform calculations, from simple arithmetic up to multiplying sets of matrices; the Matplotlib library helps you construct clear graphs from your data so you can understand it much more easily; and the Pandas library allows you to import your Excel data in CSV format, or JSON data, into Python. Finally, open source. On top of all those advantages, Python is an open-source platform, which means you do not need to pay a single cent to use all of its products and services. You can also access third-party platforms, such as SQL, to work with your database. Furthermore, you will have tons of communities that will support you in solving your dead-end projects. Great, But Where Can I Write My Python Code? I am glad you have read up to this point of the article; it means you are highly engaged and interested in starting your own Machine Learning project. Currently, there are two main platforms for writing a Machine Learning project in Python. The first is Google Colaboratory. Google Colab was developed by Google for writing Python code in an environment similar to Google Docs. This means you can write Python immediately without any installation, whether for Colab itself or for several common Python libraries.
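To give a feel for those libraries in practice, here is a minimal sketch of what a first notebook cell might look like; the CSV file name and the column names are made-up placeholders for illustration, not something from this article:

```python
# A minimal sketch of a first notebook cell using NumPy, Pandas, and Matplotlib.
# The file "sales.csv" and its columns "month"/"revenue" are hypothetical.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# NumPy: quick numeric work, e.g. a 3 x 5 matrix of zeros
matrix = np.zeros((3, 5))
print(matrix.shape)  # (3, 5)

# Pandas: load tabular data from a CSV file into a DataFrame
df = pd.read_csv("sales.csv")
print(df.head())  # peek at the first five rows

# Matplotlib: a simple line plot of one column against another
df.plot(x="month", y="revenue", kind="line")
plt.title("Revenue per month")
plt.show()
```

Whether you run a cell like this in Colab or in a local Jupyter Notebook, it behaves the same; the trade-offs discussed next are about where your files live and what comes pre-installed.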
However, since it is a cloud-based platform, Colab can only access documents from Google Drive (do not forget that free accounts are limited to 15 GB of storage), it requires internet access, and you need to reinstall any libraries that are not included in Colab every time you open it. For an offline, local platform, Jupyter Notebook is the most popular choice. Compared to Colab, Jupyter stores and uses data locally, which means there is no need to reinstall libraries and you can work offline. However, you will need to install and set it up before you can use it. To access Google Colab, click the link here. To get Jupyter Notebook, download Anaconda as a navigator platform from this link, which will install both Python and Jupyter Notebook in one package. In the next section, we will learn how to code in Python using Jupyter Notebook, as it is more powerful and more often used when working offline. Getting to Know Python on Jupyter Notebook After installing Jupyter Notebook, try to get familiar with Python on this platform. As mentioned earlier, Python is simple and does not require much syntax to execute a command. For instance, computing 1+1 can be done simply by typing 1+1 in a cell; to execute the command, press CTRL+Enter, which prints the output directly below the cell. You might notice the number [1] right next to your cell. This number shows the order in which cells have been executed; if you keep executing cells, the number keeps increasing. To reset the cell numbering, simply click the circling-arrow (restart) button, which returns the sequence back to 1 when a cell is executed again. You might also be confused when the box shows a [*] symbol. This indicates that the kernel is still working on the task you assigned, which might take some time under a heavier load. To create a new cell, click the "+" button. To delete a cell, click the cell and then press the d key twice. Installing and Importing Libraries As mentioned earlier, libraries simplify the work of structuring our Machine Learning algorithms, so we need to install the necessary libraries in our Jupyter Notebook. For example, one of the most common libraries is NumPy. To install it, type pip install numpy in a Jupyter cell and, congratulations, you have installed your first library. Just a reminder: although a library only needs to be installed once, it has to be imported every time the notebook is started. To import NumPy, simply type import numpy as np . Now you can use NumPy's functions, for instance to construct a 3 x 5 matrix without typing it out manually. After installing NumPy, you might also want to install other libraries that could help your projects later on, such as Matplotlib, Pandas, Seaborn, PyTorch, scikit-learn, and Keras for a start. The Processes in a Machine Learning Project To start your very first Machine Learning project, you need to gather data, which is readily available on Kaggle, a competition website built specifically for Data Scientists, or in the University of California, Irvine repository. As a starter, you might want to consider a small dataset with a minimal number of attributes (a table's columns) and records (a table's rows).
Roughly speaking, after gathering the appropriate data, the usual next step is to clean out dirty data and then perform pre-processing to scale the data's attributes. Subsequently, the data is split into train data and test data, which are processed through the ML algorithm to achieve the intended outcome. If the results are not as good as expected, the ML Engineer fine-tunes the parameters or, alternatively, chooses a different algorithm that better suits the patterns in the data. Hello World Project in Machine Learning To start your very first project in Machine Learning, I would recommend one of the best-known "hello world" exercises of Machine Learning, the Iris dataset project, where the data can be downloaded from the link here. The reason this is considered a great project for starters is that it is a numeric dataset with a relatively low volume (4 attributes and 150 rows) that lets you practise data mining for a classification task; a minimal sketch of the whole workflow follows below. Wrap It Up This might be considered a baby step towards all the possibilities you could create with Machine Learning; however, if you would like to work through your first in-depth, practical Machine Learning project, I would suggest continuing with the next article here. Write in the comments what you think about a career in Machine Learning.
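As a hedged illustration of the workflow described above (gather the data, split into train and test sets, fit an algorithm, check the result), here is a minimal scikit-learn sketch using the built-in Iris data; the choice of a k-nearest-neighbours classifier and the 80/20 split are my own assumptions, not recommendations from the article:

```python
# Minimal sketch of the Iris "hello world" workflow described above.
# The model choice (k-nearest neighbours) and the split ratio are assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# 1. Gather the data (4 attributes, 150 rows)
X, y = load_iris(return_X_y=True)

# 2. Split into train data and test data (80/20)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# 3. Fit a simple classification algorithm
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

# 4. Evaluate on the held-out test data
predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```

If the accuracy is disappointing, the loop described above starts again: tune n_neighbors, scale the attributes, or swap in a different algorithm that better fits the data's patterns.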
https://medium.com/swlh/how-to-start-a-machine-learning-career-7b8d3cc36b3e
[]
2020-12-16 15:24:12.017000+00:00
['AI', 'Python', 'Machine Learning']
Gradient Descent for Machine Learning, Explained
Throw back (or forward) to your high school math classes. Remember that one lesson in algebra about the graphs of functions? Well, try visualizing what a parabola looks like, perhaps the equation y = x². Now, I know what you're thinking: How does this simple graph relate to this article's title? To how machines learn? Well, it actually points to one of the fundamental concepts of machine learning — optimization. What is a Loss Function? In typical machine learning problems, there is always an input and a desired output. However, the machine doesn't really know that. Instead, the machine uses the input it is given, together with outputs that are already known, to make predictions. Determining the best machine learning model for a certain situation often entails comparing the machine's predictions with the actual results to determine whether or not the algorithm used is accurate enough. The problem is, how do we know whether or not the machine is learning effectively? This is where optimization enters the picture in the form of a loss function. To increase accuracy, the value of the loss function must be at its minimum. An example of a simple loss function is the mean-squared error (MSE), which is given by the expression below: MSE = (1/n) · Σ (yi − xi)² Here, n refers to the number of data points; yi, the actual output; xi, the machine's prediction. From this, we can see that the MSE squares the difference (yi − xi) for every data point in the input and averages the results. Each full pass through the dataset, evaluating the loss over every data point, is called an epoch. In a working machine learning model, the loss decreases as the model iterates through more epochs. We can optimize a model by minimizing the loss function. Intuitively, we can think of the loss function as the accumulation of the differences between the prediction and the actual value for each data point in a certain dataset. Hence, maximum accuracy is achieved by minimizing these discrepancies. Defining Gradient Descent Now, let's examine how we can use gradient descent to optimize a machine learning model. Of course, we have to establish what gradient descent even means. Well, as the name implies, gradient descent means repeatedly stepping down the slope (gradient) of the loss function in the direction of steepest descent, so that the loss shrinks as the machine learning model iterates through more and more epochs. Example of the Output of a Machine Learning Model Minimizing Loss In this model, we can see that as the number of epochs increases, the machine is able to better predict the output of a certain data point. The value of the loss decreases, and by the 10th epoch it's already pretty close to 0, which is what we want. Our Parabola and Tangent Lines Now, let's go back to our parabola (hopefully you've kept it in mind). How do you think gradient descent can be applied here? Ideally, we would want straight lines from which to obtain gradients, which doesn't seem obvious since our parabola is curved. There is, however, a way to conquer this minor obstacle. Given that our goal is to reach the minimum, this begs the question: "At what point on the graph do you think the minimum value of the loss function (which models a quadratic equation) lands?" It would be at the vertex (A), of course!
Graph of a Parabola with its Vertex You might be asking yourself, “How can this simple graph be related to gradient descent?” The lines we’re looking for aren’t formed by the graph itself, but instead, they’re actually the tangent lines for each point on the graph. Tangent lines are lines that intersect the graph at only one point and can reveal how gradient descent works. To appreciate the beauty of such lines, let us marvel at the aesthetics of this visualization: Visualization of Tangent Lines of a Parabola (Credit to: Greg Radighieri) Going back, for simplicity, let’s draw a tangent line for a random point B. Then, we also draw the tangent line for our minimum point (vertex) as shown: Tangent Lines at A and B Of course, the objective is for B to eventually approach or even reach A. Here, the slope of the orange tangent is negative and non-zero. This tells us two things: the “direction” in which B should go and that the magnitude of the slope should decrease towards 0. “Why 0?”, you might ask. Since A is the vertex of the parabola, its tangent line would be a horizontal one. We can then calculate its slope as follows: In the context of a machine learning model, iterations through each epoch should ideally bring B closer and closer to A. However, the number of epochs that need to be run through affects how quickly this is achieved. The number of epochs needed is in turn affected by a “learning rate.” You can think of the learning rate as the “step-size” that B traverses down the graph. We can then take the gradients for every point that B lands on until it eventually reaches 0 (which is precisely at A). Adjusting the Learning Rate However, there is a caveat for the learning rate. If the learning rate or “step-size” is too high, there is a possibility that B may never reach or become close to A, thereby decreasing the model’s accuracy. In short, it may overshoot or diverge from the minimum. Visualization for Gradient Descent (large learning rate) Here, following the blue lines simulating the path of B, it may originally seem that the model is doing relatively well as B approaches closer to A. However, B eventually diverges from A, as shown by the purple line, due to overshooting. On the other hand, if the learning rate is too small, B may take more epochs than necessary to approach A and approach it very slowly. Visualization for Gradient Descent (small learning rate) Here, following the blue lines which simulate the path of B, we can see that B takes very small “steps” per epoch. Although this model would be accurate, it would not be too efficient. Epilogue In this article, we discussed gradient descent through the visualization of a quadratic equation (a polynomial of degree 2). However, in reality, large-scale machine learning problems often involve hundreds of different types of inputs (also called features) which increase the degree of the polynomial for the loss function. Despite this, the end goal remains the same: minimize the loss function to maximize accuracy. I am in no way, shape, or form, a machine learning expert. However, as a cornerstone of machine learning, gradient descent is simply something that I believe everyone should be familiar with. That being said, I hope that this brief explanation helps in introducing you to the ML world. Thanks for reading!
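To tie the ideas above together, here is a brief, hedged Python sketch: it computes an MSE-style loss as described in the first section and then runs plain gradient descent on y = x², showing how the learning rate decides whether point B converges toward the vertex A or overshoots. The specific learning rates and step counts are arbitrary illustrative choices, not values from the article:

```python
# A small illustration of the two ideas discussed above:
# (1) mean-squared error as a loss, (2) gradient descent on y = x^2.
# Learning rates and iteration counts are arbitrary illustrative choices.

def mse(actual, predicted):
    """Mean-squared error: the average of the squared differences (y_i - x_i)^2."""
    n = len(actual)
    return sum((y - x) ** 2 for y, x in zip(actual, predicted)) / n

print(mse([1.0, 2.0, 3.0], [1.5, 1.5, 2.5]))  # 0.25

def gradient_descent(start, learning_rate, steps):
    """Minimise y = x^2; the gradient (slope of the tangent) at x is 2x."""
    x = start
    for _ in range(steps):
        x = x - learning_rate * 2 * x  # step "B" downhill along the tangent
    return x

# A small learning rate converges slowly but surely toward the vertex (x = 0).
print(gradient_descent(start=5.0, learning_rate=0.01, steps=100))

# A moderate learning rate reaches the vertex in far fewer steps.
print(gradient_descent(start=5.0, learning_rate=0.1, steps=100))

# A learning rate that is too large makes each update overshoot and diverge.
print(gradient_descent(start=5.0, learning_rate=1.1, steps=20))
```

The same trade-off drives the two visualizations described in the article: too large a step overshoots A, while too small a step needs many more epochs than necessary.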
https://medium.com/cantors-paradise/gradient-descent-for-machine-learning-explained-35b3e9dcc0eb
['Sean Chua']
2020-12-08 12:09:07.437000+00:00
['Machine Learning', 'Mathematics', 'Math', 'Artificial Intelligence', 'Science']
Why McDonald’s is Using Artificial Intelligence to Spy on Its Dumpsters
Somebody's Eyes Watching In an age where surveillance and monitoring are all too easily misused, the idea of a camera in odd places can seem a bit weird and even a little creepy. Even when cameras are where they're supposed to be, it can mean that you're being watched by who knows who. I definitely cover my webcam to keep unscrupulous eyes from peering through it and watching me in my own home. So, I was a little leery when I found out that McDonald's was putting cameras in its dumpsters. "For what?" I said. My first thought was that they wanted to catch dumpster divers, but then I wondered: what would somebody be hoping to find that they could sell from a McDonald's dumpster? Then I thought, well, maybe they're putting the cameras in the dumpsters to catch homeless people rummaging through them, which struck me as a rotten thing to do. But neither of these is the case.
https://medium.com/technology-hits/why-mcdonalds-is-using-artificial-intelligence-to-spy-on-its-dumpsters-85705a26cb29
['Audrey Malone']
2020-12-30 05:04:15.376000+00:00
['Technology', 'Artificial Intelligence', 'Recycling', 'Surveillance', 'AI']
How to run Spark/Scala code in Jupyter Notebook
Photo by Ilya Pavlov on Unsplash Jupyter Notebook is the most widely used tool for putting code and text together in the data science world. It's a great tool for practicing data analysis and machine learning techniques in Python, and a must for every Data Analyst and Data Scientist. Apart from data analysts and data scientists, this tool can also be useful for data engineers. If you don't know about Data Engineering, it's the practice of collecting and transforming data into a common data lake, from where Data Analysts and Data Scientists can use the data to perform analytics. In the Big Data domain, there are various tools and technologies we use to collect and process data. Apache Spark is one of the frameworks that allows distributed computing at a large data scale. Apache Spark is a data processing framework that can quickly perform processing tasks on very large data sets, and can also distribute data processing tasks across multiple computers, either on its own or in tandem with other distributed computing tools. Apache Spark code can be written in the following 3 languages — Java, Scala, and Python. Scala is the preferred programming language for Apache Spark, as Spark itself is written in Scala. Hence, in this article, I am going to tell you how you can use the same Jupyter Notebook to write Spark code in the Scala language. Step 1: Install the package pip install spylon-kernel Step 2: Create a kernel (scala) This will allow us to select the scala kernel in the notebook python -m spylon_kernel install Step 3: Install and use Jupyter Notebook pip install jupyter And in the notebook, we select New -> spylon-kernel. This will start our scala kernel. However, for this to work, you need to make sure that SPARK_HOME is set. To set SPARK_HOME on Linux/macOS, open the .bash_profile file in your home directory (cd ~ then vi .bash_profile) and add the statement below, pointing at your own Spark installation: export SPARK_HOME=/Users/vishalmishra/spark-3.0.1-bin-hadoop2.7
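As a quick sanity check before selecting the spylon-kernel, here is a hedged, minimal Python sketch (not part of the original steps) that confirms SPARK_HOME is actually visible to the environment; the messages are my own wording:

```python
# Minimal check that SPARK_HOME is set before starting the spylon-kernel.
import os

spark_home = os.environ.get("SPARK_HOME")
if spark_home and os.path.isdir(spark_home):
    print(f"SPARK_HOME looks good: {spark_home}")
else:
    # If this prints, re-check your .bash_profile and restart the terminal
    # (or Jupyter) so the environment variable is picked up.
    print("SPARK_HOME is missing or does not point to a directory.")
```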
https://vishalmishra2k20.medium.com/how-to-run-spark-code-in-jupyter-notebook-using-scala-as-a-language-3d2cdcbce3de
['Vishal Mishra']
2020-11-10 00:28:15.974000+00:00
['Jupyter Notebook', 'Data Engineering', 'Apache Spark', 'Data Science', 'Scala']
Artificial Intelligence (AI), Data Science, and Analytics
There is a lot of confusion about the definition of Artificial Intelligence (AI), Data Science, and Analytics, and it is particularly harmful for students and early-career people who are thinking about specializing in these areas. This article is to dispel that confusion. Artificial Intelligence (AI) is characterized as "intelligent software agents" in the authoritative book on the subject, "Artificial Intelligence: A Modern Approach" (Russell, 2010). Advances in machine learning, computer vision, and natural language processing create a buzz for AI. This buzz sometimes overshadows other AI algorithms, such as search, stochastic games, etc. Data Science is about organizing and analyzing massive amounts of data. It is tightly linked to machine learning (ML) tools. These tools are also called AI. The overlap with AI is often resolved as "AI makes the tools; Data Science uses the tools". If so, Data Scientists are the mechanics who train and maintain Machine Learning (ML) algorithms, not the engineers who design and build the system. This is at odds with the aspirations of those Data Scientists who aim to "extract value from data" (Irizarry, 2020), and forces a redefinition of Data Science towards becoming an umbrella term highly overlapped with Analytics. Analytics is an umbrella term for all that it takes to "transform data into insight for making better decisions" (Saxena, 2020), and includes the tasks to: Create analyses using statistics, operations research, and decision analysis. Present results in reports, scorecards, and dashboards. Build and run data supply chains. Manage data and metadata, assure data processing flows, and data quality. Design and implement user interfaces, dashboards, reports, etc. Code and maintain data processing, system interfaces, and storage. Set up and run the systems infrastructure of servers and networks. Analytics systems generally get data from transaction systems. Transaction systems are the primary originators and sources of data. There are many kinds of transaction systems, such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Supply Chain Management (SCM), etc. Other data sources, such as Social Media (Facebook, Twitter, etc.), the Internet of Things (IoT), etc., may also be put into this "transaction systems" category for the purposes of this discussion. The difference between analytics systems and transaction systems rests on the differing purposes: one to analyze, the other to do a transaction or task. Analytics systems and transaction systems need not use AI; they can and often do use non-intelligent software. We can, therefore, categorize all systems as intelligent or dumb depending on whether they use AI or not. Dumb systems. These systems are used for data input and storage, with basic "smarts" in data processing such as edit-checks, filtering, grouping, summing, etc. For complex systems such as ERP and CRM, the intelligence is often in the minds of the experts who configure and customize the systems based on their evaluation of business needs, after which users can use the dumb system along the exact pathways set by the experts. Intelligent systems. These systems embed the "mind of the analyst" in the software, enabling the system to achieve its goals (e.g., to transcribe voice to text, or to build the optimal staff roster) for users who do not have any expertise in the algorithms being used. Systems in a 2x2 matrix: Transaction/Analytics vs. Dumb/Intelligent Systems Now we can define AI, Data Science, and Analytics in the context of this 2x2 matrix representing the universe of systems.
https://medium.com/swlh/artificial-intelligence-ai-data-science-and-analytics-14d8395b9ce2
['Rahul Saxena']
2020-06-12 11:19:38.842000+00:00
['Analytics', 'AI', 'Artificial Intelligence', 'Data Science', 'Business Analytics']
How to Use React Refs
How to Use React Refs How to utilise refs in React, with ref forwarding, callback refs and HOCs React Refs are a useful feature that act as a means to reference a DOM element or a class component from within a parent component. This then give us a means to read and modify that element. Perhaps the best way to describe a ref is as a bridge; a bridge that allows a component to access or modify an element a ref is attached to. Using refs give us a way to access elements while bypassing state updates and re-renders; this can be useful in some use cases, but should not be used as an alternative to props and state (as the official React docs point out). Nevertheless, refs work well in certain scenarios that we will visit in this article. Refs also provide some flexibility for referencing elements within a child component from a parent component, in the form of ref forwarding — we will also explore how to do this here, including how to inject refs from HOCs into their wrapped components. Refs at a high level Refs are usually defined in the constructor of class components, or as variables at the top level of functional components, and then attached to an element in the render() function. Here is a bare-bones example where we are creating a ref and attaching it to an <input> element: class MyComponent extends React.Component { constructor(props) { super(props) this.myRef = React.createRef(); } ... render() { return ( <> <input name="email" onChange={this.onChange} ref={this.myRef} type="text" </> ) } } Refs are created using React.createRef() , and are assigned to class properties. In the above example the ref is named myRef , which is then attached to an <input> DOM element. Once a ref is attached to an element, that element can then be accessed and modified through the ref. Let’s add a button to our example. We can then attach an onClick handler that will utilise myRef to focus the <input> element it is attached to: ... handleClick = () => { this.myRef.current.focus(); } render() { return ( ... <button onClick={this.handleClick}> Focus Email Input </button> </> ) } By doing this we are in fact changing the state of an input element without any React state updates. This makes sense in the case of focussing an <input> — we wouldn’t want to re-render elements every time we focus / blur an input. There are a few more cases where refs make sense; we will visit those further down the article. You may have noticed the current property of myRef ; current refers to the element the ref is currently attached to, and is used extensively to access and modify our attached element. In fact, if we expand our example further by logging myRef in the console, we will see that the current property is indeed the only property available: componentDidMount = () => { // myRef only has a current property console.log(this.myRef); // myRef.current is what we are interested in console.log(this.myRef.current); // focus our input automatically when component mounts this.myRef.current.focus(); } At the componentDidMount lifecycle stage, myRef.current will as expected be assigned to our <input> element; componentDidMount is generally a safe place to process some initial setup with refs, e.g. focussing the first input field in a form as it is mounted on stage. What if we change componentDidMount to componentWillMount ? Well, myRef will be null at this stage of the component lifecycle. Refs will be initialised after the component mounts, whereby the elements we are attaching refs to need to be mounted (or rendered) on the stage. 
Therefore, componentWillMount is not suitable for accessing refs. Where refs can be attached Refs are quite extensible in that they can be attached to DOM elements and React class components, but not to functional components. The functionalities and data we have access to depend on where you attach the ref: Referencing a DOM element gives us access to its attributes gives us access to its attributes Referencing a class component gives us access to that component’s props, state, methods, and it’s entire prototype. We cannot attach refs to functional components in React currently. In the following example, myRef will not be recognised and will cause an error: // this will fail and throw an error function FunctionComponent() { return <input />; } class Parent extends React.Component { constructor(props) { super(props); this.myRef = React.createRef(); } render() { return ( <FunctionComponent ref={this.myRef} /> ); } } Your app will not compile if this scenario occurs, ensuring the error will not leak into a final build. Note: To get around this limitation we can introduce ref forwarding boilerplace around functional components. We will visit this further down the article. However, we can define refs within functional components What we can do with functional components however is define the ref inside it, and then attach it to either a DOM element or class component. The below example is perfectly valid: // class component will accept refs class MyInput extends React.Component { ... } // ref is defined within functional component function FunctionComponent() { let myRef = React.createRef(); function handleClick() { ... } return ( <MyInput handleClick={this.handlClick} ref={myRef} /> ); } If you see an opportunity to pass refs into a functional component, it may well be worth converting it to a class component in order to support them. Using refs conditionally with callback refs We can also pass functions into an element’s ref attribute, instead of a ref object — these types of refs are called callback refs. The idea is that doing so gives us more freedom related to when to set and unset refs. The referenced element or component will be supplied as an argument to the function we are embedding within the ref attribute. From here we have more control over how the ref will be set. Consider the following: class MyComponent extends React.Component { constructor(props) { super(props) this.myInput = null; } focusMyInput = () => { if (this.myInput) this.myInput.focus(); }; setMyInputRef = element => { this.myInput = element; }; componentDidMount() { this.focusMyInput(); } render() { return ( <input name="email" onChange={this.onChange} ref={this.setMyInputRef} type="text" ) } } Instead of defining a ref in the constructor, we have now defined an empty myInput property. myInput will not refer to a DOM element until setMyInputRef() is called, which is done so upon the rendering of the component. Concretely: The component renders and setMyInputRef() is called from within the ref attribute of the <input> element. is called from within the attribute of the element. setMyInputRef() now defines the myInput class property with a full reference to the element now defines the myInput class property with a full reference to the element focusMyInput() can now focus the element in question. This is seen as a more favourable approach to initialising refs, whereby the ref itself will only exist when the element in question exists on stage. More Ref use cases So where can refs be used apart from focusing an input? 
Let's briefly go over some use cases before moving on to ref forwarding: Incrementing / Decrementing input values: Attaching a ref to an <input> field in a similar fashion to the example above and creating an onClick handler that increments or decrements the value of the input. We can access the value of an input ref with: incrementValue = () => { this.myRef.current.value++; } render() { <input type="text" ref={this.myRef} value="0" /> <button onClick={this.incrementValue}> Increment Input Value </button> } Getting input values: Perhaps an input value needs to be referenced and included in a string or label — refs provide the bridge to fetch the current value from an input element. Similarly, refs can also read whether a checkbox has been checked, or which radio button has been selected; other elements on your stage could then be modified based on these values, without relying on state. Selecting or cycling through form values: Having an array of valid form values and clicking next or previous buttons to update the selected value. Such an update does not warrant a state update. Selecting text is also a good use case for refs, whereby the ref would manage the currently selected or highlighted value. Media playback: A React-based music or video player could utilise refs to manage its current state — play / pause, or sweeping through the playback timeline. Again, these updates do not need to be state managed. Transitions and keyframe animations: If you wanted to trigger an animation or transition on an element, refs can be used to do so. This is particularly useful when one element needs to trigger a style update for a separate element, which could also be nested within another component. In some cases a ref will need to be attached to an element from within another component — this is when forwarding refs come into play. Let's see how ref forwarding is integrated next. Forwarding Refs Ref forwarding is a technique to automatically pass a ref to a child component, allowing the parent component to access that child component's element and read or modify it in some way. React provides us with extra boilerplate specifically for ref forwarding whereby we wrap a component with React.forwardRef() .
Let’s take a look at how it is used: //handling ref forwarding to MyInput component const MyInput = React.forwardRef((props, ref) => { return(<input name={props.name} ref={ref} />); }); // we can now pass a ref down into MyInput from a parent component const MyComponent = () => { let ref = React.createRef(); return ( <MyInput name="email" ref={ref} /> ); } In the above example we are using React.forwardRef() , which provides us 2 parameters — the props of the component we are wrapping, as well as an additional ref object specifically for the ref we are forwarding. Had we not wrapped our component with React.forwardRef() , the component itself would be the thing we are referencing. This may be indeed what you intend to do, but in order to forward the ref down, React.forwardRef() is needed. Note: The reason we require this additional boilerplate is that a ref JSX attribute is not treated as a prop. Like key , ref has specific uses within React. These can be looked at as reserved keywords for specific React features. We can use React.forwardRef() with class components too. Doing so resembles more of a HOC setup than simply wrapping our component. Take a look at the following example, where we pass our forwarded ref down to the wrapped class component as an innerRef prop: // we will be passing our ref down to this component, to be used for our input element class WrappedComponent extends Component { render() { return ( <input type="text" name={this.props.name} ref={this.props.innerRef} /> ) } } // we are wrapping our wrapped component with forwardRef, providing the ref const MyInput = React.forwardRef((props, ref) => { return (<WrappedComponent innerRef={ref} {...props} />); }); export default MyInput; This is slightly more boilerplate than before; we are opting for the innerRef prop to pass the ref down to <WrappedComponent /> . From here we can go ahead and pass a ref down from a parent component into <MyInput /> : import { MyInput } from './MyInput'; const MyComponent = () => { let ref = React.createRef(); return ( <MyInput name="email" ref={ref} /> ); } This brings us onto the use case of defining HOCs specifically for injecting forwarded refs into component DOM. Using Forwarding Refs with HOCs To expand the HOC concept further, let’s explore how we can wrap a component with a HOC that will automatically forward a ref to a wrapped component that did not originally have ref support. Concretely, what we expect here is: To be able to pass a ref into a component wrapped in the HOC via a forwardedRef prop. We will name the HOC withRef . prop. We will name the HOC . The HOC to take a ref and forward it to the wrapped component The wrapped component to attach the ref to a child element // define the withRef HOC export function withRef(WrappedComponent) { class WithRef extends React.Component { render() { const {forwardedRef, ...props} = this.props; return( <WrappedComponent ref={forwardedRef} {...props} />); } } return React.forwardRef((props, ref) => { return < WithRef {...props} forwardedRef={ref} />; }); } Now we can simply wrap our <MyInput /> component with withRef() to support ref forwarding, supplying a non-ref and ref-supported version of the component: ... 
export const MyInput = (props) => ( <input name={props.name} type="text" /> ); export const MyInputWithRef = withRef(MyInput); We are now able to import <MyInputWithRef /> into any component, which will now route a ref to the child element with the withRef() HOC behind the scenes: import { MyInputWithRef } from './MyInput'; const MyComponent = () => { let ref = React.createRef(); return ( <MyInputWithRef name="email" forwardedRef={ref} /> ); } To learn more about HOCs and other use cases that can be applied to them, check out my article dedicated on the subject: And that wraps up this talk on refs in React. Use them where appropriate — but not as a replacement to state where state is warranted. To continue reading about React refs for functional components and the useRef hook, check out my article discussing the proper implementation and use cases: Using refs effectively will indeed simplify your component logic and provide some nice UI feedback for end users; keep them in mind when developing around forms in particularly.
https://rossbulat.medium.com/how-to-use-react-refs-4541a7501663
['Ross Bulat']
2019-10-08 16:14:37.574000+00:00
['Software Engineering', 'JavaScript', 'React', 'Programming', 'Web Development']
First Rule of Entrepreneurship: Always Have Multiple Revenue Streams
A common point of failure One type of easy-to-spot SPOF for investors is revenue stream. This is especially true for early startups because they haven’t had an opportunity to build a robust variety of revenue streams. For example, an early startup might have one customer or one partnership providing the bulk of its revenue. Or the startup might be relying on one type of advertising channel to drive sales. From the startup’s perspective, it usually makes sense to focus its limited time, energy, and resources on one valuable revenue stream and make it as successful as possible. After all, why deploy scarce resources to things with questionable value when you’ve got something that’s certain to deliver a high ROI? However, from the perspective of potential investors, a startup with one primary revenue stream is a huge risk. If something happens to that revenue stream, the company will fail. So why take the chance? Investors are better off finding similar companies with multiple revenue streams even if those other companies are currently generating less cash. The YouTuber I was meeting with is a perfect example. Yes, he was making good money from YouTube, but he was also completely at YouTube’s mercy. When YouTube changed its algorithms, his audience vanished overnight, and he didn’t have any other way to make money. Why hadn’t he also been building an audience on TikTok or Instagram? Why didn’t he have a newsletter? Why hadn’t he signed long term sponsorship deals? In other words, why hadn’t he built other potential revenue streams to protect himself and his business from the eventuality that YouTube wouldn’t remain a viable source of income? Instead, his years of hard work were negated in an instant because some strangers at a company on the other side of the country changed a few lines of code. Since he hadn’t properly accounted for that possibility, he was left with lots of bills, and no way to pay them. If you don’t want to get stuck in the same position, you need to look carefully at your single points of failure. That’s true regardless of whether you’re a social media influencer, a startup CEO, or even someone working a traditional 9-to-5 corporate job. In fact, people with corporate jobs are often the least secure. What would happen if your company shut down tomorrow? How would you pay your bills? How would you eat? If the answer to those questions doesn’t include, “keep paying the bills with my other income,” then you’re violating the first rule of entrepreneurship, and it’s time to find other streams of revenue. Have you considered mowing lawns?
https://medium.com/swlh/first-rule-of-entrepreneurship-always-have-multiple-revenue-streams-3b724b9757aa
['Aaron Dinin']
2020-12-03 20:04:12.591000+00:00
['Startup', 'Business', 'Venture Capital', 'Entrepreneurship', 'Side Hustle']
[Python] Coding Practices
I am seeing some tension in onboarding new team members, and a lot of it is about coding practices and a common mindset that is not shared by people who come from other disciplines. I will add to this list as I come across cases, and I hope it serves other people joining Datance and the broader community. Disclaimer: I'm an ex-software engineer who hasn't done industrial software engineering for more than 10 years. On many of these matters I have consulted a partner who has spent the last 10 years producing software across a wide range of languages and environments. 1. Lambdas Anonymous functions are a powerful concept, but they have to be used with caution. Lambda semantics and syntax differ between languages, and depending on the language and situation they impose harsh restrictions and may also have other side effects. So, after calming down from the excitement of learning about lambdas for the first time, you should think clearly about what it is you want to achieve in your code and how you want to use lambdas. In JavaScript, for instance, we also have to deal with the extra inconsistency between the arrow-function style of defining a lambda and the standard function keyword. Remember that when you define an arrow function, for example, you don't have access to this . Fortunately, in this case the docstring is not affected. In Python, however, we have one-liner function definitions with the help of lambda. But this comes at the cost of losing the readability of a docstring and of your function syntax. As of now I am not aware of other semantics that may be affected here. Also, with a normal setup you wouldn't be able to type hint your lambdas, and you won't get any help from your IDE (say VSCode) to remind you of the parameters, and even if it did, it would defeat the purpose of an anonymous function. Now, of course, for type hinting alone you could resort to something like this: from typing import Callable StrLen: Callable[[str], int] = lambda Var1: len(Var1) But even this will not give you the convenience of your automatic docstring generator, or even your IDE's help with the parameters. So, at least at the time of this writing, I don't know what the point would be. As of now the only reasonable use of lambdas I can think of is crafting partial functions (which is a whole topic of its own if you are not used to functional programming in pure functional languages, but maybe that is for another time). If you have comments or edits on anything here, I'm always looking to learn more. 2. Imports When you are working with layered architectures it is always good to be aware of your hierarchy. Therefore, try to avoid doing things like from Package.Module import Function . Ideally we don't want to see a bare Function() in our code. Not to mention that if you are importing several modules you will get namespace conflicts if the modules have functions of the same name. Keep your from...import... knowledge for writing importers, which most probably means the maintainers of frameworks need to deal with it. 3. Information Flow Always try to stick to one-directional information flow. This will avoid bugs and ease troubleshooting and architecting your frameworks. What do I mean by that? Imagine something like this: src/data/model/Models.py src/helpers/Loaders.py src/main.py Here main.py is responsible for managing the information flow. A Datasource object is created in Models.py . We would import Models and Loaders in main, then call Loaders.Load(Datasource) from main. Be sure not to call Load() inside Loaders.py ; a minimal sketch of this layout follows below.
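Here is a hedged, minimal sketch of that layout. The Datasource fields, the Load() signature, and the file name are invented for illustration (not taken from any real codebase), and the imports assume src/ is the import root with proper package __init__.py files:

```python
# src/data/model/Models.py
class Datasource:
    """Describes where the data lives; the fields here are illustrative only."""
    def __init__(self, path: str):
        self.path = path


# src/helpers/Loaders.py
def Load(datasource) -> str:
    """Reads the Datasource it is handed and returns the raw contents."""
    with open(datasource.path) as handle:
        return handle.read()


# src/main.py: the only layer that wires things together
from data.model import Models   # import modules, not bare functions
from helpers import Loaders

def main():
    datasource = Models.Datasource("input.csv")  # hypothetical file name
    data = Loaders.Load(datasource)              # main drives the flow, not Loaders
    print(len(data))

if __name__ == "__main__":
    main()
```

Note that main.py imports the Models and Loaders modules rather than their functions, and Loaders never calls itself or reaches back into main, so information flows in one direction only.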
4. Dynamic Variable Generation If you are hot on dynamism, you will probably fall victim to this. But, for heaven's sake, no eval() or globals() for dynamic variable generation in your local or global scope. You are undermining your IDE's ability to help you, your colleagues, and your future self when trying to modify or troubleshoot your code. 5. No or Limited Side Effects Side effects are when a function creates a change in its context. All communication with a function should happen through its parameters and its return value. Functions that change their environment are not pure functions and make your entire software error-prone. They may sometimes be unavoidable, but their use MUST be extremely limited.
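As a small, hedged illustration of the side-effect point (the running-total example is mine, not from the original post):

```python
# Impure: the function reaches out and mutates state in its surrounding context.
total = 0

def add_to_total(value):
    global total          # side effect: modifies module-level state
    total += value
    return total

# Pure: everything flows in through parameters and out through the return value.
def add(current_total, value):
    return current_total + value

total = add(total, 5)     # the caller decides what to do with the result
```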
https://medium.com/an-hour-with-myself-as-an-engineer-and-a-scientist/python-coding-practices-1ce8727c265a
['Alireza Goudarzi']
2020-04-23 00:59:12.037000+00:00
['Software Engineering', 'Software Development', 'Python']
Beware Those Offering Simple Solutions to Complex Problems
We are a paradoxical species in many ways but perhaps one of the more perplexing is the ever present war within ourselves between the desire for quick, easy, and simple solutions to our problems and our compulsion for excessively complicating the problems facing us. The first is driven by the natural impulse towards comfort and pleasure. The second is most often born of our quest for a sense of personal importance. Complicating a problem can be done quite simply and easily, solving it cannot. There is nothing evil or weak or lazy about preferring comfort and ease to discomfort and difficulty. We harvest the low hanging fruit first, we take the route with the least twists and steep cliffs, we prefer spending time with people we find easy to get along with, we happily greet information which affirms our existing thoughts and feelings. Seeking and preferring comfort and pleasure is a natural impulse and is the prime motivation behind virtually all technological development. Preferring things be simple, however, does not mean it is always an option. There are always going to be times in our lives when the solutions to our problems are neither simple nor easy. Some solutions will certainly be less complicated than others but ‘less complicated’ is still ‘complicated’. This holds true at the personal level and exponentially more so at the societal level, because societal problems are not a singular problem but rather a collected multitude of personal ones. When you start looking at a problem and it seems really simple with all these simple solutions, you don’t really understand the complexity of the problem. And your solutions are way too oversimplified, and they don’t work. Steve Jobs Wanting discomfort or pain to stop is a natural impulse, the more intense the pain the greater our desire for it to stop. This is why torture proves so impactful. Cause enough pain and you will get a response. The problem is that response has nothing to do with any objective truth. It is aimed entirely at placating the torturer so they will stop inflicting pain and thus tailored to be the answer the torturer wants to hear regardless of whether or not it contains any actual truth whatsoever. Our compulsion for complicating the problems facing us turns us into our own torturers. We become the ones intensifying our own pain and when it reaches too great an extreme we snap to the other half of the paradox and crave the simplest and easiest solution, something which fits neatly and comfortably with our already existing thoughts and feelings to free us from any further strain or discomfort. There is always an easy solution to every problem — neat, plausible, and wrong. H.L. Mencken It is important to note that complicating a problem does not necessarily mean making it more complex. Sometimes that is exactly what it means, we frequently add countless extra and needless steps to a process either to add more elements we find enjoyable or to avoid ones we dislike. When riding our bikes from point A to point B we will go several blocks out of our way, take winding paths through a park, and even cross a toll bridge if it means we can avoid that one huge steep hill in the middle of the shorter more direct route. We add extra ‘organizational’ or presentation elements to a project so we can spend more time playing with charts and diagrams instead of doing the less exciting research the project demands. 
But we also often ‘complicate’ our problems by rendering them more and more difficult to solve to the point of seeming unsolvable. We discount and disqualify any solutions which feel like they entail any further discomfort or difficulty. We cast ourselves as the victim facing overwhelming forces both to excuse our failures and to place all blame for them on the great and mysterious ‘them’. We’re never going to get a promotion because the bosses we’ve worked for refuse to acknowledge our singular brilliance, probably because they feel threatened by us, and instead spend all their time insisting on unreasonable things like consistent work ethic and productivity. We conjured our ideal image of a significant other, dream house, dream job, wedding day, or any other number of images when we were young and it is up to the world around us to fit perfectly into them. Instead of acknowledging that the true answers to our problems or to achieving our desires involve difficult and uncomfortable work on our parts we deflect the blame out onto the world around us along with a prescription, and expectation, of a singular solution tailor made for our comfort. This is also the fuel which powers societal addiction to conspiracy theories, that and fear of the unknown. Beware of people preaching simple solutions to complex problems. If the answer was easy someone more intelligent would have thought of it a long time ago — complex problems invariably require complex and difficult solutions. Steve Herbert If we allow ourselves to get backed into a corner desperate for seemingly simple and painless solutions we open ourselves up to being manipulated and exploited by anyone appearing to offer them. One of the most tried and true tactics of trying to establish tyrannical or totalitarian power is to exacerbate an existing problem, or to manufacture a problem then exacerbate it, to the point of making it seem virtually impossible to overcome then claim to be the only one capable of a quick, simple, and painless solution. Modern day politics epitomizes this approach. Look at how terrible and frightening this problem is and look how voting for my opponent will make it so much worse. Vote for me and you won’t have to feel any more fear or pain because I alone know the silver bullet solution which will take full effect the moment I am elected. It is a vulnerability we all possess when struggling with crisis and intense conflict. Not only does it leave us open to manipulation but the stoking of fear and social division this sort of manipulation employs in turn generates a whole new set of problems which are even more difficult to resolve because the necessary components are now angrily opposed to one another. Beware of simple solutions. They often lead to complex problems. Bobby Hoffman Faced with a massive tangled knot of rope swinging a heavy sharp axe at it might seem like a swift and decisive solution. Depending on your definition of the problem it might even successfully achieve it to a degree. If all you are seeking is for the knot to no longer exist the axe will do the job. But if you are seeking to untangle the knot so as to regain use of an untangled length of rope the knot may be gone but the end result is gone as well since what you end up with after the axe falls is several severed chunks of rope. The primary reason we seek to untangle a knot is so we can use the untangled rope for some other purpose. 
Chopping with the axe may vanquish the knot, offering us a brief rush of vengeance for our frustrations, but whatever task we needed the rope for has become far more difficult now that all we are left with is divided fragments. Television screens saturated with commercials promote the utopian and childish idea that all problems have fast, simple, and technological solutions. You must banish from your mind the naïve but commonplace notion that commercials are about products. They are about products in the same sense that the story of Jonah is about the anatomy of whales. Neil Postman Our desire for simple, easy, and comfortable solutions may be a natural impulse, but it has been exorbitantly inflated and capitalized upon both by those seeking to profit commercially and those seeking greater positions of personal power. It is not the beer, car, house, clothing, or product which potentially holds us in thrall, but rather the notion that the single simple action of buying them will make our complex problems instantly fade away.
https://medium.com/curious/beware-those-offering-simple-solutions-to-complex-problems-1257cdf16fe8
['Jeff Fox']
2020-12-04 19:09:54.308000+00:00
['Life Lessons', 'Society', 'Politics', 'Self', 'Psychology']
Watching ‘Crazy Ex-Girlfriend’ Led to My Mental Health Diagnosis
With my new diagnosis to guide me, I was motivated to implement positive changes in my life. I began to identify and more easily recognize my triggers, and enforce self-care methods whenever I started to spiral. I also was able to explore renewed openness with friends and my mother, as I learned my feelings of imminent abandonment and my affinity for self-sabotage stemmed from the disorder. I allowed myself to enjoy moments with them and not hold them so far away from me. I also decided to move halfway across the world for a fresh start. My spontaneity has seldom done me any good, but like Rebecca, who quit her job as a lawyer to find her passion, I decided to take a leap of faith for the better. Recognizing that a core aspect of the disorder was a constant need for validation, I also deleted myself from all dating apps and began a ‘dating sabbatical.’ Craving human contact and connection is a very natural thing. However, relationships and intimacy are triggers, so I made a sacrifice as a means to explore my identity — away from the influence of others. Though it has been hard, I have instead put all that energy in building myself and my career. I would be lying if I said it doesn’t get lonely, but it has been extremely fulfilling in a way a relationship never was for me. However, I am aware I need to not get too comfortable in my solitude. On a later episode of the show, during a therapy scene, Rebecca explains her need to cut off intimate relationships. “Because I don’t want to die, ok? I’ve gotten better. I’ve progressed. But something will happen and…I know what I’m capable of when I feel abandoned. I can go to a dark place, a place I can hurt myself, and I never wanna be in that place ever again,” she tells her therapist. The sentiment, I feel, perfectly encapsulates my feelings. But seeing it play out in front of me made me realise that I can’t allow my life to be dictated by fear. Even though my last suicide attempt was motivated by those exact feelings, for someone I rarely think about and very much doubt I ever loved. I can’t live dictated by the disorder. There is this view that people with BPD aren’t relationship material, which is wrong, but I would be internalizing that misconception by closing myself off to love. So I have also been learning how to set healthy boundaries — for when I am ready for intimacy.
https://medium.com/an-injustice/how-watching-crazy-ex-girlfriend-led-to-my-mental-health-diagnosis-fbf8221ffbcf
[]
2019-11-19 10:07:51.113000+00:00
['TV Series', 'Borderline Personality', 'Health', 'Mental Health', 'Media']
A Place of Healing: Wrestling with the Mysteries of Suffering, Pain and God’s Sovereignty
We have all asked the question, “If there is a God, why is there suffering?” But how often have we asked, “If there is no God, why are there people who still believe in Him despite all of their sufferings and pain?” After reading the book, A Place of Healing: Wrestling with the Mysteries of Suffering, Pain and God’s Sovereignty by Joni Eareckson Tada, I have learned of one such person who continues to believe in God despite her tremendous pain. Because of a diving accident, Joni has lived as a quadriplegic for decades, kept mostly in her wheelchair, and recently she has suffered tremendous pangs of pain from a fracture at the back of her spine. Despite all of this, however, she continues to have faith. In fact, her faith becomes even deeper as she clings to God more tightly in her dependency upon His Grace. Below are just some of the excerpts from the book: “Whatever you want, Lord…whether I jump out of my wheelchair pain free and tell people that my healing is genuine evidence of God’s awesome power… or whether I continue smiling in my chair, not in spite of my pain but because of it, knowing I’ve got lessons to learn, a character to be honed, other wounded people to identify with, a hurting world to reach with the gospel, and a suffering Savior with whom I can enjoy greater intimacy. And every bit of it genuine evidence of God’s love and grace.” “To this point, as I pen this chapter, He has chosen not to heal me, but to hold me. The more intense the pain, the closer His embrace.” How could we ignore such words? How could we not notice the kind of faith that lives within a heart that suffers from so much pain? It is so difficult to believe when one is suffering, but I guess it is more difficult not to believe a person who keeps on believing despite her pain. That is the reason I wanted to share this book with you, so you may be blessed by it and by Joni’s story, as it has blessed me. It is true that we are God’s beloved children, and that by all means, we expect our Father to love us and provide us with everything we need. But even in the seeming absence of riches, and even in the presence of pain, we cannot conclude that God isn’t there anymore, and that He doesn’t love us anymore. We are, after all, still mere pilgrims in this world. This is not yet the end. In fact, this is still a battlefield where we continue to fight for everything we hold dear. These are Joni’s own words: “At different times in my life I’ve enjoyed the old pictures of Jesus cradling cute lambs or walking around with blow-dried hair, clad in a white robe looking like it just arrived from the dry cleaner. But these days, these warfare days, those old images just don’t cut it for me. I need a battlefield Jesus at my side down here in the dangerous, often messy trenches of daily life. I need Jesus the rescuer, ready to wade through the pain, death, and hell itself to find me, grasp my hand, and bring me safely through.”
https://medium.com/the-catholic-refuge/a-place-of-healing-wrestling-with-the-mysteries-of-suffering-pain-and-gods-sovereignty-2b830fc6e475
['Jocelyn Soriano']
2020-11-28 23:38:03.501000+00:00
['Book Review', 'Books', 'Christianity', 'Health', 'Healing']
The Ultimate Vue Cheat Sheet. Vuejs has become one of the most…
Vuejs has become one of the most successfully applied, loved and trusted frontend JavaScript frameworks in our community. Vue 3 comes with a whole lot of new features. In this article we will go through the fundamentals of Vue 2 and Vue 3: basically a Vue cheat sheet to make your life easier. We will break the cheat sheet down into different sections like global APIs, Vue configs and the rest. Vue DOM new Vue({}) : This creates the Vue instance and provides it an existing DOM element to mount on. This is where all your Vue code is defined. el : A CSS selector string or an actual HTMLElement on which the Vue instance will be mounted. template : A string template which is used as the markup for the Vue instance. Your Vue components are defined here. render: h => h(App) : The render function receives a createElement method as its first argument, used to create VNodes. Aliasing createElement to h is a common convention you’ll see in the Vue ecosystem and is actually required for JSX. If h is not available in the scope, your app will throw an error. renderError (createElement, err) : This provides render output when the default render function encounters an error. The error encountered will be passed into the function as the second param. Vue Data Property props : This is a list of attributes that are exposed to accept data from the parent component. You can implement this using an array and then pass all the parent data into it. It also accepts extra configs for data type checking and custom validation. props:['users','samples'] data(){return{}} : This is the data object for a particular Vue instance. Vue converts its properties into getters/setters to make them “reactive”. data() { return { name:"Sunil", age:80 } } computed : Computed properties calculate a value rather than store a value. These computed properties are cached, and only re-computed on reactive dependency changes. computed:{ doubleA:function() { return this.a * 2 } } watch : This is an object where keys are expressions to watch and values are the corresponding callbacks. Basically it listens for when your data property has been changed. watch:{ name:function(val,oldVal) { console.log('newval',val,'old',oldVal) } } methods : These are methods to be mixed into the Vue instance. They can be accessed directly on the VM instance using the this keyword. Always avoid using arrow functions to define methods. methods:{ logName() {console.log(this.name)} }
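As a quick illustration of how these instance options fit together, here is a minimal sketch of a Vue 2 instance combining el, data, computed, watch and methods. The element id and the specific property names are only illustrative assumptions, not code from the original cheat sheet.

// Minimal Vue 2 instance sketch (assumes Vue is loaded and an element with id="app" exists)
new Vue({
  el: '#app', // CSS selector for the existing DOM element to mount on
  data() {
    return { name: 'Sunil', age: 80 }; // reactive state; Vue wraps these in getters/setters
  },
  computed: {
    // cached and only re-computed when `age` changes
    ageInMonths() { return this.age * 12; }
  },
  watch: {
    // runs whenever `name` changes
    name(val, oldVal) { console.log('new:', val, 'old:', oldVal); }
  },
  methods: {
    // a regular function (not an arrow function) so `this` points to the instance
    logName() { console.log(this.name); }
  }
});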
Vue Lifecycle Hooks A component in Vue has a lifecycle which is managed by Vue itself as it creates the component, mounts it to the DOM, updates it and destroys it. beforeCreate : This is called synchronously immediately after the instance has been initialized, before data observation and event/watcher setup. beforeCreate(){console.log('Before Created')} created : This is called after the Vue instance is created. At this point data observation, computed properties, methods and watchers have been set up, but the instance has not been mounted yet. beforeMount : In this phase, Vue checks if any template is available in the object to be rendered in the DOM. If no template is found, then it considers the outer HTML of the defined element as the template. mounted : This is called after the instance has been mounted, where el is replaced by the newly created vm.$el. If the root instance is mounted to an in-document element, vm.$el will also be in-document when mounted is called. If you want to wait until the entire view is rendered, you can use the nextTick method inside the hook: this.$nextTick() beforeUpdate : This gets fired when data changes, before the DOM is re-rendered and patched. Also take note that this hook is not called during server-side rendering. updated : The component’s DOM will have been updated when this hook is called, so you can perform DOM-dependent operations here. However, in most cases you should avoid changing state inside the hook. To react to state changes, it’s usually better to use a computed property or watcher instead. beforeDestroy : This is called before the Vue instance is destroyed. destroyed : This is called after the Vue instance has been destroyed. Vue 3 Lifecycle Hooks Vue 3 also comes with its own lifecycle hooks, which is really great for development. To use them we will have to import them into our components like this: import { onMounted, onUpdated, onUnmounted } from 'vue' Here we get to import only the hooks that we want to use. Here are the Vue 3 lifecycle hooks: onBeforeMount : This hook gets called before mounting occurs. onMounted : Once the component is mounted this hook is called. onBeforeUpdate : Called once reactive data changes and before the component is re-rendered. onUpdated : Called after re-rendering of the component. onBeforeUnmount : This is called before the component instance is unmounted. onUnmounted : This is called immediately after the component instance is unmounted. onActivated : Components in Vue can be kept alive; this hook is called when such a component is activated. onDeactivated : This is called once a kept-alive component is deactivated. onErrorCaptured : This is great for error handling. This hook is called once an error is captured from a child component.
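As a rough sketch of how these Vue 3 hooks are registered, the example below wires onMounted and onUnmounted inside a component's setup function (setup itself is covered in the Composition API section that follows); the interval timer is purely an illustrative assumption.

// Sketch: registering Vue 3 lifecycle hooks inside setup() (assumes a Vue 3 component)
import { onMounted, onUnmounted } from 'vue';

export default {
  setup() {
    let timer = null;
    onMounted(() => {
      // runs once the component has been mounted to the DOM
      timer = setInterval(() => console.log('tick'), 1000);
    });
    onUnmounted(() => {
      // runs after the component is removed; clean up side effects here
      clearInterval(timer);
    });
  }
};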
Vue 3 Composition API Before we can access the Vue 3 Composition API in a Vue 2 project we have to first of all install it: npm install @vue/composition-api After the installation is successful we can now import it into our main.js file: import Vue from 'vue'; import CompositionApi from '@vue/composition-api'; Vue.use(CompositionApi); With this done we are set to use the Composition API in our application. Now let's look at some of the Vue 3 features: setup() : This function is called when an instance of a component is created. It takes in two parameters, props and context . – Props are reactive values and can be watched: import { watch } from '@vue/composition-api'; export default { props: { age: String, }, setup(props) { watch(() => { console.log(`Sunil is ` + props.age + " years old"); }); }, }; – Context has these properties: `attrs`, `slots`, `emit`, `parent`, `root`. Always remember that the `this` keyword is not available in the setup function, meaning that this won't work: setup() { function onClick() { this.$emit // not available } } refs : The new way of getting a reference to an element or component instance in a template is by using the ref method. To use this, we have to first of all import it into our application like this: import { ref } from '@vue/composition-api' And then use it like this in our component: <template> <div>{{ count }}</div> </template> <script> import { ref } from '@vue/composition-api' export default { setup() { return { count: ref(0) } } } </script> Vue Global Configs The Vue.config object is where we can define all our Vue global configs. This is one of the important parts of the Vue cheat sheet. Vue.config.silent : This config disables all Vue logs and warnings. Vue.config.devtools : This configures whether to allow vue-devtools inspection or not. Vue.config.performance : This config enables component initializing, compile, render and patch performance tracing in the browser devtool timeline. Vue.config.productionTip : This controls whether the production tip is shown on Vue startup. Vue.config.ignoredElements : Makes Vue ignore custom elements defined outside of Vue (e.g., using the Web Components APIs). Otherwise, it will throw a warning about an Unknown custom element. Vue.config.errorHandler : This config assigns a handler for uncaught errors during component render functions and watchers. Vue.config.optionMergeStrategies : This defines custom merging strategies for options. The merge strategy receives the value of that option defined on the parent and child instances as the first and second arguments, respectively.
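Here is a small sketch of how a few of these global configs might be set in a Vue 2 entry file before the root instance is created; the specific values and the custom element name are illustrative assumptions, not recommendations from the article.

// Sketch: global configuration in a Vue 2 entry file (set these before calling new Vue())
import Vue from 'vue';

Vue.config.productionTip = false;            // hide the production tip on startup
Vue.config.performance = true;               // enable devtools timeline performance tracing
Vue.config.ignoredElements = ['my-widget'];  // hypothetical custom element handled outside Vue
Vue.config.errorHandler = function (err, vm, info) {
  // receives uncaught errors from render functions, watchers and lifecycle hooks
  console.error('Vue error:', err, info);
};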
Vue Templates and Themes Just as the cheat sheet above helps you speed up your workflow, there is another great resource: ready-to-use Vue templates. They help you create visually stunning applications using the ready-made design components provided in the template package. You can definitely check them out for your application, and you can download free Vue templates as well if you do not want to invest anything to start with.
https://medium.com/js-dojo/the-ultimate-vue-cheat-sheet-for-version-3-and-2-3ed1a8b9d5d4
['Sunil Joshi']
2020-10-01 06:16:28.880000+00:00
['Vue', 'JavaScript', 'Web Development', 'Vuejs', 'Productivity']
The Rules For The Facebook Group About Facebook Group Rules
We’re so glad you’re here. Except for you, Karen. Photo by Kari Shea on Unsplash Welcome to the Facebook Group about Facebook Group Rules. This is a place to discuss all the different kinds of Facebook Group rules and absolutely nothing else. If you’d like to discuss something else, please do not request access to this group. Not that we’d let you in, but if you could just save us the time and not even try that would be great, we all have jobs and Greg is training for the marathon. The admins of this group know how much we all love leading lives within clearly defined but heavily abundant boundaries, so we’ve pinned this to the top of this Facebook Group page for easy reference. Please review, bookmark, and memorize all rules and abide by them or your posts will be removed, your access to this group will be revoked, and we’ll publish your 9th grade report card on the front page of a small but very reputable local newspaper.
https://shanisilver.medium.com/the-rules-for-the-facebook-group-about-facebook-group-rules-a2a41ac4d02a
['Shani Silver']
2020-09-17 22:32:16.751000+00:00
['Humor', 'Writing', 'Facebook', 'Culture', 'Social Media']
Why Domino’s Pizza Was Ready to Give Free Pizza for 100 Years
The Campaign Domino’s Pizza launched a marketing campaign in Russia where people could get 100 free pizzas for 100 years if they tattooed themselves with the brand’s logo and posted those pictures on social media with the hashtag #DominosForever. To their surprise, there was quickly an abundance of photos of people with the brand’s tattoo on Facebook, Instagram, and VKontakte (a Russian social media platform). I wonder what they expected. They should have expected people to react this way. Here are some of those pictures posted by people: Source: Screenshot by the author on Instagram Source: Screenshot by the author on Instagram Domino’s Pizza clarified that only the first 350 people to post the tattooed images would get the free pizzas. The tattoos were also required to be at least 2 cm in length, and had to be on “visible parts of the body.” But it was too late already, and people didn’t stop posting such pictures. They kept coming non-stop. Even after the number of participants was well over 350, people still kept posting pictures on social media. The company then had to post an urgent message for people who were about to get the tattoos. The message was: "An urgent message to all those sitting at the tattoo artist’s right now: We’ll include you in the list of participants, but we’re waiting for photos up to midday today." This campaign had been planned to run for two months, but due to the overwhelming participation, it had to be ended after just 5 days.
https://medium.com/illumination-curated/why-dominos-pizza-was-ready-to-give-free-pizza-for-100-years-973fcc05815d
['Binit Acharya']
2020-10-09 13:14:22.681000+00:00
['Business', 'Marketing', 'Psychology', 'Technology', 'Social Media']
Exceptions From A Software Engineer
Let’s define an SDE/SE using the following points; this is my understanding and learning so far. GDTRS “A software engineer is the one who can ‘G’ather requirements, ‘D’evelop, ‘T’est, ‘R’elease and ‘S’upport a software product.” - Pranay Deep Gather Requirements and Design Photo by Med Badr Chemmaoui on Unsplash Use common sense to understand the general ask, then dive deep into the actual requirement. Create a design document (HLD and LLD) and get it approved by a senior SDE and peers. Break it into tasks and give an ETA. Development Photo by Kevin Ku on Unsplash Choose a technology to deliver the result. Follow SOLID principles and write scalable and maintainable code. Write proper “Unit Tests” for all the public methods (a minimal sketch follows after the acronyms below). Make sure to maintain code coverage of at least 75%. *Writing unit tests after the code is not TDD Testing Since you have covered UT (Unit Tests) while writing code, now is the time to write integration tests. Here we check the actual behavior of various components when they interact with each other. Release Photo by Bill Jelen on Unsplash You don’t want to deploy all of your newly baked code into production at once. Better to deploy your code in batches, 25% at a time. Observed any deviation from normal behavior? Revert! Support: — Every SDE is expected to be On-Call (Support) for a week or two (team dependent). Acronyms: Photo by Lauren Sauder on Unsplash SDE: Software Development Engineer. SE: Software Engineer. UT: Unit Test. LLD: Low Level Design. TDD: Test Driven Development. ETA: Estimated time of arrival. SOLID: Single responsibility, Open-closed, Liskov substitution, Interface segregation, Dependency inversion.
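To make the unit-testing point concrete, here is a minimal sketch of a test for a public method, written in plain JavaScript with Node's built-in assert module since the article does not prescribe a language or framework; the Cart class is purely a hypothetical example.

// A hypothetical module with one public method, plus a minimal unit test using Node's assert
const assert = require('assert');

class Cart {
  constructor() { this.items = []; }
  addItem(name, price) { this.items.push({ name, price }); }
  total() { return this.items.reduce((sum, item) => sum + item.price, 0); }
}

// Unit test for the public total() method
const cart = new Cart();
cart.addItem('book', 10);
cart.addItem('pen', 2);
assert.strictEqual(cart.total(), 12); // throws if total() regresses
console.log('Cart.total() unit test passed');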
https://medium.com/dev-genius/exceptions-from-a-software-engineer-4a9e8e5611be
['Pranay Deep']
2020-06-24 10:27:49.815000+00:00
['Software Engineering', 'Software Development', 'Amazon', 'Microsoft', 'Expectations']
A Groovy Kind of Relationship
What are the pros and cons of using Groovy? Jennifer Strater, former Engineer at Zenjob, and Ash Davies, Senior Software Engineer at ImmobilienScout24, talk to us about their experiences using the programming language, why Jenn spends less time “banging her head against the wall than with other languages” and why Ash thinks that “developing with Groovy can be frustrating sometimes”. How long have you been working with Apache Groovy and what do you use Groovy for? Jenn: I’ve used the Groovy programming language nearly every day at work since 2013. I took a three month break while on a contract doing Javascript, but pretty much everything else has been Groovy. In all of the teams and projects I’ve worked on, we’ve developed web applications using Grails, Spring Boot, or Ratpack. I also use Groovy in other ways like for writing build scripts using Gradle, scripting the deployment pipeline in Jenkins, and automating UI tests using Geb. Ash: Working with the Android development for the last five years, I’ve become familiar with Groovy and Gradle for building build scripts, for Android applications themselves, for our continuous integration servers, building reports and deployment scripts. 2. What makes Apache Groovy unique in comparison to other programming languages? Jenn: Groovy is unique because it’s very flexible. Many people know Groovy for its use as a dynamic, scripting language, but it’s so much more than that. Groovy supports optional typing and static compilation. It’s also incredibly powerful as a language for creating Domain Specific Languages (DSLs). This is how tools like Gradle and Jenkins use Groovy. It’s also interesting to note the case studies from Mutual of Omaha where they created a DSL for their actuaries to use. Ash: Generally Groovy shared similarities with many other loosely typed languages, in that you can define programming constructs without having to comply to many difficulties of static compilation (optionally). Interestingly, Groovy allows for a lot of syntax sugar that wouldn’t ordinarily be available in Java, such as null coalescing, or string interpolation, though these features are also available in languages such as Kotlin, it’s quite nice that this is included with Groovy. 3. What are pros and cons when using Apache Groovy? Jenn: Although the flexibility of Groovy can be advantageous in many cases, it also comes with a few cons. Most are summarized with the expression, “With great power comes great responsibility”. With Groovy, that means it’s important to have well-trained developers who follow good practices like writing tests, participating in code reviews, running static analysis tools, and enforcing Continuous Integration (CI) and Continuous Delivery (CD) best practices. Ash: In a build environment Groovy becomes quite advantageous due to having much more control of the environment, but can often be a disadvantage too, since developers will often not fully understand the sequence of operation, and try desperately to get something to work by any means, resulting in bad practices. Additionally when working with Android, it can be quite difficult to make use of the plugin APIs as it’s quite a bit harder to find the implementation, so you’re at the mercy of the public documentation, which is often out-of-date, or difficult to read. 4. Do you think Apache Groovy is becoming more popular? Jenn: Definitely! The download numbers and various surveys show that Groovy is increasing in use, but you don’t see as many news articles and conferences talks about it. 
Apache Groovy is often referred to as the most used JVM language no one is talking about. One of the reasons for that is that there are no paid developer advocates for Groovy. It also means there is no single company making all the decisions and the language grows through its strong community of individual contributors. Ash: There will always be innate enthusiasm for some languages, but Groovy seems to be something most developers have used at some point in their life, but perhaps haven’t found the necessity to advocate for, its flexibility affords a lot of usefulness for specific use-cases, but perhaps doesn’t find itself at the center of attention when compared to other more “exciting” languages. 5. What have you learned from working with Groovy? What tips can you give when working with Groovy in a team? Jenn: After working with Groovy for several years, I’ve learned that it can be a very powerful tool for creating web applications. The reduction of boilerplate code means that I can also finish tasks much faster and with fewer redundancies. My suggestion would be to learn the language properly by reading the Groovy in Action book, going to conferences, or taking training classes — whatever works for your learning style. It’s also very important to enforce good development practices from the beginning so that the whole team can benefit from becoming more Groovy. 6. Can everyone in your team program in Groovy? Jenn: That’s actually a really great question. At Zenjob, everyone in the backend, Infrastructure Automation, Android, and QA teams has at least some experience with Groovy. For now, the backend team uses Groovy, in particular, the framework Grails, for the core business application, and writes automated tests using Spock. Both the backend and Android teams use Gradle as a build tool, although the Android team has switched to use Kotlin now. Geb, a Groovy and Spock-based framework built on top of Selenium is used by the Test Automation team to write UI tests that run in test containers. Ash: Everybody in our team has at some point used Groovy, if only to appease our build system, though I wouldn’t say they have an in-depth knowledge of it. At the moment we are evaluating whether to use Kotlin instead of Groovy, as it allows for better type safety and can facilitate a more intimate knowledge of the Android Gradle Plugin API. 7. Did you ever bang your head against the wall because of the groovy code? Jenn: Absolutely! I mean… I definitely spend less time banging my head against the wall than with other languages, but there are tricky parts to every language including Groovy. At some Groovy conferences, like GR8Conf, we have a session called Groovy Puzzlers that points out some of the tricky problems or bugs in the current version of the language. In addition to educating fellow devs about the tricky parts of the languages, the feedback from Groovy Puzzlers also helps prioritize which bugs and problems should be fixed next. Ash: Every… single.. day. What about you and Groovy? If you are having problems with something in Groovy, feel free to post the question on StackOverflow or you can contact Jenn directly and she’ll connect you with the right person to answer your question. If you are interested in Android, don’t hesitate to drop Ash a line.
https://medium.com/scout24-engineering/a-groovy-kind-of-relationship-77852529af5f
[]
2018-10-02 13:00:02.323000+00:00
['Software Engineering', 'Java', 'Groovy', 'Programming Languages', 'Programming']
The Next Financial Crash Will Hit Social Media Too
The Next Financial Crash Will Hit Social Media Too Life always comes full circle, so you should be prepared Photo by Ehud Neuhaus on Unsplash Growing up in the 1970s and ‘80s, I saw a massive disruption that most likely caused my incurable disease. The transition happened when “mom and pop” grocery stores, sourcing their products from small farms and local farmers, became overrun by mass-produced grocers. These grocers offered cheap prices and thousands of options. It was the big-box revolution. Before this disruption hit, “organic” wasn’t even a word in our vernacular — it was just “produce and vegetables.” They were most likely organic, but you didn’t need to label it that way. But the disruption of local grocers in favor of mass-produced food distributors, where food was cheaply grown domestically and internationally, was made possible by the increased use of pesticides and a lack of oversight. It created a massive change in our diets too — cheap, toxic fruits and vegetables and sugary/salty processed foods. It was nirvana… food addictions exploded… and a generation later we realized that those pesticides and processed foods cause disease and obesity (I personally attribute my disease to these choices, made without knowing their effects). But now, this industry has come full circle. The small local farmer has made a massive comeback. “Organic” is a billion-dollar industry. When you see Walmart not only offer organic options but invest billions of dollars into them, then you know the pendulum has swung in the other direction. It took 40 years. My friend @TheDovBaron says it succinctly, “The disruptors will be disrupted.” He’s right. (We constantly see it in politics too, where the political parties wildly swing to and from power every few years.) If you follow patterns, you can see it in other parts of our economy. There’s one aspect of our new technological society that will not only be disrupted but will have a meltdown — a crash so epic, it will change your life. And here’s the kicker — you invest in it every single day… It’s social media. Yup. Facebook, YouTube, Instagram, etc. Get ready — it will crash, and here’s why.
https://medium.com/better-marketing/the-next-financial-crash-will-absolutely-hit-this-one-market-6fa2f3e35089
['Phillip Stutts']
2019-08-23 13:51:45.530000+00:00
['Marketing', 'Relationships', 'Facebook', 'Instagram', 'Social Media']
Software Engineering Series (Part -2)
Software Engineering Series (Part -2) Design Phase … It’s all about how it works If you are here for the first time I would highly recommend reading the first part here. Previously we went through the requirement phase, so let’s get into the next step, known as design. So, what’s the role of design? Well, let’s first understand what design is. If we think about it in a general way, we get: now you have the requirements, so you need a plan and a way to build the thing you actually wanted. Design is a “way” to execute, to build your product. It’s a plan. Here, I’ll be talking about 6 design terms that you’ll face during software engineering. These different design techniques are used in different places according to their use cases. Let’s see what these are. 1. Component Design : Imagine you’ve been given a task to create a music player, and your music player is able to play files locally and to stream as well. Now, how would you build the playback screen? You have two options: either build one screen for local playback and one for streaming, or build a single screen that works consistently in both situations. Pause. Which would you go for? Of course, you could go with two different screens, but that will definitely take more time. So let’s think about the one with an adaptable nature: one single screen that is able to play songs in two different environments. Isn’t that amazing? That’s exactly what component design is. There can be a lot better examples, but it’s OK if you get the point. It’s the plan for building an adaptable component. A component design is a design specification for one of these adaptable components, and application engineers may adapt and compose a set of such components to implement particular work products or portions thereof (a minimal sketch of the idea follows below).
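As a rough illustration of the adaptable-component idea (not code from the article), here is a minimal JavaScript sketch in which a single player component works with either a local or a streaming source; the source objects and method names are hypothetical.

// Two interchangeable sources that expose the same interface: load() and play()
const localSource = {
  load(track) { console.log(`reading ${track} from disk`); },
  play() { console.log('playing local file'); }
};

const streamingSource = {
  load(track) { console.log(`buffering ${track} from the network`); },
  play() { console.log('playing stream'); }
};

// One adaptable playback component: it only depends on the shared interface,
// so the same screen handles both local playback and streaming.
class PlayerScreen {
  constructor(source) { this.source = source; }
  playTrack(track) {
    this.source.load(track);
    this.source.play();
  }
}

new PlayerScreen(localSource).playTrack('song.mp3');
new PlayerScreen(streamingSource).playTrack('song.mp3');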
https://medium.com/curiosity-jouney/software-engineering-series-part-2-7eb756d6d900
['Aditya Kumar Khare']
2017-12-25 08:06:18.930000+00:00
['Market', 'Software Engineering', 'Design', 'Software Development', 'Computer Science']
Complexities Of Cyclone Forecasting In India
There are four tools and options that India currently looks at: 1. Doppler Weather Radar, 2. Collaboration with other countries, 3. Satellites, 4. Ocean Buoys. 1. Doppler Weather Radar These are the radars which help in the identification of a cyclone in a particular region. Take, for example, the conventional radar: those radars were also able to identify and predict cyclones. But the Doppler Weather Radar gives detailed information about the storm’s internal wind flow, and it also gives us its structure, which was not available under the conventional system. So the advantage of the Doppler Weather Radar is that it gives a clear picture of the structure of a cyclone originating in the sea. Currently, India has about 27 Doppler Weather Radars spread throughout the country. So the first important point is that the Doppler Weather Radar goes far beyond the conventional radar. Doppler Weather Radars are also advantageous because they give us the data at least five to six days in advance. If they are giving us data much prior to actual formation, we are in a better position to save lives and property. How? Let me give you an example. In the recent case of Cyclone Nivar, what exactly happened? The Indian Meteorological Department gave advance warnings to the state governments. So the state government of Tamil Nadu, the state government of Andhra Pradesh, and to a limited extent states like Karnataka were able to take some measures. They were able to move people from the low-lying areas into safer places. So this idea of giving the information five days prior, or in fact much earlier, helps the state government of that particular state take all precautions to save people as well as property. So with Doppler Weather Radars, the severity of weather systems can be quantitatively estimated more accurately than ever before. The Doppler Weather Radars provide all the advance information, provide the lead time, and help in saving lives as well as property. 2. Collaboration India might have satellites. India might also have advanced technological tools. But sometimes all this is not enough. Why? Because something might be originating in, let’s say, the Pacific Ocean, or something else is brewing in the Atlantic Ocean. There are countries in and around those regions, so they also gather information. They have their own satellites, and they might also have their own space stations. So these countries have close cooperation and a nexus with India. What exactly happens here? We have the United States of America and Japan, which have entered into collaborative agreements with India. We have the Indian Space Research Organisation and the Indian Meteorological Department; likewise, they have the Japan Meteorological Agency. So they exchange data so that they are able to tabulate and get the right information about the origination and formation of cyclone-like conditions. So what does this collaboration exactly mean? This is where one country will start exchanging data with another country so that they can prevent any disastrous consequences that might result from a cyclone.
So India, in the present scenario, has entered into a collaboration with the Japan Meteorological Agency, the U.S. National Hurricane Center and the U.S. Central Pacific Hurricane Center, so that all the data can be exchanged and we are able to protect our people as well as our property. 3. Satellites There are a number of meteorological satellites which help in improving identification and detection as well. Let’s take the example of SCATSAT. What does SCATSAT do? It is one of the satellites which provides weather forecasting, cyclone prediction and tracking services in India when it comes to weather-related issues. 4. Ocean Buoys An ocean buoy has sensors, some for the air and some for the water. What does it do? It measures all the relevant weather-related parameters: wave height, wind speed and direction, air and water temperature, and barometric pressure as well. Once these sensors capture the data, forecasts can be made, cyclone warnings issued, and precautionary measures taken up by the government. So the buoys send out all the real-time data that is required for the calculations, and ultimately this helps in preventing the loss of lives as well as property. So these are the tools that India uses to identify whether a cyclone is forming and how and where it will make landfall, so that we are in a position to cut down on the loss of lives as well as property. Conclusion Why does India need this? India has a long coastal belt, with the Arabian Sea on one side and the Bay of Bengal on the other. Which is trickier, the Bay of Bengal or the Arabian Sea? It is the Arabian Sea which is trickier. Why? Because cyclones in the Arabian Sea can get trickier: they may recurve and hit India again. That might not happen in the Bay of Bengal, and at the same time, when you look at the radars, most of them are present on the eastern coast, that is, along the Bay of Bengal. So we are in a much better position to identify where these cyclones originate and where they might make landfall, and to make sure that we take all types of precautionary measures. Why? One, because they do not recurve, and two, because we have more radars on the eastern coast. As a result, we are in a much better position when it comes to cyclones originating in the Bay of Bengal.
https://medium.com/illumination-curated/complexities-of-cyclone-forecasting-in-india-d6fe16a7202
['Vishnu Aravindhan']
2020-12-02 15:17:15.493000+00:00
['Forecasting', 'Environment', 'Climate Change', 'Hurricane', 'Science']
Deepfakes Detection by Heart Rate Prediction
Researchers at Binghamton University and Intel Corporation have developed a model that recognizes deepfakes by predicting heartbeats. The classifier uses photoplethysmogram data to recognize fake videos. An important assumption in the model is that it learns to recognize deepfakes that have been generated using a set of publicly available architectures. This imposes restrictions on the use of the model in real applications. The approach detects fake videos with an accuracy of 97.27% and identifies the generative model behind a deepfake with an accuracy of 93.39%. DeepFakes problem The popularity of deepfakes has grown in recent years. Artificially generated videos of famous people are used for a variety of purposes, from filters on social media images to political propaganda and false news. This makes research on methods for recognizing deepfakes a topical area. The idea behind the method The researchers analyzed the residuals left by generative GAN models and tried to link them to biological signals. The proposed framework for the classification of deepfake videos is able to recognize a fake video and its source if it was generated by one of the available models. The model starts with several generative networks that receive one real video as input. The real video and the generated deepfakes are then fed to the input of the registration module. At this stage, the model extracts the parts of the face of interest and tracks the biological signals used for photoplethysmograms. The last module is a classifier that predicts the video class from this representation. If the model predicts a deepfake, it then predicts the most likely architecture of the model that was used to generate it.
https://medium.com/datadriveninvestor/deepfakes-detection-by-heart-rate-prediction-d96d8843a14b
['Mikhail Raevskiy']
2020-12-08 16:29:39.485000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Deep Learning']
How Human Breastmilk Can Block Coronavirus’s Replication
How Human Breastmilk Can Block Coronavirus’s Replication A new study further tells us why Covid-19 is not a good enough reason to not breastfeed. Photo by NIKOLAY OSMACHKO from Pexels Breastfeeding does many wonders for the baby and mother, ranging from emotional, immunological, and general health support. For the newborn, it encourages the development of the immune system and gut microbiome. It lowers the risk of diabetes (type I and II), obesity, leukemia, asthma, atopic dermatitis, improper cognitive development, and ear, respiratory and gastrointestinal infections. For the mother, breastfeeding decreases the risk of breast and ovary cancers, type II diabetes, and postnatal depression. And breastfeeding establishes emotional bonding between the newborn and mother. Breastmilk is rich in lactoferrin, a protein with diverse biological functions: anti-cancer, neuroprotective, bone support, immunomodulation, antioxidant, and antimicrobial (including bacteria, fungi, parasites, and viruses). So, it’s not a surprise if it fights SARS-CoV-2, which a newly published study has just shown. What the study did and found The paper, “The effect of whey protein on viral infection and replication of SARS-CoV-2 and pangolin coronavirus in vitro,” was published a few days ago in Signal Transduction and Targeted Therapy, a highly reputed journal (SARS-CoV-2 is the novel coronavirus that causes Covid-19.) Herein, researchers from China institutions collected breastmilk samples from eight healthy mothers before the Covid-19 pandemic started. Samples were then skimmed to remove the lipids and retain only the proteins for subsequent experiments. 1. Overall anti-coronavirus effects In human kidney and lung cells infected with coronaviruses, the study showed that skimmed breastmilk blocked the replication of both SARS-CoV-2 and its relative pangolin coronavirus by 98% compared to the no-treatment group at 0%. As a result, only minimal infectious coronavirus particles were produced. While such effects are dose-dependent, only small doses of the skimmed breastmilk samples were needed to achieve these maximal effects. Another unanticipated upside is that skimmed breastmilk treatment even helped the infected cells to proliferate healthily. 2. Are the proteins responsible? After that, the researchers wanted to understand if proteins in the skimmed breastmilk were responsible for the anti-coronavirus effects. They heated the samples at 100°C for 10 minutes to denature the proteins. And these heated samples lost their antiviral effects. 3. Human breastmilk proteins vs. other animals’ Interestingly, commercial goat and cow whey (milk) proteins also blocked the replication of SARS-CoV-2 and pangolin coronavirus, but to a lesser extent than the human skimmed breastmilk. “These results indicated that human whey protein has a high concentration of antiviral factors than those from other species,” the authors stated. Indeed, “human milk is rich in LF [lactoferrin], which is 10–100 fold higher than that in cow and goat milk.” Breastmilk is rich in lactoferrin, a protein with diverse biological functions: anti-cancer, neuroprotective, bone support, immunomodulation, antioxidant, and antimicrobial. 4. How does human breastmilk stops coronavirus replication? They did more experiments to decipher the underlying mechanisms of the anti-coronavirus effects. They found that the skimmed human breastmilk could attach to the coronavirus’s spike protein surface to prevent its binding to the ACE2 receptor on the human cell surface. 
As a result, more than 90% of SARS-CoV-2 and pangolin coronavirus particles failed to bind and infect the human cells. Even in those remaining 10% coronaviruses that managed to infect the cells, human breastmilk also blocked the virus’s replication inside the cell by arresting its RNA polymerase enzyme. This enzyme is responsible for RNA viruses, which include coronavirus, to replicate themselves. Therefore, the authors wrote, “[human] breastmilk inhibits not only viral entry but also viral replication.” A few more considerations All studies have caveats. In this study, the researchers noted that breastmilk samples were collected from only eight mothers, so the sample size is limited. Besides, as the title stated, this study is an in vitro model. In vitro means outside a living organism, such as in cells cultured in a dish or flask. Thus, animal models may still be needed to verify the anti-coronavirus effects of human breastmilk. Even if the in vitro results do not fully translate to actual organisms, breastmilk may still thwart coronavirus indirectly via the other biological benefits it provides, as stated in the beginning. Another point is the skimming process. As lipids in the human breastmilk were removed, it’s unclear if such lipids would influence the proteins' anti-coronavirus properties in human breastmilk. But if anything, the fatty acids in breastmilk — particularly the omega-3s and short-chain fatty acids (SCFAs)— should provide additional benefits to the baby’s immune system. “These results indicated that human whey protein has a high concentration of antiviral factors than those from other species.” Can a mother with Covid-19 or SARS-CoV-2 still breastfeed? Yes, thus far, no evidence has found infectious SARS-CoV-2 in breastmilk. While SARS-CoV-2 RNA genetic material can be detected in breastmilk in rare cases, it’s not infectious in cultured cells. For precautions, health authorities have advised mothers with positive or suspected Covid-19 to wear masks and wash hands before breastfeeding. So, unless the mother has certain infections like HIV, cancers, or medication or illegal drug usage, it’s hardly ever a bad move to breastfeed. Covid-19 is not a good enough reason to not breastfeed.
https://medium.com/microbial-instincts/how-human-breastmilk-can-block-coronaviruss-replication-a1793bb495a4
['Shin Jie Yong']
2020-11-27 11:50:11.096000+00:00
['Innovation', 'Health', 'Advice', 'Covid 19', 'Science']
The Last Ones Left
I watched them parade around the truck. Evan approached the back of the restaurant to get a better look. “We’re never going to get close to it,” he said. Lauren said, “How are we going to get back to Shadowood?” Evan was silent. It was one of the few times he didn’t have any answer. “I don’t think we can get back there, now.” “What are we going to do,” I asked. Evan looked around, running a hand through his shaggy hair, “I saw some footprints, back there.” “I’m sure there’s plenty around here,” she said. “No,” Evan said, “these aren’t zombie prints. They’re not all ragged. They’re not shuffling. These were human footprints.” None of us had seen another person since arriving at Shadowood. Evan showed us the boot prints. There were at least two sets of footprints, and they led from the back of the restaurant to a disturbed area of dirt where a vehicle had driven off. “They got into a car here,” Evan said, “and went off in that direction,” pointing deeper into the woods. “Is that where we’re going?” Lauren asked. “I don’t think we have a choice,” Evan said. He stood up, loosing an arrow into a zombie that had found its way behind the building. “It kills me to leave all these things.” “You’ll find more,” I said. “I’m counting on it,” Evan said, pulling his arrow from the fallen zombie. I grabbed my pack, and we headed out. I walked behind them, and the three of us snuck back into the forest. In the chaos of escaping from the town, I didn’t realize how late it was. The woods made things darker, the shadows larger, and soon night would be upon us. I quickened my pace in order to catch up to them. I was about to raise my concerns when Evan said, “We need to find somewhere to bed down for the night.” He was always one step ahead. We had to put some distance between the town and us, so we hiked for another half mile, going back to the road to pick up speed. Evan bent down to inspect something. “Someone came through,” he said, “pretty fast too.” It wasn’t long before we found a body leaning against a yield sign. “Look,” I said. Evan had been staring at the road looking for more signs, and he was showing Lauren what to look for, and both of their heads snapped up when they heard me. Evan raised his bow and called out, “Hey.” The body didn’t move. It stayed seated along the ground. It still had its hair and clothes, so if it was a zombie, it was a recent one. It, or he I guess, had brown hair and a blue, blood-spattered jean jacket. We approached and yet the body remained still. “Is it dead?” Lauren asked. “Yeah, he’s dead,” Evan said. Evan reached forward and used the tip of his bow to examine the body, lifting up both sides of the man’s jacket. “Is he going to come back?” Lauren asked. “I don’t think so,” Evan said, “that only happens if you’ve been bit.” “He wasn’t bit?” “No,” Evan said, pointing to the red splotches, “this wasn’t zombies. Those are bullet holes. Somebody shot him.” “Why would someone shoot him?” I asked. “I don’t think the why is very important right now,” Evan said, “it’s the who we should be worried about.” “Okay,” Lauren said, “who shot him?” She kept looking at the body. She had spent so much time near zombies, but an actual dead human still bothered her. “My guess is,” Evan said, “the people we’re following. The same ones from the town.” I should not have been as shocked as I was. I was so excited about new people that it didn’t occur to me that anyone still left in this world most likely wasn’t very nice. And the dead body in front of me was evidence.
“We’re not going to keep looking for them are we?” Lauren asked. “No,” he said, “we try to find someplace to hole up tonight and then we try and make it back to Shadowood.” We were on the road trying to figure out which direction to go. We could head back the way we came, but that would take us closer to the infested town. The herd was on the move, but they seemed to have settled in the town, and there wasn’t a sure way to tell which direction they might go. Or, we could continue in the current direction and risk running into the people that shot the man we just saw. Evan decided that we would keep going that direction, keeping watch for an approaching vehicle, and turn off the road as soon as possible. We found the first road a half hour later. It was little more than a trail, but it was the first thing any of us had seen. Lauren led the way, with Evan covering her with his bow. I stayed near the road, ready to warn them should the shooters return. Lauren went about the length of a football field and raised her hand. She found something. I met Evan and the two of us approached. Lauren pointed a single finger back into the woods. “It’s back there,” she said. It was hard to make out, but deep in the woods was a lone cabin, small, most likely someone’s old hunting shack. Luckily, Lauren noticed it. There wasn’t a mailbox, and the grass had completely erased the driveway. “What do you think?” Lauren asked. “That’ll work,” he said. “You know,” Lauren said, “it wouldn’t kill you to say, ‘Good job, Lauren.’” “Good job, Lauren,” Evan said, and he went to inspect the cabin. After Evan was out of earshot, Lauren said, “That guy, I swear.” “Swear what?” I asked. “Nothing,” she said, “it’s just sometimes I…” “Love him?” I said. Her gaze immediately went from staring at Evan to looking me right in the eye. “What did you say?” “I’m not an idiot Lauren,” I said, “I see how you two are. I see how you sleep all cuddled together.” “That’s just,” she said, “I mean.” “It’s okay, Lauren, I get it.” She didn’t say anything. “He’s a good guy,” I said. “There’s not exactly a lot of choices left.” We caught up with Evan, who was making the preliminary rounds, checking to see if there were any signs of zombie activity. From afar, the cabin looked stout, and there wasn’t any recent activity. It looked clean, no scratches at the door, and only a couple patches of disturbed dirt. Evan pulled out his set of lock picks, but Lauren had already tried the door, and it swung open. She stepped back and Evan drew his bow, waiting for something to come out, but nothing did. It had to have been empty for a while. A substantial layer of dust had settled over everything in the little house. It only had one room that contained everything necessary for living out in the woods: a small bed in the corner, a chair, and a large metal woodstove attached to the wall. There were only two small windows. Evan rummaged through a toolbox in the corner. “Try to find something to put over these windows,” he said. I found a couple pieces of firewood that were long enough to span the windows. There was only enough to put two pieces over each window. Evan pounded nails into the wood. “That’s going to have to do,” he said. Evan and Lauren sat on the bed, and I took the chair. “We stay here tonight and tomorrow I’ll go check the town and look for the truck. If it’s still overrun we’ll have to go on foot.” The thought of walking miles through the forest vulnerable to a zombie attack terrified me. “Do you think we’ll be able to get back?” Lauren said.
Evan put a hand on her shoulder, and kissed her on the head. “We’ll get back, don’t worry.” “Do you ever wonder how they move?” “Why would you think about that?” Lauren asked. “What else is there to think about?” I said before I realized it. There was plenty to think about, like the people we lost. People like our mother, or Evan’s family, and the kids at Shadowood. “Nevermind,” I said. Lauren gave me a look. “No,” Evan said, “he’s right. There’s something weird about how they move when they’re in groups. I watched a horde move through once when I was in the tree stand. It’s like they move in swarms, like birds. It’s creepy. Sometimes it looks like,” he paused for a second, “it looks like something’s controlling them.” “Controlling them. How can something control them?” “I don’t know, but sometimes their movements don’t seem natural.” “Is there a natural way for a thousand rotting corpses to move?” she said. Evan ignored her comment. “If we can figure out if something controls them, or how they move or why, it will help,” he said, “but that can wait till we get back.” There wasn’t anything to do in the cabin, nothing to read except for a bible and an old TV guide. I pulled a piece of kindling off the floor, took a knife off my belt, and started whittling. I never quite got the hang of it, but it was something to pass the time. I sat there shaving small chunks of wood off the stick, staring at the window, hoping to get through the night. Evan and Lauren were lying on the bed whispering to each other. For a second, I wished Emma was there with me, but it was a selfish thought. I would rather her be at Shadowood where we could at least pretend to be safe. I had a couple cans of food in my pack, and we ate them cold, like when we were out on the road. I chewed silently on cold carrots and peas. It was going to get dark soon, and we prepared for the night. Evan didn’t want to use the woodstove, since there wasn’t any firewood, but in one of the cupboards Evan found a small can of shortening. He smiled when he found the can of fat, and Lauren asked, “You’re not going to eat that, are you?” “Watch this,” he said. He pried the top off and showed us the white stuff inside. Then, he went and cut a piece off a t-shirt he found in the cabin. Afterwards, he used a nail to make a large channel down the center of the can, rolled the shirt into a string, and shoved it into the channel. He then lit the rolled-up t-shirt as if it was a candlewick. He set the homemade lantern on the table, and it glowed like a candle. “Now we don’t have to sit here in the dark,” he said. We cut up the sheets that were on the bed and used them to cover the windows. It would help keep us from giving away our position. I settled into the chair, and watched the world get darker. “I’ll take first watch,” I said. After the day’s events, I didn’t think I would sleep very much anyway. “I’m not going to argue with that,” Evan said, and he and Lauren lay down on the bed. Evan still kept his head up though, always on watch. It was nighttime not much later, but the little room was lit by the lantern, and I sat there listening to it crackle. I felt my eyelids getting heavy, and I should have woken Evan, but I didn’t. I was, however, woken up when I heard something outside the cabin.
https://medium.com/the-inkwell/the-last-ones-left-e108e084ad2d
['Matthew Donnellon']
2020-11-29 03:54:45.217000+00:00
['Books', 'Relationships', 'Creativity', 'Fiction', 'Short Story']
Factors for Choosing a Tech Stack
Your tech stack is consequential because it’s what you use to create software. Thus, don’t underestimate the importance of research and analysis in selecting the right set of tools. A wrong choice of technologies can cost you money, time, and other valuable resources on your project. A sound business analyst and well-experienced developers are needed to avoid pitfalls when choosing your tech stack. Here’s the list of essentials to consider when choosing a purposeful tech stack. Defining the Platform What are the expectations of the project? This is an important pointer. Without an understanding of what you expect from the project, choosing a stack is a futile move. At this point, you should ask yourself who your target audience is, and how, when, and where they will use your app. What is the most popular device among them? Is it a mobile phone or a regular computer? For instance, a web application requires a totally different set of tools and frameworks from a mobile app. Even within mobile applications, the tech stacks you need for Android and iOS development are different. You should then plan based on your answers and decide whether you need a single- or multi-platform solution in order to choose the best tech stack to use. Keep in mind that if you are making an MVP first, it is better to build it on the platform most popular among your target users, to save costs and get user feedback. It doesn’t matter if it’s going to be a mobile or web application. An MVP is vital to the success of a project and the development process. After defining the platform, consider all functional and non-functional parameters that are essential for the project launch. A set of well-defined requirements for the MVP will point to the tools you need to reach your aim and the additional tech stack you may need for the market version. Defining the Project Type After you cross the bridge of defining the platform, you still have more analysis to do in choosing your tech stack. You will be considering the project based on its size and complexity, processing strength, and business goals. For example, if your application involves multiprocessing or requires low latency for better responsiveness, then you must consider the relevant tech stack that provides such support. Small Projects They are usually fast to deliver due to simple requirements and can usually be implemented with less sophisticated technologies like a CMS or WordPress in the case of web projects. Mid-Size Projects For mid-size projects, there is a higher level of technological commitment. They may call for the use of several combinations of programming languages and tools depending on requirements and platforms. Such projects require technologies that provide more sophisticated and diverse functionalities and integrations. Complex Projects If you intend to create a social network, online marketplace, CRM, or order management system, then you are looking at a complex project. The combination of multiple programming languages is inevitable for such a project. This is just like the case of a mid-size project but with more complex features. You need several functions, integrations, and more sophistication; hence, your tech stack must be of a high level. Scalability Requirements The role of a tech stack in the scalability of an application is to make provision for the increase in users and functions. Your developers should choose a tech stack that will make room for new engaging features, user growth, and seasonal rises in the number of users.
Your provision for scalability must cover horizontal scaling, where several application servers run at the same time to handle the flow of user traffic, as well as vertical scaling, where a single server is given more capacity to process new workloads and data types. Scaling both horizontally and vertically will protect your application from crashing when the stormy days come. (A minimal Node.js sketch at the end of this article illustrates the horizontal idea.)

Technology and Team Expertise

Unless you are planning to outsource, you need to weigh the expertise of your team and use it as a basis for judging your choice of tech stack. Developers usually have a stronger command of some programming languages and tools than others, so it boils down to how skillful your team is. You have to be certain your team can successfully follow through with a tech stack; otherwise, there is no point in using it. To avoid hiring an expert to cover the technology your team lacks, you can train your developers, provided you have the luxury of time; if you are pressed by a deadline, outsource the project instead. It is paramount that the skills and experience of your team match the choice of technology. Also, try to ensure your chosen technologies have large developer communities and plenty of reference material on sites like GitHub and Stack Overflow, so your team doesn't get stranded on a tool.

Maintenance

Your team must be able to maintain the application after it is released. Maintenance is the stage that follows development, and it is related to your choice of tech stack too.

Codebase: if you want maintenance to be feasible, your choice of tech stack should be motivated by your software architecture and any existing codebase. Prefer languages that encourage short, reusable, easy-to-maintain code. A simple codebase of moderate size lets developers spend less time reviewing, debugging, and processing the code.

Software architecture: architecture is a key enabler of scalability, portability, and reusability, and it should guide your choice of tech stack. Consider technologies that support both static and dynamic component configuration, so the application keeps performing smoothly as users increase or you add more engaging features.

The Cost Implication of the Tech Stack

Money makes the world go round, they say, and little comes for free. There are many popular open-source frameworks and tools that cost nothing, but they often carry subscription fees for special or advanced features. You might need to pay for licensing, and there is also the cost of maintenance. Furthermore, developers for some tech stacks command higher salaries, so you need to consider that too, along with the cost of training developers on a particular technology if that is an option. The tug of war is between the overall cost of using a tech stack and the effectiveness of its features.

iTwis provides custom software development services that cover a vast range of business needs. Our partners and clients are business owners from different private sectors.
We have shown a strong capacity in IT consulting and delivered as a trusted software development team. Our consultants would be glad to discuss your software development challenges and come up with the most suitable solution for your business needs.
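To make the horizontal-scaling idea from the Scalability Requirements section a little more concrete, here is a minimal sketch of my own (not from the article) using Node.js's built-in cluster module. It only illustrates the principle of several identical application servers sharing incoming traffic; in production you would more likely run separate instances behind a load balancer.

// Minimal sketch: run one HTTP "application server" per CPU core,
// all sharing port 3000. Incoming connections are distributed
// across the worker processes.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary || cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork(); // start one worker per core
  }
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, replacing it`);
    cluster.fork(); // keep the pool at full strength
  });
} else {
  http
    .createServer((req, res) => {
      res.end(`handled by worker ${process.pid}\n`);
    })
    .listen(3000);
}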
https://medium.com/itwis/factors-for-choosing-a-tech-stack-b7544cc76108
['Ayo Oladele']
2020-12-15 06:54:23.874000+00:00
['Mobile App Development', 'Software Engineering', 'Software Development', 'Programming', 'Web Development']
Zen Stories For A Calm, Clear & Open Mind
1. The Man Who Said Yes A man went to a Buddhist monastery for a silent retreat. After he finished, he felt better, calmer, stronger, but something was missing. The teacher said he could talk to one of the monks before he left. The man thought for a while, then asked: “How do you find peace?” The monk said: “I say yes. To everything that happens, I say yes.” When the man returned home, he was enlightened. This one is actually real. The man is Kamal Ravikant. In an interview, he shares his interpretation of the monk’s advice: “Most of our pain, most of our suffering comes from resistance to what is. Life is. And when we resist what life is, we suffer. When you can say yes to life, surrender to life and say: “Okay, what should I be now?” That’s where power comes from.” When the weather is bad, when your crush won’t answer, when the obstacle won’t budge, don’t say no. Don’t dig in your heels and push and shove until your veins pop out in frustration. Say yes. Accept. Breathe. Life is flowing. Always. It’s us trying to swim upstream. Let the current carry you instead. 2. The Girl At The River A senior monk and a junior monk were traveling together. At one point, they came to a river with a strong current. As the monks were preparing to cross the river, they saw a very young and beautiful woman also attempting to cross. The young woman asked if they could help her cross to the other side. The two monks glanced at one another because they had taken vows not to touch a woman. Then, without a word, the older monk picked up the woman, carried her across the river, placed her gently on the other side, and carried on his journey. The younger monk couldn’t believe what had just happened. After rejoining his companion, he was speechless, and an hour passed without a word between them. Two more hours passed, then three. Finally, the younger monk could not contain himself any longer and blurted out: “As monks, we are not permitted a woman, how could you then carry that woman on your shoulders?” The older monk looked at him and replied: “Brother, I set her down on the other side of the river, why are you still carrying her?” Resisting to what life is trying to tell you is exhausting, but resisting to what life has already told you is guaranteed to be in vain. What’s done is done. If you feel guilty, it was a mistake you can fix. If you feel ashamed, it was a mistake you shouldn’t repeat. But regret? That’s just dragging a past event into the present. It’s a toxic attempt to twist reality. And it always backfires. 3. The Crystal Cup A Zen master was given a beautifully crafted crystal cup. It was a gift from a former student. He was very grateful. Every day, he enjoyed drinking out of his glass. He would show it to visitors and tell them about the kindness of his student. But every morning, he held the cup in his hand for a few seconds and reminded himself: “This glass is already broken.” One day, a clumsy visitor toppled the glass on its shelf. The cup fell down. When it hit the floor, it was smashed into thousands of tiny pieces. The other visitors gasped in shock, but the Zen master remained calm. Looking at the mess in front of his feet, he said: “Ah. Yes. Let’s begin.” He picked up a broom and started sweeping. I found the idea for this in The Daily Stoic by Ryan Holiday. About a year ago, I wrote that “half of happiness is being okay with what you don’t get.” Now, I think I know what the other half is: being okay with losing what you have. 
The man who remembers to be grateful for his possessions is ahead of most. But the man who knows they won't last is ahead of him still. Be the second. 4. The Bowl A monk told Joshu: "I have just entered the monastery. Please teach me." Joshu asked: "Have you eaten your rice porridge?" The monk replied: "I have eaten." Joshu said: "Then you had better wash your bowl." At that moment the monk was enlightened. I can only echo what Leo Babauta said about this story: "There is something profound and yet minimalist about this advice. It's: don't get your head caught up in all this thinking about the meaning of life … instead, just do. Just wash your bowl. And in the washing, you'll find all you need." We think we do, but, most of the time, there's no need to think or plan or strategize, because, ultimately, it won't make a big difference which option we choose. There's always one or multiple next steps to take. So we might as well take any one of them. Often, there's more satisfaction to be drawn from doing. No matter how our path unfolds, mindfulness always lies on the way. 5. The Move Two men visit a Zen master. The first man says: "I'm thinking of moving to this town. What's it like?" The Zen master asks: "What was your old town like?" The first man responds: "It was dreadful. Everyone was hateful. I hated it." The Zen master says: "This town is very much the same. I don't think you should move here." The first man leaves and the second man comes in. The second man says: "I'm thinking of moving to this town. What's it like?" The Zen master asks: "What was your old town like?" The second man responds: "It was wonderful. Everyone was friendly and I was happy. Just interested in a change now." The Zen master says: "This town is very much the same. I think you will like it here." What we seek is what we find. The reasons why you do what you do matter as much as, if not more than, what you end up doing. Because they shape how you seek. So, ultimately, they'll also determine what you find. 6. The Teacup A learned man once went to visit a Zen teacher to inquire about Zen. As the Zen teacher talked, the learned man frequently interrupted to express his own opinion about this or that. Finally, the Zen teacher stopped talking and began to serve tea to the learned man. He poured the cup full, then kept pouring until the cup overflowed. "Stop," said the learned man. "The cup is full, no more can be poured in." "Like this cup, you are full of your own opinions," replied the Zen teacher. "If you do not first empty your cup, how can you taste my cup of tea?" Here's an interesting question: What if the two men from before came from the same town? All we have to judge the world with are our own little measuring sticks. Our biased, subjective, arbitrary measuring sticks. They rarely get the job done. Empathy, however, always works. Seeing the world through other people's eyes is like throwing out the stick: You can't do it without widening your perspective. It's never easy and, sometimes, it ends up being unnecessary. But it should always be the first thing you try. 7. The Four Candles Four monks decided to meditate silently without speaking for two weeks. They lit a candle as a symbol of their practice and began. By nightfall on the first day, the candle flickered and then went out. The first monk said: "Oh, no! The candle is out." The second monk said: "We're not supposed to talk!" The third monk said: "Why must you two break the silence?" The fourth monk laughed and said: "Ha!
I’m the only one who didn’t speak.” They all had different reasons, but each of the four monks shared his thoughts without filtering them — none of which improved the situation. Had there been a fifth, wiser monk, he would’ve remained silent and kept meditating. This way, he would’ve pointed out their mistakes without a single word. Without breaking his own quest for better. Done long enough, talking inevitably leads to embarrassing yourself. Listening leads to learning. The less you speak, the smarter you get. And, maybe not quite coincidentally, the smarter you get, the less you speak.
https://medium.com/personal-growth/zen-stories-for-a-calm-clear-open-mind-28e84c523022
['Niklas Göke']
2019-07-22 08:26:14.905000+00:00
['Self Improvement', 'Life Lessons', 'Creativity', 'Life', 'Psychology']
Top 12 Tools, Frameworks, and Libraries for Software Development in 2018
As the programming ecosystem proliferates, a number of frameworks, libraries, and tools are being introduced to simplify the software development cycle. They are not just trimming lines of code; they are reducing the time from prototype to production. While there is a plethora of options available, the pace of change is making many of these programming aids obsolete faster than ever. Still, a few are here to stay and disrupt the way software is built. Here are the programming tools, frameworks, and libraries that have defined their space in the programming world and proven to be an indispensable part of it. They have evolved to make developers' lives easier and are certainly going to shape the way software development happens in 2018.

1. NodeJS

Node.js, the JavaScript runtime built on Chrome's V8 engine, leads the list. It is asynchronous, event-driven, and based on a non-blocking I/O model, which makes it the right fit for applications that are data-intensive and render output to users in real time. Launched in 2009, this JavaScript runtime for backend development powers a wide range of corporate software. Some of the popular names (among many) include GoDaddy, Walmart, Yahoo, Netflix, LinkedIn, and Groupon. Node.js leverages JavaScript's strengths: I/O operations are non-blocking, which lets it handle many concurrent events at once. That is why applications built with Node.js use less RAM and execute operations faster, and why it is a primary choice for apps with heavy I/O-bound workflows, such as single-page applications, team collaboration apps, and streaming apps. The Node Package Manager (npm) helps manage modules in projects by downloading packages, resolving dependencies, and installing command-line utilities. Thanks to its ever-expanding community, npm is the largest ecosystem of open-source libraries in the world. JavaScript has been used for front-end development since its introduction, so when Node.js backend developers collaborate with front-end developers, managing code and spotting and fixing bugs becomes more efficient. For this incredible range of benefits, businesses are hiring Node.js developers to raise the performance standards of their software applications.

2. AngularJS

AngularJS is a structural framework for building dynamic web pages. Introduced by Google in 2012, this JavaScript framework for front-end development is great for building single-page applications (SPAs). AngularJS offers flexibility in development: developers with expertise in HTML can extend the functionality of web pages with new HTML attributes called directives. AngularJS is an MVC framework in which synchronization between the model and the view happens through two-way data binding: changes made to the model data are immediately reflected in the view, and vice versa. These automatic, immediate updates keep the parts of the application in sync at all times. AngularJS directives can be used to create reusable components; a component hides complex DOM structure, CSS, and behaviour, so you can focus separately on how an application looks and how it works. Along with this, AngularJS brings the benefits of client-side form validation, deep linking, DOM manipulation, and more.

3. React

ReactJS is a JavaScript library by Facebook for building user interfaces for the web.
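As a quick illustration of what working with React looks like, here is a minimal sketch of my own (not code from the article): a component is just a function that returns markup, and React takes care of rendering it efficiently. The component name and prop are made up for the example, and a JSX build step is assumed.

import React from 'react';

// A minimal React component: given a `name` prop, it describes what the UI
// should look like. React renders it, and re-renders it efficiently whenever
// the prop changes.
export const Greeting = ({ name }) => <h1>Hello, {name}!</h1>;

// Usage elsewhere in an app: <Greeting name="World" />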
With its launch in 2013, ReactJS outpowered its contenders, and one of the major reasons was the Virtual DOM. Instead of manipulating the browser DOM directly, React keeps a lightweight virtual copy of it. Whenever a React component changes, the new virtual DOM is compared with the previous one and only the differences are applied to the real DOM. This ensures that changes in the view are rendered faster. React Native is a JavaScript framework by Facebook for building native mobile user interfaces. It enables building cross-platform native apps in JavaScript that would otherwise have required Objective-C or Swift, and the popular apps built with React Native showcase the acceptance this JS framework for mobile app development enjoys.

4. .NET Core

.NET Core is an open-source, next-generation .NET framework by Microsoft. If an application needs to run on multiple operating systems (Windows, Linux, macOS), then .NET Core is a good fit. It is a compelling choice for server-based applications with cross-platform requirements, for high-performance and scalable systems, and for setups involving Docker containers, microservices, and the like.

5. Spring

Spring is an open-source application framework for developing Java enterprise applications. It offers an infrastructure for building well-structured and easily testable Java applications, web applications, applets, and more. Spring is a dependency injection (Inversion of Control) framework that supplies an object's dependencies at runtime: when a standalone program starts, the container creates the dependencies and wires them into the appropriate objects, which keeps the code loosely coupled and easy to maintain. The framework ships with templates for Hibernate, JPA, JDBC, JTA, and others, saving developers from writing too much boilerplate. Spring provides a consistent programming model that is usable in any environment; many web applications don't even need high-end servers and can run on a web container like Jetty or Tomcat, and not all applications are server-side applications. Spring's application model insulates application code from environment details like JNDI, making the code less dependent on its runtime context.

6. Django

Django is an open-source framework for web app development, written in Python. It follows the model-view-template (MVT) architectural pattern and is a fit for complex, database-driven applications. Launched in 2005, Django is part of well-known websites today, including Instagram, Nextdoor, Bitbucket, Disqus, Pinterest, and more. The Python-based framework supports reusability, rapid development, less code, and low coupling. The main Django distribution includes a number of applications that simplify development, such as an extensible authentication system and built-in mitigations for web attacks (SQL injection, cross-site scripting, password cracking, and so on).

7. TensorFlow

TensorFlow is a machine learning framework by Google for creating deep learning models. Deep learning, a subfield of ML, deals with artificial neural networks (ANNs) that let a system learn and progressively improve with experience. TensorFlow is based on a computational graph: a network of nodes in which each node is an operation running some function, which could be a simple mathematical calculation or a complex multivariate analysis.
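To give a feel for that graph-of-operations idea, here is a tiny sketch of my own using TensorFlow's JavaScript package, @tensorflow/tfjs, rather than the Python API the article presumably has in mind; the choice is only to keep all examples in one language, and nothing below comes from the article itself. Each call corresponds to a node whose output feeds the next.

// Assumes: npm install @tensorflow/tfjs
const tf = require('@tensorflow/tfjs');

const a = tf.tensor([1, 2, 3]);   // input node
const b = tf.tensor([4, 5, 6]);   // input node
const product = a.mul(b);         // operation node: element-wise multiply
const total = product.sum();      // operation node: reduce to a single scalar

total.print();                    // prints the scalar result, 32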
While a number of brands have put their trust in this ML framework (Dropbox, Twitter, Uber, Intel, and others), Google itself utilizes the power of TensorFlow in many of its own services, including speech recognition, Google Search, and Google Photos. This mature framework is part of AI development projects both small and large. "According to the Stack Overflow survey results 2018, machine learning is one of the important trends in the software development industry. Languages and frameworks associated with ML are on the rise, and developers working in these areas are in high demand."

8. Xamarin

Cross-platform native apps are the future of mobile app development, and Xamarin is one of the leading frameworks for building them. Xamarin offers an edge over proprietary and hybrid development models because it allows developing full-fledged mobile apps in a single language, C#. Moreover, Xamarin offers a class library and runtime environment similar to those of the underlying platforms (iPhone, Android, and Windows), and a less complex development environment than other cross-platform native frameworks. When it comes to code sharing, cost saving, and ease of maintenance, Xamarin proves to be a better option than hybrid apps: lower memory utilization, faster loading of datasets, and less CPU time are some of the benefits it offers over hybrid app development. Compared with other cross-platform native development platforms on the market, Xamarin has the most stable and up-to-date SDK. It also integrates well with Azure, which makes it easy to build an advanced, secure cloud backend for your apps.

9. Spark

Spark is an open-source micro framework for creating web applications in Kotlin and Java. It was open-sourced in 2011, and its new version, Spark 2.0, launched in 2014, built primarily around the Java 8 lambda philosophy. The Java Virtual Machine (JVM), one of the biggest programming ecosystems, has plenty of Java web frameworks, yet Java web development has always been cumbersome. For those who love the JVM but don't want heavyweight frameworks or verbose code, Spark is the solution.

10. Cordova

Apache Cordova (formerly PhoneGap) is a hybrid app development framework that uses HTML, CSS, and JavaScript for building mobile apps. It extends the features of HTML and JavaScript so that they work with the capabilities of a specific device. As a result, the application developed is neither truly native (layout rendering is done through web views instead of the native platform's UI framework) nor a plain web app (it is packaged as a mobile app for distribution). With Cordova, hybrid app development becomes possible, saving time, effort, and cost by sharing code across multiple platforms. There is a long list of tools, frameworks, and cloud services available to augment Cordova, including Visual Studio, Ionic, Framework7, Monaca, and Mobiscroll. Considering the potential Cordova brings, its contributors include tech giants such as Adobe, Microsoft, BlackBerry, IBM, and Intel.

11. Hadoop

Hadoop is an open-source framework by Apache that stores and distributes large data sets across several servers operating in parallel. One of the major benefits of Hadoop over a traditional RDBMS is its cost-effective approach to storing giant data sets.
The core of Apache Hadoop is the Hadoop Distributed File System (the storage part) and the MapReduce programming model (the processing part). Hadoop is written in Java, a language widely used by developers, which makes it easy for them to handle tasks and process data efficiently. Hadoop's MapReduce enables processing terabytes of data in minutes; it's that fast!

12. Torch/PyTorch

PyTorch is a machine learning library for Python, created primarily to overcome the challenges of its predecessor, Torch. Owing to developers' unwillingness to learn the Lua language, Torch never enjoyed the success that TensorFlow did, despite being a mainstay of computer vision for years. PyTorch lets you write new neural layers in Python, using libraries and packages like Cython and Numba.

* The tools, frameworks, and libraries shared above are the most popular programming aids amongst developers globally, according to the Stack Overflow Survey Results 2018. Recommended Reads:
https://medium.com/app-affairs/top-12-tools-frameworks-and-libraries-for-software-development-in-2018-e26f99448270
['Daffodil Software']
2018-07-18 06:44:38.507000+00:00
['Mobile App Development', 'Nodejs', 'React', 'Web Development', 'Angularjs']
7 Tips to Make the Most Out of Your Pet Projects
7 Tips to Make the Most Out of Your Pet Projects Spoiler alert: Working late into the night is not one of them Photo by Kyle Hanson on Unsplash. I’ve started so many side projects. Little or not so little, most of them were focused on one thing: making myself learn something new. Although I’ve taken on several pet projects since I started coding more than seven years ago, year after year, I end up having less and less time to dedicate to them. Being more efficient has become inevitable if I want to keep going. Building real stuff is definitely the best learning method for me, and during my last side project (a word search game using Flutter), I realized a few patterns that really helped me make the most of it. Hopefully, they will help you too. So, let’s get to them! Note: I’ll use examples from my last endeavor to make my points a bit clearer.
https://medium.com/better-programming/7-tips-to-make-the-most-out-of-your-pet-projects-db8ffd49c847
['Douglas Navarro']
2020-07-15 15:46:23.524000+00:00
['Software Engineering', 'Programming', 'Software Development', 'Productivity', 'Learning To Code']
Indo-Thai Fusion Curry: A Poem
Prep Time: 15 minutes | Cook time: 20 minutes | Total time: 35 minutes Servings: 4 | Course: Entree | Cuisine: Indian and Thai fusion, Vegan, Gluten-free Ingredients: 2, 14-ounce cans light coconut milk (substitute full-fat for a creamier texture) 2 cups chopped broccoli florets 1 cup chopped potatoes 1 cup beans (whole or chopped) 1 cup diced carrots 8 oz firm tofu, cut into small cubes (optional) 1/4 cup water or vegetable stock to steam the vegetables 4 tsp Everest Chicken Masala/ Curry Powder (affiliate link) 1/4 tsp turmeric powder 1/2 tsp red pepper powder (optional) 1 tsp Lemon (can be substituted with soy sauce or any other souring agent) Salt to taste Instructions: Heat a deep pan on the stove Get ready to pour in some love Let’s cook an Indo-Thai Yellow Curry But remember, we’re in a bit of a hurry Dinner’s at half-past eight The family’s headed home straight So we’ll use a recipe unconventional I promise the taste will be sensational In, go the water and lots of vegetable Carrots, potatoes, beans, anything’s acceptable Sprinkle a dash of turmeric and salt Steam them veggies till slightly soft Some like their vegetables crunchy I like my broccoli to be bunchy Now add coconut milk from a can Gently stir the ingredients of the pan Everest chicken masala is the key At any Indian grocery store, it should be Don’t let the ‘chicken’ in the name fool you It’s completely vegan; I assure do Now throw in the masala you got Add powdered red pepper if you like it hot Let the curry simmer and then check for taste Add your favorite spices, but make haste Lastly, squeeze in some lemon And your family, summon Switch off the stove and allow it to cool By now the aroma has everyone in a drool And there your have it nice Serve it with Thai Jasmine rice Try this recipe, it’s sure to be a hit Enjoy your meal, wish you a bon appetit!
https://medium.com/ninja-writers/indo-thai-fusion-curry-a-poem-aeae575b216c
['Manasi Kudtarkar']
2020-12-07 20:48:06.844000+00:00
['Vegan', 'Creativity', 'Poetry', 'Food', 'Recipe']
React: Managing Websockets with Redux and Context
Socket Manager: Websocket + Context Provider

The <SocketManager /> component can be thought of as the engine of our Websocket and state management solution. We will talk through this solution here and provide the full GitHub Gist thereafter. Within <SocketManager /> we want to define the following things:

The React context itself, and the useContext() hook, giving developers the option to use the context in functional components, too.

The <SocketManager /> component itself, used to wrap the rest of the application, providing the Context Provider to the entire component tree.

Our websocket connection, managed within <SocketManager />'s component lifecycle methods. We'll connect to the websocket when the component is initialised, and disconnect when it is unmounted.

New socket events will update <SocketManager />'s state, and will therefore update the Context value, keeping the market data up to date.

For such a setup, we require the socket.io-client package, along with redux and react-redux for later use. Install them in your project directory:

// install dependencies
yarn add socket.io-client redux react-redux

This setup assumes that you are also using the socket.io server library, run with NodeJS, whose job it is to feed your application the market data. The backend solution for this project is for another piece, but it is important to note that socket.io provides both the server-side and client-side APIs for websockets.

Defining the Context

Defining the Context itself is simple, as is configuring the context hook to be exportable for functional components to leverage:

import React from "react";
import io from 'socket.io-client';

// defining the context with empty prices object
export const SocketContext = React.createContext({ prices: {} });

// defining a useWebsocket hook for functional components
export const useWebsocket = () => React.useContext(SocketContext);

Straight away we are allowing other components access to our context, and therefore making the live market price feed reachable. Within class components, we refer to the context using the static contextType property on a class:

import { SocketContext } from '../SocketManager';

export class MyClassComponent extends React.Component {
  static contextType = SocketContext;
  ...
}

Whereas within functional components we can use our newly defined useWebsocket() hook:

import { useWebsocket } from '../SocketManager';

const MyFunctionalComponent = () => {
  const priceData = useWebsocket();
  ...
}

Whether your project strictly adheres to class or functional components, this setup has you covered.

<SocketManager /> Component

It is important to highlight that <SocketManager /> has its own internal state, which will dictate what our Context Provider holds. Let's first set up the boilerplate:

export class SocketManager extends React.Component {
  state = {
    prices: {}
  }

  socket = null;
  ...
}

A public socket class property has also been initialised to null. This property will be updated with the socket.io connection in the class constructor. Class properties are accessible from any class function, including lifecycle methods, render, and other custom methods.
This is perfect for a socket connection, which lifecycle methods and render both need access to. If you are adopting this solution in a TypeScript environment, you could also make this property private, protecting it from being mutated externally:

// Typescript friendly socket class property
socket: SocketIOClient.Socket | null = null;

<SocketManager />'s render() will simply return the component's children wrapped with the Context Provider:

render () {
  return (
    <SocketContext.Provider value={{ prices: this.state.prices }}>
      {this.props.children}
    </SocketContext.Provider>
  );
}

Notice that, as we mentioned above, <SocketManager />'s state dictates the value of our Context Provider, keeping websocket data fresh as the socket feeds in new price updates. As you may have guessed, we can now wrap the entire app with <SocketManager />, or individual clusters of components we wish to give context to:

// `App` root component
import { SocketManager } from './SocketManager';
...
return (
  <SocketManager>
    <App />
  </SocketManager>
);

If you know exactly which part of your app needs to leverage <SocketManager />, it will make more sense to wrap the isolated components rather than the entire application.

Initiating the Websocket within constructor()

Now, to configure real-time updates, we can utilise the class's constructor() method for websocket initialisation. Once we have a connected socket, we can listen for the receive prices event, where the state update can happen:

constructor (props) {
  super(props);
  this.socket = io.connect(
    process.env.NODE_ENV === 'development'
      ? `https://localhost:3002/`
      : `https://api.mydomain.com/`,
    { transports: ['websocket'], rejectUnauthorized: false, secure: true }
  );
  this.socket.on('receive prices', (payload) => {
    this.setState({ prices: payload.markets });
  });
}

Examining this snippet in more detail, we have firstly initialised the socket class property with io.connect(). Notice how we have relied on the process.env.NODE_ENV environment variable to determine which endpoint to connect to:

// connect to localhost in a development environment
process.env.NODE_ENV === 'development'
  ? `https://localhost:3002/`
  : `https://api.mydomain.com/`

We'll discuss how to set up an encrypted websocket connection, both in development and production — relying on an Nginx proxy — in another piece. Nonetheless, this little snippet is handy for development purposes. Defaulting to the production URL can also be handy in development if you want to feed live data into your development build! Your websocket may default to the polling method if no transport config is provided. In the above snippet we only want to connect via websocket. In my experience polling can initialise quicker, resulting in a quicker initial response from the server. However, websocket provides a live connection in a stable manner that is more efficient in production environments. Finally, the receive prices event is listened to, and it will update our state when the event is triggered:

this.socket.on('receive prices', (payload) => {
  this.setState({ prices: payload.markets });
});

Handling socket disconnection

componentWillUnmount() is a great place to disconnect websockets.
As there is a chance the socket may already be disconnected for whatever reason, wrap the socket.disconnect() method in a try catch statement: componentWillUnmount () { try { this.socket !== null && this.socket.disconnect(); } catch (e) { // socket not connected } } Tip: Integrating React Router DOM In the event you only wish your websocket to connect on certain pages of your app, you can always refer to react-router-dom ’s location property. Simply wrap <SocketManager /> with the withRouter() HOC provided by the package: import { withRouter } from 'react-router-dom'; // component snipped export default withRouter(SocketManager); Let’s now say we only wish to connect our Websocket on the landing page of our app, we can test the pathname value of location to do so, within constructor() : constructor (props) { super(props); if (this.props.location.pathname === '/') { ... } } Or simply return false if we are not on the landing page, returning before the rest of the function is executed: constructor (props) { super(props); if (this.props.location.pathname !== '/') { return false; } ... } You’re not limited to location properties — the component’s props can also be tested to determine whether to initialise the websocket connection.
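To round this off, here is a small consumer component of my own (not from the article) showing how the useWebsocket() hook defined above might be used to render the live feed. It assumes each entry in prices maps a market symbol to a numeric price, which the article doesn't specify, and the import path should be adjusted to wherever your SocketManager module lives.

import React from 'react';
import { useWebsocket } from './SocketManager';

// Renders whatever the socket has fed into context so far.
// Re-renders automatically whenever <SocketManager /> updates its state.
export const PriceList = () => {
  const { prices } = useWebsocket();
  const symbols = Object.keys(prices);

  if (symbols.length === 0) {
    return <p>Waiting for market data…</p>;
  }

  return (
    <ul>
      {symbols.map((symbol) => (
        <li key={symbol}>
          {symbol}: {prices[symbol]}
        </li>
      ))}
    </ul>
  );
};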
https://rossbulat.medium.com/react-managing-websockets-with-redux-and-context-61f9a06c125b
['Ross Bulat']
2019-11-11 22:09:57.149000+00:00
['Software Engineering', 'JavaScript', 'React', 'Programming', 'Web Development']
5 Reasons Your Article Got Rejected
As writers, we face more rejection than most people. It comes with the job and we need to learn to deal with it well. One way to do this is to understand why our stories are getting rejected in the first place. I've been in the writing industry (mostly magazines) for over 12 years. I've faced my fair share of rejection. I've also worked as a subeditor, an editor, and in close partnership with other editors. After a while you start to see clear patterns of what stories get accepted and why. Once you understand the reasons, it stops feeling like a personal attack and becomes something you can work with. Here are 5 common reasons stories are rejected by editors: 1. It's not your story, it's you Some publications and magazines only want articles by experts and well-known writers. Forbes writers, for example, are mostly well-established professionals who are known in their fields. The New Yorker often runs stories by well-known essayists, journalists, and novelists. They both also have a number of staff writers (as do many publications). Editors will favor their regular contributors and staff writers — especially at the moment. Still go ahead and submit your story if you want to, but be aware that getting a rejection from these types of publications is not necessarily anything to do with the quality of your writing — don't take it personally. Try again later. 2. It's not you, it's your story Many new writers make the mistake of picking topics that are too common. Your story is rejected because the idea is not original. Few ideas are completely original (I know it's hard!) but if you've written another "Healthy living" article about how we need to get sleep, get up early, exercise and eat less junk food, don't expect it to be accepted. The value of a story is in its details. If you want to write about healthy living, write about how starfruit transformed your breakfasts, how getting up an hour earlier helped you lose weight, how new research shows that watching the sunrise decreases your risk of depression. Go small and specific. 3. It's not you, it's them Perhaps your story is great, but a huge number of submissions flood the editor's inbox at the same time as yours. They reject your story, not because it's terrible, but because their magazine is full — even online magazines/media sites have a certain number of articles they prefer to publish daily. At the moment more people are working online, and trying to make a living from freelance writing, than ever before. Your rejection may have been a case of bad timing — persevere! 4. It's not your topic, it's your writing If your story is full of errors and badly written, expect a rejection. No editor wants to see your rough first draft. Was it too technical or wordy for the publication? Does it need a good edit to find the focus? Are your ideas backed up (the ones you're not an expert on)? If your writing needs work, take some time to learn: do a course, read, find a mentor. Professional writing is like any other career — there's a training period. Few people become paid writers instantly. 5. It's your research, or lack of it You sent in an article almost identical to one the publication has just run — embarrassing! Or perhaps you sent a personal essay when they only run journalistic pieces, or a "how-to" when they only run first-person confessional stories. Do your research on the publication you are sending your story to. Make sure you read their submission guidelines. Read some of their stories.
Do a google search of your topic alongside the name of the publication. Not doing your research wastes your time and theirs.
https://medium.com/inspired-writer/5-reasons-your-article-got-rejected-3d39787c72f
['Kelly Eden']
2020-09-10 19:43:56.873000+00:00
['Life Lessons', 'Writing', 'Writing Tips', 'Creativity', 'Freelance']
How to Use the Hemingway App to Improve Your Writing
Readability Grade Levels The app uses an algorithm to determine the lowest education level needed to understand your writing. For example, when you see a sixth-grade reading level, it doesn’t mean your content is meant for sixth graders. It does mean that the lowest education level to understand the writing is sixth grade. Writing at a college-grade level doesn’t mean that your writing is going to be better than writing at a sixth-grade level. Indeed, it may be more tedious and filled with jargon difficult for readers to comprehend. According to the Hemingway App, most people read at about the tenth-grade level, which is a good grade to aim for. If you do content writing for clients, you will often see they are looking for writing around a sixth-grade level. This keeps your writing simple, free from jargon, and ensures most people will understand it. Adverbs & Weak Phrases The Hemingway App lets you easily pick out your adverbs by highlighting them in blue. It’s a real benefit if you’re in the Stephen King camp on adverbs. “I believe the road to hell is paved with adverbs, and I will shout it from the rooftops” — Stephen King You may have a less harsh philosophy on adverbs. Some people think the occasional adverb spices up their writing. Yet it’s still nice to have them highlighted where you can easily spot them. It also catches weak phrases you can eliminate and highlights them in blue as well. Ironically enough, the Hemingway App alerted me to the use of a weak phrase in King’s quote, see it highlighted below in blue. Screenshot by author. In this case “I believe” is redundant. You can just say “The road to hell is paved with adverbs.” It gets the point across all the same, but sounds more powerful. Passive Voice I’m not going to cover all the reasons experts recommend for not using passive voice. The app does promise “bold” writing, and bold writing doesn’t come from passive phrases. Yet passive voice is not a grammatical error. It’s a stylistic choice, and you may have your reasons for using it. If you’re trying to eliminate or limit passive voice, this app makes it easy to spot. You’ll see passive phrases highlighted in green. Phrases with a Simpler Alternative Not only will the app identify phrases that can benefit from a simpler alternative, it also suggests a replacement word. I tend to overuse the word “however.” The app highlights it in purple, and when I hover over the highlighted word, it provides suggestions. “But” or “yet” are simpler alternatives to my “however.” Other common simpler words are “use” instead of “utilize” and “goal” to replace “objective.” Again, you get to make the final call. If you’ve got a reason to use “utilize,” and it works for your style, go for it. Hard-to-Read and Very Hard-to-Read Sentences If you have a sentence that is getting complex or long, the app will highlight it in yellow. If it’s very long and complex, you’ll see it in red. The Hemingway App recommends that you split these sentences up. They advise that you banish all your red sentences. Long sentences can be edited, simplified, and split up until they become easier to comprehend.
https://medium.com/better-marketing/how-to-use-the-hemingway-app-to-improve-your-writing-80a914f7de06
['Jennifer Geer']
2020-03-17 14:55:51.211000+00:00
['Editing', 'Writing', 'Writing Tips', 'Creativity', 'How To']
How to Make Your First $10,000 as a Freelance Writer
If you can learn how to make $10,000 as a freelance writer, nothing will be able to stop you from earning an incredible living from your talents. That first $10,000 is going to be the hardest money you ever earn. But once you have overcome that hurdle, you will have made all the basic mistakes and will have learned everything you need to know to make a full-time living as a writer. After the first $10,000, it’s just a matter of scaling and iterating to make as much money as you want. Once you make your first $10,000, you are no longer a beginner. You may not be an expert, but you are well on your way to being in control of your own financial destiny. You need to figure out how to make that first $10,000 as quickly as possible. I’ve spent the past eight years making a full-time living online, primarily through freelance writing. Freelance writing continues to be the most significant chunk of my income. I didn’t tap into some magic formula. I made every possible mistake. It took me a long time to figure out the ins and outs of making a freelance writing business work. But once I figured it out, I was able to scale my income quickly. Your path will be different from mine because no two writers are the same. I don’t believe there is a single correct path to writing success. However, hundreds of roads lead to failure. This guide will help you avoid the biggest mistakes, and it will show you some of the lessons you need to learn to run a successful freelance writing business. Once you know what it takes to make $10,000, you’ll be able to make $50,000, $100,000, or more if you desire. Understanding Your Value The first thing that you have to master before you can make $10,000 has nothing to do with your skills as a writer, your ability to market yourself, or even your ability to land clients. You have to learn what your value is. Writers make the mistake of setting their rates far too low. Many writers feel that they cannot in good conscience charge $100 for a simple blog post because they would never pay $100 for someone else to write them a blog post. You are not your target market. You are not trying to sell your services to other freelance writers. You are trying to sell your services to businesses that need and value your skills. New writers often can’t imagine anyone being willing to pay large amounts of money for their work. Sometimes your friends and family reinforce these false narratives. When I was getting started, my wife and my in-laws wondered how I would ever make any money. They asked why anyone would hire me when they could write their web copy or blog posts themselves? Your value is not what you would pay yourself. Your value as a business is how much you value you can generate for someone else. Businesses hire writers because they lack the time, resources, and skills to write marketing copy that works. There are thousands of businesses that would eagerly pay $100 or more for a blog post. Why are they willing to pay that much? Because they understand that a good writer will help them attract their ideal customers. A good writer will create content that continues to deliver results for years to come. Image by studiostoks and licensed through Deposit Photos. If you want to make your first $10,000, you have to get over thinking that your work is only worth pennies per word. You have to learn that you provide value by creating content that helps businesses make more money. 
Once you can consistently help businesses close sales and generate revenue, you will be able to charge whatever you want. Setting your rates is one way you communicate to clients the value you bring to their business. Your rates are not based on whether your prose is pretty or whether or not you know how to avoid ending a sentence in a preposition. Your value comes from results. Freelance copywriting is a results-driven industry. There is a huge mismatch between the number of writing opportunities there are and the number of writers who can deliver results. There might be thousands upon thousands of freelance writers in the world, but only a small percentage can consistently deliver great work. Once you understand how valuable you are, it’s time to set your rates. In the business of freelance writing, you’re going to have to decide how much you’re going to charge. There are three common ways that freelancers bill clients. They charge per hour, per word, or per project. Charging per hour is the worst thing that you can do as a freelancer. It undervalues your skill and punishes you for being efficient. You will never earn up to your full potential charging a client on an hourly basis. Consider that even a brand-new law school graduate who has just passed the bar can bill between $150 and $300 an hour. Clients are used to seeing lawyers charge large amounts of money. How much can you charge as a freelance writer with limited experience? No client is going to ever pay a new writer $150 to $300 an hour. That rate would cause most clients to have a stroke. Clients see that rate, and they do the math. They start multiplying $150 or $300 by 40 hours a week and by 50 weeks a year. They get an astronomical figure of between $300,000 and $600,000. The clients have a hard time thinking that any writer is providing that level of value. If you end up charging per hour, you have to find a rate that meets the expectations of your clients. This is how writers get stuck charging $10, $15, or $25 an hour if they’re lucky. Some excellent writers may get away with charging $50 to $150 an hour to a client. But the smartest writers are never charging per hour. Charging per word is better than charging per hour because your clients really don’t understand how long it takes you to generate those words. However, when you charge per word, you’re still limiting yourself. If you charge ten cents a word, that means you’re charging $50 to write a 500-word blog post. That’s not a horrible rate. However, what does charging ten cents per word make clients think? They tend to remember the ten cents. They associate you with being cheap. That’s not a path for building a successful business. That’s not how to demonstrate the value that you bring to the project. Charging per project is better for clients and writers. Clients can see right away how you value your services. If you charge $100 for a short blog post, clients know that you are worth the cost if they believe they will be able to get $100 of business based on your blog post. It’s a straightforward calculation for companies. If you get more efficient at what you do, you can make more money. I can write 1,000 words in an hour that are ready to publish. That means if I charge $150 per short blog post, I can make $300 an hour. I would have the same internal billable rate as the lawyers we mentioned earlier. But the client will never know that that is what they’re paying me. 
Instead, they just see that they’re getting $150 or $300 worth of value out of the final product that they are receiving. When you set your rates, the best way to earn a living in as short a time as possible is to use project-based billing. Tell clients what your fee is based on the expected deliverable. You need to understand what the final word count is going to be, and approximately how long it’s going to take you to complete that deliverable. But those are proprietary details. The client only needs to know what you will deliver, how much you charge, and when you’re going to deliver the final product. Project-based billing allows you to have higher rates for different services. You may charge more to write sales emails on a per-word or per-hour basis than you charge to write a blog post. Charging per word or per hour limits your pricing flexibility. Understanding your value means learning how to set your rates for the maximum amount of money that the market will bear. There are clients available at every price point. Your prices shouldn’t be set in stone. You always have the ability to lower or raise your rates as needed. The most competitive part of the freelance writing market is the bottom. You don’t want to compete with writers who are charging pennies per word. The more you charge, the less competition there is. At higher price points, clients aren’t choosing between writers based on their prices. Clients are selecting writers based on the value they deliver. If you learn how to value your services properly now, it will help you make your first $10,000 much faster. How Long Will It Take You to Make Your First $10K? There is something magical about $10,000. Once a new business has made its first $10,000, it becomes easier to believe that the business will be successful. It no longer seems like a dream. But, how long will it take you to make your first $10,000 as a freelance writer? Some writers make $10,000 their first month in business. However, it takes most writers two or three months to make their first $10,000. The speed that you reach this milestone will depend on three factors: 1. How much time you have to invest in your business 2. What rates you are charging 3. How good you are at attracting and landing new clients If you are working full time as a freelancer, it will be easier to quickly earn money. The more you charge, the fewer clients you will need to reach your financial goals. The better you are at getting clients to hire you, the faster your business will grow. Having a timetable will help you stay motivated as you do the hard work of building a new business. However, you want your timetable to be reasonable. It needs to be based on your life situation. To create an estimate of how long it will take you to earn your first $10,000, you will need to do some math. Don’t worry, none of the math is complicated, and you are allowed to use a calculator. First, you need to decide how many hours a week you have to put into your business. Be realistic. If this is a side gig, don’t pretend you are going to put 40 hours in each week. This isn’t a competition. You are merely creating a timetable for yourself so that you can hold yourself accountable. If you are a stay-at-home parent, you will also need to factor in how much time you will be spending on children and household work. I started as a freelancer while I was also the full-time caretaker of our four young children. I did not have 40 hours a week to put into the business. 
For demonstration purposes, let's assume you have 20 hours each week you can devote to your freelance writing business. If you have more or less time, that's fine. You can adjust your timetable as necessary. Running a freelance writing business is about more than just writing. You will also need to spend time each week looking for work, communicating with prospects and clients, marketing your business, and taking care of administrative tasks. You will need to find your balance. Most likely, you will only be doing client work for half of the time available. In this example, you are spending ten hours a week doing client work. In your first couple of weeks, this number will be lower because you will still need to get some clients. We are going to assume it takes you two weeks to land enough clients to have ten hours of work a week. How much are you going to be making for each hour of client work? Even though you are not going to be charging per hour, you still need to know what your internal billable rate is. Let's assume it takes you one hour to write a 500-word blog post. At the end of an hour, you have a post that is ready for the client to publish. If you are a new writer and lack confidence in your skills, you might set your rate to only $25 per 500-word blog post. I believe that even new writers should charge at least $50 for a blog post. But I also understand that most new writers are scared to set their rates too high. Because I want to make this timetable as realistic as possible, let's see what happens if you price your services at the $25 per blog post rate. Under these circumstances, you will only be making $250 a week, producing ten blog posts. At this rate, it will take you 40 weeks to earn your first $10,000. That might seem depressing. But, don't despair. Let's see what happens if you charge more. If you charge $50 a post, and you only have ten hours a week for client work, it will only take you 20 weeks to make your first $10,000. If you learn to write faster and can produce a blog post in thirty minutes, and you still charge $50 a post, you will be earning $1,000 a week, only working for ten hours that week on client projects. Once you become more efficient, you will be able to increase the amount of time you have to work each week. If you can spend 15 hours a week on client work, and you can make $100 an hour, you will be making $1,500 each week. This means it will take only about seven weeks to earn your first $10,000. Writers who can devote at least 20 hours a week to working on their business can earn their first $10,000 in about eight weeks. In the first week or two, you may not make anything. But, once you have built a little momentum and have a few clients, your income will climb quickly. You will want to aim for making between $1,200 and $1,500 a week. Image by studiostoks and licensed through Deposit Photos. The more you charge, the fewer hours you will need to work to meet your targets. Writers who start out charging $100 a blog post, and who have the skills to merit that rate, can make as much as $200 an hour. This means they can make $2,000 a week only doing ten hours of client work. Make your own timetable for how long it will take you to make your first $10,000. Use this timetable to keep you motivated and accountable. If circumstances change in your life, go back and adjust your timetable. Once you have a time goal and understand your value, it's time to start building the business. The first thing you should do is create a website.
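If you like to play with the numbers yourself, here is a tiny JavaScript sketch of my own that runs the same back-of-the-envelope arithmetic as the timetable above. The figures and the flat 50/50 split between client work and everything else are only the example assumptions from this section, not rules.

// Rough timetable: how many weeks until the first $10,000?
function weeksToFirst10k({ hoursPerWeek, clientWorkShare, postsPerHour, ratePerPost }) {
  const clientHours = hoursPerWeek * clientWorkShare;          // e.g. 20h * 0.5 = 10h
  const weeklyIncome = clientHours * postsPerHour * ratePerPost;
  return Math.ceil(10000 / weeklyIncome);
}

// $25 per post, one post per hour: 40 weeks
console.log(weeksToFirst10k({ hoursPerWeek: 20, clientWorkShare: 0.5, postsPerHour: 1, ratePerPost: 25 }));

// $50 per post, two posts per hour: 10 weeks
console.log(weeksToFirst10k({ hoursPerWeek: 20, clientWorkShare: 0.5, postsPerHour: 2, ratePerPost: 50 }));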
Creating Your Freelance Website If you are going to create a profitable and sustainable freelance writing business, you need a website. Being able to point potential clients to your website will help you earn higher fees and will help you make your first $10,000 faster than if you have to get all of your clients from content mills or freelancer websites. When I started my freelance writing career, I could not afford the ten bucks a month for web hosting. I spent the first couple of months of my career on content mills and freelance websites, earning pitiful amounts of money. When you have your own freelancer website, you can work to attract clients without worrying about having some other desperate writer underbid you. You also have more control over your career. Once you get a potential client to your website, your only job is to close the deal. Your website will liberate you from content mills. You can grow your business without worrying about someone else changing the rules on you. Having a business website is also a confidence booster. There’s something about having a place of your own on the internet to make you feel more like a real business professional. You don’t need to get fancy with a freelancer website. In the beginning, the simpler your website is, the better. I love WordPress because it’s relatively easy to use. There are tons of resources and an unlimited amount of functionality. It is also incredibly well supported. There are regular security updates, and the WordPress software itself is free to download and use. Web hosting is going to cost you somewhere between $10 and $15 a month, depending on who you choose to do your hosting and what package that you get. You will want to get a security certificate. This is an extra cost. And it may be the most significant investment you have to make because it could cost you up to $100. However, without a security certificate, Google Chrome, and some other browsers, will display a warning next to your URL that the site is not secure. While this warning doesn’t matter if you are not processing payments on your site, it still worries people looking to hire you. Many web hosting companies will give a free security certificate for at least a year when you sign up with them. Make sure and do your homework before choosing a web hosting company. When creating your freelancer website, you can use free templates and free plugins. Once you are more profitable, you can upgrade to premium products if you want. However, I still use a free theme and free plugins on my WordPress site eight years into my career. If WordPress seems too technical for you, you can also use Squarespace, Weebly, or Wix to create a simple website for a small monthly investment. If you cannot afford web hosting and a security certificate, don’t give up! It will take you a little longer to make your first $10,000, but you can still be successful. If you need to save up the money by like I did, and work at content mills and on some freelancer websites, go ahead and do it. Make sure you keep your eye on the prize of getting your own website. It is tough to get that first $10,000 until you have created your website. What should your website look like? Many freelancers make the mistake of making their first website too complicated. The bigger you make this project, the harder it will be for you to finish it. Right now, you want to create a minimum viable product. You want to do as little as possible to launch a website. 
Your website doesn’t need to have hundreds of pages or even ten pages. I recommend that you just create three content pages and a contact page. Your content pages will be your homepage, about page, and portfolio page. You don’t even need a blog at this point. The goal of your website is to allow you to have a place to host your samples so that you can contact potential clients and refer them to your website. Having a website makes you seem more professional and experienced. Clients will find it easier to trust and hire you when they can visit your website. Choose a domain name that sounds professional. You can use your own name, or you can create a name for your business. I started out using my name as the domain name and later created a separate entity and bought a new domain name. Don’t waste too much time on your domain name. You can always change it as your business grows. Should you use a dot com domain name? There are tons of different domain extensions. Your clients are most likely going to find your website by clicking a hyperlink in your email. You don’t need to use a dot com domain name. You just need to choose a domain name that you like, that seems professional, and that will allow you to grow your business. Image by studiostoks and licensed through Deposit Photos. Creating Content for Your Website Your homepage needs to explain what you do. Immediately after your site loads, potential clients need to understand that you are a freelance copywriter or a freelance content writer. If you have a specific niche, you should mention that in the copy on your homepage. You also need to make it easy for visitors to navigate to the about page, portfolio page, and contact page. You don’t need to worry about SEO at this point because your business is new. Nobody is going to find you in organic search results for a few months anyway. Right now, your website is going to be a place you send prospects to after you have contacted them some other way. SEO is critical to your long-term success. But, your blog and on-page SEO can wait until you are profitable and have two or three clients. The best way to succeed as a freelancer is to get a few quick wins. This will make it easier for you to be motivated to work on things like SEO and blog posts. Right now, you are building a minimum viable website. The purpose of every single page on your website is to convince a potential client to hire you. Many service professionals struggle with creating about pages. The key is to make sure that the about page is actually about your client, not you. Every time you mention your qualifications, it should be to show why that makes a difference to the client. How to Create a Portfolio Even if You Don’t Have Any Experience The most important page on your new website as you begin this journey towards $10,000, is your portfolio page. Your portfolio page will show samples of your work. Most clients are going to want to see proof that you know how to string sentences together before they hand over any money to you. Portfolios are a major obstacle for many new freelancers because they don’t have any samples. There’s a lot of bad advice on what to do for a portfolio page. But, there is an easy way to get the samples you need to get your first handful of clients — and it doesn’t require you to work for free. You are now a professional service provider. You are a professional writer, say that to yourself three times. You are a professional writer; you are a professional writer; you are a professional writer. 
That means you do not work for free. Do not work for free for friends or family to get samples for your website. This sets a bad precedent that you are willing to do free work. Free work is terrible for you and bad for your relationship with your friends and family. It prevents your friends and family from truly seeing you as a pro. If you do not have any samples, spend a little time figuring out what type of clients you would like to work for. Find three or four specific companies that you like. Imagine what type of content you would write for those companies. Then, create some fictitious company names that would be in the industries that you want to work with. Write blogs, web copy, or whatever content you need samples of for those fictitious companies. You can post those to your website. The clients don’t care that they’re fictitious companies. What they care about is the quality of your writing. Once you have work samples from paying clients, you can take down the fictitious company samples and put up your paid client samples. There are a lot of different ways to display work on your portfolio. My favorite way is to create a PDF of the product. PDFs are easy to read, even on mobile devices. You can also prevent them from being indexed by search engines, which may become important after posting samples that you wrote for other clients and businesses. You don’t want to be competing with your clients for SEO juice. Your portfolio is how you will always show clients your samples. Do not send them attachments unless they have specifically asked for them. Email attachments are problematic because they are so likely to carry viruses. Some companies won’t even allow employees to open attachments. Instead, you should send a link to your portfolio page in an email. Here’s the way I handle my portfolio page. I have a free plugin for WordPress called PDF Image Generator. It automatically creates a thumbnail from any PDFs that I upload. I take the word version of a blog post that I created, and I turn it into a PDF using Canva. I add a logo for whatever company I wrote it for, and I also put a live link from that sample to the actual blog on the client’s website. If you’re creating your samples from a fictitious company, you don’t have to go to the effort of making a fake logo, and you’re not going to have any links, but make it as pleasing to the eye as possible. You could put your logo or other branding element on the page as well. I also put my contact information in a sidebar on the PDF. My goal is to make the PDF look like it could be a professional marketing brochure that I could print and send. I then upload the PDF to my website. I have a portfolio page that is organized by industry type and content type. The plugin automatically creates a thumbnail of the PDF. I place the PDF thumbnails under the right category on the portfolio page. I make sure and link the thumbnail to the PDF so that all the client needs to do is click on the thumbnail to see my sample. Each PDF opens in a new tab. Setting up a portfolio page like this can take anywhere from 15 minutes to an hour, depending on how much work you have to do to make the PDFs look beautiful, and how much of that content you still need to write. If this sounds overwhelming, you can take a simple approach. I have seen freelancers create screenshots of their samples and upload those images to a portfolio page. 
Because your portfolio page is going to help you launch your career, I would spend more time on making it easy to use than any other page on your site. One of the best practices for your business is to keep your portfolio updated. Take time once a month to take down old samples and replace them with newer, better samples. I am horrible at this. But it is a good way to make your website look professional. Also, search engines often reward sites that are regularly updating key pages. Lots of new freelancers wonder about blogging. I love content marketing. I’m a passionate content marketer, and I have landed a lot of business through content marketing. But, until you have made your first $10,000, do not stress about your blog. The truth is, nobody is going to find you on a search engine when your website is new. Nobody is looking for you specifically, and your content is unlikely to have enough SEO juice to attract the kind of clients that you want right away. Plus, the world of content marketing is changing. Simple 500-word blogs don’t do much for SEO or discoverability. You will want to work on in-depth blog posts and other types of content marketing once you have made some money. Content marketing is a long-term strategy; your initial goal is to get some success as a freelancer. Once you have that first $10,000, you will have the confidence to invest money and time into long-term strategies. But when your business is new, it’s easy to get discouraged. Spending time blogging every single day or even every single week and realizing that nobody is reading your blog can make you want to give up. The truth is, even if you are a tremendous writer, it can take months or even years for a blogging strategy to pay off for freelance writers. I think blogging is something valuable to do for your long-term success. But your initial efforts need to be focused on making that first $10,000. Then you should start investing time into content marketing. But right now, skip the blog and do what you need to do to get some paying clients. Who Will You Serve? The easiest way to fail as a freelance writer is to decide that everyone in the world is your potential client. You are not a good match for every single business in the world. However, as a beginner, you probably don’t know what niche will be the most profitable for you. You don’t have to niche down right away to make a good living as a freelancer. Some freelancers take two or three years to find their niche. Some freelancers never niche down. I’m a generalist who writes in half a dozen to a dozen different industries. For me, the idea of a niche is more of a marketing tactic. But I do have a very well-defined idea of who my clients are. That’s what you need to figure out. Who will you serve? Do you want to write for small businesses, medium businesses, or large businesses? Do you have an idea of what types of industries you want to write about? Do you want to target local businesses, or do you want to be location independent and only contact clients online? Every choice you make has pros and cons, and every freelancer needs to make their own decisions based on what works best for them. I started out targeting local clients very briefly and realized I didn’t like it. Local clients always wanted face-to-face meetings. I changed my marketing so that my business became location independent. I have written for Fortune 500 companies before, but I prefer writing for well-funded small businesses.
It takes longer to get paid by big businesses, and many small businesses are easier to work with than large companies. Often huge corporations require several rounds of revisions, and your work has to be approved by several different people, or even committees. Some freelancers love that prestige of writing for Fortune 500 companies, and there’s nothing wrong with that. I find that small business clients are easier to land when you’re just getting started. But you do have to be protective of your rates because many small businesses will not be able to afford you. Keep in mind there are thousands of small businesses that gross millions of dollars a year, and they can pay you professional rates. Image by studiostoks and licensed through Deposit Photos. Some niches are going to be more profitable than others. Travel writing is extremely competitive, and the rates are much lower than writing for software as a service or FinTech companies. Again, there is no right or wrong choice. However, refusing to make a choice is a mistake. You cannot afford to market to everyone. If you try and appeal to everyone, you will appeal to no one. You need to choose what works best for you. Most decisions about your niche and what types of clients you make can be changed later. You are not swearing a sacred oath to only work in one niche or with one type of client. What matters right now is that you make some decisions about who you will serve so that you can begin marketing yourself. If you find you’re not able to get clients in your target market, you can always go back and change your ideal client profile. That’s the beauty of working for yourself. But you can’t succeed targeting everyone. The narrower your scope, the easier it will be to market and manage your first clients. When I began freelance writing, I had just retired from a nine-year career as an attorney. I targeted lawyers to do content marketing for them because I understood that industry. It did not require a lot of research for me to do any of the writing, and I had a natural advantage over other writers. Many attorneys preferred hiring somebody with a law degree over a writer who didn’t have that same credential. My clients were generally small and medium-sized businesses. I eventually moved away from writing for lawyers and started writing for B2B technology companies because the work was more interesting, it paid better, and the clients were more fun to work with. You want to create some type of profile for who you’re going to be serving. Again, if you strike out, there’s nothing that prevents you from going back and making a change. But when you sit down to start doing your marketing, you will feel much calmer, knowing who you are trying to reach. Payment Issues You need to figure out how you’re going to get paid. You need to have three things figured out about the mechanics of how a client is going to pay you: 1. How much will you charge upfront? 2. When do you expect payment? 3. How is the client going to send your money to you? The easiest way to get paid is to have a PayPal account. You want a PayPal business account because there are more features available to you. It’s a simple process to convert a regular PayPal account into a business account. You only have to click a few links on your account page and fill out a short form to explain what your business does. There’s not a fee for a PayPal business account. You can connect your PayPal account to your regular checking account so that you can transfer money back and forth as needed. 
You can also get a PayPal business debit card, which can be invaluable for keeping your business and personal expenses separate. Some clients will prefer to pay you via an ACH transfer, also known as direct deposit. This is nice because you get to avoid PayPal fees. I’ve run into a few clients who want to pay me by check. I refuse to work with these clients because it takes way too long to get paid, and I’m far too impatient for that. Stripe is another payment processor that is becoming increasingly popular. If you’re overwhelmed by the options, start with PayPal. Most of the time, I also use PayPal to send invoices to my client. Clients can click a link and pay me in less than a minute. Once you have the mechanics figured out of how a client is going to pay you, you need to figure out what your pay policy is going to be. I strongly recommend you get paid upfront. My policy is that for anything under $1,000, I get paid 100% upfront. Anything over $1,000, and I need a 50% deposit, and the rest needs to be paid upon completion of the work. Jobs that are for $10,000 or more require a more complex payment structure. I work out these details as required, but I always get a significant payment before commencing the work. Occasionally, I’m willing to do the work and get paid after I submitted the work to the client if I have worked with that client for a long time, and they have always paid on time. If you get paid upfront, you will never be ripped off. Clients who are hesitant to pay you upfront may not be good clients for you. They may be inexperienced. Many vendors are going to require payment upfront. Clients who don’t want to pay anything upfront may not actually plan to pay you in the first place. Clients have all the protections in the world, especially if they are paying you via PayPal. If for some reason, you fail to deliver the work, it is easy for them to get their money back from PayPal. If you do the work and don’t get paid, it’s almost impossible for you to get your money. You also need to decide when you plan on sending out your invoices for clients you work with regularly. I send new clients invoices immediately because I won’t start working on the project until the invoice is paid. For recurring work, I send my invoices out on the 30th of every month, or the last business day, whichever comes first. If you are willing to pay for the feature, PayPal allows you to set up recurring invoices, where you don’t have to manually send out an invoice every month, the client is charged automatically. I don’t do a high enough volume for that to be worth my time or expense. It works best for me to batch my regular invoices and send them all out the same time each month. But again, this is your business, and you need to decide what makes the most sense. You will never reach your first $10,000 if you don’t have a way to get paid, and you don’t insist on getting paid on time. Image by studiostoks and licensed through Deposit Photos. Set Your Policies Every business has a set of standard operating procedures or policies (SOPs). You are running a business now. You need to develop the standard operating procedures to guide your operations. These policies will help you make the right decisions for your business, even when you are under pressure. The purpose of having policies for your business is to save you from having to make the same decision hundreds of times and to make sure you are consistently providing your best work for your clients. 
Some areas you need to set policies for right away include rush work or emergency orders, revisions, refunds, and phone calls. Creating a policy ahead of time means that you will make a better decision when faced with a question from a potential client. Instead of having to decide on the spot how you want to handle something, you can clearly communicate your policy. Rush Work The beautiful thing about being a freelancer is you get to set your own deadlines with the client. I hate having work hanging over my head. I tend to turn things over relatively quickly. For small projects, my turnaround time is seven days. For longer projects like white papers or skyscraper posts, my turnaround time is between 14 and 30 days. If a client wants something faster than my standard turnaround time, then I consider that rush work. I charge a 100% premium for delivering something faster than normal. You don’t have to charge as extreme a premium for rush work as I do, but you need to charge some additional fee for rush work. It helps keep you sane, it can provide you a little extra profit, and it sets boundaries with clients. I hate rush work. Most of the time, I don’t even accept rush projects. The reason I charge the 100% premium when I do accept rush work is that it forces the client to reconsider their deadlines. It also helps compensate me for the trouble of upsetting my schedule to move that client’s project to the front of my queue. Revisions I hate revisions. They make me want to gouge my eyes out. Once I finish a project, I want to be done with it forever. Very few clients ask me about my revision policy. That probably has to do with the way that I find clients and the types of clients I tend to target. I don’t offer free revisions unless I have made a mistake. If a client asks me to write a blog post about microlending in Malaysia, but for some reason, I wrote a post about microlending in Asia, I will fix the mistake as quickly as possible. For blog posts, I will fix the error within 24 hours. This rarely happens because I am careful — but I’m not perfect. When the client hires me to write a post about microlending in Malaysia and then tries to change the scope of the project or the topic I am writing about once I have started writing, I let the client know they are going to have to pay me for the revisions. At this point in my career, this rarely happens. But, it used to happen frequently. By charging for revisions, you protect your time. You may lose a few clients over this type of policy — and that is a blessing. You don’t want clients who are so disorganized that they change their minds about the work they are hiring you for. If a client needs minor tweaks, and they request the changes in a professional manner, I always make the changes quickly and without any additional charge. Anytime a client is rude to me, I will not do any additional work on that project. I refuse to be treated with rudeness. Refunds What is your refund policy? My general principle is that I deserve to be paid for my work. If I delivered what I promised, I am not going to offer a refund. However, there is also something called the sunk cost fallacy. This is the trap of continuing to chase something simply because you have already invested so much into it. If a project starts to go sideways, I would rather cut my losses and issue a refund than continue to work on a project where I have negative feelings towards my clients. In the short-term, this means losing some money even though I have done some work.
However, in the long-term, it gives me more time to find clients that are a better fit, and it makes me happy to fire clients I don’t like. Phone Calls Phone calls are disruptive to your day because they interrupt you. A phone call takes you out of your writing workflow. It can take twenty minutes or more to get back into the right state of mind after an interruption. I am an introvert — I hate talking on the phone. I have a draconian phone call policy. I do not answer my phone when it rings unless a client has a scheduled call. I will not take client phone calls that are not scheduled. If a client or potential client calls me, the call goes to my voicemail. I return all my calls within one business day. But I am in control of when I will call them back. Often, instead of playing phone tag, I send an email and set an appointment for a call. I rarely have to talk on the phone. I handle 99% of my client communications through email. I do not like working with clients that need a lot of hand-holding or want to talk to me about each detail of the project. Most phone calls are a colossal waste of time. I’d much rather have the client put their thoughts into a few sentences in an email then spend even five minutes on the phone. Some people enjoy talking with clients. If that is you, wonderful! But be careful. You typically aren’t getting paid for the time you spend on a call. The more time you spend talking, the less time you have to make money through your writing. Another issue with phone calls is that they often end up with you providing free consulting services. Lots of clients are going to try and get free consulting work out of you. Some will do it quite consciously, but the vast majority are just innocently asking questions. You need to decide what your policy is about the kind of questions you answer for free. At the beginning of a relationship with a client, when they’re still trying to decide if I am a good fit, I am happy to spend as much as an hour on the phone discussing their project and making marketing recommendations. I know a lot more than many small business owners do about marketing, and I’m happy to provide prospects some extra value. I don’t charge for that because we are still outlining the project, and I also want to make sure they are poised to use my work in the most effective way possible. But, after we have an agreement about the services I am providing, I am not willing to provide free consulting services. If the client asks, I will explain that they need marketing consulting services and send them a quote. I either make some extra money, or the client turns somewhere else to get the help they need. If you handle this situation professionally, you will not damage your relationship with the client. Having policies for these common client issues will save you an immense amount of time and stress. It will also help you reach your financial goals faster because you can spend more time doing the work you’re paid for, and less time dealing with client issues. Getting Paying Clients Image by studiostoks and licensed through Deposit Photos. Now that you’ve done all your preparatory work, you are ready to hit the ground running and start making money. The next thing you need to make your first $10,000 is a few paying clients. It would be nice if all you had to do to get new clients was to create a website. However, that isn’t how it works. The internet is a crowded place. If you want to be successful, you have to let potential clients know about your services. 
Simply launching a website is not enough to attract paying clients. You have to hustle to get your first set of clients. The three best ways to get paying clients as a new freelancer are: 1. Sending cold emails 2. Freelancer job boards 3. Freelancer marketplaces You may be tempted to dabble with content mills. These are websites where you get paid pennies per word for assignments that the platform sends you. Content mills are awful places to work. The entire platform is set up to extract as much value from you as cheaply as possible. You may never earn your first $10,000 if you rely on content mills. Instead, you need to focus on maximizing your value. The best return on your time investment when it comes to finding new freelance writing clients is sending out cold emails. Cold email is a pure numbers game. The more cold emails you send out, the more clients you will land. If you have never sent a cold email before, plan on only getting a 1% conversion rate. This means that only one out of every 100 cold emails is going to land you a client. That first client is probably only going to pay for one blog post. However, once you get the hang of writing and sending cold emails, you will never have a prolonged period without paying clients. The keys to sending an effective cold email are: · Identify the right target market. · Make sure that you’re sending your cold email from a proper email address. Do not use a Gmail or Hotmail address. You will end up getting banned. · Have a catchy subject line and a very brief message. Your message should only be three or four sentences. · Explain what you do and how you can help them, include a link to your portfolio, and invite them to click that link. You will also need a way to find potential clients to email. You can find them through LinkedIn, a Google search, or by paying for a list of prospects. It can be daunting to sit down and write 100 cold emails, and it can be tempting to just cut and paste the same content into each message. You will get better results if you personalize each message. The basic content can be the same, but you should make each company feel like you are only messaging them. You don’t have to send out 100 emails a day. But, if you don’t have any clients, you should be spending most of your available time every day looking to build your client base. Start out sending five cold emails a day. Once you have mastered that, start sending 10 emails a day. Try to work up to sending 20 cold emails a day so that you’re sending out 100 a week until you build up your client base. Once you have client work to do, you can scale back on the number of messages you are sending out, but you should still send a few each week to avoid a future drought. Another way to find work quickly is through job boards. Several job boards cater specifically to freelance writers. The rates that you will make on jobs you find on freelance job boards are often less than what you can charge when you directly contact a client through an email. When you are just getting started, you need a few quick wins. Landing a few clients from a job board will give you confidence, more portfolio samples, and money in your pocket. Each listing will have very specific instructions for applying for the gig. If you do not follow the instructions, your application is going to be deleted. Keep in mind that most of the job boards are updated each morning. If you are interested in getting a job through a job board, you need to be applying every morning.
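Cold email and job board applications are both numbers games, so it can help to see the arithmetic laid out before you get discouraged. Here is a tiny, hypothetical Python sketch using the roughly 1% conversion rate mentioned above; the function and the sample volumes are illustrative assumptions, and your real numbers will vary.

def expected_clients(messages_per_week, weeks, conversion_rate=0.01):
    # Rough expected number of new clients from outreach at a given conversion rate.
    return messages_per_week * weeks * conversion_rate

print(expected_clients(25, 4))    # 5 messages a day, 5 days a week, for a month: roughly 1 client
print(expected_clients(100, 4))   # 20 messages a day: roughly 4 clients a month

Seen this way, a quiet first couple of weeks is not a sign that you are failing; it usually just means the volume is not there yet.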
While getting paying work through a job board is also a numbers game, you only want to apply for the jobs that you feel you are a good fit for. Most of the job applications are like a cold email. Keep your responses brief, but make sure you include everything that the poster has requested. I have had clients that I landed from job boards that I kept for years. Some of them paid extremely well. I have also had a lot of clients where I did one job and knew that I would never want to work for them again. When it comes to applying for work on job boards, I recommend only using job boards where a client has to pay to place their listing. These tend to be less scammy than places where anyone can post anything for free. Freelancer marketplaces can be another great source of paying clients when you are a new freelancer, still chasing your first $10,000. There are hundreds of different freelance marketplaces. I have tried a lot of them. I disliked working on UpWork and other sites where you bid against other freelancers for gigs. For me, these sites felt like a race to the bottom. Bidding for jobs on marketplaces drives down rates, and a lot of the best clients are not on those types of platforms. However, some freelancers have had great success on these kinds of platforms. If you don’t have any clients, you need to be doing everything you can to find clients. That might mean creating profiles on one or two freelancer marketplaces. My favorite freelancer marketplace is Fiverr. I’ve been on Fiverr for six years. I’m a Fiverr Pro, and a top-rated seller. Today, the platform accounts for a much smaller portion of my income. But, in years past I have made good money from the clients that found me on Fiverr. Fiverr is different than other marketplaces because you’re setting your own prices and not bidding for job listings. It’s more productized. Clients either buy your services or move onto a different seller. You’re not bidding against other freelancers. It does take some time to make decent money on Fiverr. It is incredibly competitive. The platform also takes a 20% commission on the fess you earn. Many sellers are making excellent money every month on the platform. Fiverr has even produced at least one million-dollar writer. It is going to be difficult to make $10,000 through exclusively freelancing marketplaces. But, these platforms can be a part of your strategy. After you have more experience and connections, you can move away from the platforms. Now that you are a professional writer, it is up to you to go out and find clients. Over time you can work on strategies to get clients to come to you. But, right now, nobody has ever heard of you. You have to do the work of finding clients. Once you start earning money as a professional writer, you will be hooked, and landing clients will get easier. Image by studiostoks and licensed through Deposit Photos. Structuring Your Day The best part about being a freelancer is that you are in charge of your schedule. You start your day when you’re damn well ready. The hardest part about being a freelancer is that you are in charge of your schedule. It’s easy to procrastinate. There is no boss to please, and there’s nobody looking over your shoulder. When you don’t have any client work to do, it’s easy to get lost on social media, binge-watching something on Netflix, or playing a video game instead of building your business. If you’re going to be successful, you need to impose some structure on your day. 
That doesn’t mean you have to get up at five o’clock in the morning. It doesn’t mean that you have to get all your work done at one time. But you need some type of rhythm so that your mind and body know when it’s time for you to get to work. As a freelancer trying to make your first $10,000, there will be four primary parts of your day: 1. Prospecting 2. Marketing 3. Client Work 4. Administrative Tasks Because this is your life and your business, you need to decide in what order you will tackle these tasks. Keep in mind that there is a mental cost to switching tasks. That means that when you go from working on one type of project to something completely different, it takes your mind a while to fully focus on the new kind of work. The most efficient way to get through your day is to batch similar tasks. This also keeps you from having to open different software or websites; you’ll have everything that you need to do a specific set of tasks at your disposal. I like to put in breaks to separate the different parts of my workday. It helps to reset my mind and gets me focused on what I need to be working on. During these breaks, I usually get up and walk around. This keeps me from getting bored, and it also helps to prevent me from dying early. When possible, I will go for one or two walks a day, walking two to three miles each time. This exercise also gives me time to brainstorm ideas for my business. What should you be doing during each part of your day? Here are a few examples. Prospecting Prospecting is the work of staying in contact with potential clients. These are people that you have already had at least some contact with. Prospecting includes following up with people you have sent emails to and responding to people that emailed you back. It also includes following up on applications that you sent through a job board. You are trying to convert those prospects into paying clients. On your very first day as a professional freelance writer, you may not have to do any prospecting because you may not have created any prospects yet. But by your second week, you should have made some marketing efforts that you will need to follow up on. I do not like to bother people. I also hate it when people are aggressive when following up with me. My rule of thumb is to give potential clients a week to respond to a cold email before I follow up. With job board applications, I wait three days before sending a message. If you do not follow up with potential clients, you will leave a lot of money on the table. It’s easy to feel like you were rejected when you don’t hear back from someone who at one point expressed interest in you, or when you send a cold email out and don’t hear anything back. But the truth is, people are busy. Email is also complicated. There’s a decent chance your email ended up in a spam folder. I have found throughout my career that prospects generally appreciate it when I politely follow up. Once I have been rejected, or someone has asked me not to contact them again, I put them on my do not contact list and leave them alone. I then focus my efforts on people who do want to work with me. Marketing Marketing is the part of your day where you try to make new contacts. Prospecting should be a higher priority in your schedule than marketing because those people are closer to paying you money. That doesn’t mean you have to do prospecting before marketing in your day.
But when you only have a finite amount of time, you need to spend most of your time on activities that will bring you money the fastest. However, you cannot ignore marketing. If you aren’t doing any marketing, you will soon run out of prospects to follow up with. You have a lot of options when it comes to marketing your services. In the beginning, you need quick wins. You need to get to $10,000 as fast as possible. This means you should focus your efforts on activities like applying to job boards and sending out cold emails. Later on, you will want to add marketing tasks that will benefit you over the long-term. This includes things like content marketing and email marketing. You want to spend some time marketing every day. If you market your services every day, you will avoid dry spells. You will have a constant flow of prospects. Spend at least 15 minutes a day doing marketing tasks. At first, all of that time will be spent sending out cold emails or completing job applications on job boards. As your business matures, you will want to add content marketing tasks to your daily routine. Client Work You can’t get paid unless you do the work for the clients. I like to batch my client work. I try to spend my mornings focused on client work. This helps me control my anxiety about deadlines. It also allows me to take a long lunch break before I tackle prospecting, marketing, and administrative tasks. When my kids are at home and I am balancing distance learning, I often can’t get started on client work until after lunch. When that happens, I try to work on administrative tasks in between troubleshooting my children’s technology issues and helping them with schoolwork. I find that when I let the client work pile up, it stresses me out, and I feel overwhelmed. The quality of my work tends to suffer. While I am capable of spending eight hours a day just doing client work, I am happier when I only have to do two to three hours of client work in a day. Doing a few hours of client work a day also makes it easy for me to meet deadlines. Administrative Tasks You are running a business now. That means you need to take care of several administrative tasks. These tasks may include paying bills, sending invoices, finding a bookkeeper, or tweaking your website. When it comes to the administrative part of my day, I like to combine my personal and business tasks. I will pay all my bills at the same time. If I have to run errands, I do them during my admin time. There are a million admin tasks that need to be done in your business and your home. Many of them are not that important. But if you keep neglecting them, over time, it can erode your ability to build a steady freelance writing business. Since you are at the beginning of your journey, and you’re focused on making your first $10,000, give yourself the gift of a good foundation and spend 15 minutes every day doing administrative tasks. This keeps things from piling up and will help you feel better about the growth of your business. Over the past eight years, I’ve also been the primary caretaker of our four children. I’m a work from home dad, and this fact has influenced my schedule. If you don’t have any children or you have a spouse that is the primary caretaker of your children, then you may have a completely different plan for your day. There is no magic routine that will make you a rich freelancer. You can structure your day in a way that makes sense for you. You can change your routine any time you want. 
What is important though, is that you do have some structure for your day. You should have regular business hours because you are a regular business. You will be more productive, and you will also make more money if you are committed to a certain number of work hours a day. Your business hours don’t have to be traditional business hours. However, if you are going to have client contact, you need to have some overlap between your schedule and what everyone else considers normal business hours. I make a full-time income working on client-specific work for only two to three hours every day. I rarely work more than four hours total on my business, five days a week. I do not do any client work on weekends. If you set your rates high enough and learn to do the work quickly enough, you can earn a full-time income with only part-time hours. Freelance writing can provide you with much more freedom than almost any other kind of job. Image by studiostoks and licensed through Deposit Photos. Troubleshooting Marketing Issues What happens if you do everything you can, and you can’t get any clients? Don’t panic. It happens. I’ve been there. Every freelancer has been there. You need to remember two things. One, you have to be patient. It takes time to get a new business off the ground. You will not be an overnight success. Two, if you keep doing the same things, you will keep getting the same results. If you have been trying for two weeks to land a client, and you have not succeeded, you need to evaluate where the problem is. Are you getting responses to your cold emails, but not getting any to hire you? Are you not getting any response to your cold emails or job board applications? Once you identify the place where things are breaking down, you need to see what you can change. If you are not getting any response to job board applications or cold emails, you need to look at two things. One, are you sending out enough inquiries? Cold email often has a low conversion rate of 1%. If you have only sent out ten cold emails, the problem is likely you need to send out a lot more emails. The same is true for job applications. There are often hundreds of applicants for each job posting. If you are only applying for one a day, your odds aren’t very good. If you are doing enough work, but you still are not getting any response, you need to doublecheck your emails and applications for issues like spelling mistakes, typos, or awkward sentences. All of these can turn off potential clients. You should examine your cold email for possible spam filter triggers. See if your offer is clear. Ask yourself if your email is brief enough. You may want to ask a friend to look over your cold email template or job application template for issues. If you are getting responses but are failing to close the deal, you need to adjust how you respond to people. Why are you not getting hired? One simple way to diagnose the problem is to ask prospects why they decided to go with a different freelancer. Be nice, and explain that you want to improve. Do not attack them for not hiring you. You may find out your prices are not right or that your samples are not the right fit. One common issue new freelancers have is that they lack the confidence they need to land a gig. You may just need more practice talking with prospects. Image by vectorlab and licensed through Deposit Photos. While not getting client work is frustrating, it is part of the process of building a writing career. You have to get used to rejection. 
You also have to learn how to iterate and try again. If you are willing to learn from your mistakes and you keep prospecting, you will land work. Rinse, Repeat, and Improve Once you start landing clients, it’s going to become addictive. You’re going to be so excited that you will catch yourself looking at your bank account in shock that you made all that money by writing online. As you start to meet your income goals, celebrate your small successes. Once you reach your $10,000 goal, throw yourself a bigger celebration. Have an expensive dinner, or treat yourself to something expensive. You deserve it. You have proven that your business is viable. The first $10,000 is much harder to earn than the next $10,000. Image by studiostoks and licensed through Deposit Photos. After you treat yourself, it’s time to get back to work. You have to keep doing all the things that have made you successful so far. Market yourself every week, with a combination of short-term and long-term tactics. You need to set your next goal. Maybe your new goal is to make $10,000 from your website, and not from freelancer websites. Another step you need to take is to raise your rates. You now have a decent portfolio. You have also improved as a writer. Quote a higher rate to all of your new prospects. The biggest mistake I made in the early part of my career was I kept my rates low for far too long. There are eager clients at every pay rate imaginable. As you develop better skills, you deserve to be paid more. Some freelancers are scared to raise their rates because they think they will lose clients. The truth is, you will lose clients. But that’s okay because then it makes room for you to find more clients who are ready to pay you what you deserve. You are not going to have the same clients for your entire career. As your business matures, your business is going to outgrow some clients. Some clients will accept your rate increases, and some will not. But you owe it to yourself to regularly raise your rates. The other thing that you need to do to continue to flourish as a freelance writer is to experiment. When working on earning your first $10,000, you will probably mostly write blog posts. There is a ton of blog post work out there, and it is simple. But as you start earning more, and as you learn more about marketing and copywriting, you should try writing other kinds of projects. Branch out and write emails, explainer video scripts, white papers, or anything else that interests you. You may find that there are different types of content that you enjoy writing more than blog posts. You may find work that is more profitable for you. My most profitable projects are emails and explainer video scripts. They tend to be shorter than a lot of the other web copy projects that I take on, but I can charge more for them because they have a higher perceived value. For me, they are also fun to write. This is your business. It doesn’t have to follow anybody else’s rules. You will succeed much faster, and you will avoid burn out if you enjoy the work. So, experiment. Find the type of work and the type of client that makes you happy. If you try something new and you hate it, you know that you should turn down that type of project in the future. There’s no downside to experimenting as a freelance writer. You’ve got this. You now know everything you need to know to make your first $10,000. Go ahead and get to it!
https://medium.com/escape-motivation/how-to-make-your-first-10-000-as-a-freelance-writer-64df1c217dac
['Jason Mcbride']
2020-08-12 20:28:29.557000+00:00
['Marketing', 'Writing', 'Business', 'Make Money Online', 'Freelancing']
Why My Transition to Veganism Was Harder the Second Time
Why My Transition to Veganism Was Harder the Second Time One doctor says cheese is addicting — and I agree. Photo by Elisa Michelet on Unsplash Do you know what’s harder than going vegan? Re-starting a vegan diet after years away from it. My roommate and I are doing a vegan November. Or “Veganember” as I’ve been calling it. My roommate has resisted this term; she’s no fun. Anyway, we have both been feeling a little unwell lately and thought doing a month of a vegan diet would help us reset a bit. Honestly, it sounded pretty easy. I’ve done this before. I was a vegetarian for over twenty years, and I went in and out of veganism multiple times over those years. Inevitably it was the siren song of cheese that pulled me back to straight vegetarianism. Ten years ago I abandoned my vegetarian diet. I was doing a lot of running, participating in at least one marathon or half marathon every month, and I got pretty anemic. No matter what I ate or what type of iron supplement I took, I just couldn’t get my ferritin levels up. I started eating meat again, not excessively, but a few times a week. Fast forward to now, and I haven’t been running since an injury sidelined me several years ago. The more sedentary lifestyle packed on the pounds I had lost when I was running, and working at home made it easier to snack during the day. My snack of choice? Cheese. Cheese sticks. Cheese and crackers. Cream cheese on a bagel. Really, any cheese would do. And possibly some ice cream for dessert. Ah, ice cream, my second favorite dairy product. My roommate has always been vegetarian, but like me, she had developed the habit of eating a lot of cheese. We were at the point where we were buying those giant Costco bags of cheese and moving through them quickly. Even though we both eat a lot of vegetables, clearly it wasn’t enough to cancel out the daily infusions of cheese. Neither of us felt particularly good. Something needed to change. When my roommate suggested we go vegan at least for the month of November, my first thought was, “This should be easy.” After all, we’ve done this before. Many times. “Plus, fake cheese is so much better than it used to be,” I added confidently. The last time I was eating a vegan diet, the only fake cheese was slimy soy cheese products that left a weird chemical aftertaste. Now, there’s a plethora of really good fake cheeses. They still aren’t completely like “real” cheese, but offerings from companies like VioLife, Miyoko’s and Kite Hill are pretty darn good. Fake cheese has come a long way. November came, and we replaced our Greek yogurt with coconut yogurt. We replaced our cream cheese with Kite Hill. We replaced our butter with Miyoko’s. We put VioLife slices on our sandwiches. We bought Coconut Bliss ice cream. It’s all good. Super expensive, but good. Veganism is not cheap if you’re committed to eating vegan variations of the food you love. The fake dairy products are good, but they’re still not dairy. They’re tasty, but they are just not the same. Within a few days of eliminating dairy from my diet, I was craving dairy like I used to crave a cigarette when I quit smoking years ago. I wondered why I was feeling so deprived when I had perfectly suitable replacement products to eat. And why did it feel so much harder to go vegan than it did in the past? Some people argue that dairy products can be addictive, the same as caffeine or sugar. I’m one of those people. I’ll admit it: I am a cheese addict. I decided to do some research to see if others had experienced this.
I came across an article titled “Cheese is So Addictive, One Doctor Calls It Dairy Crack.” The article discusses a book called “The Cheese Trap” by Dr. Neal Barnard. Barnard isn’t some fringe doctor; he’s a professor of medicine at George Washington School of Medicine and the founder of the Physicians Committee for Responsible Medicine. Barnard argues that when you eat cheese, the proteins in the cheese act as mild opiates. These proteins attach to the same parts of the brain that respond to narcotics, producing dopamine — also known as “the feel-good neurotransmitter”. “Cheese,” Barnard said, “is not just tasty. It actually contains concentrated opiates, along with salt and grease, that tend to keep us hooked.” As I discovered when I went back to veganism this November, eliminating cheese from your diet can actually trigger withdrawals. The more cheese you were eating, the worse the withdrawals. What’s the problem with cheese? Some studies estimate that the average American eats 35 pounds of cheese every year. The top cheese? Mozzarella. Clearly that’s a lot of cheese, and pizza is a big culprit. Cheese is high in calories, sodium, saturated fat, and cholesterol, which doesn’t do any of us any favors when it comes to our health. In “The Cheese Trap” Barnard references studies showing that men who eat high quantities of cheese tend to have lower sperm counts, and for both genders there is a correlation between cheese consumption and diseases like Alzheimer’s, atherosclerosis, and diabetes. The famous “China Study” also linked cheese to a higher prevalence of many types of cancer. Cheese isn’t just bad for humans. The production of cheese is big business. Cows are artificially inseminated over and over again to keep them producing milk for the dairy industry. Multiple investigations by People for the Ethical Treatment of Animals (PETA) and other animal rights groups have found dairy cows being held in filthy and inhumane conditions. You never see that in the happy cow advertisements. What happens to your body when you give up cheese? Every body is different, but eliminating dairy cheese from your diet can have these benefits: · Reduced bloating · Clearer skin · Increased energy · Better immunity · Reduced constipation · Lower cholesterol · Lower sodium So, what’s a cheese addict to do? If you’re ready to remove cheese from your diet, the first thing to consider is weaning yourself from your favorite dairy product. If you go “cold turkey” like I did, you are likely to experience more withdrawal symptoms. You may also go through a detox period where you notice breakouts or gastrointestinal distress. But stick with it, and you’ll likely start to feel better in a week or two. Will I keep on being vegan after Veganember ends? Mostly. For me, I always do better mentally if no food is “forbidden” from my diet. I’ll definitely do my annual shipment of deep-dish cheese pizza from my favorite pizza place in Chicago for my birthday, but overall, I’m going to stay off the cheese as much as I can. Next to tackle? Sugar. Ugh.
https://medium.com/in-fitness-and-in-health/why-my-transition-to-veganism-was-harder-the-second-time-1b3caa21592e
['Rose Bak']
2020-12-14 15:35:53.145000+00:00
['Animal Rights', 'Vegan', 'Health', 'Addiction', 'Food']
How I Make an Extra $1,344 Passive Income a Month Online —Even Though I Started Out Clueless
Step 1: How to make your passive income product I mentioned in the introduction of this post that in order to create our passive income, we're going to need to create a passive income product. That passive income product is going to be a digital product. More specifically, that product is going to be a downloadable ebook. Just to prove to you that we're really doing this and to prove the income, see figure 1.1. Fig 1.1: Selling digital products at an average order value of $17.23 You can see that there were two digital products being sold (blue and yellow). You can also note that the sales really started to pick up in the second half of the month. This wasn't due to the system not working. It only had to do with technical problems regarding the passive sale that were out of my control. We could reasonably assume close to double the amount of sales otherwise. I don't share all of this to throw it in your face. In fact, I share all of this to let you know what's possible. I'm just a guy from the Midwest who experimented — a lot that didn't work and some that did. Again, what helped generate this income is a passively selling ebook. This article isn't going to go over the intricacies of how exactly to build the ebook — it would make for too long an article, and I know, statistically speaking, I don't have that long to keep you engaged with the content I already want to share. You can either create one yourself using Google Slides and downloading it as a PDF, or have a designer help you on Fiverr or Upwork. However, it is important to create an ebook that your audience actually wants. Furthermore, it's important to create an ebook that actually solves a problem for your audience. Too often I see online creators creating products for themselves (they just don't know it). They believe that because they created it, an audience will surely want it. Do you notice in figure 1.1 the conversion rate? Almost 50% of all the people who make it to the checkout end up purchasing the product. This far outpaces the e-commerce industry average of 3%. This is the result of years of testing (testing that is still ongoing) to develop a product the audience actually wants (and a solution to their problem). What are some examples of this? Say, for instance, you're in the dating and relationships niche. You want to help couples struggling with conflict resolution. You could create a problem-solving ebook helping couples work through issues. This, of course, would be packed with verified studies and real-world examples. Once you have your ebook, you need to host it via an online storefront. To do this, you can use free platforms like WooCommerce. You could also use new utilities like ConvertKit Commerce (if you already use ConvertKit as your email service provider — which you'll need). For my purposes, I use SendOwl and am transitioning to ThriveCart (both paid) because they offer more. None of these links are affiliate links, so click away! You can follow the platform's FAQs and setup wizards in order to figure out how to upload your digital product. From there, you're going to need to drive traffic to the product that you created. That's what we're going to take a look at next.
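As a rough sanity check on the numbers in figure 1.1, here is a short Python sketch showing why the checkout conversion rate matters so much. The income target, average order value, and conversion rates are simply the figures quoted in this article, used here as assumptions.

monthly_goal = 1344        # target passive income per month, in dollars
avg_order_value = 17.23    # average order value from figure 1.1

orders_needed = monthly_goal / avg_order_value
print(round(orders_needed))           # ~78 orders a month

# Checkout visitors needed at the ~50% conversion rate above vs. the ~3% industry average
print(round(orders_needed / 0.50))    # ~156 visitors
print(round(orders_needed / 0.03))    # ~2,600 visitors

In other words, a product the audience actually wants doesn't just lift sales at the margin; it cuts the traffic you have to drive by more than an order of magnitude.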
https://medium.com/the-ascent/how-i-make-an-extra-1-344-passive-income-a-month-online-even-though-i-started-out-clueless-bd45dd55ce6f
['Jon Brosio']
2020-09-15 16:01:02.058000+00:00
['Money', 'Digital Marketing', 'Writing', 'Business', 'Entrepreneurship']
COVID19, Influenza and The Role of Vitamins
Why do Doctors sometimes prescribe vitamins? A recently published study confirms a commonly held belief by the medical profession. There is absolutely no substitute for a well-balanced diet, which is the ideal source of the vitamins and minerals we need. In brief, the study found the following. Our bodies prefer naturally occurring sources of vitamins and minerals. We absorb these better. And because commercially available vitamins, minerals, herbs, etc. are lumped together as “supplements,” the FDA doesn’t regulate them. When we ingest processed, concentrated, and artificially packaged “supplements,” we may be doing ourselves harm. They may be toxic, ineffective, or contaminated (all of which are not uncommon). Doctors may prescribe a multivitamin product to treat or prevent vitamin deficiency due to poor diet, certain illnesses, or pregnancy. There are medical conditions that put people at high risk for certain nutritional deficiencies, and conditions that can be treated with certain nutritional supplements. This is important, and is why the authors of this study support targeted supplementation. In pregnant women, folic acid is especially important for healthy fetal development, and a deficiency can cause spina bifida, a neurologic condition. As pregnancy advances, mothers will benefit from a prenatal vitamin (either by prescription or a well-vetted over-the-counter one), which contains things like iron and calcium. People who have had weight-loss surgery may require a number of supplements including A, D, E, K, and B vitamins, iron, calcium, zinc, copper, and magnesium, among other things. People with inflammatory bowel disease (like Crohn’s or ulcerative colitis) may have similar requirements. People who have or are at risk for osteoporosis may greatly benefit from vitamin D and, depending on the quality of their diet and other factors, possibly also calcium supplements. The takeaway from this is that OTC vitamins and multivitamins have their place. Don’t self-administer anything without first consulting your doctor, and if you’re living a healthy lifestyle, odds are YOU DON’T NEED VITAMINS. Importantly, watch out for products that are cheap and not properly registered. Stay with recognised brands and purchase from a reputable pharmacist. Do not buy products online. A Comprehensive Guide to Fat Soluble Vitamins Here’s what you need to know about all fat soluble vitamins. The information below has been provided in large part by Colorado State University. Vitamin A – Retinol Vitamin A, what is it and what does it do? Vitamin A is also commonly called retinol. It has many functions in the body, including helping the eyes adjust to light changes, bone growth, tooth development, reproduction, cell division, gene expression, and regulation of the immune system. The skin, eyes, and mucous membranes of the mouth, nose, throat and lungs depend on vitamin A to remain moist. Vitamin A is also an important antioxidant that may play a role in the prevention of certain cancers. No clinical evidence yet supports this conclusively. Vitamin A Sources The retinol, retinal, and retinoic acid forms of vitamin A are supplied primarily by foods of animal origin such as dairy products and fish liver. Some foods of plant origin contain the antioxidant beta-carotene, which the body converts to vitamin A. Beta-carotene comes from fruits and vegetables, especially those that are orange or dark green in color. All the following are rich in beta-carotene:
carrots, pumpkin, winter squash, dark green leafy vegetables, and apricots.
How much Vitamin A do our bodies need?
The Recommended Dietary Allowance (RDA) for vitamin A is 900 mcg/day for adult males and 700 mcg/day for adult females.
Who would require supplemental vitamin A?
Studies indicate that vitamin A requirements may be increased due to hyperthyroidism, fever, infection, cold, and exposure to excessive amounts of sunlight. Heavy drinkers (alcohol) or people with renal disease should also increase their intake of vitamin A.
Why might my body be deficient in vitamin A?
If you eat a normal and varied diet, it is highly unlikely you need to take supplemental vitamin A. Deficiencies are normally restricted to severely malnourished individuals. Night blindness and very dry, rough skin may indicate a lack of vitamin A. Other signs of possible vitamin A deficiency include decreased resistance to infections, faulty tooth development, and slower bone growth.
Signs of toxicity from vitamin A
In the USA, vitamin A toxicity is more common than deficiency. The vitamin is fat soluble and builds up over time, and some multivitamin supplements contain high doses of vitamin A. Retinol is the form of vitamin A that causes the greatest concern for toxicity. If you take a multivitamin, check the label to be sure the majority of vitamin A provided is in the form of beta-carotene, which appears to be safe. Some medications used to treat acne, psoriasis, and other skin conditions contain compounds that mimic retinol in the body. Much like excessive intake of dietary retinol, these medications have been shown to negatively impact bone health and result in delayed growth in children and teens. Symptoms of vitamin A toxicity include dry, itchy skin, headache, nausea, and loss of appetite. Signs of severe overuse over a short period of time include dizziness, blurred vision, and slowed growth. Vitamin A toxicity can also cause severe birth defects and may increase the risk for bone loss and hip fractures.
Vitamin D
Vitamin D, what is it and what does it do? Vitamin D plays a critical role in our body’s use of calcium and phosphorus. It works by increasing the amount of calcium absorbed from the small intestine, helping to form and maintain bones. Vitamin D also benefits us by playing a role in immunity and controlling cell growth, and it may protect against osteoporosis, high blood pressure, cancer, and other diseases. Children need adequate amounts of vitamin D to develop strong bones and healthy teeth.
Vitamin D Sources
The primary food sources of vitamin D are milk and other dairy products fortified with vitamin D, oily fish (e.g., herring, salmon and sardines), and cod liver oil. In addition to the vitamin D provided by food, we obtain vitamin D through our skin, which produces vitamin D in response to sunlight.
How much Vitamin D do our bodies need?
In the absence of adequate sun exposure, at least 800 to 1,000 IU of vitamin D3 may be needed to reach the circulating level required to maximize vitamin D’s benefits. From 12 months to age fifty, the RDA is set at 15 mcg, which is the recommendation for maintenance of healthy bones in adults. Exposure to ultraviolet light is necessary for the body to produce the active form of vitamin D. Ten to fifteen minutes of sunlight without sunscreen on the hands, arms and face, twice a week, is sufficient to receive enough vitamin D. This can easily be obtained in the time spent riding a bike to work or taking a short walk with arms and legs exposed.
As long as you get into the sun every now and again, you don’t need any supplemental vitamin D.
Who may require supplemental vitamin D?
These populations may require extra vitamin D in the form of supplements or fortified foods:
Exclusively breast-fed infants: Human milk only provides 25 IU of vitamin D per liter. All breast-fed and partially breast-fed infants should be given a vitamin D supplement of 400 IU/day.
Dark skin: Those with darkly pigmented skin synthesize less vitamin D upon exposure to sunlight compared to those with lightly pigmented skin.
Elderly: This population has a reduced ability to synthesize vitamin D upon exposure to sunlight, and is also more likely to stay indoors and wear sunscreen, which blocks vitamin D synthesis.
Covered and protected skin: Those who cover all of their skin with clothing while outside, and those who wear sunscreen with an SPF factor of 8, block most of the synthesis of vitamin D from sunlight.
Disease: Fat malabsorption syndromes, inflammatory bowel disease (IBD), and obesity are all known to result in a decreased ability to absorb and/or use vitamin D in fat stores.
Signs of Vitamin D deficiency
Symptoms of vitamin D deficiency in growing children include rickets (long, soft, bowed legs) and flattening of the back of the skull. Vitamin D deficiency in adults may result in osteomalacia (muscle and bone weakness), osteoporosis (loss of bone mass), and an increased risk of common cancers, autoimmune diseases, hypertension, and infectious disease.
Why might my body be deficient in vitamin D?
Research shows that vitamin D insufficiency affects almost 50% of the population worldwide, an estimated 1 billion people. The rising rate of deficiency has been linked to a reduction in outdoor activity and an increase in the use of sunscreen among children and adults. Those who live in inner cities, wear clothing that covers most of the skin, or live in northern climates where little sun is seen in the winter are also prone to vitamin D deficiency. Since most foods have very low vitamin D levels (unless they are enriched), a deficiency may be more likely to develop without adequate exposure to sunlight. Adding fortified foods such as milk to the diet, and for adults including a supplement, is effective at ensuring adequate vitamin D intake and preventing low vitamin D levels.
Signs of toxicity from vitamin D
High doses of vitamin D supplements coupled with large amounts of fortified foods may cause accumulations in the liver and produce signs of poisoning. Signs of vitamin D toxicity include excess calcium in the blood, slowed mental and physical growth, decreased appetite, and nausea and vomiting. It is especially important that infants and young children do not consume excess amounts of vitamin D regularly, due to their small body size.
Vitamin E: Tocopherol
Vitamin E, what is it and what does it do? Vitamin E benefits the body by acting as an antioxidant and protecting vitamins A and C, red blood cells, and essential fatty acids from destruction. Older, faulty research suggested that taking antioxidant supplements, vitamin E in particular, might help prevent heart disease and cancer. Newer findings indicate that people who take antioxidant and vitamin E supplements are not better protected against heart disease and cancer than non-supplement users. Many studies do show a link between regularly eating an antioxidant-rich diet full of fruits and vegetables and a lower risk for heart disease, cancer, Alzheimer’s disease, and several other diseases.
In short, research shows that to receive the full benefits of antioxidants and phytonutrients in our diet, we need to consume these compounds in the form of fruits, vegetables, nuts, and seeds. Swallowing a pill or supplement does not provide the same benefits.
Vitamin E Sources
About 60 percent of the vitamin E in our diet comes from vegetable oil (soybean, corn, cottonseed, and safflower); this also includes products made with vegetable oil (margarine and salad dressing). Other sources are fruits and vegetables, grains, nuts (almonds and hazelnuts), seeds (sunflower), and fortified cereals.
How much Vitamin E do our bodies need?
The Recommended Dietary Allowance (RDA) for vitamin E is based on the most active and usable form, called alpha-tocopherol. Food and supplement labels list alpha-tocopherol in international units (IU), not in milligrams (mg). One milligram of alpha-tocopherol equals 1.5 international units (IU). RDA guidelines state that males and females over the age of 14 should receive 15 mg (22.5 IU) of alpha-tocopherol per day. Consuming vitamin E in excess of the RDA does not result in any added benefits.
Signs of Vitamin E deficiency
Vitamin E deficiency is rare. Cases of vitamin E deficiency usually only occur in premature infants and in those unable to absorb fats. Since vegetable oils are good sources of vitamin E, people who excessively reduce their total dietary fat may not get enough vitamin E.
Signs of toxicity from vitamin E
There are no noted signs of toxicity; however, it’s important to note the following: megadoses of supplemental vitamin E may pose a hazard to people taking blood-thinning medications such as Coumadin (also known as warfarin) and those on statin drugs.
Vitamin K
What is Vitamin K? Vitamin K is naturally produced by the bacteria in the intestines. It plays an essential role in normal blood clotting, promoting bone health, and helping to produce proteins for blood, bones, and kidneys.
Vitamin K Sources
Good food sources of vitamin K are green leafy vegetables such as turnip greens, spinach, cauliflower, cabbage and broccoli, and certain vegetable oils including soybean oil, cottonseed oil, canola oil and olive oil. Animal foods, in general, contain limited amounts of vitamin K.
How much Vitamin K do our bodies need?
To help ensure people receive sufficient amounts of vitamin K, an Adequate Intake (AI) has been established for each age group. Please refer to Table 1 in the original material.
What are the dangers of insufficient vitamin K?
Without sufficient amounts of vitamin K, haemorrhaging (bleeding) can occur.
Why might my body be deficient in vitamin K?
Vitamin K deficiency may appear in infants or in people who take anticoagulants, such as Coumadin (warfarin), or antibiotic drugs. Newborn babies lack the intestinal bacteria to produce vitamin K and need a supplement for the first week. Those on anticoagulant drugs (blood thinners) may become vitamin K deficient, but should not change their vitamin K intake without consulting a physician. People taking antibiotics may lack vitamin K temporarily because intestinal bacteria are sometimes killed as a result of long-term use of antibiotics. People with chronic diarrhea may have problems absorbing sufficient amounts of vitamin K through the intestine and should consult their physician to determine if supplementation is necessary.
Signs of toxicity from vitamin K
Although no Tolerable Upper Intake Level (UL) has been established for vitamin K, excessive amounts can lead to the breakdown of red blood cells and liver damage. People taking blood-thinning drugs or anticoagulants should moderate their intake of foods with vitamin K, because excess vitamin K can alter blood clotting times. Large doses of vitamin K are not advised.
A Comprehensive Guide to Water-Soluble Vitamins
Here’s what you need to know about all the water-soluble vitamins. The information below has been provided in large part by Colorado State University.
B-Complex Vitamins
What are B-complex vitamins? Eight of the water-soluble vitamins are grouped together as the vitamin B-complex group: thiamin (vitamin B1), riboflavin (vitamin B2), niacin (vitamin B3), vitamin B6 (pyridoxine), folate (folic acid), vitamin B12, biotin, and pantothenic acid. The B vitamins are widely distributed in foods, and they function as coenzymes that help the body obtain energy from food.
https://medium.com/beingwell/covid19-influenza-and-the-role-of-vitamins-b17f69400f4c
['Robert Turner']
2020-04-05 11:15:07.361000+00:00
['Vitamins And Supplements', 'Vitamin C', 'Health', 'Covid-19', 'Coronavirus']
Yoga for Designers
Last week UXPA Magazine published my article on “Mindful Design: What the UX World Can Learn from Yoga” (also reprinted in a previous post on Medium). The article describes how mindfulness practices like meditation and yoga can benefit designers throughout the lifecycle of design activities, from seeking inspiration to ideation and execution. I wrote about a few ways in which designers can put mindfulness principles into practice. Here are a few simple poses to start a basic yoga practice that benefits the creative process. Open the heart and the mind prior to user interviews: Chest and shoulder openers Chest and shoulder openers counteract the effects of being hunched over a computer or steering wheel. The opening of the chest also symbolizes the opening of the heart, enabling compassion and connection to others. Before engaging in user research, open the chest and shoulders and prime the body to be more receptive to others. Modified Standing Backbend (Anyvittasana) Interlace your hands behind your back, and pull the heels of your hands together as you roll the heads of your shoulders up, back, and down. Start to lift the gaze and the chest up as you pull the hands down behind your back. As you draw the tips of the shoulder blades together, squeeze the upper arms towards each other and press the heels of the hands into each other. Tuck the pelvis in slightly to protect the lower back; you can help your body do this by lengthening the tailbone towards and heels and/or lifting the pubic bone up towards the chest. Supported Fish Pose (Matsyasana) At home, a restorative chest opener can be simple and relaxing. Roll up a blanket into a long, narrow log, and place it on the floor underneath your back, behind the heart. A yoga block or stack of books works well in place of the blanket too. Drape yourself over the prop and allow the shoulders to roll back. Let the palms face up to maintain an external rotation in the shoulders. Legs can be extended straight and relaxed, or you can passively open the hips by bringing the heels together and let the knees fall open to the sides. This is one of the most therapeutic things you can do for your chest and shoulders, and is especially restorative for people with respiratory problems such as asthma or getting over a cold. Boost your creative mind and playful energy: Hip openers The psoas is the only muscle to connect the spine to the legs. It is also connected to the diaphragm through connective tissue (fascia) which affects our breath and fear reflex. For many of us, our fast paced modern lifestyle causes the psoas to be chronically triggered; this tightness can be a source of low back pain. Conversely, a relaxed psoas is the mark of play and creative expression. The relaxed and released psoas is ready to lengthen and open, to dance. Do these stretches before ideation and hackathons to boost your creative mind and playful energy. Low crescent lunge (Anjaneyasana) Kneel on one knee and put the opposite foot in front (for the front foot, try to get the ankle directly under the knee on the same side leg to give the most structural support). Transfer your weight onto the front foot and push your hips forward and down until you feel a stretch along the front of your hip on the leg extended behind you. You can take this stretch further by employing a variety of options. Raise the arm that is on the same side as the back leg, and lean diagonally towards the side of the front knee. 
Another option is to stretch the quadriceps by grabbing the ankle on the extended leg with the opposite hand and drawing the foot towards your sit bone. In addition to opening the hip flexors, a wider range of poses help open the hips along different axes of rotation and create spaciousness within different muscle groups (outer hips, adductors, IT bands). Detox from the stress of negotiations and design reviews: Spinal twists Seated twist poses can help relieve tension from deep within the body, which often shows itself as emotional stress. They also help mobilize the joints of your spine and squeeze internal organs, bringing oxygenated blood to your internal organs while eliminating toxins and metabolic waste products. Perform a twisting pose whenever you feel stress. Gentle spine twist, based on Qigong One spinal twist that is playful and fun to do before design brainstorms is a qigong spinal twist. Stand with your feet hip width distance apart, and make fists with your hands. Begin by moving the right arm in front and the left arm behind you as you twist from your standing body to the right. Switch directions and twist to the left, allowing the arms to switch sides. Gradually increase your speed, maybe gently massaging the acupressure point at the top of the chest under the shoulder with your opposite fist as you twist around. Half Lord of the Fishes (Ardha Matsyeandrasana) Practice a more intense twist with a seated spinal twist. Sit with the legs extended from the hips. Cross the right foot over the left thigh and press it into the floor, to the left of the left thigh or knee. Place the right hand on the floor behind the right hip. As you inhale, reach the left arm towards the sky to lengthen the torso; as you exhale, twist to your right. Hook the left elbow outside the right knee to give more leverage in the twist. Alternatively, grab the outside of the right knee with your left hand if hooking the elbow outside the knee is too intense. With every inhale, lengthen the spine and grow taller; with every exhale, continue to twist to the right. Hold for 30–60 seconds and then switch sides. Calm the mind for quiet design time: Forward folds Forward folds have a detoxifying effect that can improve and stimulate digestion and help calm the mind and body. When you fold forward, you are turning inward physically, mentally, and emotionally, resulting in greater introspection and a sense of peace. Do forward folds at the end of the day and before quiet design time. Standing forward fold (Uttanasana) Stand with your feet hip width apart, with the outside edges of the feet parallel to each other. Fold forward at the hip crease, bringing the top of the pelvis forward. Lengthen the front of the body as you fold, keeping the neck and jaw relaxed. Engage the quadriceps to allow the hamstrings to lengthen. Let the weight be more on the balls of the feet, as opposed to the heels, to help align the hips over the ankles. As an option, you can choose to grab onto opposite elbows or forearms and just hang, noticing what you experience when you don’t have the goal of having to “get somewhere”. Remember that forward folds are not about how deep you can go but rather how deeply you can release. Have Fun Laughter (Laughasana) Yoga is serious work, but don’t take the practice too seriously. Adopt a playful attitude, know that “failures” are part of learning and growing, and have fun. It’s during moments of joy and flow that we get the best, most creative work out of ourselves. Namaste! 
Photos courtesy of the creative, multi-talented (and fellow yogi) James Witt.
https://medium.com/design-your-life/yoga-for-designers-162d86ad4ddc
['Irene Au']
2016-08-17 16:51:49.819000+00:00
['Yoga', 'Design', 'Mindfulness', 'Creativity', 'UX']
Hadoop Distributed File System
The data nodes send an activity update to the “Active” Name Node (every 5 seconds at minimum — configurable). This metadata is synced to the “Standby” Name Node in real time. Thus, when the Active fails, the Standby has all the necessary metadata to switch over. ZooKeeper, through its Failover Controller, monitors the health of the Active and Standby Name Nodes through a heartbeat or instant notification it receives from each NN (every 5 seconds, again configurable). It also has the information of all the standby name nodes available (Hadoop 3.x allows for multiple standby name nodes). Thus, connectivity between the data nodes, name nodes, and ZooKeeper is established. The moment an active name node fails, ZooKeeper elects an appropriate standby name node and facilitates the automatic switch-over. The Standby becomes the new Active Name Node and broadcasts this election to all the data nodes. The data nodes then send their activity updates to the newly elected Active Name Node within a few minutes.
What is NameNode Metadata?
NameNode Metadata
The name node (NN) metadata consists of two persistent files, namely, the FsImage — the namespace — and the Edit Logs — the transaction logs (insert, append).
Namespace & FsImage
In every file system, there is a path to the required files — on Windows: C:\Users\username\learning\BigData\namenode.txt, and on Unix: /usr/username/learning/BigData/namenode.txt. HDFS follows the Unix style of namespace. This namespace is stored as part of the FsImage. Every detail of the file, i.e. who, what, when, etc., is also stored in the FsImage snapshot. The FsImage is stored on disk for consistency, durability and security.
Edit Logs
Any real-time changes to files are logged in what are known as “Edit Logs”. These are recorded in memory (RAM) and contain every little detail of each change and the respective file/block. On HDFS startup, the metadata is read from the FsImage and subsequent changes are written to the Edit Logs. Once the data is recorded for the day in the Edit Logs, it is flushed down onto the FsImage. This is how the two work in tandem. As an aside, the FsImage and Edit Logs are not human readable. They are binary-compressed (serialized) and stored in the file system. However, for debugging purposes they can be converted into an XML format and read using the Offline Image Viewer.
How does NameNode Metadata Sync?
As you can imagine, or see in the image ‘HDFS High Availability Architecture’, the name node metadata is a single point of failure, hence this metadata is replicated to introduce redundancy and enable high availability (HA).
Shared Storage
Shared Storage Sync
We now know that there exists an Active Name Node and a Standby Name Node. Any change in the active is synced in real time to the shared folder/storage, i.e. a network file system (NFS). This NFS is accessible to the standby, which downloads all of the relevant incremental information in real time to maintain the sync between the Name Nodes. Thus, in the event of a failure of the active, the standby name node already has all the relevant information to continue “business as usual” post failover. This approach is not used in production environments.
Quorum Journal Node (QJN)
QJN Sync
“Quorum” means the minimum number required to facilitate an event. The term is generally used in politics; it is the minimum number of representatives required to conduct proceedings in the house. Here, we use the concept to determine the minimum number of journal nodes, aka the quorum, needed to establish a majority and maintain metadata sync.
The image shows three (always an odd number of) journal nodes (processes, not physical nodes) that help to establish metadata sync. When the Active NN receives a change, it pushes it to a majority of the Journal Nodes (follow a single colour). The Standby NN, in real time, requests the required metadata from that majority of Journal Nodes to establish the sync. The minimum number of Journal Nodes for QJN to function is 3, and the quorum/majority is determined by the following formula: Q = (N+1)/2, where N = the total number of Journal Nodes. For example, if we have N=5, the quorum/majority would be established by (5+1)/2, i.e. 3, and the metadata change would be written to 3 journal nodes. The QJN is the preferred production method of metadata sync as it is also “highly available”: in the event of a failure of any of the Journal Nodes, the remaining nodes are available to provide the required data to maintain metadata sync. Thus, the standby already has all the relevant information to continue “business as usual” post failover.
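To make the quorum rule concrete, here is a minimal, illustrative Python sketch. It is not Hadoop source code, and the class and function names are invented for this example; it only demonstrates the idea described above, namely that an edit pushed by the Active NameNode counts as committed once a majority of Journal Nodes, Q = (N+1)/2, acknowledge it.

```python
def quorum(n_journal_nodes: int) -> int:
    # Q = (N + 1) // 2: the majority needed to commit an edit.
    return (n_journal_nodes + 1) // 2

class JournalNode:
    def __init__(self):
        self.edits = []

    def write(self, edit) -> bool:
        self.edits.append(edit)
        return True  # acknowledgement sent back to the Active NameNode

def commit_edit(edit, journal_nodes) -> bool:
    # The Active NameNode pushes the edit; it is durable once a majority acknowledge it.
    acks = sum(jn.write(edit) for jn in journal_nodes)
    return acks >= quorum(len(journal_nodes))

journal_nodes = [JournalNode() for _ in range(3)]  # 3 Journal Nodes -> quorum of 2
print(quorum(3), quorum(5))                        # 2 3
print(commit_edit({"op": "create", "path": "/data/file1"}, journal_nodes))  # True
```

With three Journal Nodes the sync survives one failure, and with five it survives two, which is why an odd number is always deployed. This brings us to the end of my comprehensive guide on HDFS and its inner workings.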
https://towardsdatascience.com/hadoop-distributed-file-system-b09946738555
['Prathamesh Nimkar']
2020-12-13 15:32:39.261000+00:00
['Hadoop', 'Hdfs', 'Data Engineering', 'Data Science', 'Big Data']
The Why and How of MapReduce
Hadoop’s MapReduce In General
Hadoop MapReduce is a framework for writing applications that process enormous amounts of data (multi-terabyte) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. A typical MapReduce job: splits the input data set into independent chunks; processes each chunk in parallel with map tasks; has the framework sort the outputs of the maps; and then uses that sorted output as input to the reduce tasks. In general, both the input and the output of the job are stored in a file system. The Hadoop MapReduce framework takes care of scheduling tasks, monitoring them, and re-executing any failed tasks. Generally, Hadoop’s MapReduce framework and the Hadoop Distributed File System (HDFS) run on the same nodes, which means that each node is used for both computing and storage. The benefit of such a configuration is that tasks can be scheduled on the nodes where the data resides, which results in high aggregate bandwidth across the cluster. The MapReduce framework consists of: a single master ResourceManager (Hadoop YARN), one worker NodeManager per cluster node, and one MRAppMaster per application. The ResourceManager keeps track of compute resources, assigns them to specific tasks, and schedules jobs across the cluster. In order to configure a MapReduce job, at minimum, an application specifies the input source, the output destination, and the map and reduce functions. A job, along with its configuration, is then submitted by the Hadoop job client to YARN, which is responsible for distributing it across the cluster, scheduling tasks, monitoring them, and providing their status back to the job client.
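To make the flow concrete, here is a small, self-contained Python sketch that simulates the map, sort/shuffle, and reduce phases for a word count. It is illustrative only: a real Hadoop job would express the same logic as Mapper and Reducer implementations (or via Hadoop Streaming) and let the framework handle splitting, sorting, scheduling, and fault tolerance. The sample data and function names are invented for this example.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(split):
    # Map: emit (word, 1) for every word in one input split.
    for line in split:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_sort(mapped_pairs):
    # Framework step: sort map output so identical keys become adjacent.
    return sorted(mapped_pairs, key=itemgetter(0))

def reduce_phase(sorted_pairs):
    # Reduce: sum the counts for each distinct key.
    for word, group in groupby(sorted_pairs, key=itemgetter(0)):
        yield word, sum(count for _, count in group)

# Two "independent splits" processed by the map tasks.
splits = [["the quick brown fox"], ["the lazy dog", "the end"]]
mapped = [pair for split in splits for pair in map_phase(split)]

for word, count in reduce_phase(shuffle_sort(mapped)):
    print(word, count)
```

Running the sketch prints each word with its total count (for example, "the 3"), which is exactly the output a word-count MapReduce job writes back to the file system.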
https://medium.com/datadriveninvestor/the-why-and-how-of-mapreduce-17c3d99fa900
['Munish Goyal']
2020-08-12 19:02:34.210000+00:00
['Mapreduce', 'Hadoop', 'Data Engineering', 'Data Science', 'Big Data']
Experienced Writers: Pass on Your Passion for Writing
Experienced Writers: Pass on Your Passion for Writing Write for us and inspire new writers Photo by Cliff Johnson on Unsplash Have you been a published writer for a number of years, written for clients, worked as a ghostwriter, or published a book? We need experienced writers who want to pass on their passion for this creative field and inspire and encourage new writers. There are two ways you can be part of our mission: 1. Advice from experts Articles on writing, publishing, pitching editors, or marketing tips specific to writers. We accept tips for writing on Medium, but prefer advice for the wider writing market. If you have experience or expert knowledge in an area that could help new writers, we’d love to read it! Articles that are specific and actionable are best. 2. Inspiring creative non-fiction We love publishing beautifully written creative non-fiction on a wide range of subjects that will inspire new writers and provide them with access to high-quality reading material. We take published work from experienced writers! If you self-published less than a month ago and would like to find a home for your piece, send it in. Submitting Your Story: To submit a story, email your link to Kelly at inspiredwriter.kelly@gmail.com to be added as a writer. When you first email, please let me know your experience as a writer. We look forward to working with you! Remember to become a follower too so you can keep up with what’s happening here at Inspired Writer. We send out a newsletter once a month. When will you hear back: I aim to get back to all submissions within 48 hours. If you don’t hear from me within a week, email a follow up in case your submission got missed! Our promise to our writers: We only accept your story if we believe it has a chance of being curated. If we accept your story, we will work with you to make minor edits to enhance your curation chances. (Stories about Medium are not curated but may still be accepted by us.) We limit the number of stories we publish daily so that every writer gets a chance of being featured. We believe quality is better than quantity. We respond to every submission. If we don’t accept your draft we give brief feedback as to why. We will do our best to give writers from all backgrounds a chance to share their voice on our platform. Inspired Writer Editors: Kelly Eden Hi, I’m Kelly Eden. I live on the edge of a beautiful rain-forest in New Zealand with my family. I’m a professional writer with over 12 years of experience in the writing industry. I’ve been featured in newspapers, blogs, and online and print magazines. Some of these include Thought Catalog, Mamamia, Zoosk, Natural Parent, Family Times, and Highly Sensitive Refuge. I have also worked with clients from around the world, helping writers, businesses, and government organisations to develop and polish their content. I am passionate about writing and love to see others find and share their own unique writing voices. Ash Jurberg G’day I’m Ash Jurberg. I live in the world’s most livable city — Melbourne, Australia. I have worked in Marketing for (a LONG time) and have written copy for advertisements, websites, brochures, and blogs during that (LONG) time. I am a passionate writer who believes that being a successful writer is a combination of 50% writing skill and 50% marketing skill. I would love to be able to assist writers of all levels in how best to promote, market and sell their skills — across Medium — and other markets.
https://medium.com/inspired-writer/experienced-writers-pass-on-your-passion-for-writing-ebd97d9caaea
['Kelly Eden']
2020-07-03 00:10:13.639000+00:00
['Writing', 'Writing Tips', 'Creativity', 'Freelance', 'Submission Guidelines']
What to Buy (and What to Skip) to Prepare for Coronavirus
What to Buy (and What to Skip) to Prepare for Coronavirus Advice from people in quarantine about what you actually need — and how to make life work Photo: Universal Images Group/Getty Images Follow Elemental’s ongoing coverage of the coronavirus outbreak here. The World Health Organization has officially declared Covid-19 a pandemic, and if you’ve been hoping you can ignore coronavirus and it’ll just go away without affecting you or your family in some way, that’s looking less and less likely. But that isn’t to say you need to buy a deep freezer and build a storehouse in your backyard. So where should you begin, and how can you approach preparation without driving yourself up the wall? The Centers for Disease Control and Prevention has offered up guidance on how to prepare for an outbreak near you, including making a plan with your family, practicing good health and hygiene habits, and checking in with vulnerable people in your community. They also have instructed people in at-risk groups (like the immunocompromised, the elderly, and those with chronic health conditions) to stock up on supplies like nonperishable foods. Since things are moving quickly, it’s not a bad idea to stock up a bit before the virus hits your area, since social distancing (which can include limiting your visits to, say, crowded supermarkets and pharmacies) is such a helpful preventive measure, and you may need to self-quarantine with limited notice. In case you do have to hole up for a while, here’s what you may need to make it, well, suck less — based on advice from experts and folks who’ve actually been in quarantine. There’s already been a run on toilet paper and paper towels — for good reason. A sensible grocery list “It’s about stocking your pantry in a smart way, but also cooking with the stuff you already have in the house,” says Dawn Perry, a food writer who’s currently working on a pantry cookbook. Here’s her quick-and-dirty rundown of what you may need, including staples and flavor boosters so you’re not just stuck eating black beans out of a can: Dry grains: Rice, quinoa, oats, cornmeal (for polenta). Beans: dried or canned. Tinned fish: tuna, tasty cans of smoked sardines, or mackerel. A couple varieties of nut butter and tahini. Flour (you can add water, salt, and a leavening agent like baking soda or powder to make biscuits, tortillas, and crackers). Eggs. Onions (they keep for weeks when stored in a cool, dark place). Garlic. Potatoes. Boxed cereal. Hardy veggies: broccoli, celery, carrots, and kale (they stay fresh for a long time in your fridge). Bananas (when they start to turn, freeze for smoothies or bake into banana bread). Frozen peas (add to soups and pastas). Frozen sausage (just one link can go a long way in adding flavor to a dish). Bag of frozen shrimp. Sliced bread (pop it in your freezer, will last a long time). Taste boosters: soy sauce, mayo, ketchup, fruit jam, Dijon mustard, Parmesan cheese, olives, capers. First aid kit No matter what is happening — pandemic or regular day — “it’s prudent for every household to have basic essentials on hand to care for a minor emergency, such as a cut, burn, or insect sting,” says Brad Uren, MD, assistant professor of emergency medicine at the University of Michigan Medical School. (And while you should, of course, seek medical care if you need it, for minor issues you can treat at home, it’s preferable not to visit a health care facility right now if you don’t have to.) A standard first aid kit will have what you need. 
Extra supply of needed medication Make sure you have a few weeks of vital medications on hand, says Uren. Mail-order pharmacies, which often dole out 90-day supplies when available, may be an option for certain people, depending on what type of medication they’re taking and insurance, he says. If you’re having trouble making this happen (insurance doesn’t always make it easy), get on the phone with a pharmacist. OTC meds and other symptom treatments Should you get sick, you’ll want to have the right supplies on hand to treat your symptoms, says Aaron Reddington of the survival blog the Simple Prepper. Make sure you have things like a thermometer, OTC fever reducers (acetaminophen and ibuprofen), sports drinks for rehydrating, and the bland foods you want when you have no appetite, like crackers, he says. Hygiene stuff There’s already been a run on toilet paper and paper towels — for good reason. “While everyone has reacted and bought toilet paper and paper towels, unfortunately, I think you need to roll with the punches and buy some as well,” says Reddington. “It would be terrible to be stuck in your home for weeks without any, and while it may sound ridiculous [to stock up], worst-case scenario is that you have extra,” he says. We also tend not to buy other personal hygiene products until just before we run out, adds Todd Sepulveda, the editor of Prepper Website and the host of the Prepper Website Podcast. Check to make sure that you have the hand soap, shampoo, toothpaste, and bodywash you need, he says. If you get a period, then pick up some extra tampons or pads — or go for something reusable such as a menstrual cup, like the DivaCup, or period underwear, like Thinx. Tissues Grab an extra box or two. You’ll want to make sure you’re always sneezing and coughing into a tissue, says Uren. Then, throw it away — don’t stuff it in your pocket to reuse later. A healthy perspective Take comfort in the experience of Rebekah, 35, who has been under a government-mandated quarantine in her home in the Guangdong province in China with her husband and children since January 21. “As soon as we realized the seriousness of the situation here, my husband purchased food staples, like rice, flour, and oil to help get us through in the event that supplies became limited,” she says. “After about the first two weeks veggies, fruits, bread, and other fresh foods were back in stock and we have not experienced any shortages of food or supplies,” she says. Games and craft supplies If you have kids and their school gets closed, prepare to keep busy during a quarantine. “We haven’t lacked anything physical but have had to draw on all our powers of creativity to keep ourselves and them entertained,” says Rebekah. Card games, board games, crafts, making blanket forts, building block towers, and riding scooters around the living room are some activities that bring fun into their daily lives. Even if you don’t have kids, think about ways to keep yourself occupied. “If people have to be quarantined for a while, they might get bored of Netflix and surfing the internet,” says Sepulveda. He recommends making sure you have books to read and supplies for hobbies at the ready so you can stay entertained and engaged. “Unless the water from your faucet is not already safe, you don’t need to buy bottled water.” An office work-from-home plan Not all jobs can be done remotely, but even some of the ones that theoretically can still might not be set up for actual WFH success. 
Emily, who’s in the creative industry, has been in voluntary quarantine from working at her New York City office for nearly two weeks after coming home from a trip to Italy. She remains symptom-free, so she can go about her day as she chooses (still doing so responsibly, of course, so as not to put others at risk). But working from home has been a huge challenge. “There’s no structure in place and no way to collaborate with co-workers who aren’t in quarantine,” she says. “People keep asking me how my ‘second vacation’ is going. It’s not a vacation,” says Emily. To ensure things go more smoothly, talk to your supervisor about making a telecommuting plan; and think about how to set up your workspace and your own approach to work to maximize your productivity. A pet carrier Emily has resisted stocking up on special items — except for one: a cat carrier. Should she need to leave her place for some time, she’ll be able to bring her cat along without trouble or worry. Supplies You Can Skip: Water. “Unless the water from your faucet is not already safe, you don’t need to buy bottled water,” says Sepulveda. Hand sanitizer. Yes, an alcohol-based hand sanitizer with at least 60% alcohol is recommended. However, these are nearly impossible to find right now, and you don’t need to stress yourself out searching. “Hand sanitizer can be used if soap and water are not available, but it is not necessary,” says Uren. Masks. Unless a medical professional has ordered you to wear them, hoarding masks (and sanitizer) can put others at risk. “If these critical supplies are not available to the people providing one-on-one bedside care, it could increase the risk of spread of infectious diseases in the community,” says Uren. A final note: Buy only what you need. Remember that we have a responsibility as citizens to think about everyone else, too. Maybe buying a few weeks’ worth of toilet paper is okay, but you don’t need 200 extra rolls. Save some for others who also want to be comfortable should they have to hole up at home for a while. The coronavirus outbreak is rapidly evolving. To stay informed, check the U.S. Centers for Disease Control and Prevention as well as your local health department for updates. If you’re feeling emotionally overwhelmed, reach out to the Crisis Text Line.
https://elemental.medium.com/what-to-buy-and-what-to-skip-to-prepare-for-coronavirus-3b721d60eb82
['Jessica Migala']
2020-03-18 17:54:09.978000+00:00
['Health', 'Pandemic', 'Coronavirus', 'Life', 'Food']
5 Wrong Ways to Promote Your Articles
2. Don’t Spam Your Articles Inside Other Articles
Another fairly common practice when we make a post is to include links to other related articles in case readers want to continue reading. The problem with this is that it can turn into spam. Believing that an excellent post should include at least three to four references to your other articles is a poor practice that does not work. You can even ask anyone who likes to read informative articles how many times they have clicked those links while they are reading. They will probably tell you they don’t remember. I once clicked on a post that had a very interesting title. After the first paragraph, the writer linked to two other posts that promised to “fulfill” what he said in his first post. After I finished reading, I realized that the entire article was clickbait to get people to visit his other posts. None of the writer’s articles really helped me. Outcome? I ended up blocking the author. When someone cannot calmly read the content of an article without receiving spam or ads promoting other posts, they will not be able to follow the thread of the story and will get tired of not learning anything. In the end, you create the result that you do not want.
What you can do instead
Write stories that leave others wanting to read more from you. If someone wants to continue learning from your post, they will go to your other articles with or without links in the article. Also, there are places that allow you to post one or two stories at the bottom of your profile. That’s totally okay as long as the topic you put there is related to the content you have. But this strategy will help you gain only 5–10% of the views that the current article has.
https://medium.com/better-marketing/5-wrong-ways-to-promote-your-articles-e0e1047b024f
['Desiree Peralta']
2020-12-05 02:33:31.331000+00:00
['Writing Tips', 'Writing', 'Advice', 'Ideas', 'Creativity']
In the brain of Computer vision? (Eng)
Image classification: The problem with image classification is this: given a set of images that are all labeled with a single category, we are asked to predict these categories for a new set of test images and measure the accuracy of the predictions. There are many challenges to this task, including point-of-view variation, scale variation, intra-class variation, image distortion, image occlusion, lighting conditions, and background clutter. How could we write an algorithm capable of classifying images into distinct categories? Computer vision researchers have developed a data-driven approach to solving this problem. Instead of trying to specify directly in code what each image category of interest looks like, they provide the computer with many examples of each class of images and then develop learning algorithms that examine these examples and learn the visual appearance of each class. In other words, they first accumulate a labeled set of training images and then feed it to the computer to process. Given this, the complete image classification pipeline can be formalized as follows: our input is a training data set that consists of N images, each labeled with one of K different classes. We then use this training set to train a classifier to learn what each class looks like. Finally, we evaluate the quality of the classifier by asking it to predict labels for a new set of images that it has never seen before. We then compare the actual labels for these images with those predicted by the classifier.
The most common algorithms used to solve image classification are convolutional neural networks (CNNs)
Convolutional neural networks are currently the most efficient models for image classification. A CNN has two very different parts. As input, the image is provided as a matrix of pixels: two dimensions for a grayscale image, with color represented by a third dimension of depth 3 for the basic colors [red, green, blue]. The first part of a CNN is the convolutional part. It acts as a feature extractor: the image is passed through successive filters, or convolution kernels, creating new images called convolution maps. Some intermediate layers reduce the resolution of the image with a local maximum (pooling) operation. Finally, the convolution maps are flattened and concatenated into a feature vector called the CNN code. Creating a new convolutional neural network is expensive in terms of expertise, equipment, and the amount of annotated data required. First of all, it is necessary to determine the architecture of the network, i.e. the number of layers, their size, and the matrix operations that connect them. Training then involves optimizing the network coefficients to minimize the output error. This training can take several weeks for the best CNNs, with many graphics processors working on hundreds of thousands of annotated images.
Today, most image classification techniques are trained on ImageNet, a data set of approximately 1.2 million high-resolution training images. The winner of the first ImageNet competition, Alex Krizhevsky, revolutionized deep learning with AlexNet, a very deep convolutional neural network. Its architecture consists of 7 hidden layers, plus a few max pooling layers. The first layers were convolutional, while the last two were fully connected. The activation functions were rectified linear units (ReLUs) in each hidden layer. These train much faster and are more expressive than logistic units.
In addition, it also uses competitive normalization to suppress hidden activations when neighbouring units have stronger activations. This allows for better handling of intensity variations. In terms of hardware, Krizhevsky used a very efficient implementation of convolutional networks on two Nvidia GTX 580 GPUs (more than 1,000 small, fast cores). GPUs are very good at matrix multiplications and also have very high memory bandwidth. This allowed him to train the network in a week and to quickly combine the results from 10 image patches at test time. We can spread a network over several cores if we can communicate the states quickly enough. As cores become cheaper and data sets get larger, large neural networks will improve faster than older computer vision systems. Since AlexNet, many new models using CNNs as a base architecture have been developed and have performed very well on ImageNet.
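To ground the description above, here is a minimal, illustrative PyTorch sketch: a small convolutional feature extractor whose flattened output (the "CNN code") feeds a fully connected classifier, trained for a single step on random stand-in data. It is not AlexNet; the layer sizes and the TinyCNN name are invented for this example, and a real model would be trained for many epochs on a labeled dataset such as ImageNet.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # depth 3: RGB input
            nn.ReLU(),
            nn.MaxPool2d(2),                             # local maximum (pooling) halves resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)          # the flattened feature vector ("CNN code")
        return self.classifier(x)

device = "cuda" if torch.cuda.is_available() else "cpu"   # GPUs accelerate the matrix math
model = TinyCNN().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32, device=device)         # stand-in batch of labeled images
labels = torch.randint(0, 10, (8,), device=device)

optimizer.zero_grad()
loss = criterion(model(images), labels)                    # compare predictions with the labels
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.3f}")
```

Moving the model and tensors to "cuda" when a GPU is available is what delivers the kind of speedup described above; on a CPU the same code runs, just more slowly.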
https://medium.com/analytics-vidhya/whats-computer-vision-eng-c216a4c54c73
['Magloire Ndabagera']
2020-05-24 10:05:40.208000+00:00
['Machine Learning', 'Artificial Intelligence', 'Computer Vision', 'Data Science', 'Deep Learning']
Haar Cascade Classifiers
Learn and implement Haar cascade classifiers in projects…
Viola-Jones Algorithm
Working of Classifiers
The Viola-Jones object detection framework is a machine learning approach for object detection, proposed by Paul Viola and Michael Jones in 2001. This framework can be trained to detect almost any object, but it primarily solves the problem of real-time face detection. The algorithm has four steps.
1. Haar Feature Selection
Objects are classified using very simple features; such features encode ad-hoc domain knowledge and can be evaluated much faster than pixel-based systems. The features are similar to Haar filters, hence the name ‘Haar’. An example of these features is a 2-rectangle feature, defined as the difference between the sums of pixels inside two rectangular regions, which can be at any position and scale within the original image. 3-rectangle and 4-rectangle features are also used.
Haar Features
2. Integral Image Representation
The value of any point in an integral image is the sum of all the pixels above and to the left of that point. An integral image can be calculated efficiently in one pass over the image.
3. AdaBoost Training
For a window of 24x24 pixels, there can be about 162,336 possible features, which would be very expensive to evaluate. Hence the AdaBoost algorithm is used to train the classifier with only the best features.
Image by Packt
4. Cascade Classifier Architecture
A cascade classifier refers to the concatenation of several classifiers arranged in successive order. It makes a large number of small decisions as to whether the window contains the object or not. The structure of the cascade classifier is that of a degenerate decision tree.
Architecture
Implementation (see the OpenCV sketch below)
Application
Despite the arrival of deep learning (R-CNN, YOLO, etc.), this method is still used in many applications for face and object detection, as it is very simple yet powerful.
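As a starting point for the Implementation step above, here is a short, illustrative OpenCV (Python) sketch of Haar-cascade face detection. The pretrained frontal-face cascade file ships with the opencv-python package; the input filename "photo.jpg" is a placeholder to replace with your own image.

```python
import cv2

# Load the pretrained frontal-face cascade bundled with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")                 # placeholder input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale

# scaleFactor sets the image-pyramid step; minNeighbors trades recall for precision.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)
```

Tuning scaleFactor and minNeighbors is usually all that is needed to adapt the detector to different image sizes and noise levels.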
https://medium.com/datadriveninvestor/haar-cascade-classifiers-237c9193746b
['Om Rastogi']
2020-10-29 01:29:22.610000+00:00
['Machine Learning', 'Opencv', 'Python', 'Computer Vision', 'Haar Cascades']
RFPs done right
Love ’em or hate ’em, Request for Proposals (RFPs) are standard practice for many non-profit, higher education, and government organizations, and this leaves project managers with few options when they’re purchasing tangible goods or professional services. If this is your first project lead, make sure you know your organization’s regulations and purchasing limits. Be aware of all your options. With professional services like design and website development, there are lots of reason to avoid an RFP. But if an RFP is your only option, there is a right way to conduct the process. Developing the RFP Here’s how to develop an efficient RFP for professional services that gets you better results in less time. 1. Establish an effective team who can get on the same page about the project goals. Honest feedback and effective collaboration is an essential part of working with a team. Differences of opinion can be helpful for honing in on the goals and needs of your project. But you must manage the fine line between constructive discussion and derailing disagreement. Don’t be afraid to trim the fat from your team if you have a team member who disrupts the flow of the process. 2. Provide context for the project. Once your team is established, you need to determine the key elements of your RFP. These are important to the responding firm’s ability to develop an effective proposal that meets your organization where you are and solves your problems. These items include: Budget Schedule for RFP including submission deadlines Contact Information Contextual Background Information The Problem to Solve Compensation Terms Required Qualifications (minimum & preferred) Evaluation Criteria & Process (including notification process) Legal/Procedural Rules 3. Determine your budget. Some people believe that keeping your project budget a secret from potential respondents helps you to get the best deal. What it really does is hamper your ability to find the partner who is the best fit for your organization. If you hide your budget you will inevitably get responses with a wide variance in cost, experience, and ability. This means more time on your part to evaluate proposals that you’ll never select because of the cost or lack of fit. You will also have wasted money on the part of the firms developing those proposals just for you to throw them out without real consideration. Clearly defining a budget or at least a budget range allows firms to decide if they can provide you with an effective solution to your problem within the given budget. Those that can will write proposals, and those that can’t will self-select out which saves both parties time and energy. After setting your budget, instead of focusing on getting the best deal (i.e. price), focus on getting the best value. In other words, focus on identifying the firm who is giving you the most return on your investment given the budget limitations by evaluating the combination of deliverables and quality of work they are offering to complete for you. If you truly have an open-ended budget for your project, then your team should determine the value of this project to your organization. How important is it to have this problem solved? What role will a firm’s particular areas of expertise play in solving these problems? Is the product or service you’re looking for going to directly generate revenue for your company, such as an e-commerce website? These are just a few examples, but all of these questions will help you to determine value and establish a budget. 
That budget will help all potential partners determine if they are really the right fit for you. 4. Determine respondent requirements. Once you’ve clearly articulated your pieces of the RFP that help to educate the respondents about your organization and the particular project, it’s now your turn to get specific with a few requirements for the responding firms that define how much control your team wants over the creative process, and what you need to see from them to determine best fit. Be clear about what you want from them, but be respectful of their time. We all know the old adage that “time is money” and RFP responses cost time to develop and evaluate. To be efficient, your goal should be to develop an RFP that is as efficient as possible while still sharing all the information a firm needs to respond. Requirements you should request include: Proposed statement of work, process, and project timeline Description of Experience/Qualifications — may include team biographies Relevant Work Samples — Don’t ask for Spec Work References Legal requirements as dictated by local, state, or federal regulations Proposed Monetary Bid Of all of these, the most important piece to consider here is whether your team will outline the statement of work (i.e. the solution to your problem), the process, and the timeline for the project, or whether you will simply identify the problem and let your applicants outline these other elements (the process, timeline, and ultimate deliverables) for you. The benefit of the first option is that your team has control of how you want the work done, and the deliverables you receive at the end of the process. The negative is that when we identify our own problems and solutions we often bring preconceptions to the process that blind us to alternative solutions that may better serve us. This is magnified when dealing with technology because innovation is constantly happening, so you may not even be aware of other solutions that are available to you. In “Expository Sketch is the New RFP,” Stanford University Technology Strategist Zach Chandler provides good insight as to why engaging your partner firm in helping to solve your problem is essential to successful projects. While Chandler advocates for doing this outside of the RFP process, if you must seek this problem-solving expertise within the RFP framework, the firm’s response will also give you insight into their process, creative vision, and strategic thinking abilities. 5. Gather any additional information needed. Depending on your project you may consider asking a few additional questions beyond those outlined above. But don’t ask for frivolous requirements, documentation or intrusive information that has no real bearing on your project. Examples of these frivolous requirements and questions that the MAC team has seen in RFPs include (but are not limited to): ‣ Tell us about a challenge that you’ve faced and overcome. ‣ What project would you like to re-do? ‣ What’s one question we should be asking and haven’t? ‣ After providing a fixed cost bid and an estimate of hours needed to complete the project, please provide a breakdown of each individual team members estimated hours and hourly rate. ‣ Please provide project budgets and scope for similar projects you’ve completed for other clients. Some of these questions are a waste of time for firms to respond to and your team to read the answers to. 
This only serves to distract from the only elements of the RFP that truly matter: the quality of the work, the process, the timeline, and the total proposed cost. As for budgets, the only thing that should really matter is whether you think the work being done for you is worth the price the firm has placed on it. As with all businesses, as a firm does good work and grows, inevitably so will their internal overhead costs. It’s unfair to the firm for your team to try to nickle-and-dime them for the same rate they charged a client five years before. If your team feels the quality of work being proposed is worth the price tag being attached, then the creative firm should have the freedom to distribute the funds how they see fit. Soliciting proposals Once you’ve written the RFP and have a solid foundational idea of what you are looking for, it’s time to reach out to potential partners and solicit responses. Some organizations require you to post an RFP publicly, but many allow you to distribute your RFP selectively to pre-qualified firms. While selective distribution requires some upfront research on your part to identify prospective partners that fit your project requirements, it also provides you with tighter control of the process, shortens the required evaluation process, and eliminates wasted time developing proposals from firms that you’d never actually consider to complete the work. If you’ve failed to scope the project correctly, or underestimated the project budget given the type of firm you’d like to work with, selectively soliciting bids will also help to identify these issues because your prospective partners will politely decline the invitation to respond. If you can selectively distribute the RFP, your distribution list may include as few as 2–3 firms or as many as 20 depending on your objectives. The general rule is that if you don’t consider the firm a serious contender, don’t send them the RFP. Make yourself available to answer questions Depending on your organization, there may be ways in which your communication is legally limited after the RFP has been distributed. At the very least, you should conduct a pre-proposal conference call where firms can ask you clarifying questions to make sure their proposals are on point. If you’ve limited the pool through selective distribution and you’re legally allowed to, consider answering individual emails and phone calls. This will allow you to quickly and effectively answer questions, establish a relationship with potential partners, and get insight into their process and attention to detail. Evaluation and selection If you’ve done a good job outlining the RFP and soliciting proposals from prequalified firms, the evaluation process should be simple and efficient because you’ll already have a narrow, focused field of responding firms and a clear guideline for how you want to evaluate the proposals. When you’re writing your RFP and establishing timelines, your team should also internally pre-schedule proposal evaluation meetings as part of this timeline. The last thing you want to do is receive proposal submissions and then spend two weeks trying to nail down your team’s schedule to meet. Additionally, make sure you’ve established a clear scoring system that is being shared with the respondents as part of the RFP. 
For example, if your rubric uses a 100-point valuation process and 25 points can be awarded for price, then make sure it is clear what price (or range) will earn the proposal the full 25 points and what will just earn it 15 (or zero) points. A vague scoring system makes it harder for your team to objectively evaluate proposals, and it makes it more difficult for firms to see what you really value in their response. Successful selection committees An odd-number of selection committee members is also best in evaluation in the event that the decision comes down to a majority vote. If you have an even number of committee members or a voting system that requires unanimous approval, make sure you have a predetermined tie-break procedure in place. If you opted to distribute to a small field (3–5 firms) of prequalified firms, you may also consider if it is possible to have the firms present their proposals to you rather than your team just reading them. This will allow you to ask follow-up questions and clarify aspects of the proposal if needed. Finally, If you ask for references, use them. References can be extremely helpful in evaluating a firm’s expertise and ability to form positive working relationships if these are key evaluation metrics for your team. If you have no intention of contacting the references, though, then asking for previous work samples will go just as far in showing prior work experience. Final thoughts Ultimately you can only award your project to one firm, and someone will inevitably be disappointed given the amount of work responding to an RFP requires. However, the firm you select will be your partner for months, if not years. Make sure you treat this relationship with care by responding to emails and phone calls in a timely manner and keeping them up-to-date as you go through the evaluation process. When you’ve come to a decision, let all responding firms know of your choice and why you went that direction. While you may pass on a firm’s proposal this time, they could be a great partner for you on a project down the road, and letting them know where they fell short will help them to improve for the next project they take on. While sometimes necessary, RFPs are time consuming for organizations on both sides of the table. Following a simple, clearly defined process and avoiding anything that’s not essential to the process will help you to streamline your efforts and make it as painless as possible for all involved.
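The scoring system described in the "Evaluation and selection" section above lends itself to a tiny worked example. The criteria and weights below are illustrative (only the 25-point price weight appears in the text); the point is that publishing the weights and the rating scale makes the evaluation reproducible:
# Illustrative 100-point rubric; only the 25-point price weight comes from the example above.
WEIGHTS = {"price": 25, "statement_of_work": 35, "experience": 25, "references": 15}

def score_proposal(ratings):
    """ratings maps each criterion to a committee rating between 0.0 and 1.0."""
    return sum(weight * ratings.get(criterion, 0.0) for criterion, weight in WEIGHTS.items())

# A proposal priced inside the pre-published full-points range, with a middling statement of work.
print(score_proposal({"price": 1.0, "statement_of_work": 0.6, "experience": 0.8, "references": 1.0}))  # 81.0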
https://medium.com/madison-ave-collective/rfps-done-right-52fc2d8c1c86
['Logan Hoffman']
2017-03-23 19:12:20.820000+00:00
['Design', 'Marketing', 'Business']
‘No One Is Listening to Us’
On Saturday morning, Megan Ranney was about to put on her scrubs when she heard that Joe Biden had won the presidential election. That day, she treated people with COVID-19 while street parties erupted around the country. She was still in the ER in the late evening when Biden and Vice President–elect Kamala Harris made their victory speeches. These days, her shifts at Rhode Island Hospital are long, and they “are not going to change in the next 73 days,” before Biden becomes president, she told me on Monday. Every time Ranney returns to the hospital, there are more COVID-19 patients. In the months since March, many Americans have habituated to the horrors of the pandemic. They process the election’s ramifications. They plan for the holidays. But health-care workers do not have the luxury of looking away: They’re facing a third pandemic surge that is bigger and broader than the previous two. In the U.S., states now report more people in the hospital with COVID-19 than at any other point this year — and 40 percent more than just two weeks ago. Emergency rooms are starting to fill again with COVID-19 patients. Utah, where Nathan Hatton is a pulmonary specialist at the University of Utah Hospital, is currently reporting 2,500 confirmed cases a day, roughly four times its summer peak. Hatton says that his intensive-care unit is housing twice as many patients as it normally does. His shifts usually last 12 to 24 hours, but can stretch to 36. “There are times I’ll come in in the morning, see patients, work that night, work all the next day, and then go home,” he told me. I asked him how many such shifts he has had to do. “Too many,” he said. Hospitals have put their pandemic plans into action, adding more beds and creating makeshift COVID-19 wards. But in the hardest-hit areas, there are simply not enough doctors, nurses, and other specialists to staff those beds. Some health-care workers told me that COVID-19 patients are the sickest people they’ve ever cared for: They require twice as much attention as a typical intensive-care-unit patient, for three times the normal length of stay. “It was doable over the summer, but now it’s just too much,” says Whitney Neville, a nurse based in Iowa. “Last Monday we had 25 patients waiting in the emergency department. They had been admitted but there was no one to take care of them.” I asked her how much slack the system has left. “There is none,” she said. The entire state of Iowa is now out of staffed beds, Eli Perencevich, an infectious-disease doctor at the University of Iowa, told me. Worse is coming. Iowa is accumulating more than 3,600 confirmed cases every day; relative to its population, that’s more than twice the rate Arizona experienced during its summer peak, “when their system was near collapse,” Perencevich said. With only lax policies in place, those cases will continue to rise. Hospitalizations lag behind cases by about two weeks; by Thanksgiving, today’s soaring cases will be overwhelming hospitals that already cannot cope. “The wave hasn’t even crashed down on us yet,” Perencevich said. “It keeps rising and rising, and we’re all running on fear. The health-care system in Iowa is going to collapse, no question.” In the imminent future, patients will start to die because there simply aren’t enough people to care for them. Doctors and nurses will burn out. The most precious resource the U.S. health-care system has in the struggle against COVID-19 isn’t some miracle drug. It’s the expertise of its health-care workers — and they are exhausted.
https://medium.com/the-atlantic/no-one-is-listening-to-us-181671962027
['Ed Yong']
2020-11-16 20:05:46.585000+00:00
['Covid 19', 'Healthcare', 'Coronavirus', 'Healthcare Worker', 'Health']
Spark deserves a better IDE
Authors: Raj Bains, Maciej Szpakowski Spark has become the default data engineering platform in the cloud. With Databricks, AWS EMR, AWS Glue, Azure Databricks, Google Dataproc and Cloudera, one can rely on Spark being ubiquitously available. As we work with enterprises to move legacy ETL to Spark, we've been focusing on building the right replacement. We find that the current interfaces fall short, so we are defining a new one. Legacy Visual ETL Driven by visual drag-and-drop interfaces, these tools have a vast number of enterprise developers adept at using them. It's quite nice to get a visual overview of how the data flows, but over time the clicking becomes exhausting. Legacy ETL products now claim to support Spark. On the ground, this means developing workflows in a proprietary format that sits in their legacy store and generates unmodifiable, low-quality code. There is no longer an appetite for these boxed solutions in the enterprise. Spark Code Many technology companies, especially in the Bay Area, choose to write code instead. One can use notebooks, but without ordering and a standard structure, there is a consensus that they are no way to write production code. IDEs give a vast canvas to paint code on, but with that power and flexibility, different teams paint differently. They end up with different ways of structuring code and managing configurations. In the worst case, this means long Spark scripts where it is a nightmare to understand how the data flows, chasing variables across instructions. It's no joy for a production support team to find errors under time pressure. Code=Visual We looked at the various roles in data engineering, including architects, engineers, QA, and support, each with different preferences, and thought hard about how to make everyone successful.
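The structuring complaint above is easier to see in code. A minimal PySpark sketch (not Prophecy's approach; the table paths and column names are made up) where each transformation is a small named function, so the data flow reads top to bottom instead of requiring variable chasing:
from pyspark.sql import DataFrame, SparkSession, functions as F

# Hypothetical pipeline: each step is a named, testable function instead of
# an anonymous block of variable reassignments.
def load_orders(spark: SparkSession) -> DataFrame:
    return spark.read.parquet("s3://example-bucket/orders/")  # assumed path

def clean_orders(orders: DataFrame) -> DataFrame:
    return orders.dropna(subset=["order_id"]).withColumn("amount", F.col("amount").cast("double"))

def daily_revenue(orders: DataFrame) -> DataFrame:
    return orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-revenue").getOrCreate()
    daily_revenue(clean_orders(load_orders(spark))).write.mode("overwrite").parquet("s3://example-bucket/reports/daily_revenue/")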
https://medium.com/prophecy-io/spark-deserves-a-better-ide-92d23175f3b4
['Raj Bains']
2020-01-29 01:32:44.618000+00:00
['Kubernetes', 'Spark', 'Data Engineering']
5 Email Marketing Metrics Every Ecommerce Business Should Track
Every sound decision in e-commerce is based on valid metrics. The most successful businesses understand their online store's performance and know which areas they need to improve in order to grow. Email marketing is one of the cheapest and most convenient ways to communicate with both prospective and loyal customers. But it is only effective if the recipients are engaged and responsive to your messages. This is why it's important to measure the performance of your email campaigns and see whether you are actually achieving your goals. In this article, we review 5 crucial email marketing metrics that most affect revenue for eCommerce businesses. Here's what we cover: The 5 metrics Open rate Click-through rate Conversion rate Bounce rate Unsubscribe rate Conclusion Among the thousands of possible KPIs, there are 5 metrics every e-commerce business owner should track to fully understand the performance of their email marketing campaigns. Open rate Your open rate is one of the first and most important metrics to measure. It simply shows how many of your emails have been opened, and it directly reflects the performance of your subject line. The open rate can also help you understand whether you are targeting the right audience and whether your emails reach the inbox (and not the spam folder). The average email open rate is 21.33%, but the numbers differ by industry. To calculate the open rate, divide the total number of tracked opened emails by the total number of delivered emails and multiply by 100. Most email marketing platforms, if not all, show your campaign open rate; in Klaviyo, for example, it appears as a dedicated column in the campaign statistics. Segmenting your email list and creating different campaigns for your customers based on their purchase behavior is one of the higher-level things you can do to improve your open rate. But here are 3 simple ways to begin improving the open rate of any email campaign: Use a short, personalized subject line The subject line should be personalized, not too "salesy" and, most importantly, short. Use the recipient's name in the subject or opt for an eye-catching phrase that draws attention. Nearly 70% of users check email on their mobile phones, so make sure it is shorter than 90 characters, ideally 6–10 words. Emails with the highest open rates include words such as "secrets", "e-sales" and "awesome" in the subject line. Choose the correct time Research from GetResponse suggests the best day to send emails is Tuesday. When determining a good time to send emails, consider who your audience is; open rates also vary by day and industry. Recipients are most likely to read emails that arrive at 10 am and 1 pm. Avoid being marked as spam Sending your email campaign from a good IP address and through verified domains is not enough to make sure your message does not land in the spam folder. Increase your chances by using merge tags in the "To" field and avoiding aggressively sales-oriented language (e.g. "buy", "clearance", "discount", "cash"). You can also guide your subscribers on how to whitelist your emails, and ask them to add you to their address book. 2. Click-through rate Measuring the click-through rate (CTR) is important for understanding how many of your subscribers engage with your email campaign and click on the link in the email. The average click-through rate for email campaigns is 7.77%, and it increases by 14% if the subject line of your email is personalized!
The CTR metric is closely connected to the conversion rate of your online business: if the click-through rate is high, the conversion rate should be high as well. 73% of online marketing specialists define the success of an email marketing campaign by its click-through rate. To calculate the CTR, divide the number of clicked emails by the number of sent emails minus the number of bounced emails, and multiply by 100. You can measure unique clicks, which show how many of your recipients clicked on the link, or all link clicks, which count multiple clicks on the link from the same recipient. Optimizing your email campaigns for mobile users can increase the CTR by 15%. If your click-through rate is lower than 7%, try to: Segment your email list Segment your subscribers by their purchasing behavior and send them personalized emails. For example, you can target new subscribers with your most popular products or offers, while sending products similar to their last purchase, or loyalty offers, to those who have already bought. Personalizing emails can increase CTR by 139%. Regulate the frequency of your emails Your email campaigns should be sent consistently. Companies that send 16 to 30 email campaigns per month have twice the CTR of those that send fewer or more. Additionally, try sending emails on Monday or Tuesday, when the click-through rate is highest. People are most likely to click through emails sent between 6 am and 6 pm. Place your CTAs strategically A recent case study from SuperOffice suggests that recipients are more likely to click through an email with one call to action than one with three. Of course, you should include a CTA button in your email campaigns, but follow the saying "less is more" when doing so. Instead of placing multiple CTAs, concentrate on placing each CTA where it matters. Including a CTA button in the left region can increase your email click-through rate. 3. Conversion rate Perhaps the most important metric on this list. A conversion is the one action you would like the email recipient to take. That can be a reply, a purchase, joining a group, or adding items to the cart for browse abandonment emails. Email is one of the best channels for driving conversions because, with automation, you can send targeted messages to your audience on autopilot at no additional advertising cost. In fact, eCommerce businesses use email marketing to drive up to 20–30% of their total purchase conversions. The difference between conversion rate and CTR comes down to revenue: CTR measures the number of people who click through to the destination, while conversion rate measures those who took the desired action on the destination page. To measure your conversion rate, divide the number of unique conversions by the number of successfully delivered emails, and multiply by 100. You can improve your conversion rate by: Offer An offer is an incentive given to the recipient so that they take a desired action. In eCommerce, offers are usually discounts on products visitors have viewed, added to cart, or started checking out with but didn't purchase. Offers like these give customers a reason to buy, but they still have to be compelling enough to drive a purchase. Depending on how much you can discount your products, test out different offers to determine what works best for different audiences. Content: Send content that matches their stage in the funnel.
Every subscriber is at a different stage of their customer journey: some have just joined, others are considering purchasing your product, and others have already made a purchase. You have to provide content that matches a customer's stage in the funnel because, at different stages, every user has a different need. For example, new users are sent a welcome sequence to nurture them, help them learn more about the brand, and build trust before they purchase, while those who abandon carts are sent cart abandonment emails with discount offers and social proof to help them complete their purchase. Before sending out emails, make sure the content provided matches the recipient's stage in the funnel to increase the chances of conversion. 4. Bounce rate The bounce rate shows the percentage of your emails that have not been delivered to subscribers and have therefore been returned to the sender. There are 2 types of bounces: Hard bounces are emails permanently rejected because of an incorrect email address or a block. Soft bounces are emails temporarily rejected because the email is too big, the recipient's mailbox is full, or their server is not responding. To calculate your bounce rate, divide the total number of hard bounces by the total number of sent emails. The bounce rate is a clear indicator of the health of your email list: an average soft bounce rate is less than 2%, while a hard bounce rate should be less than 1%. A high bounce rate can give you a bad reputation with Gmail and Yahoo, and can even get your IP blacklisted, which is why it's important to reduce it. Here's how: Clean your email list regularly Immediately remove any email address that generated a hard bounce. When a soft bounce occurs, you can try to resend the email a couple of times, and remove the address if it keeps coming back. Get proper permission before sending emails Sending a confirmation about the subscription or a welcome email will help you retain only the customers who are interested in your product or service. 5. Unsubscribe rate The number of unsubscribed customers is uncomfortable but crucial feedback for understanding whether your messaging is keeping your subscribers engaged. The unsubscribe rate is the percentage of subscribers who are no longer interested in receiving your emails. The average unsubscribe rate is 0.17%. You can track the unsubscribe rate in the statistics provided by your Email Service Provider. Usually, it is measured over the last 30 days. To improve your unsubscribe rate, you can: Be upfront about time and frequency Informing your subscribers about how frequently and on which days you will send the newsletter builds trust and sets expectations. Send a follow-up email Send a follow-up email to the people who unsubscribed, and present an option to re-subscribe or redirect the user to another channel. It is even better if you use the follow-up email to ask for feedback. People unsubscribe for a number of reasons, and their feedback could help you learn how to improve the experience of other subscribers and maybe decrease your unsubscribe rate. Conclusion Growing your email list is simple, but engaging your subscribers and keeping them interested in and open to your email campaigns is a much harder, but also more rewarding, task. When analyzing the metrics, ask yourself how they can lead you to increase the value of your email campaigns.
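A minimal sketch that turns the five formulas above into code. The counts are made up (chosen to land near the averages quoted in the article), and the unsubscribe-rate denominator is assumed to be delivered emails, since the article doesn't spell one out:
def email_metrics(sent, delivered, opened, clicked, conversions, hard_bounces, soft_bounces, unsubscribes):
    """Return the five rates discussed above, as percentages."""
    bounced = hard_bounces + soft_bounces
    return {
        "open_rate": 100 * opened / delivered,
        "click_through_rate": 100 * clicked / (sent - bounced),
        "conversion_rate": 100 * conversions / delivered,
        "bounce_rate": 100 * hard_bounces / sent,
        "unsubscribe_rate": 100 * unsubscribes / delivered,  # assumed denominator
    }

# Example with made-up numbers for a 10,000-recipient campaign.
print(email_metrics(sent=10_000, delivered=9_800, opened=2_100, clicked=760,
                    conversions=190, hard_bounces=90, soft_bounces=110, unsubscribes=17))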
https://medium.com/analytics-for-humans/5-email-marketing-metrics-every-ecommerce-business-should-track-260e3f379f4d
['Mike Wagaba']
2020-12-22 19:15:44.910000+00:00
['Content Marketing', 'Analytics', 'Business', 'Marketing', 'Entrepreneurship']
3 Reasons Why You Should Get AWS Certified This Year
1. AWS is Quickly Becoming the Gold Standard of the Cloud AWS is leading the pack in almost every aspect. According to Gartner, Amazon’s cloud is 10 times bigger than its next 14 competitors, combined! This is bad news for the folks at Azure and Google Cloud Platform but it is great news for you. Whether you’re a web developer, a database admin, a system admin, an IoT developer, a Big Data analyst, an AI developer (and the list goes on and on), your life will be made much easier if you take advantage of Amazon’s platform. Their offerings touch almost every aspect of technology, and discussing them would be outside the scope of this article. They are constantly adding more offerings and innovating in a way that is leaving the competition in the dust. Gartner’s famous Magic Quadrant report has this handy graph, that shows AWS leading in every aspect of innovation and execution: 2. AWS Certifications Are Feasible and Within Reach Unlike other vendors, Amazon offers a realistic certification path that does not require highly specialized (and expensive) training to start. I am not saying that it is very easy to get certified, but you won’t have to quit your job and pay for expensive training to get your first AWS certification. As of early 2017, AWS offers 3 tiers: Associate tier: Certified Solutions Architect Associate Certified Developer Associate Certified SysOps Administrator Associate 2. Professional tier: Certified Solutions Architect Professional DevOps Professional 3. Specialty tier: Security Advanced Networking Big Data The most common approach is to start with the Certified Solutions Architect Associate. It is a great way to get familiar with the AWS ecosystem and core services. You are required to have an associate certificate before you can sit for the professional or specialty exams. Furthermore, AWS requires that you have your Solutions Architect associate certificate before you can take the Solutions Architect professional test, or that you have your Developer or SysOps Associate certificate before you can sit for the DevOps Professional test. As far as training, the best resource by far is A Cloud Guru. I passed all three associate certificates by relying mainly on their excellent courses. Ryan Kroonenburg and the rest of the A Cloud Guru team provide excellent training for AWS, Docker, and other cloud technologies and their courses are very affordable and unmatched in quality and content: https://acloud.guru Self-learners rejoice! With a bit of effort and discipline, you can become very proficient. Amazon also offers a free tier account so you can use most of their services for a year for free. The hands-on experience is crucial in your learning journey. 3. AWS Skills Are in High Demand and Pay Top Money According to Forbes, these are the top paying certifications for 2016: Need I say more? With that being said, please remember that simply getting the AWS Solutions Architect certification DOES NOT automatically mean that you will be making the annual salary indicated in the table above. Many other factors are at play here, including your other skills, experience, geographic location, etc. The point is, proving to potential (or existing) employers that you are competent in using Amazon’s cloud offerings will have a great positive impact on your career.
https://medium.com/hackernoon/3-reasons-why-you-should-get-aws-certified-this-year-7e44dbc51519
['Moneer Rifai']
2017-07-19 09:45:13.050000+00:00
['Career Change', 'Cloud Computing', 'AWS', 'Certification', 'Careers']
Deploy native Kubernetes cluster via AWS CDK
Hallblazzar · Aug 30 Preface It has been a year since I became an AWS Cloud Support Engineer. Looking back on the past 365 days, I feel none of them was unreasonably wasted. The people I met and the cases and issues I worked on made me grow, just as Amazon's philosophy says: "it's always Day 1". But I have to say that I really didn't have free time for side projects during this year. Challenges filled every single day: bleeding-edge technology, monthly goals, anxious users and urgent response times. Working in Support is really different from working as a Developer. Under most circumstances, pressure from users doesn't apply directly to a developer. I spent most of my time learning non-technical skills to handle users' issues more smoothly, for instance negotiating effectively and building trust with users. Now that I'm getting a little more used to working in Support, I'm starting to have some free time to do the interesting things I'd like to do. In this topic, I'll describe how I used the AWS CDK to design a script that deploys a native Kubernetes (K8s) cluster on AWS. In addition to CDK, I also use kubeadm as the core to automate the whole deployment process. If you'd like to check the final script directly, please see the repository on my GitHub, kubeadm-CDK. About this topic What information is included? How I designed the CDK script and shell scripts to deploy a native K8s cluster, and how to make them work. Some technical issues (bombs💣) I hit while implementing them. Future work. What information is NOT included? Basics of AWS CDK and AWS services. If you'd like to learn those, please refer to the official AWS documents and samples. Otherwise, if you buy an AWS Support plan, you can ask your AWS BD/TAM/SA for assistance, or create a support case for guidance (maybe I'll be the one providing assistance on the case 🤣 ). Basics of K8s. If you'd like to learn those, please refer to the official Kubernetes documentation. How to use CDK to deploy EKS. This topic is about deploying a NATIVE K8s cluster 🤣. 1. How I designed the CDK script and shell scripts, and how to make them work. In my opinion, to deploy an application or a service via CDK, I need to figure out: What AWS services are required as infrastructure? How do I automate the deployment process on top of that infrastructure? Infrastructure For the first question, deploying a K8s cluster requires the following AWS resources: EC2 instances to serve as master and worker nodes. An isolated VPC and subnets to ensure the EC2 instances won't be affected by existing VPC-related configurations. Based on the resources above, I also want to: Secure the control plane so that administrators can only access it privately. Open the fewest ports possible for both the control plane and the worker nodes, so they work properly while keeping the cluster away from unexpected traffic as far as possible. Therefore, the following resources are additionally required: An additional EC2 instance to serve as a bastion host. This bastion host should be the ONLY host that can access all ports of all hosts in the VPC. Security groups that satisfy the network security requirements. Once the required resources are clear, basic deployment scripts can be constructed with CDK's help (a minimal sketch of this piece appears at the end of this passage). K8s cluster deployment Now the problem is: how do I deploy the K8s cluster on the EC2 instances automatically? There are many existing K8s cluster deployment approaches, for instance kubespray, Rancher or Ubuntu Juju.
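Before moving on to how the cluster itself gets bootstrapped, here is a minimal sketch of the infrastructure piece described above, assuming the Python flavor of CDK v1. This is not the author's actual kubeadm-CDK code: the construct names, instance type, AMI filter, and the extra data volume are illustrative assumptions.
from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

class KubeadmClusterStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Isolated VPC so the cluster is not affected by existing network configuration.
        vpc = ec2.Vpc(self, "ClusterVpc", max_azs=2)

        # Bastion host: the only machine that may reach every port on every node.
        bastion = ec2.BastionHostLinux(self, "Bastion", vpc=vpc)

        # One security group shared by master and worker nodes, for brevity.
        node_sg = ec2.SecurityGroup(self, "NodeSG", vpc=vpc, description="K8s nodes")
        bastion.connections.allow_to(node_sg, ec2.Port.all_traffic(), "Bastion reaches all node ports")
        # Issue 1 below: CNI traffic between nodes can land on arbitrary ports,
        # so nodes must be able to reach each other on every port.
        node_sg.add_ingress_rule(node_sg, ec2.Port.all_traffic(), "Inter-node traffic")

        # Abbreviated bootstrap; the real shell scripts do much more (kubeadm init/join, CNI, etc.).
        user_data = ec2.UserData.for_linux()
        user_data.add_commands("apt-get update", "apt-get install -y kubelet kubeadm kubectl")

        ec2.Instance(
            self, "Master",
            vpc=vpc,
            instance_type=ec2.InstanceType("t3.medium"),
            machine_image=ec2.MachineImage.lookup(name="ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"),
            security_group=node_sg,
            user_data=user_data,
            # Extra raw EBS volume, the kind Rook Ceph later wants to consume (see the Rook note below).
            block_devices=[ec2.BlockDevice(device_name="/dev/sdf", volume=ec2.BlockDeviceVolume.ebs(100))],
        )
A worker instance would be declared the same way, with user data that runs kubeadm join instead of kubeadm init.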
After spending a lot of effort on testing and surveying over a long time, I finally decided to use kubeadm. The reasons are: The tool is officially developed, maintained and supported by the Kubernetes community. I gain more control over the installation than with other tools, but it also doesn't require me to manage every detail as I would if I installed everything directly by myself. Few dependencies, and little configuration for those dependencies, are required. I can simply use shell scripts to automate the deployment process with kubeadm. To install a K8s cluster via kubeadm, I just need to follow the instructions in the kubeadm installation guide and the cluster creation guide. Putting these instructions in shell scripts and injecting them into EC2 user data looks simple. Based on kubeadm's workflow, the CDK script and the shell scripts have to run in order: infrastructure first, then control-plane initialization, then worker join. But things are never as simple as I thought … 2. Issues 💣Issue 1. Security group rule. My first problem was that Pods on the Kubernetes cluster created by my script could not access the internet. In general, after bootstrapping a K8s cluster, the most important thing is making sure of network connectivity. It determines whether applications and services can work properly. The problem was, if I created Ubuntu Pods on the cluster and ran commands like ping and curl in these Pods via kubectl exec, I could observe that the following traffic couldn't be established: Pod <-> Internet [pinging an IP address worked, but pinging a specific domain name failed] Pod <-> Pod [pinging both cluster IPs and Service/Pod DNS names failed] Pods were unable to resolve DNS records or even reach the CoreDNS Pods. The first thing that came to my mind was the security groups. When I planned the security group rules, I had simply followed the port tables in kubeadm's documentation. But one thing the documentation doesn't mention is that cross-worker-node traffic uses random ports. It relates to how K8s networking works. When Pods sit on different worker nodes and want to communicate with each other, the CNI plugin on those worker nodes converts the source and destination IP addresses between cluster IPs and host IPs to ensure packets reach their destination. In that situation, the port in the packet is retained. Therefore, if worker nodes don't expose all ports to each other, cross-worker-node traffic is blocked and connections between Pods cannot be established. That is the reason I added the function __attach_inter_worker_access_rule() to the security group setup (the allow-all rule between nodes in the sketch above captures the same idea). After attaching the rule, it looked like all the network connectivity issues were gone. But the happiness didn't last long: when I deployed the CDK script again, a different issue occurred. 💣Issue 2. Node taint issue. This time, network connectivity became unstable. The symptoms were the same as the ones I encountered previously, but with different behavior: For Pod <-> Pod connectivity, connections were lost intermittently; some Pods could connect to each other, but some couldn't. DNS resolution also failed intermittently: some Pods could resolve DNS records and connect to the CoreDNS Pods, but some couldn't. So I had 2 choices: use tcpdump to analyze packets, or figure out what configuration led to the situation. I chose the latter. The reason was that the clusters were constructed from scratch, so they shouldn't suffer from underlying network issues; analyzing the process and the settings I used was a good starting point. My first decision was to try other CNI plugins. In the beginning, the one I used was Flannel, a simple and reliable solution.
However, the issue forced me to try other options such as Weave Net and Calico, but they were still unable to solve it. Therefore, I thought the CNI plugin might not be the problem. Then I started considering an OS-level issue. The image I used to create the instances was Ubuntu 20.04 (Focal Fossa). Though I could successfully bootstrap the K8s cluster, according to the kubeadm installation instructions it looked like Xenial (Ubuntu 16.04) was the release the kubeadm APT package source was built for. However, changing the OS version still didn't solve the issue. I also tried Ubuntu 18.04 (Bionic Beaver), but the problem persisted. As a result, I did the following: Tried different OS and CNI plugin combinations. Tried adding different wait and delay conditions in the shell scripts for worker nodes and master nodes. These attempts took almost a full week of after-work time! The main reason is that CDK is based on AWS CloudFormation, and using it to create and delete resources is significantly SLOW. In my case, it took up to half an hour just to create the EC2 instances (only the EC2 instances!!!!💀💀💀💀💀) (wait/delay condition time not counted!!!!💀💀💀💀💀). Just as I was deciding to give up on kubeadm, it occurred to me that I should check which nodes the Pods with network connectivity issues were running on, and I found the root cause: those Pods were created on master nodes. Based on my security group rules (following the kubeadm recommendation), the master node only allows traffic from worker nodes to reach its port 6443. Therefore, if Pods were scheduled on a master node, being unable to establish connections with CoreDNS and the Pods on worker nodes was expected. However, according to the kubeadm troubleshooting guide, by default the node-role.kubernetes.io/master:NoSchedule taint is applied to control-plane nodes, so Pods should NOT be able to be scheduled on master nodes. To make sure of that, I added the corresponding taint settings to the kubeadm configuration YAML file, and it WORKED!!!! Such a great document!!!! Thanks, Kubernetes!!!!! 💣Additional Issue. Rook Ceph To give the cluster created by the script persistent storage, I also added instructions to deploy Rook Ceph to the scripts. But the problem was, no matter how much disk space I allocated to the EC2 instances, the ceph status command (run via the rook-toolbox) always threw an error that no disk space could be used. After checking the logs of each Pod created by Rook Ceph, I found messages saying that no disk could be mounted. Therefore, I tried attaching an additional disk to the nodes in the CDK scripts, and it WORKED 💀. However, I think I just solved the issue by dumb luck. To use Rook Ceph as persistent storage for a K8s cluster, a solid understanding of Ceph is required. That will be one of my future work items. 3. Future work. As I mentioned in the preface, you can see the final scripts in my personal GitHub repository, kubeadm-CDK. If you encounter issues while deploying, please feel free to let me know via GitHub issues. I'd be glad to provide assistance. Besides that, here is what could be improved in the project: Allow the script to deploy a multi-master control plane. More CNI plugin options. More persistent storage options (based on Rook). A concrete IAM permission list for deployment. If you're also interested in the project, you can also star it on my GitHub. Any advice and feedback are welcome!
https://medium.com/hallblazzar-%E9%96%8B%E7%99%BC%E8%80%85%E6%97%A5%E8%AA%8C/deploy-native-kubernetes-cluster-via-aws-cdk-ea0a9430e648
[]
2020-09-05 04:32:01.691000+00:00
['Aws Cdk', 'Kubernetes', 'Kubeadm', 'Programming', 'AWS']
Why Do Most Prices End in 99?
Where’s the Evidence? Psychological pricing has been studied for decades, and we’ve come to some very satisfying conclusions. In 1997, Kaushik Basu used game theory to explain how rational consumers use their time and effort when it comes to calculations. He gave an economic explanation for why this happens. Photo by wu yi on Unsplash Consider a large enough marketplace with many products. Here, each consumer is rationally making decisions. When they read the price from left to right, they ignore the last two digits of the price as they value their time and effort. What they do instead is associate them with the mean cent component of all the products in the marketplace. What matters to the seller is the dollar component of the price in determining the demand for the product. The seller can now change the cent component without significantly affecting consumer behavior and the average cent component according to the demands for the product. If I am a seller, and I set my cent component to 99, I am raising the average cent component and directly hurting other brands. So, other brands choose 99 too. The average is now so skewed towards 99 that the average won’t be changed by a single seller, which implies the consumer behavior won’t be changed too. Consider each brand as a player, and each player’s strategy is to choose 99 as the optimal cent component. 99 is the optimal number to maximize profits for all the players. It has been further supported in an experimental study by Bradley J. Ruffle.
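The argument above can be made concrete with a toy simulation: assume buyers read only the dollar part and mentally substitute the marketplace's average cent component, so a single seller's choice of cents changes its margin but not its demand. Under exactly those assumptions (and a made-up linear demand curve), cents of 99 maximize revenue:
def demand(perceived_price, a=1000.0, b=8.0):
    # Made-up linear demand curve; only the perceived price matters to buyers.
    return max(a - b * perceived_price, 0.0)

def revenue(dollars, cents, market_avg_cents=0.99):
    perceived = dollars + market_avg_cents   # what buyers react to
    actual = dollars + cents / 100.0         # what the seller actually collects
    return demand(perceived) * actual

best_cents = max(range(100), key=lambda c: revenue(19, c))
print(best_cents)  # -> 99: once buyers ignore the cents, every extra cent is pure margin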
https://medium.com/better-marketing/why-do-most-prices-end-in-99-5aae32953792
['Binit Acharya']
2020-10-23 14:01:01.548000+00:00
['Economics', 'Psychology', 'Business', 'Money', 'Marketing']
How a Good Night’s Sleep Boosts Productivity
Photo by pixpoetry on Unsplash As an entrepreneur, you are inclined to be a workaholic. You plan out each day, where every portion of your time is carefully allotted to a particular activity, and you wrestle with the schedule to fulfill it. Being an entrepreneur, I can understand why many CEOs and business people opt to de-emphasize their relationships, personal goals, and well-being to give their all to their projects. As a result, sleep, one of the primary necessities, gets treated like a luxury and thrown out the window. It is more common than you might anticipate. Reasons for Entrepreneurial Insomnia The problem with running a business is the complexity that tags along, which requires full-time dedication to tackle. All of this can also increase stress. I have found that to be very true. With stress comes the guilt that if you take some time off or focus on yourself, you might end up lagging behind the competition. "Successful People Sleep Less" Jack Dorsey (Twitter founder), Marissa Mayer (former Yahoo CEO), and Indra Nooyi (PepsiCo CEO) are some of the names thrown around when people talk about entrepreneurs who work more than 16 hours and sleep less than 4. They are hailed as champions, so the younger generation of entrepreneurs looks to them for inspiration. This is where, scientifically speaking, they may be doing themselves more harm than they are willing to consider. It got me thinking about the importance of sleep and wondering what the general consensus about sleep was in the business community. The Other Side of the Picture I think this is one of the major hurdles new and upcoming entrepreneurs and businesspeople face as soon as they begin focusing on their time. Contrary to popular belief, a good night's sleep is an essential ingredient for staying healthy. Especially during pandemic times, when our immunity is already at risk, we need to focus on our sleep schedule. While finding ways to manage my sleep schedule during quarantine, I stumbled upon a story. The Huffington Post's founder, Arianna Huffington, once woke up in her home in a pool of her own blood. She had gone without sleep for some time, and it resulted in her blacking out and hitting and rupturing her cheekbone against her desk. She was not suffering from any disease; she was simply exhausted and had no time to sleep. I was fascinated by Huffington's realization about entrepreneurs' struggle with sleep and decided to dig a little further into the sleeping habits of the world's most prominent entrepreneurs. Top business moguls such as Elon Musk, Tim Cook, Bill Gates, Sheryl Sandberg, and Jeff Bezos all preach a 7 to 8-hour sleep cycle. In an interview, Musk admitted that he sleeps at least 6 hours a night (from 1 am to 7 am), although he confessed that he could stay up longer if he wanted. Similarly, Bill Gates has disclosed that he sleeps 7 hours per night (from 12 am to 7 am). I found that Apple's CEO Tim Cook sleeps 7 hours every night (from 9:30 pm to 4:30 am). Sheryl Sandberg balances her busy days at Facebook by sleeping 7 hours (9:30 pm to 5 am). Scientific Basis for Better Sleep Of course, as I started digging deeper, I found a good deal of research on entrepreneurs and their circadian rhythm, the body's biological clock that determines when to release the hormones that influence sleep. The most helpful study I found online was by Gish et al., who examined the effects of poor sleep habits in 784 entrepreneurs.
The data of the research showed lower sleep cycles continuously result in lower sympathy and friendliness among entrepreneurs. The researchers also added how low sleep cycles lead to heart and kidney diseases, diabetes, stroke, and high blood pressure. Partial sleep deprivation has undoubtedly affected me personally. A study done at the University of Pennsylvania found that even people with partial sleep deprivation can lead to them facing work irritability and inability to control anger. This change in mood can adversely affect your skill as a leader and an employer. In all of the research I did online, I found that it is not about increasing sleep time but actually about better-quality sleep. Photo by Taisiia Stupak on Unsplash Setting A Healthy Sleeping Routine Here is how you can fix your sleep cycle to boost business productivity that I tried to implement in my life, based on research and practice: Purchase the right pillow and mattress Believe it or not, having a quality pillow and mattress is directly related to sound sleep, and smart entrepreneurs know how to manage their precious resting time by investing in them. If your mattress is too soft or too hard, you will end up tossing and turning all night. You would even bother your partner and wake up cranky and groggy. Similarly, you should purchase pillows that are not too high or low to strain your neck muscles. They should align with your shoulders as you lay. If you have allergies, invest in hypoallergenic pillows. Change your dietary choices For managing businesses and for being on top of the game all the time, entrepreneurs consume foods and beverages to increase productivity, e.g., caffeine and alcohol. A general rule of thumb that I have found to be helpful is restricting caffeine consumption after 12 pm, as the substance stays in our systems longer than we think it does, and it is a key factor in disturbing sleep schedules. It goes the same for people who have a high tolerance for caffeine. Do not take sleeping pills Do not take sleeping pills right before going to bed. Even though it may seem like an easy fix, studies have shown that sleeping pills make people more tired and irritable. If you have to use a sleeping aid, choose relaxation supplements over sedatives. Adopt a healthy lifestyle Since this hack works in general for anyone, especially people facing the challenges of COVID-19, I will highly recommend changing your lifestyle. This includes working out each day for about 20 minutes, eat healthy, drink plenty of water, and try to meditate. Meditation helps in calming us and relieving us of our daily stressors, which can definitely help in falling asleep earlier. Pay attention to your sleeping environment To better help you with falling asleep, you can ensure that the place you sleep at is quiet, dark, and a cool room. Optimize the room temperature at around 60 °F — 75 °F (16 °C — 24 °C). These qualities improve the nature of sleep and make it fulfilling. You can also experiment with other soothing products or rituals. Many people recommend candles to help you with sleep. Others recommend a warm bath. Some people prefer reading books or listening to ambient or white noise before bed to help induce sleep. Establish consistency You can try to keep a journal to maintain your sleep schedule. There are some researches that consider keeping consistency as mute. Others argue that it might prove useful. This method might help in your case. 
The times you go to sleep and wake up are both important, and it helps develop a habit if you stick to it. This might prove to be a little difficult initially, but habit-building does not come easily. Reduce the use of technology I had always heard people saying that we should not use blue-light gadgets just before going to bed, but I had never paid attention to its root cause. The way blue-light works is that it hinders the production of melatonin that induces sleep. So, to help me manage my sleep schedule, I stopped my social media and technology use an hour before sleeping, and I found it immensely helped me maintain a fixed sleep cycle. Photo by Sincerely Media on Unsplash Avoid daytime naps Although entrepreneurs hardly get any free time to take naps, but if you do, avoid going to sleep. When I searched about this tip, I found out that eliminating multiple circadian cycles results in elevated diabetes levels, damage to the immune system, and your metabolism. Hack your sleep This method is a cheat for sleeping for eight hours or so and has a fair share of its own critique. Some entrepreneurs claim that doing a short interval of sleep helps them. They propose a two-part sleep system, where they wake up in the middle of the night to do some tasks. I am not a huge fan of this tip, but if it suits you, then you should experiment with this tip to see if it works in your case. Bottom Line Despite what many entrepreneurs claim, a single-phased, uninterrupted sleep of 6 or more hours has been scientifically proven to increase workplace productivity and decision-making abilities. It also keeps us energetic and rejuvenated to help us better grasp at opportunities. I feel that entrepreneurs should definitely start talking more about improving their sleeping patterns and how external factors and stressors influence the circadian rhythm.
https://medium.com/illumination/entrepreneurial-insomnia-how-a-good-nights-sleep-boosts-productivity-5deff483ada9
['Changwon C.']
2020-12-15 14:24:50.887000+00:00
['Sleep', 'Business', 'Insomnia', 'Productivity', 'Entrepreneurship']
Save money when using AWS Managed Kafka (MSK)
pascal.brokmeier · Jun 28 Let's be blunt here for a second: MSK is not a mature managed service. The author of that post may have changed his mind in the meantime, but I have not. A simple Kafka cluster on AWS runs you ~$500. Do you want SSL with that? Well, better factor in another $500 for a private certificate authority, because that's the only supported authentication mechanism on MSK. If you enabled the detailed metrics, well, you are in for a surprise. We are currently facing almost 10k metrics on our 2 clusters (6 nodes total), running us another $3,000 on the CloudWatch bill. So here are 3 tips to save on MSK: 1. Private CA If your org has several AWS accounts, make sure you only register one PCA for the whole organization. We automated the creation of a CA with Terraform, one per environment. That ran us almost $2,000 just to be able to create certificates. Probably not the best idea. 2. Disable detailed metrics on the AWS console If you happened to enable enhanced topic-level logging (like we did) and then never disabled it, you probably saw your CloudWatch costs rise nicely. We had around 8,000 metrics on DEV and 2,000 on the ACC(eptance) environment. As a frame of reference, each topic creates between 5 and 15 metrics, so each topic cost us ~$1.50-$4.50 a month. Thinking we pay for the cluster already, we didn't think much of creating topics and leaving them on the cluster. It's a shame you cannot enable the metrics only for specific topics on MSK.
# fetch the metrics from the API and create a newline-delimited JSON
aws cloudwatch list-metrics | jq '.Metrics[]' -c > /tmp/metrics.json
# load into pandas and use the script from https://stackoverflow.com/a/57334325/1170940 to flatten the JSON
Alright, so disabling detailed metrics on DEV alone saved over $2,300. We will use Prometheus instead, which we already have deployed on our K8s clusters. If you can't give up the per-topic metrics, consider only enabling them on the prod cluster. If you can, try switching to Prometheus. Grafana works great with it and only needs a small instance (or pod). 3. Don't scale up unless you absolutely have to The minimum setup for AWS MSK is 3 nodes, one per AZ. It's a cluster technology, so naturally you want to scale up under load. Well, you'll have to go for 6 nodes then: you can only scale up across all AZs at the same time, so that is 3, 6, 9, 12, … nodes. Here's the fun part: Kafka is a storage system, so don't think this works like Lambda functions or K8s. Once you scale up, you won't scale down again. Hindsight tip: Consider not using Kafka This may be the GCP-favoring engineer in me talking, but consider not using Kafka. Kafka links compute nodes and storage together. Technologies like BigTable or BigQuery disconnect compute from storage, so you can scale compute independently of storage. MSK, however, is basically a set of nodes running on EC2 instances, storing Kafka's partition data on disk. If you want to go for an event-driven organization, and maybe even integrate many databases through Kafka Connect, you may need a lot of storage. You can increase the volume storage per node, but once you hit a CPU bottleneck, you have to choose: wait and suffer the performance, or upscale and pay for it from that day forth. Kafka has an active community and offers lots of tools, like Kafka Connect and Debezium, which let you generically stream events from many RDS or BI tools. But the bottom line to me is: Kafka is an enterprise technology.
If you don’t waste a second thought on spending another 1.000$/month on infra, kafka is an option. But if these sort of numbers make you think if you’re doing the right thing for even a few seconds, kafka may be too big for you. Consider cloud native alternatives instead!
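The same counting exercise as the CLI one-liner above, plus the cost arithmetic, in a few lines of Python. The AWS/Kafka namespace is where MSK publishes its metrics; the flat $0.30 per metric-month and the region are assumptions (CloudWatch prices metrics in tiers, and the default monitoring-level metrics are free), so treat the result as a ballpark:
import boto3

PRICE_PER_METRIC = 0.30  # USD per metric-month; assumed first-tier CloudWatch price, check your region

def count_msk_metrics(region_name="eu-west-1"):  # region is an assumption
    cloudwatch = boto3.client("cloudwatch", region_name=region_name)
    total = 0
    for page in cloudwatch.get_paginator("list_metrics").paginate(Namespace="AWS/Kafka"):
        total += len(page["Metrics"])
    return total

metrics = count_msk_metrics()
print(f"{metrics} MSK metrics -> roughly ${metrics * PRICE_PER_METRIC:,.0f}/month")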
https://medium.com/datamindedbe/save-money-on-msk-661ff2c9c06b
[]
2020-06-28 09:47:28.441000+00:00
['DevOps', 'AWS', 'Streaming', 'Amazon', 'Kafka']
How Netlify migrated to a fully multi-cloud infrastructure
Netlify’s platform team is responsible for building and scaling the systems designed to keep Netlify, and the hundreds of thousands of sites that use it, up and running. We take that responsibility seriously, so we’re constantly looking for potential points of failure in our system and working to remove them. Most of our platform is cloud agnostic, and since we favor an approach that minimizes risk, we wanted to extend that to include our origin services. This led to a project that culminated in us migrating our origin services between cloud providers on a recent Sunday night — without any service interruptions. When you deploy a website to Netlify, your content automatically gets pushed to the edge nodes in our Content Delivery Network (CDN). If you make changes in your Git repository, your content is continuously and automatically synced with our origin servers and pushed to the CDN. Our CDN runs on six different cloud providers, but up until recently, our origin servers relied on only one. That meant our origin servers were subject to the performance and uptime characteristics of a single provider. We wanted to make sure that in the face of a major outage from an underlying provider, anywhere in our network, Netlify’s service would continue with minimal interruption. Our origin services can now swap between three providers: Google Cloud (GCP), AWS, and Rackspace Cloud without downtime. This post will cover how we planed, tested, and executed our first multi-cloud migration Planning for change The main goal for this project was to have a multi-cloud production system where we could direct traffic to different providers on demand, without interruptions. Our primary sources of data are the database and the cloud store — the first for keeping references to the data and the second for the data itself. To keep latency down, we needed to run a production-ready copy of both sources in every provider. They also needed to have automatic fallbacks configured to the other clouds. For example, Google instances prioritize Google Cloud Storage (GCS) and fallback to S3 automatically. We first built a tool called cloud bench that would check the performance of uploads to different cloud storages. It showed us what we suspected: upload to a cloud storage was faster if it stayed in the same provider. We also confirmed that pricing was better if uploads stayed inside the same network. We went through the different services that touched the data in the cloud provider, extracting simple Get/Put interfaces for that data and implementing concrete versions for Google, Amazon, and Rackspace. We then released the service, with the abstracted interface, that would use Rackspace — our current cloud provider. Then we started to test that service with the different implementations for the other clouds. AWS and GCP offer more robust load balancers than what we were using. Rackspace is limited to around 10GB/s, which is a lot under normal operation, but not in the event of DDoS. Both Google and Amazon don’t place any limits, saying that the frontend load balancers will scale up as needed to support the traffic. Data Replication Netlify customers have deployed terabytes of data and they’re deploying more every day. Before the migration, all of this data was stored in a single service: Rackspace’s Cloudfiles. Our first challenge was to build a system able to replicate the millions of blob objects we stored and start replicating new data as it came in. 
Our approach attached a bitmask to the object reference that indicated where it was replicated. We did this to keep down the bloat on the database as there are millions of entries and adding long strings to each would balloon the database size. It also meant that any service that needed to use that blob now knew from where it could load the data. This bitmask opened the way for intelligent prioritization in different services. We built a Go service that would inspect an existing blob, figure out where it needed to push it to, actually push it, and record its progress. The service was able to act in both online replication and batch backfilling modes. We fed a few million objects to the service for replication. After manually checking those blobs, we started two processes. One that would process newly-uploaded objects and one to backfill the existing objects. We chose a passive replication strategy for the objects; the service would come through and clean up from time to time. A more active strategy, like a 3-phase commit, introduced risk into the upload phase. For instance, bugs or provider issues could slow or disable uploads completely. It is also a simpler change in our system as it didn’t impact the request chain at all. We could iterate on the service with minimal risk to the primary flow. There is only a small fraction of objects that aren’t replicated in the event that we swap providers. Objects are available across providers and the system automatically handles that. Recovery The system is flexible enough to handle recovering quickly; we’d take degraded performance over an outage any day. Traffic can be quickly redirected internally by updating service configurations. We can direct traffic to any of our clouds, including splitting traffic, in order to keep from dropping requests. Of course there are cost and performance considerations when doing this, especially when crossing cloud boundaries, but uptime is the priority. Just the other day we had to configure a cross cloud setup. We detected a latency issue in our Google integration and manually intervened to swap reads from S3 while we investigated. Once we worked out the kinks with the integration we returned the configuration to its resting state, preferring the cloud the service was in. Doing things manually is often too slow. We needed the system to be smarter, preferring to serve the request even if it would be slow. To accomplish this, we made it so that our services all had fallbacks for each provider. The service could look at the bitmask of the content it was serving and then try each of the providers by priority. If the node was in Google, it would prefer GCS, but in the event of an error it would automatically load that from S3 or Cloudfiles as appropriate. Database considerations As with most companies, our database sizes keep going up. In Rackspace we had to manually create LVMs and rehydrate the instance. Doing this caused one of our secondary nodes to be unavailable for around a day. We run five fully-replicated nodes spanning two providers, meaning it wasn’t an unsafe procedure, it was a unnecessary risk. Now in Google and Amazon, we can provision much larger disks, as well as do quick resizing of the attached disks. We don’t have to completely destroy the node, just suspend, resize, and start it. One of our main worries in the migration was doing the database step down. We had a chicken and egg situation: either we move the origin traffic then the database master or vice versa. 
In either case we would increased the latency-per-query. We do around 1500 req/sec and if we introduce extra latency by spanning providers, we could put enough back pressure on the system to start dropping requests. It would only be a few minutes while traffic swapped over, but it was a definite risk. To compensate, we made sure that we were at a low point in traffic, that we had plenty of capacity to swap over, and that were quick about it. We ended up choosing to move the database master then origin traffic. Ultimately, it was ultimately not a concern. The query times jumped up but not enough to impact serving traffic. Testing for a full migration After we updated all of the services we could to be cloud agnostic, we started testing. Lots of it. We needed to test all the ways that people interact with the systems that touch a provider, like uploading assets and deploying sites. We tried to be exhaustive because we knew that the first cutover would be risky — it was picking everything up and shifting it. If we missed one hard-coded path it could have tanked the whole migration. The driving question of the testing phase was, “How will we know when it isn’t working?” That drove us to create a document with all the outage scenarios we could come up with. We made sure we had the monitoring in place to detect errors, as well as code that should handle the event. Once we had a reasonable unit test suite, we spun up all the boxes and services in the request chain as if they were production. We started to run sample requests through those services to verify that nothing was being dropped and timings were acceptable. We found some gotchas and quickly iterated on fixing those. Once confident that our staging area was working right, we started to send it a small trickle of production traffic. We carefully watched all the charts and monitored all the incoming requests possible, making sure that none of them were dropped or stalled. We did this in each provider — carefully verifying that the services were preferring the right providers, that response times were roughly the same as the actual production system, and that no service got backed up from increased latencies. Now it was just a matter of actually picking up the origin and migrating it to another cloud provider. Actually pushing the button Now that the whole platform team was confident that Netlify’s origin servers were sufficiently agnostic, we had to do the actual cutover. The steps to the task were: Spin up the new capacity in the secondary provider. Fail the database primary over to the new provider. Update the entries in our service discovery layer. If successful, all the edge nodes in the CDN would automatically start to send traffic to the new nodes as the value rolls out. Each of the edge nodes would start to prioritize instances that are in the same provider and traffic would flow uninterrupted. We chose a Sunday night to do the actual migration because that’s when Netlify sees its lowest traffic levels. The whole platform team got on a hangout, loaded up our dashboards, and prepared to aggressively monitor for errors or degraded performance that might result from the steps to come. Then we started pushing the buttons. We narrated each step, coordinating, and talking about any issues we saw. Thanks in large part to the planning and testing that went into the migration, the night was mostly spent chatting while the migration went off without a hitch. 
As a result of the migration, we can now swap between different cloud providers without any user impact. This includes the databases, web servers, API servers, and object replication. We can easily move the entire brains of our service between Google, Amazon, and Rackspace in around 10 minutes with no service interruptions. Side note: Netflix’s world-class engineering team recently got failovers down from nearly an hour to less than 10 minutes. We’re going to work our tails off to beat their record. What comes next? We have tested that we can fail our origins over to the other providers as well; we just haven’t done the full failover yet. Going forward we are going to be adding more monitoring (as always on a platform team). We also set up a live standby in the other providers that we will be sending a trickle of real traffic to in order to make sure it is always ready. Then we are going to take steps to reduce the amount of time it takes for us to switch — our goal is under a minute — and remove any manual steps. As we refine the process and add more insight, we think that we can reach that goal. Every investment in our infrastructure introduces some level of risk, and this project was fraught with it. I joined Netlify two years ago and we’ve talked about the importance of this project almost weekly since then. In that same time, it became increasingly clear that our current cloud provider could not offer us the quality of service we want to provide to our users. To me, this was a huge accomplishment and a great example of what a good team like ours can do. If you’re looking for a home on a platform team that’s doing exciting work, we’re hiring. Let’s have a chat or a coffee. This post was originally published on the Netlify blog.
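Appendix: a minimal sketch of the bitmask-driven provider fallback described above. This is not Netlify's actual Go service; it is an illustrative Python sketch, and the provider flag values, fetcher callables, and function name are assumptions for the example only.

import logging

# Hypothetical bit values for the replication bitmask stored on each object
# reference; the real values and storage format are not published in the post.
GCS = 1
S3 = 2
CLOUDFILES = 4

# Preferred read order for a service assumed to be running in Google Cloud.
PRIORITY = [GCS, S3, CLOUDFILES]

def read_blob(object_id, replicated_in, fetchers):
    """Try each provider the bitmask says holds the blob, in priority order.

    replicated_in: integer bitmask taken from the object reference
    fetchers: dict mapping provider flag -> callable(object_id) -> bytes
    """
    last_error = None
    for provider in PRIORITY:
        if not replicated_in & provider:
            continue  # the blob was never pushed to this provider
        try:
            return fetchers[provider](object_id)
        except Exception as err:  # e.g. a provider outage or latency spike
            logging.warning("provider %s failed for %s: %s", provider, object_id, err)
            last_error = err  # fall through to the next provider
    raise RuntimeError("no provider could serve object %s" % object_id) from last_error

A service running in AWS would simply reorder PRIORITY, which mirrors the "prefer the cloud the service is in" behaviour described above.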
https://medium.com/netlify/how-netlify-migrated-to-a-fully-multi-cloud-infrastructure-cfca95bbbb0f
['Ryan Neal']
2018-05-14 19:50:32.521000+00:00
['Google Cloud Platform', 'Distributed Systems', 'Rackspace', 'AWS', 'Cloud Computing']
Sea Change in the High North
When did you join the U.S. Fish and Wildlife Service? Kathy: I first started as a seasonal wildlife biologist technician in 1978. I worked off and on, and then became permanent in 1997. Robb: Cumulatively, I have spent 13 years working for the U.S. Fish and Wildlife Service; first seasonal position was 2002, followed by a 4-year term position in 2010, and then a permanent position in 2014. Liz: I started in Alaska as a biologist for the U.S. Geological Survey in 1999, and then began working for USFWS in 2001, and became a permanent biologist in 2013. What do you think people would find most surprising about your job? Liz: Maybe that they think of the agency’s mission as wholly land-based. In Alaska we also do a great deal of research in coastal and offshore marine environments. Seabirds will typically only come to land for a few weeks out of the year to nest and raise their chicks. They spend the rest of their time at sea. Kathy: They’d probably be surprised by how much creative and interpretive work is involved. People tend to think of USFWS as a permitting and regulatory agency, but we do a great deal of hands-on field work, including projects involving international effort and cooperation. That’s necessary, of course, due to migratory species — they don’t recognize national boundaries. Robb: That as a wildlife biologist, 90–95% of my typical work day is NOT spent thinking about biology, but instead reading/responding to emails and managing my inbox. How do Alaska’s wild places sustain you? Kathy: Just the fact that I live in one of the world’s most beautiful places is sustaining. I really feel at my best when I’m outside, either at sea or in the wilderness. Liz: I feel rejuvenated every morning when I step outside or go up to the bridge on a boat. They say Alaska begins 10 minutes from downtown Anchorage, and it’s true. Being surrounded by such a vast expanse of wilderness is an inspiration. Robb: Wild places connect me to why I am a civil servant working to keep wild places wild. What’s your foremost concern about Alaska’s wildlife resources? Liz: Speaking specifically to seabirds, I’d have to say the changes in the marine environment — especially the pace of these changes, including receding tidewater glaciers and the decline in annual sea ice thickness and extent. Sea ice triggers spring blooms of algae forming the base of the marine food web. When the sea ice declines, marine productivity can decline along with it. I’m also concerned about the potential impacts of increased vessel traffic in the Arctic that is accompanying the decline in sea ice and how those changes can potentially impact the marine resources in the region. Kathy: The impacts of climate change. It’s affecting ecosystems everywhere, but we’re feeling it at multiple levels here, and at such a rapid pace we notice year-by-year changes. Wildlife adapts over geologic time. In Alaska, wild species have been able to adapt to extended periods of glaciation and de-glaciation. But they’ve never had to adapt to anything like we’re currently seeing in such a compressed time scale. Robb: My foremost concern about Alaska’s wildlife resources is that management goals and objectives sometimes seem to have little regard for what future generations of Alaskans will inherit. When I’m not at work, I’m… Robb: I am biking with my wife Leah, and dog Otto, on the awesome trails and pedestrian boulevards around and throughout Anchorage. Kathy: Outside, and probably birding. 
Liz: I’m outdoors, just doing as much as I can and enjoying Alaska — running, biking and gardening in my backyard. Being a marine biologist sometimes gives the impression that you’re always outside, but so much of what we do is behind a desk and in front of a computer. I have to get out whenever possible to remind myself why we do this work. What’s the greatest misconception visitors have about Alaska? Robb: Perhaps that you can “see it all” in one trip. I’d say it takes at least three trips to Alaska and even that is only the tip of the iceberg (which you should try to see during your first trip). Kathy: They expect to see wildlife in spectacular concentrations, doing exciting things all the time. As anyone who has spent any time in the wilderness knows, you typically have to devote many hours to being outside, being quiet, watching and waiting, to see anything at all. Also, people are often surprised by how intensely green it is here during the spring and summer. Liz: They often misunderstand the scale of the state, and how Alaska’s various regions differ so dramatically. In Southeast Alaska you have old-growth forests, while the interior is a mix of spruce, birch and tundra, and the Aleutians don’t have any trees at all. Alaska is vast and extremely diverse ecologically. What’s your most treasured memory of Alaska or your job? Liz: There are so many — including just being at sea. A special moment was my first time in the Bering Strait. I had a family member tell me about the time they spent stationed in this region years ago on the Russian side of the border. I was touched to think about his experience and having the opportunity to see this unique part of the world for myself. When I was there in May, the Bering Strait was packed with sea ice surrounding Big and Little Diomede Islands, right in the middle of the strait. The ice was impassable even for the U.S. Coast Guard icebreaker that we were on. Robb: Sharing our “own island” for six summers in the western Aleutians with my life partner, Leah. Kathy: A touchstone experience for me was working on Naked Island in Prince William Sound. It’s this spectacular, tidal-influenced wilderness. Your days revolve around the tide coming in and out, and feeling the entire island breathe as a single organism. What advice would you give people who want to work for the U.S. Fish and Wildlife Service? Kathy: Contact the people working in a field that interests you, and be flexible in terms of what you’re willing to do. Liz: It’s like my mentor, Guy Baldassarre, said to me: take opportunities as they come. You never know what you might learn, or where it will take you. Just engage, and see where things lead. Robb: There are a lot of options for career paths at the world’s premier conservation organization. I would tell people to (i) think long and hard about what gets you excited or what you find rewarding about wild places, fish and wildlife, then (ii) seek out a career that will result in helping save some of that for future generations to enjoy. What wildlife species particularly inspires you? Robb: Seabirds and marine mammals. I am inspired by how seabirds and marine mammals make a living wandering the ocean, which on its surface looks barren and can be unforgiving. Kathy: The Laysan Albatross. It nests in Hawaii, and then soars all the way across the Pacific to Alaska to feed. I’m always excited when I see one. Liz: Well, I’m biased, so I’d have to say seabirds in general. They’re out there making a living on the open ocean, and it isn’t easy.
We have three species of albatross in Alaskan waters, and I’m so impressed by the distances they travel. And fork-tailed storm petrels are these tiny gray birds that use the air currents in cresting seas to dance over the waves. When you’re watching a storm-petrel fly near the surface of the water, sometimes it looks like it’s going to be knocked out of the air by a massive wave, and at the last moment it just casually flits over the top of the wave crest with seemingly little effort at all. Whether they’re large birds like the albatrosses or tiny ones like the storm petrels, they all use the resources of the ocean, each in its own way, to survive. How and why did you come to Alaska? Kathy: I spent most of my childhood at the China Lake Naval Weapons Center in California, where my dad worked as a chemist. It’s in the Mojave Desert, and I loved it — there were so many places to roam and explore. But then I took my undergraduate degree at California State Polytechnic, San Luis Obispo on California’s Central Coast, and I was fascinated by the ocean. It seemed exotic to me, and I made my first positive seabird identification there — a Pigeon Guillemot. After graduation, I took a job in the Fisheries Division of the Fish and Wildlife Service, sampling streams and lakes. I went back to school and completed my Master’s Degree from the University of California at Irvine with research focused on Pigeon Guillemots at Naked Island in Prince William Sound. I ended up living on the Kenai Peninsula. I did some private consulting, fished commercially with my husband, we had our son, and then the Exxon Valdez Oil Spill happened in Prince William Sound, and I conducted damage assessment and recovery studies for Fish and Wildlife. I got my PhD at the University of Victoria in British Columbia based on that work, and I’ve been working for the agency ever since. Robb: I was raised in Spokane, but my love for the outdoors grew out of visits to my grandfather’s cabin on Priest Lake in northern Idaho. I took my undergraduate degree at The Evergreen State College in Washington, where I fell in with a bunch of birders from the East Coast. Birding became both a passion and a gateway to my career. After college, I had a variety of bird-related jobs. I worked for the U.S. Forest Service in Northern California surveying Marbled Murrelets and Spotted Owls, Peregrine Falcons at Vandenberg Air Force Base for the UC Santa Cruz Predatory Bird Research Group, then finally came to Alaska in 2001 to survey Spectacled Eiders and other seaducks for a private consultant who had a contract in Prudhoe Bay on the North Slope. After that I studied ovenbirds in shaded coffee plantations in Jamaica, went back to Alaska to work for the U.S. Fish and Wildlife surveying Common Murres, Thick-billed Murres and Red-faced Cormorants at the Alaska Peninsula Becharof National Wildlife Refuges, studied Loggerhead Shrikes on San Clemente Island off the California coast, and then returned to the Alaska Peninsula to work for USFWS again. I spent six summers surveying seabirds and Evermann’s Rock Ptarmigan on Agattu and Attu Islands in the Aleutians, work that also was the basis for my Master’s Degree from Kansas State University. I worked seasonal and term positions for USFWS through that period until I was hired permanently in 2014. Liz: I grew up on a small dairy farm in upstate New York. 
It was a wonderful place to be free to explore the woods and creeks, catch frogs and watch birds— it all instilled a love of nature that carried through to my undergraduate work at the University of New York’s College of Environmental Science and Forestry. When I was a junior in college I got my first job in Alaska working on a summer waterfowl project on the North Slope for the U.S. Geological Survey. I spent three months on a remote island in the Arctic called Flaxman Island in the Beaufort Sea researching and banding long-tailed ducks and eiders, and I just loved it. I learned a lot on the biology side, and I also learned how to live and work with people in a remote field camp setting. I went back to the Lower 48, got my degree, and then returned to work at Flaxman Island again for USGS the following summer, followed by a wading bird foraging ecology project in Cape Canaveral in Florida. Then in 2001 I got a call from Kathy to come back up to Alaska for a seasonal summer position surveying Kittlitz’s Murrelets in Prince William Sound. The temporary seasonal position with USFWS evolved over time — and I never went back. I’ve been here ever since.
https://alaskausfws.medium.com/sea-change-in-the-high-north-53c7f7fe90ba
['U.S.Fish Wildlife Alaska']
2020-07-13 19:04:09.991000+00:00
['Oceans', 'Climate Change', 'Environment', 'Birds', 'Science']
The Silicon Lie: It’s Time To Wake Up From The Dream
I’ve spent my entire career in technology, and now I am leaving. I remember walking the halls of my university’s computer science department — hopeful in tech and woefully naive. The chair of the department approached me one afternoon and gave me an opportunity to work for GE. A few years later, I was working at the GE innovation center, building what I thought was the future. It was there that I first found out the models don’t have the answers, but I couldn’t say anything. After that, I went to a series of startups, but I still couldn’t say anything because I ignored it. It had consumed me — the trancelike state that all of us in technology succumbed to. I used to call it the Silicon Dream. It was this idea that through the manipulation of bits and bytes, we could change the world without actually changing reality. If only we could compute the answer, the world would follow without question. What a naive dream. It spread like a virus across the bright young technologists, investors looking for new growth, and even the intellectual community. It was a dogma that, like bits, is highly abstracted from reality. Dissenters are few and far between because the dream absorbs all opposition and suppresses any counter-narratives. Technology went from being a tool to being a god, and that god has pulled the wool over our eyes. Over the last few years, a conviction began growing in me. I had this feeling deep within my heart that all of this was wrong. We’ve been lying to ourselves and the world. I couldn’t admit it because if I did, I would become a dissenter and my career would self-destruct. My survival was tied to the lie. I no longer have a fear of career or reputation loss. So I find myself posting this piece because I will not be complicit in the Silicon Lie any longer. We sit abstracted from reality and try to use models to direct our world. These same models ignore the complexities of reality and drive us to our demise. I watch as people believe that by using machines, AI, or blockchain, the problems of our world will be solved — that the neon gods will save us. With all of our technological progress, our local communities are falling apart. With all of the information at our fingertips, we know NOTHING. The truth is out there. It isn’t in the social media feeds, it isn’t in the ultra-polarized media, it isn’t in AI predictions. The truth is all around us. The truth is outside of the neon world and in the ACTUAL world. We must go out and see it for ourselves. We must feel it for ourselves. We’ve lost track of what makes us human. We’ve become the machines. And as long as we are machines, we will make a world that is devoid of human life, devoid of real connection. But we are not machines; we are human. Why is it that we are betraying ourselves? Why are we building a future for the neon gods? This is why I have decided to leave the world of the Silicon Lie. I now dedicate my life to strengthening local communities and using tech to enrich our ACTUAL reality. We must use these tools to strengthen our humanity, not erase it. Damn the machines. Damn the models. Damn the dream. I have no plan for what is next in my life, but I know that it will be a life lived in reality.
https://medium.com/beyond-the-river/the-silicon-lie-its-time-to-wake-up-from-the-dream-2abc97eae73b
['Drunk Plato']
2020-06-07 15:59:43.816000+00:00
['Essay', 'Technology', 'Artificial Intelligence', 'Community', 'Society']
Here’s All You Need To Know About How to Market to Your Audience
I went to Powell’s City of Books in Portland with my daughter and my sister yesterday. I thought I might buy a marketing book — something to give me some new ideas about bringing my business to the “next level.” You know, maybe something by Seth Godin or Gary Vaynerchuk or Ryan Holiday. Powell’s didn’t disappoint. Its marketing section was two full bookcases, at least twelve feet tall and five feet wide. In fact, it was overwhelming. But I took a deep breath and flipped through the titles. Most of them were at least five years old. Many felt fairly irrelevant or outdated, and the ones that didn’t, I’d already read. Then I remembered reading this post the other day by Tim Denning. It starts by telling the reader that Gary Vee has completely changed in the last five years and goes on to share all the ways the man and his message have shifted. And I thought — that makes sense. And I also realized something else. There’s a lot I can learn about marketing, of course. But one of the biggest things is that I don’t need any tactics or secret tricks or gimmicks. I don’t actually need two massive cases full of books to teach me anything. Because I already know most of what I need to know. And so do you. I know because I’m being marketed to almost non-stop, at all hours of the night and day. I know because I’ve read a few important books. If you’ve given any thought at all to starting a business, you’ve probably read a couple, too. Or at least you follow some of the gurus. I know because most of it is common sense. Social Media is about being social. Denning talks about Gary Vaynerchuk teaching this in his post. And it’s true. The reason Facebook is my favorite social media tool is that I have friends there. When I log on, I have people who reach out to me and I reach back to them. The people who follow me there follow me because I actually post my thoughts — not just links I want them to click. I’m far less effective at Twitter and Instagram and Pinterest because when I use them, I only market. One hundred percent, I use those outlets only to try to get you to do what I want you to do. On those platforms I’m a marketer. A mediocre one at best. But on Facebook? I’m a whole person. You might not be able to be a whole person on more than one platform. So far, I haven’t been able to be. That’s okay. Just know what you’re doing. And know the difference. The people you’re marketing to are people. If you can do this one thing, you’ll never need gimmicks or tricks or anything else when it comes to marketing: Remember that your audience is made up of people. Every time you send an email or make an ad or do any other form of marketing, you’re reaching out to people. Individual people who might be interested in what you have to say. Some of them don’t have time for you today. And some of them aren’t in the right place for what you’re offering. But they are all people. Talk to them like people. Since you are also a person, that makes this whole marketing thing a lot easier. Speak to your audience the way you want to be spoken to. Deliver on what you promise, then deliver a little more. Don’t spam. If you mess up, apologize. When you sell something, make it something you’re very, very proud of and something that your audience really wants. The only real tactic you need is listening. If your audience wants what you create, you won’t need to trick them into buying it. You find out if your audience wants it by asking them. You ask them by choosing a small group that you already know is interested.
Maybe they’ve responded to a blog post or requested a download that’s related to what you’re trying to sell. Then you ask them if they’d be interested in your product. That’s all. No gimmick. You just reach out to that person individually and ask for their opinion. And then pay attention to the response. You’re in partnership with the people you’re creating for, after all. You can’t work without them. Your business doesn’t work without them. You need them more than they need you, in fact. Because if you don’t pay attention, someone else will or they will stumble on to what your audience needs and create it. Ideas are the real currency of our strange times. When I was looking at those shelves and shelves of books at Powell’s, most of them more than five years old, my predominate thought was — ideas move too fast for this medium. Those people have things to teach me, but they’ve already done it in their classic books. I’ll list a few for you at the bottom of this post. Once you’ve read them, the rest is about ideas that happen so fast and furious, the glacial pace of publishing just isn’t the best way to be on the receiving end of them. If you really want to know what those men are saying, follow their newsletters or podcasts or Instagram feeds. Something that moves quicker and happens today not five years ago. But also? Start cultivating your own ideas. They are your real currency. We live in an ideas economy and your ideas matter. James Altucher advocates coming up with ten a day, as a daily practice, and that’s one of the best (and most timeless, by the way) habits you can get into. Challenge yourself to come up with ten ideas, every day, for reaching your audience. For connecting with them. For serving them. For things you might create for them. For ways you could partner with them. And then choose the best ideas and do them. Do that for six months and see what happens. If that doesn’t give you goosebumps, you might be in the wrong business. Some Timeless Books Here are some books that I found particularly helpful when I was first learning about marketing, and a couple of newer ones that I’ve read more recently. These books have timeless ideas and will give you a well-rounded education about marketing, rather than a lot of gimmicks or tricks. The Purple Cow by Seth Godin Permission Marketing by Seth Godin Growth Hacker Marketing by Ryan Holiday Atomic Habits by James Clear Jab, Jab, Jab, Right Hook by Gary Vaynerchuck Contagious by Jonah Berger The Tipping Point by Malcolm Gladwell Superfans by Pat Flynn (On a side note, it’s interesting to me that there are very few general books about marketing that are written by women. None on my personal bookshelf. If you are aware of any, would you mind letting me know in the comments? I’d be interested.)
https://medium.com/the-write-brain/heres-all-you-need-to-know-about-how-to-market-to-your-audience-a2228aec4fe3
['Shaunta Grimes']
2019-08-24 17:18:40.385000+00:00
['Work', 'Email Marketing', 'Marketing', 'Business', 'Entrepreneurship']
Artificiality Bites 💊 Issue #10
Hello Human! This is a new issue from my weekly newsletter, holding a tiny compilation of interesting articles from last week, plus projects, tutorials and tools; all related to Data, Artificial Intelligence and adjacent topics. Bon appetit! 📝 Interesting articles this week 🔧 Tutorials Continuous Delivery for Machine Learning 41' Extensive reading on how to automate the entire lifecycle of Machine Learning applications Build a PyTorch Style Transfer Web App with Streamlit 📺 25' Build an interactive deep learning application easily. 📦 Resources tensorflow/recommenders Google released an open-source package that makes building, evaluating, and serving recommender models easier with TensorFlow. gradio-app/gradio Gradio enables the creation of customizable UI components over your PyTorch and TensorFlow models or arbitrary Python functions. victordibia/neuralqa A library for question answering on large datasets using BERT. 🎓 Courses / Books Natural Language Processing with Attention Models (Coursera) Deeplearning.ai releases today the fourth and last course from its NLP specialization. Best Practices for Managing Data Annotation Projects (Bloomberg) A book with a lot of advice for the different stages of a complete data labeling project. 🚀 Extra bits
https://medium.com/yottabytes/artificiality-bites-issue-10-78ee086c55d6
['Jaime Durán']
2020-09-28 12:23:47.595000+00:00
['Machine Learning', 'Data Science', 'AI', 'Artificial Intelligence', 'Deep Learning']
UI Development Trend in 2020: Descriptive UI to Rule Them All
In 2020 it seems that the major players in UI development agree on how we, as developers, should design the UI (User Interface) of our apps and web apps. In this article I will show you the current UI development trends across the different frameworks and environments available in 2020 and sum them up. Later I’ll take a look at the UI development trends in Java / GWT / J2CL. Design Trend in UI SwiftUI SwiftUI has overhauled UI development in the Apple world. Being native on all Apple platforms, with all those nice design tools, it sets a very high bar for UI development. The move from the imperative UIKit / AppKit / WatchKit to a declarative design also makes UI development easier and more straightforward. So how do you implement the UI today? You describe your UI textually. If you need state, you can mark a variable with @State and then write to it through a binding like $name. To read the content we just need to use \(name). This is what we call “two-way binding”. So each time we write something into the TextField, the variable name will be updated, and the Text which uses that variable will be updated automatically. SwiftUI with Two-Way Binding “name” Android Jetpack Compose Android UI development follows the way of SwiftUI by adding Android Jetpack Compose to the UI development landscape in Android. Jetpack Compose only uses Kotlin (no Java implementation), supposedly because of some advantages of Kotlin over Java. I don’t agree that we can only have this type of UI development in Kotlin, but it seems Google wants to push Kotlin over Java in Android development and won’t implement the same feature for Java developers. Android Jetpack Compose UI Development with Kotlin (Source: https://bit.ly/3gJUiPJ) If we take a look at the structure of the code in Jetpack Compose, it really looks similar to SwiftUI. The syntax is different, but on the whole the structure and the way you implement things are quite similar. Two-way data binding is also easily possible with the @Model annotation. Flutter Flutter has a nice overview that explains the difference between imperative and descriptive UI development. Flutter Descriptive UI and Structure (Source: https://bit.ly/3ndrt0E) Flutter uses Dart as its programming language and it also has a nice structure for developing UI. Again the structure is very similar to the two frameworks before. Only for “two-way data binding” is Flutter not as easy to use as SwiftUI, as you can see in this Flutter example.
https://medium.com/swlh/ui-development-trend-in-2020-descriptive-ui-to-rule-them-all-737e56f28dbe
['Dr. Lofi Dewanto']
2020-12-14 16:09:29.294000+00:00
['JavaScript', 'UI', 'Java', 'Development', 'Swiftui']
New beta features of AutoAI in IBM Watson Studio automates feature engineering on multiple datasets
New beta features of AutoAI in IBM Watson Studio automates feature engineering on multiple datasets Have you ever tried using automated machine learning tools, but found that they only accept one input file while you are dealing with multiple relational data tables, like the following sales prediction scenario? If your answer is yes, AutoAI in IBM Watson Studio now has a new beta feature that could make your life easier by automating feature engineering on multiple data tables and saving you time wrangling data and debugging code. Watch this video to get a glimpse of the end-to-end experience: The end-to-end experience can be summarized in the following 3 steps: Configure join relations between the different data sets Run the AutoAI experiment and get the best models Score new data sets with the deployed model If you are interested in experiencing this new feature in IBM AutoAI, please join the early access! We will send out invitations on a first-come, first-served basis, and we have limited capacity, so act fast!
https://medium.com/ibm-watson/new-beta-features-of-autoai-in-ibm-watson-studio-automates-feature-engineering-on-multiple-datasets-fb9cc51675e6
['Yin Chen']
2020-08-04 23:42:03.326000+00:00
['Machine Learning', 'Automation', 'Data Science', 'Business Intelligence', 'AI']
Why I Rebranded my Company 4 months After Launching It
Four months ago, I opened the doors to Creative Quo, a boutique marketing agency with a vision of making professional marketing services accessible to all. Within a week of announcing our launch, the projects started rolling in from referrals or leads acquired through my personal social media. We didn’t anticipate this type of response, and by June we were already adding to the full-time team. However, I soon questioned whether the direction in which we were headed was one I wanted to follow. One night, while codifying our values, vision, and mission statement, it occurred to me how much good would be wasted if we continued to remain unfocused and leave our potential untapped. Our mission itself, while noble, wasn’t one that particularly mobilized us. It didn’t mobilize us because we knew we could quite literally change the world. In the grand scheme of things, excellent work is fueled by passion. Anything less is subpar. I simply did not want to settle. I could have kept Creative Quo as a cash generator. It would’ve given me a steady pay cheque and a cushy lifestyle. But if you’ve been following me since my early entrepreneurial days, you’ll know that my journey has been far from cushy (nor is cushiness even something I necessarily want). I started my first business, an online magazine, out of university and kept it running for a year until it ultimately failed. I then worked in corporate marketing for two years. It was at that time that I slipped into a clinical depression that culminated in an attempt to take my own life. The significance of having survived didn’t occur to me until days later, when I became aware of the windowless surroundings of the hospital I was in. Paradoxically, for the first time in my life, I felt truly free. I felt freed from the expectations I had placed on myself and from those that others had placed on me. Rock bottom had a silver lining. In feeling like I had nothing to live for, I also had nothing to lose. This newfound enlightenment inspired boldness in me in the coming weeks. After being discharged from the hospital, I immediately quit my full-time job with no Plan B. Shortly after, the MVP for Targeted Tweets was born. That summer, I sold my first business, and by fall I had begun the blueprints for what would ultimately become Creative Quo. And now, here we are. July 2016. After a whirlwind that began with a spike in interest from VCs and larger clientele, we realized we had grown too big, too fast. The idea for a rebrand came in a phone call with my COO Chelsea at an airport bar. We were discussing a repackaging of services when we realized that our name could no longer carry the pace at which we were growing. I remember spewing out “The Incubator” as a suggestion (my mouth full of pasta). Chelsea repeated it back. A lightbulb (much like the one you see in our new logo) simultaneously flashed above our heads. The Incubator: a marketing accelerator for the brands you’ll know and love. The official home of Targeted Tweets. We have an exciting few months up ahead and big partnerships in the works, but today, I’m also on the lookout for our next big, homegrown success story. If you’re a fledgling business with a great product/service, we would like to hear from you. We’ll set you on the proper foundation and establish you as a household name. This is not an objective. It’s a promise. We can launch you to the top if that is where you truly want to be. Our mission, fueled by passion, is to be the catalyst through which you can conquer yours.
In this way, we are not just another marketing services firm. You can see now why “Creative Quo” had to go.
https://medium.com/insights-from-the-incubator/why-i-rebranded-my-company-4-months-after-launching-it-2485064a5d7c
['The Incubator']
2016-09-13 21:25:05.199000+00:00
['Rebranding', 'Business', 'Strategy', 'Entrepreneurship', 'Startup']
Software And Life Goes In Tandem
Life: You can’t forgive somebody if you think you’re incapable of making mistakes! Software: You introduce bugs if you think your code cannot misbehave. Life: Be open and accepting of those around you, and feel relaxed. Software: Get rid of bugs by adding appropriate tests. Life: You will be submissive until you think others can’t accept you the other way! Software: You will introduce production bugs if your test criteria fail to include appropriate data. Life: You can’t love and hate the same person Software: What you miss is what you get as bugs Life: It’s your dream house and you pay your EMI. You still have to pay your house tax. Software: Enjoy the treat of bugs caught by your tester :) Life: A meal is like medicine for life. More intake ends up as a high dosage. Software: Manual testing is part of software life. The more you depend on it, the longer the turnaround. Life: If you date many, you will be confused about whom to marry. Software: If you manage memory badly, you run out of memory. Life: Hunger brings life. Anger takes away life. Software: Loop when needed; if not, it becomes an infinite loop. Life: At the peak of spirituality, you turn mystical. Software: Write programs for defined input over defined output. Life: Stay fit to stay healthy. Software: Get the solution right to code better Life: Mistakes build up and cause suffering Software: Tech debt causes more pain than actual development Life: Remove bad habits and add discipline for better health. Software: Deprecate unwanted code to improve performance Life: A day full of activity will let you sleep well Software: Add a try block to catch the errors you intend to handle Life: Dreams without action leave you as a dream Software: Code that is not deployed will RIP Life: Touch others’ lives to make your life meaningful Software: Build open source and benefit the community
https://medium.com/know-javascript/software-and-life-goes-in-tandem-5e961ca2a695
['Sankar Ganesh']
2020-06-08 06:24:09.809000+00:00
['Self-awareness', 'Software Engineering', 'Software Development', 'Software Testing', 'Self Improvement']
Edge Detection in Python
How to Perform Edge Detection — The Math Before talking about the code, let’s take a quick look at the math behind edge detection. We, as humans, are pretty good at identifying the “edges” of an image, but how do we teach a computer to do the same thing? First, consider a rather boring image of a black square amidst a white background: Our working image In this example, we consider each pixel to have a value between 0 (black) and 1 (white), thus dealing with only black and white images for right now. The exact same theory will apply to color images. Now, let us say we are trying to determine whether or not the green highlighted pixel is part of the edge of this image. As humans, we would say yes, but how can we use neighboring pixels to help the computer reach the same conclusion? Let’s take a small 3 x 3 box of local pixels centered at the green pixel in question. This box is shown in red. Then, let’s “apply” a filter to this little box: Apply the vertical filter to the local box of pixels The filter we will “apply” is shown above, and looks rather mysterious at first glance, but let us see how it behaves. Now, when we say “apply the filter to the little local box of pixels” we mean multiply each pixel in the red local box by each pixel in the filter, element-wise. So, the top left pixel in the red box is 1 whereas the top left pixel in the filter is -1, so multiplying these gives -1, which is what we see in the top left pixel of the result. Each pixel in the result is obtained in exactly the same way. The next step is to sum up the pixels in the result, giving us -4. Note that -4 is actually the smallest value we can get by applying this filter (since the pixels in the original image can only be between 0 and 1). Thus, we know the pixel in question is part of a top vertical edge because we achieve the minimum value of -4. To get the hang of this transformation, let’s see what happens if we apply the filter on a pixel at the bottom of the square: We see that we get a similar result, except that the sum of the values in the result is 4, which is the highest value we can get by applying this filter. Thus, we know we found a pixel in a bottom vertical edge of our image because we got the highest value of 4. To map these values back to the 0–1 range, we simply add 4 and then divide by 8, mapping the -4 to a 0 (black) and mapping the 4 to a 1 (white). Thus, using this filter, called the vertical Sobel filter, we are able to very simply detect the vertical edges in our image. What about the horizontal edges? We simply take the transpose of the vertical filter (flip it about its diagonal), and apply this new filter to the image to detect the horizontal edges. Now, if we want to detect horizontal edges, vertical edges, and edges that fall somewhere in between, we can combine the vertical and horizontal scores, as shown in the following code. Hopefully the theory is clear! Now let’s finish up by looking at the code.
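The code itself is not reproduced in this excerpt, so here is a minimal NumPy sketch of the idea described above; the function name and the root-sum-of-squares combination of the two scores are my assumptions, not necessarily the author's exact implementation.

import numpy as np

# The vertical Sobel filter described above; its transpose handles the other
# orientation. Applied to a 3x3 box of values in [0, 1], it yields a score
# between -4 and 4, exactly as walked through in the text.
VERTICAL_FILTER = np.array([[-1, -2, -1],
                            [ 0,  0,  0],
                            [ 1,  2,  1]])
HORIZONTAL_FILTER = VERTICAL_FILTER.T

def edge_scores(img):
    """img: 2-D array of pixel values in [0, 1]. Returns an edge-strength map."""
    h, w = img.shape
    edges = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            box = img[i:i + 3, j:j + 3]              # local 3x3 box of pixels
            v = (box * VERTICAL_FILTER).sum()        # vertical score, in [-4, 4]
            hz = (box * HORIZONTAL_FILTER).sum()     # horizontal score, in [-4, 4]
            edges[i, j] = np.sqrt(v ** 2 + hz ** 2)  # combine the two scores
    peak = edges.max()
    return edges / peak if peak > 0 else edges       # rescale to [0, 1] for display

Plotting the returned array in grayscale (for example with matplotlib's imshow) gives the kind of edge map the post is building toward.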
https://towardsdatascience.com/edge-detection-in-python-a3c263a13e03
['Ritvik Kharkar']
2020-01-22 22:22:02.740000+00:00
['Machine Learning', 'Data Science', 'Image Processing', 'Artificial Intelligence', 'Data Visualization']
Building Dashboards using Dash (< 200 lines of code)
Dashboarding in MVC Most UIs follow an MVC framework; by MVC, we mean Model-View-Controller. Each interconnected component is built to take on a specific task in the development process. Model: The model is the heart of the dashboard. The model gets the data from the database, manipulates it and stores it in objects which can later be consumed by the view. Controller: The controller is how the user interacts with the dashboard. It usually requests the data from the model and presents it to the view. View: The view is where data is presented to the user, or the frontend. A view oversees the visual part of the dashboard. The MVC framework reduces the application’s complexity and makes it easier to maintain; for example, the developer can choose to change the UI without needing to change any backend code. We will look at Dash from an MVC perspective for a more fundamental understanding. Figure 1 MVC diagram Getting Started with Dash As mentioned earlier, Dash is a simple Python tool that helps us build beautiful and responsive web dashboards quickly. It has support for both R and Python and is maintained by Plotly. It uses React for the Controller and View, and Plotly and Flask for the Model in our MVC setting. We build a Flask application that creates a dashboard in a web browser, which can call the backend to re-render certain parts of the web page. Installation Dash and Plotly can be installed very easily using pip. Using a virtual environment manager (like Conda) is recommended. pip install dash==0.26.3 # The core dash backend pip install dash-renderer==0.13.2 # The dash front-end pip install dash-html-components==0.11.0 # HTML components pip install dash-core-components==0.28.1 # Supercharged components pip install dash_table_experiments #for tables pip install plotly --upgrade # Plotly graphing library For the data manipulation, we install pandas. pip install pandas Please check the dash installation guide to get the latest version of these tools. Now we move to the problem statement. The problem statement In a previous post, we talked about building our own data set using Scrapy, which taught us how to use web scraping to download data. We had downloaded the top 500 albums of all time from the Metacritic.com library. Here is a snapshot of the dataset: We see that we have a genre, artist, release date, meta-score and user score for each album. We will build a dashboard that has the following four components: Interactive bubble chart of genres Histogram of decade popularity of each genre (bar chart) Meta-score/user-score trends (line chart) Table of the top 10 most popular artists by meta score/user score Users can interact with the table through a drop-down to get the top 10 artists by average meta score or average user score. Users can also interact with the bubble chart; on hover, it will change the bar chart to give the number of albums of that genre published in each decade. Mock-up It is a good practice to build a small mock-up of the dashboard on paper or using software (Adobe Illustrator and Balsamiq are great tools used by professionals). Here we have used MS PowerPoint to build a simple static view. Figure 2 Dashboard mockup Dash in MVC framework We will divide our dash components into three parts: Data Manipulation (Model): Perform operations to read data from the file and manipulate it for display Dashboard Layout (View): Visually render data on the web page Interaction Between Components (Controller): Convert user input to commands for data manipulation and re-rendering.
Initialization Here we import the relevant libraries. import dash import dash_core_components as dcc import dash_html_components as html import dash_table_experiments as dt import pandas as pd import plotly.graph_objs as go from dash.dependencies import Input, Output, State, Event import random Data Manipulation (Model) Here we have the data in a csv so we can simply import it using pandas . However, if the data is in a SQL database or on a server somewhere we need to make a data connection that connects data to a pandas dataframe in our backend and the rest of the process will be the same. Data manipulation with pandas We assume that the reader has a basic knowledge about pandas and can understand the data manipulation. Basically, we are creating 4 major tables for each of the dashboard components: df_linechart: This table groups the data by year and gives us the number of albums, average meta-score and average user score. We also multiply user score by 10 to get it on the same scale as meta-score. This will be used to draw the Score trends graph. df_table: This table groups the data by artist and gives us the number of albums, total meta-score and total user score. The generate_table function uses this table to get the top 10 rows sorted by user score or meta score. This is used to draw the table. df_bubble: This table groups the data by genre, and gives us the number of albums, mean meta-score and mean users score. The number of albums becomes the size of our bubble and the mean scores become the axes. This is used to draw the bubble chart. df2_decade: This table groups the data by genre and decade and returns the number of albums for each genre in each decade. This is used to draw the bar chart. ############################################################## #DATA MANIPULATION (model) ############################################################## df= pd.read_csv("top500_albums_clean.csv") df['userscore'] = df['userscore'].astype(float) df['metascore'] = df['metascore'].astype(float) df['releasedate']=pd.to_datetime(df['releasedate'], format='%b %d, %Y') df['year']=df["releasedate"].dt.year df['decade']=(df["year"]//10)*10 #cleaning Genre df['genre'] = df['genre'].str.strip() df['genre'] = df['genre'].str.replace("/", ",") df['genre'] = df['genre'].str.split(",") #year trend df_linechart= df.groupby('year') .agg({'album':'size', 'metascore':'mean', 'userscore':'mean'}) .sort_values(['year'], ascending=[True]).reset_index() df_linechart.userscore=df_linechart.userscore*10 #table df_table= df.groupby('artist').agg({'album':'size', 'metascore':'sum', 'userscore':'sum'}) #genrebubble df2=(df['genre'].apply(lambda x: pd.Series(x)) .stack().reset_index(level=1, drop=True).to_frame('genre').join(df[['year', 'decade', 'userscore', 'metascore']], how='left') ) df_bubble= df2.groupby('genre') .agg({'year':'size', 'metascore':'mean', 'userscore':'mean'}) .sort_values(['year'], ascending=[False]).reset_index().head(15) df2_decade=df2.groupby(['genre', 'decade']).agg({'year':'size'}) .sort_values(['decade'], ascending=[False]).reset_index() Dashboard Layout (View) The layout determines how the dashboard would look after deployment. Dash provides Python classes for all the components. The components are saved in the dash_core_components and the dash_html_components library. One can also build their own components with JavaScript and React . Introduction to Responsive Layout We use a Bootstrap layout for our dashboard. 
Simply, Bootstrap standardizes the position of the components by containing them in a grid. It divides the screen into 12 columns and we can define as many rows as we like. So our dashboard will look like: Row Column-12 (title) Row Column-6 (Table of most top 10 popular artist) Column-6 (Meta-score/User-score trends- line chart) Row Column-6 (Interactive Bubble chart of genres) Column-6 (Histogram of decade popularity (bar chart) Here is another visualization of the mock-up: Figure 3 Mockup with Bootstrap grid Adding Style We can add custom CSS , to our dashboard by using .append.css command. app = dash.Dash() app.css.append_css({ "external_url": " https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" }) We also append the require JavaScript libraries for Bootstrap. Please refer to the Bootstrap page for latest versions: # Bootstrap Javascript. app.scripts.append_script({ "external_url": "https://code.jquery.com/jquery-3.2.1.slim.min.js" }) app.scripts.append_script({ "external_url": "https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" }) Layout Fundamentals The layout skeleton can be defined as html.Div( [ SOME_DASH_OR_HTML_COMPONENT( id, content (data), style (a dictionary with the properties)) ], className (name of the Bootstrap Class) ) the classname will be the Bootstrap class name which can be row or columns. Dash makes it very easy to draw graphs thanks to the core components . We can use functions to draw components or we can write the code inline. Adding table We can either use the html.Table tag or import Plotly tables. Here we’re using a html native table. def generate_table(dataframe, max_rows=10): '''Given dataframe, return template generated using Dash components ''' return html.Table( # Header [html.Tr([html.Th(col) for col in dataframe.columns])] + # Body [html.Tr([ html.Td(dataframe.iloc[i][col]) for col in dataframe.columns ]) for i in range(min(len(dataframe), max_rows))], style={'width': '100%', 'display': 'inline-block', 'vertical-align': 'middle'} ) Simply, the function takes a pandas dataframe and coerces it to a table. Graphs Dash uses the Plotly graph layout. We define a graph inside a dcc.Graph component and define the graph inside a figure object (defined as go ). In this figure object we set the data, the style and layout. Each type of graph will have different relevant components to it. We discuss three types of graphs here: Adding Line Graph For the line graph we have to choose the Scatter type of graph with go.Scatter and in mode we define lines as shown below. We also need to define the data for x and y for each line. In the layout section we can choose to display the legends, title and other style elements. html.Div( [ #Line Chart dcc.Graph(id='line-graph', figure=go.Figure( data = [ go.Scatter( x = df_linechart.year, y = df_linechart.userscore, mode = 'lines', name = 'user score' ), go.Scatter( x = df_linechart.year, y = df_linechart.metascore, mode = 'lines', name = 'meta score' ), ], layout=go.Layout(title="Score trends") )),], className = "col-md-6" Adding bubble chart Similarly, for a bubble chart, we define a go.Scatter but use mode = markers . We can also define a text component that gives the text on hovering over the marker. Later in our dynamic view will use this text to fill our fourth bar graph of “Decade Popularity”. 
Markers are the points or bubbles on the graph, we can further customize the markers by passing a dictionary object with various options: Color: A list of colors to be used for the markers , can be names of colors or a list of numbers for a color scale. Here I have passes a random list of numbers between 0 and 100. Size: Size specifies the size of the bubble. Here we put the number of albums as the size. So a genre with more number of albums will have a larger bubble. With size we can further customize the size by passing the sizemode , sizeref and sizemin components. html.Div([ dcc.Graph(id='bubble-chart', figure=go.Figure( data=[ go.Scatter( x=df_bubble.userscore, y=df_bubble.metascore, mode='markers', text=df_bubble.genre, marker=dict( color= random.sample(range(1,200),15), size=df_bubble.year, sizemode='area', sizeref=2.*max(df_bubble.year)/(40.**2), sizemin=4 ))], layout=go.Layout(title="Genre poularity") ))], className = "col-md-6" ), Adding Bar Chart Lastly, we draw a bar chart using a function. Note we could have directly pasted this part in the html div but since we want this graph to be interactive we use a function that can be used in a callback. We use go.Bar . def bar(results): gen =results["points"][0]["text"] figure = go.Figure( data=[ go.Bar(x=df2_decade[df2_decade.genre==gen].decade, y=df2_decade[df2_decade.genre==gen].year) ], layout=go.Layout( title="Decade popularity of " + gen )) return figure Here we have wireframed the app next we populate with data and add controls. Entire layout code: #generate table def generate_table(dataframe, max_rows=10): '''Given dataframe, return template generated using Dash components ''' return html.Table( # Header [html.Tr([html.Th(col) for col in dataframe.columns])] + # Body [html.Tr([html.Td(dataframe.iloc[i][col]) for col in dataframe.columns]) for i in range(min(len(dataframe), max_rows))], style={'width': '100%', 'display': 'inline-block', 'vertical-align': 'middle'} ) #generate bar chart def bar(results): gen =results["points"][0]["text"] figure = go.Figure( data=[ go.Bar(x=df2_decade[df2_decade.genre==gen].decade, y=df2_decade[df2_decade.genre==gen].year) ], layout=go.Layout( title="Decade popularity of " + gen )) return figure # Set up Dashboard and create layout app = dash.Dash() # Bootstrap CSS. app.css.append_css({ "external_url": "https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" }) # Bootstrap Javascript. 
app.scripts.append_script({ "external_url": "https://code.jquery.com/jquery-3.2.1.slim.min.js" }) app.scripts.append_script({ "external_url": "https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" }) #define app layout app.layout = html.Div([ html.Div([ html.Div([ html.H1("Music Dashboard", className="text-center", id="heading") ], className = "col-md-12" ), ],className="row"), html.Div( [ #dropdown and score html.Div([ html.Div( [ dcc.Dropdown( options=[ {'label': 'userscore', 'value': 'userscore'}, {'label': 'metascore', 'value': 'metascore'}, ], id='score-dropdown' ) ], className="col-md-12"), html.Div( html.Table(id='datatable', className = "table col-md-12")), ],className="col-md-6"), html.Div( [ #Line Chart dcc.Graph(id='line-graph', figure=go.Figure( data = [ go.Scatter( x = df_linechart.year, y = df_linechart.userscore, mode = 'lines', name = 'user score' ), go.Scatter( x = df_linechart.year, y = df_linechart.metascore, mode = 'lines', name = 'meta score' ), ], layout=go.Layout(title="Score trends") ) ), ], className = "col-md-6" ), ], className="row"), html.Div( [ html.Div( [ dcc.Graph(id='bubble-chart', figure=go.Figure( data=[ go.Scatter( x=df_bubble.userscore, y=df_bubble.metascore, mode='markers', text=df_bubble.genre, marker=dict( color= random.sample(range(1,200),15), size=df_bubble.year, sizemode='area', sizeref=2.*max(df_bubble.year)/(40.**2), sizemin=4 ) ) ], layout=go.Layout(title="Genre poularity") ) ) ], className = "col-md-6" ), html.Div( [ dcc.Graph(id='bar-chart', style={'margin-top': '20'}) ], className = "col-md-6" ), ], className="row"), ], className="container-fluid") Don’t be scared by the size of the code. The code is formatted in this way for easier understanding. Note how each div component has a class defined along with it. Interaction Between Components (Controller) Now that we understand the layout let’s move to the controller. We have two interactive components in this dashboard one is the table that changes according to the type of score and the other is a bar chart that populates as per the selected genre bubble in out bubble chart. Our controller skeleton will look like: @app.callback( Output(component_id='selector-id', component_property='figure'), [ Input(component_id='input-selector-id',component_property='value') ] ) def ctrl_func(input_selection) Here we have 4 parts: Callback: @app.callback is the callback function that hands the Input and Output. The Inputs and Outputs of our application are the properties of a particular component. Input: this takes the id of the component uses as input and the property of that component we need to capture. This can be the value, the hoverData , clickData and so on. Output: Output takes the id of the component which is to change and the property which will change typically this is either figure or children. Control Function: cltr_function defines how the html for the Output will change. Apart from these we also have State which allows us to add additional information apart from Input and Output. Simply put, the app callback automatically captures any change made in the input and updates the output based on the cltr_function defines. We can have multiple inputs and multiple outputs in a single callback. 
The code for callback in our case will look like: ############################################################## #DATA CONTROL (CONTROLLER) ############################################################## @app.callback( Output(component_id='datatable', component_property='children'), [Input(component_id='score-dropdown', component_property='value')] ) def update_table(input_value): return generate_table(df_table.sort_values([input_value], ascending=[False]).reset_index()) @app.callback( Output(component_id='bar-chart', component_property='figure'), [Input(component_id='bubble-chart', component_property='hoverData')] ) def update_graph(hoverData): return bar(hoverData) In the first callback, we take the input from the dropdown. The callback captures the value of the dropdown and passes it to the update_table function, which generates the table. It then passes the table html data to the datatable component, and we see the relevant table in our dashboard. In the second callback, the data is passed on hover from the bubble-chart component to the bar function. This data is a dictionary, so we extract the relevant genre details using the text key and then pass the data to the bar function. The bar function subsets the df2_decade data for the given genre and plots a bar chart. Note how many different components are involved here, and all it took was 10 lines! Initialization The following two lines need to be added to the code to run the app if __name__ == '__main__': app.run_server(debug=True) Running the app The app can be run using: python app.py and we’re done! You will see a message like this: Serving Flask app “songsapp” (lazy loading) Environment: production WARNING: Do not use the development server in a production environment. Use a production WSGI server instead. Debug mode: on Running on http://127.0.0.1:8050/ (Press CTRL+C to quit) Restarting with stat Debugger is active! Debugger PIN: 282–940–165 The dashboard is available at that link (it can change between runs, so refer to the link printed in your own execution): Our dashboard looks like: Figure 4 Dashboard in Action Next steps — Hosting Finally, our dashboard is ready. The next step is to host the code on a server so that it is accessible from any device. While hosting is easy, the steps involved require an in-depth understanding of their own. Instead of burdening the reader with new concepts, we leave her with quick links to some hosting tutorials, plus the minimal sketch at the end of this post. This can be of use to both seasoned developers and freshers. Some popular hosting options are: Heroku: Dash has a short tutorial about hosting apps on Heroku. Alternatively, here is the tutorial by Heroku for deploying python apps. AWS: Amazon’s tutorial on deploying python apps. Conclusion So this is how we build a simple yet interactive dashboard in < 200 lines of code. To really understand the power of Dash, do check out the dash gallery and the amazing stuff people have done with it. The entire code for this post can be found in this git repo. Helpful resources Dash tutorial Dash core component Library Writing own components using dash Video tutorial on Dash
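As a concrete starting point for the hosting step mentioned above, here is a minimal sketch of the common Dash-plus-gunicorn setup; the file name app.py, the placeholder layout, and the Procfile line are illustrative assumptions rather than part of the original post.

import dash
import dash_html_components as html

app = dash.Dash()
server = app.server  # the underlying Flask instance, exposed for a WSGI server

# In the real app this would be the full layout and callbacks built earlier.
app.layout = html.Div("placeholder layout")

if __name__ == '__main__':
    # Local development only; this is what triggers the Flask dev-server warning.
    app.run_server(debug=False)

# In production you would serve the WSGI object instead, for example:
#   gunicorn app:server
# which on Heroku is typically declared in a Procfile line such as:  web: gunicorn app:server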
https://towardsdatascience.com/building-dashboards-using-dash-200-lines-of-code-ae0be08d805b
['Rishav Agarwal']
2020-07-11 18:38:36.787000+00:00
['Data Science', 'Tutorial', 'Dashboard', 'Data Visualization', 'Python']
The Creative Benefits of Being Deeply Aware of Your Environment
The Creative Benefits of Being Deeply Aware of Your Environment 5 stories that show the power of open-awareness Photo by Roberto Nickson on Unsplash When John Reed, Citibank's head of checking account business, took a vacation in 1976, he didn't expect to have an epiphany. Sitting in a beach chair and enjoying the warm Caribbean sun, he found strange ideas coming to him about an issue that had long concerned him: the profitability of personal banking. He picked up the notebook next to him, and the ideas flowed. On that same beach he imagined the concept of ATMs and of verifying account balances by email, concepts that would revolutionize the modern banking system. This story, although remarkable, is not unusual: many creative people have made discoveries in a state of open awareness, far from the focused concentration of work. In Focus, Daniel Goleman shows that a wandering mind is not opposed to focused attention; on the contrary, it can increase your productivity. Being more aware of the possibilities of your environment helps you select information better and notice more hidden details, which lead to unexpected discoveries. It broadens your attention, gives you better control over states of open imagination, strengthens your states of flow, and sharpens your capacity for acute observation. Here's how four people strengthened this spontaneous state of awareness to foster new moments of insight.
https://medium.com/thinking-up/the-creative-benefits-of-being-deeply-aware-of-your-environment-d2ea033122d4
['Jean-Marc Buchert']
2020-09-30 12:18:17.234000+00:00
['Awareness', 'Life Lessons', 'Productivity', 'Creativity', 'Mindfulness']
5 Powerful Lessons I Learned Teaching A Blogging Workshop For a Year
Over the past year, I’ve taught dozens of writers blogging skills and creative nonfiction. We’d meet on Zoom (yes, before and during the pandemic) to discuss writing, where to make improvements, and how to raise visibility. While I was the instructor and moderator, I learned a heck of a lot from my students. Here are five takeaways: 1. Writers are amazing, creative, and interesting, even though most think they’re not. Each of us has unique experiences to share and so much to offer the world, even if we think we don’t. Example: One of my students hails from the Midwest. Her mom raised her and her siblings Pagan with no technology in the home, yet she now lives and works in Japan for a tech company. She also is writing a story about how she walked away from a $50,000 bonus while working for an NYC investment firm. Fascinating. I want to read that, don’t you? Another talks about the transition of growing up lonely and shy in India, being the only one in his family interested in mathematics, to now working for a satellite company with a high-level government security clearance he can’t discuss. He’s also an author and painter, successfully merging his analytical and creative sides. His brain is so beyond anything most of us can comprehend, it’s mind-blowing. Sharing these interesting experiences most of us probably haven’t lived through ourselves is why writing about them is so crucial. Beyond that, sharing them here on Medium (or on other mediums) is how we connect, particularly now that we are so isolated due to the pandemic. This leads me to my next point: 2. Most writers don’t see the connection between blogging and social media. When it comes to writing stories in a blog post or personal essay format, writers are all about that. Excited to learn how to go about doing so here or how to optimize it correctly for their own blogs. However, when it comes to sharing their work on social media, the majority of writers do little more than sharing the link in a Facebook Medium group (or two or ten), depending on others to clap, comment, and share to raise awareness. The big hope is for curation, which helps up one’s readership and visibility immensely. When that doesn’t happen, look out. Some writers will write novels about their disappointment, the unfairness of the model, or how to somehow cheat the system, complaining for days about how little money they’re making. Listen: we don’t control what happens here. Do the very best you can, which requires you to build your own following on social media and via email marketing. Here’s what I recommend: write what moves you. What inspires you. Be authentic. Then, share snippets of the blog on social media. You’ve already done the work! Now let the work do the work for you. The work is sitting there waiting to help you. So, let it. As they say in the Netflix show, Halt and Catch Fire (great, by the way), “It’s not the thing. It’s the thing that gets us to the thing.” I’ve written extensively here and on my own blog eleven tips you can do now to share your blog posts for maximum exposure. This post tells you exactly how to create that connection! Learn more here: It is work. Do the work is one of my mantras, and for good reason. Being a working writer requires writers to work. This leads me to my next point. 3. Most writers don’t want to market their work. They think it’s difficult and confusing. Or will require a huge timesuck. Or they need a degree in marketing. None of which is true. Point is, they don’t know where to start, so many just…don’t. 
They want someone else to do it for them, though they’re not sure who, and they don’t want to pay for it because, money. And time. I get it. My entire BadRedhead Media business is created around this model: I help writers with their branding, marketing, social media, and promotion via training, consulting, or doing it all for them, depending on their time and budget. All of that stuff doesn’t come easily to people, and even when it does, it takes a lot of time. And money. This is not completely true, of course. I wrote an entire book that walks writers through how to market their book (or blog) in thirty days, with assignments each day (visit my site to learn more). Most of the assignments offer insights, tips, and tools that are completely free. What isn’t free? Your time. Writing is only one part of this gig. Marketing is the other part. If all you want to do is write, cool. That’s a huge milestone in many writers’ careers. If and when you decide you want people to read your work, or you want to make a little dough, you’ll need to learn how to market your work because writing is a career choice. Invest in yourself. You deserve it. 4. Many writers enjoy writing for the sake of writing, not for the business aspect. I’ve heard this from almost every writer I’ve ever worked with. Though they want to be famous, selling millions of books and making tons of money, most writers have no idea how to go about making it happen. It’s a dream. I get it. I want that, too. Heck, who wouldn’t? (Six books in, and yea, I’ve sold tens of thousands of books and I’ve won several writing awards which is great, yet no millions in sight for this girl.) In reality, bloggers who ‘make it,’ have been writing for decades, whether that’s blogging, books, or both (think The Bloggess or Dooce). “Overnight sensations” typically have already released dozens of books, have written professionally for years, and/or toiled away anonymously for a long time. Or, they’ve embraced affiliate marketing in a huge way — again, a business model. Think of the iceberg analogy: we only see the tip.
https://medium.com/ninja-writers/5-powerful-lessons-i-learned-teaching-a-blogging-workshop-for-a-year-b7b4f26dd663
['Rachel Thompson']
2020-12-12 23:50:45.920000+00:00
['Social Media', 'Marketing', 'Business', 'Blogging', 'Writing']
Big Dreams of Personalized Health
Big Dreams of Personalized Health Azizi Seixas uses sleep to study health inequalities — and make us all feel and snooze better. By Elizabeth Preston When people find out Azizi Seixas studies sleep, they sometimes ask him about their dreams. That's not really his field — but he does have big dreams for his own research. By using technology to combine precision medicine with population-level research, he hopes to erase disparities and bring better health to all. Growing up in inner-city Kingston, Jamaica, "I was the have-nots," Seixas says. He learned early lessons about inequality and, being raised by seven women in a three-bedroom home, resourcefulness. Today he's carried those lessons to New York University's Grossman School of Medicine, where he's an assistant professor of population health and psychiatry. In his lab, Seixas explores why certain groups such as racial and ethnic minorities have higher risks of chronic illnesses, the long-term consequences of those disparities, and how people can change their behavior to improve their health. Sleep has been a kind of lens through which Seixas looks at these questions. For example, how are disparities in people's sleep related to heart disease risk and other health effects? And how might doctors tailor sleep advice to individuals, along with their other recommendations? That's important because sleep plays an integral role in our health 24/7, not just the hours we're in bed. "Sleep is not just the act of unconsciousness," Seixas says. Besides keeping our bodies refreshed and running, sleep is important for consolidating things we've learned, and for cleansing our brains of protein gunk that's linked to Alzheimer's disease. Yet not everyone can get as much sleep as they need. He gives the example of a single mother who works two jobs. If he tells her she needs to sleep eight or nine hours a night, "She'll look at me and scoff," Seixas says. "And they have, to be quite honest." Seixas imagines working with that single mom to figure out ways to offset her lost sleep using other health recommendations. Maybe dialing up her exercise can lower her risk of certain diseases, even while she continues squeezing in just six hours a night. If exercising more isn't feasible, maybe she can adjust her diet instead. The data to make this happen might come from wearable technologies that track the mom's activities and biometrics, as well as artificial intelligence and machine-learning models that predict how changes to her behaviors will affect her health. Scientists are still learning about the intricate ways our traits, behaviors, and risks may affect each other, so this scenario is still hypothetical. But one goal of Seixas's research is to be able to personalize the advice a doctor gives a patient, rather than assuming that the same guidance is right for everybody. Seixas calls his philosophy "precision and personalized population health." He thinks general guidelines for the public are important, too. But to fulfill what he calls his "sacrosanct" role in public health, he wants to find precisely the right way to help that single mother, or anyone else, stay healthy. Some of his research hints that it might be possible. In a 2018 paper, he and his colleagues used machine learning to analyze survey data from more than 280,000 people about whether they'd had a stroke, as well as their age, sex, nightly sleep, and physical activity.
The analysis showed which combinations of factors put people at higher or lower risk of strokes. In a similar 2017 paper, Seixas and others calculated which combinations of activity, sleep, stress, and body mass index were linked to the lowest diabetes risk in Black and white Americans. The more health data he can include from a diverse range of people, the better the recommendations that might emerge from it. Among many other projects to help improve these datasets, Seixas is soon launching a study with funding from Merck that will focus on people with hypertension and diabetes. Seixas and his team created an app that will give participants higher-level analyses from the Fitbits or other health trackers they already use. For example, how is their nightly sleep related to their daily steps? The app will also automatically gather health-related news and articles that might interest the user. And, critically, it will tell users about clinical trials they can enroll in. “We want to appeal to the greater good of individuals,” Seixas says, tapping into people’s drive for altruism and volunteering to fight chronic health conditions. Encouraging more people to enroll in clinical trials — which include trials of behavioral changes, not just tests of new drugs — could help researchers get better data on underrepresented groups. Seixas hopes it could also help the public to see science in a positive light. “Especially now,” he says, “where you have political figures questioning whether or not science should be the bright force that it has always been.” Seixas’s ideas are especially timely during COVID-19, says Girardin Jean-Louis, a professor of population health and psychiatry at NYU Langone Health, who is Seixas’s mentor. During the pandemic, vulnerable communities are having an especially hard time accessing healthcare. “His research is poised to address how various health issues plaguing underserved communities can be addressed adequately,” Jean-Louis says. Seixas hopes the questions he and his research group at NYU are asking will someday help to transform healthcare. “We have very ambitious dreams and goals,” he says.
https://medium.com/neodotlife/big-dreams-of-personalized-health-fbd25d73e910
[]
2020-11-28 20:46:58.729000+00:00
['Equality', 'Sleep', 'Health', 'Wellness', 'Race']
The Best Platforms to Build Your Personal Brand
9. Guest Posts on Related Blogs While there are many ways to promote your content, there is one strategy that stands out as a consistent way to generate new organic traffic — writing a guest post on someone else’s blog. Initially, some bloggers are hesitant to pursue this strategy, wondering: Why would I give someone else content for their blog? Shouldn’t I be building out my own back catalogue of content? However, as you will see from the reasons described below, there are many benefits to guest posting on blogs other than your own. Create Backlinks Most blogs that accept guest posts allow their contributors to leave at least one link to their own site. After all, most of them don’t offer any financial compensation for your hard work. A brief newsletter mention, blog post, or keyword-optimized link is fairly standard. Still, even a single backlink from an authoritative blog will greatly benefit your own blog’s SEO. Backlinks make your content more discoverable to search engines like Google. This is especially true if you are guest posting on another site within the same niche as your own site. Build Your Social Media Following Guest blogging increases the amount of social media shares your content will receive. This is a great way to boost your follower count and accelerate your lead generation efforts. By contributing to an authoritative blog, you are essentially receiving an endorsement from the author of the guest blog. This means that their readers are far more likely to explore your content. In most guest blogging arrangements, you get to customize your contributor profile. Here you can include direct links to your social media accounts, making it easy for new readers to become followers. Improves Your Authority in the Industry Guest blogging helps you establish authority in your industry. Imagine that you search for the phrase “What is X?” or “How can I improve at X” in Google. Which of the two search engine results sounds more authoritative: A single website that appears to answer your questions (but it contains no endorsements, only self-proclaimed endorsements) and is one of many sites all making similar claims. A website that contains an answer to your question, which is also referenced across other top links, all referring to your specific website, which other bloggers tout as an expert. Obviously the second scenario is preferable. Yes, your own blog post may reach the number one result in either search, but readers are likely to skim a few other top results. If all of the top sites are referring to you, and several even include a guest post from you, your own content will be much more easily accepted as authoritative. In the first search scenario, even if your content ranks highly, there will be other articles written by other authors, all making similar claims. This diminishes your perceived authority within a niche, to most readers browsing. Expand Your Blogger Network Every now and then, you’ll guest post for someone and the new connection you made turns out to be a home run. This new blogging contact can be someone who allows you to be an affiliate for their new product. They may follow you on social media and comment on all your posts. They may share your blog posts on their Twitter feed when they see it. It is difficult to make these types of valuable connections. One of the best ways to do this is by taking the time to write a guest post for someone’s site. For example, I recently met Chris Craft after he posted a comment in response to one of my articles. 
After some friendly chatting over email, I offered to do a guest post for his site InspireFirst. Expand Your Portfolio of Work Portfolios highlighting your best work are important for both individuals and companies. Consumers want to see for themselves the impact a brand has, especially when it's accessible online. One of the benefits of guest blogging is enhancing that portfolio. The more valuable the information you share in your guest posts, the more you demonstrate expertise and experience within your industry. It can also function as a lead generation tool, since readers may also be interested in a service you describe or demonstrate in your guest post. In the previously mentioned guest post I did on InspireFirst, I created a custom infographic. I have already received a few emails from writers looking to collaborate, solely because they want me to create an evergreen infographic of their own.
https://medium.com/digital-marketing-lab/the-best-platforms-to-build-your-personal-brand-528fad2c7f15
['Casey Botticello']
2020-11-13 04:07:28.593000+00:00
['Social Media', 'Technology', 'Writing', 'Business', 'Entrepreneurship']
How Netlify’s deploying and routing infrastructure works
Every now and then, people ask about the advantages of using Netlify over a traditional cloud provider's file storage service. Cloud providers can serve static files from your own domain name, so why use something else? In fact, Netlify uses those services to store files too. The difference is in the value added by Netlify on top of those services. If we were going to look at this difference on a map, we'd say that file storage is a commodity. Storage costs are very low, which turns those services into utilities. Anyone can take several files and upload them to the cloud. This presents us with some opportunities that we're going to explore. The first one is about what to show your visitors when they use different domains and subdomains. For instance, I want to show specific content for www.example.com and beta.example.com. This is usually handled with custom solutions. You can make it work with the right combination of DNS records and storage buckets. Another interesting opportunity is efficient file uploads. Off-the-shelf tools usually upload all your files to a storage bucket every time you run them, even if there are files that have not changed. This makes the process slower than it would be if they uploaded only the files that have changed. This was one key advantage for Dropbox: their desktop clients knew how to manage files to save bandwidth and time. Off-the-shelf tools don't offer bucket integrity either. If two visitors request the same file while you're updating your bucket's content, they can get different content. If that content requires new CSS and JavaScript, your visitors won't see what you expect them to see. You can work around this with the right combination of CDN caching, expiration headers, and scripts to expire the cache after each upload. Netlify solves these problems for you, and many others. We've created a real product from the experience of implementing those custom-built solutions ourselves. The rest of this post explains some infrastructure details behind Netlify's deploying and routing mesh.
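To make the "only upload what changed" idea concrete, here is a minimal sketch in Python of content-addressed diffing. This is not Netlify's actual API or implementation, just an illustration of the concept: hash every file, ask which hashes the server is missing, and upload only those. The directory name and the empty server-side set are placeholders.
import hashlib
from pathlib import Path

def digest(path):
    # Identify each file by the SHA-1 of its content, not by its name or mtime.
    return hashlib.sha1(path.read_bytes()).hexdigest()

def plan_upload(site_dir, hashes_already_on_server):
    # Map every file in the site to its content hash.
    manifest = {str(p): digest(p) for p in Path(site_dir).rglob('*') if p.is_file()}
    # Only files whose content the server has never seen need to be uploaded.
    to_upload = [p for p, h in manifest.items() if h not in hashes_already_on_server]
    return manifest, to_upload

# Hypothetical usage: once every missing file has arrived, the server can publish
# the whole manifest atomically, so visitors never see a half-updated site.
manifest, to_upload = plan_upload('public', hashes_already_on_server=set())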
https://medium.com/netlify/how-netlifys-deploying-and-routing-infrastructure-works-c90adbde3b8d
['David Calavera']
2018-03-30 19:17:58.627000+00:00
['Infrastructure', 'Mapping', 'Wardley Maps', 'Computer Science', 'Engineering']
The Myth Of The ‘Yoga Body’
By Lily Silverton PHOTOGRAPHED BY MOLLY CRANNA. So this feature was nearly finished, with a completely different opening, when a woman I had just met at a party asked what I did for a living. I explained that I taught yoga and worked as a writer. “Ah,” she said with a sage nod, looking me up and down, “so you have no fat on your body.” It wasn’t a question, it was a statement. And some variation of it is a very common one to be on the receiving end of as a yoga teacher. But why? Why have we decided that yoga = thin? (I do have fat on my body FYI.) Where has this idea of the “yoga body” come from? When I started practising — more than 20 years ago — I understood the yoga body only in relation to pictures of middle-aged Indian men, looking serious and focused. Now there are 126 million search results on Google for “yoga body”. And they almost all show the same thing: a young, thin, tanned, flexible woman, who is probably also beautiful and radiantly happy, and quite possibly semi-clothed on a beach. (There are also a few men who come into this category, invariably they have a top-knot.) We’re at this strange juncture where yoga is both more inclusive and exclusive than ever. There is wide availability — with studios popping up by the minute and online classes gaining in popularity — but the sometimes eye-watering prices per class, combined with the tapered visual identity, have simultaneously made it feel much more intimidating and alienating. Leaving aside the systemic issues around class, race, money and privilege (four different features in their own right), the image of yoga is painfully narrow. Yoga magazines, websites, advertising — they all echo the mainstream ‘perfect body’ fable, with images of people with model bodies combined with the flexibility of ballet dancers. And then there is social media. London-based yoga teacher Becky Farbstein says: “I scroll through my social media feeds, full of yoga teachers, yoga practitioners, and other members of the health and fitness profession, and I feel like I’m paging through a digital version of Sports Illustrated’s Swimsuit issue. When I step away from the internet, I know I am healthy and strong, but in the myopia of social media, that perspective gets fuzzy.” “At its core, yoga invites you to get to know yourself better — to develop awareness from the inside out, rather than view the self from the outside in.” The thing is, this “fuzzy” perspective is really dangerous — just as mainstream advertising is — because we are excluding and alienating huge swathes of society from our visual world, perpetuating the ridiculous myth that the most important thing about a woman (and increasingly a man) is how they look. And — here’s the key ­- that above all else they must be thin. “Here in the Western world we are obsessed with weight loss, which then translates over to yoga,” says teacher and BigGalYoga founder Valerie Sagun. “We don’t have to be thin (or even fit) to practise yoga. Making people think that you need to be thin to practise yoga is bullying and fat-shaming, and enforcing that if you don’t have a small body you are not wanted in the world.” Yoga will give you a good body — in the sense that practised regularly it will increase strength, stamina, flexibility, balance, and lung capacity. 
It may very well lead to weight loss and body sculpting and all of the other things associated with the “yoga body” but, with that as the aim, you will probably miss out on what I think is one of the greatest gifts of yoga — the understanding and appreciation it gives you for your body. At its very core, yoga asks you to connect (or for most of us, reconnect) with your body. It invites you to get to know yourself a little better — to develop awareness from the inside out, rather than view the self from the outside in. So it’s really, really sad that yoga has become yet another space in our society that has been taken over by a set idea of how our bodies should look. That it is being marketed and sold (make no mistake, it is now a multi-billion pound industry) as a product to help us “ lose weight” or “ get the perfect body “. This kind of thinking invariably leads to a deep disconnect from the body, resulting in precisely the opposite outcome of the aim of yoga. This is something author Lauren Lipton is seeking to combat with her new book, Yoga Bodies. Featuring 80 different yoga practitioners of all ages, shapes, sizes, backgrounds and skill levels, Lauren created the book because she “know[s] so many people who could benefit from yoga, but it can be intimidating to those who have never tried it. People say, ‘I can’t do yoga because I’m not flexible’ or ‘I’m not in good enough shape for yoga’. I wanted to address every reason I could think of why people don’t practise.” Featuring yogis with larger bodies, disabilities (both visible and invisible) and yogis in their 90s, Lipton would like readers to “look through this book, find someone who looks or thinks like them and say, ‘If that person can do yoga, so can I’.” Despite the efforts of women like Lauren, Valerie, and BoPo activists such as Jessamyn Stanley, depressing stories of teachers making larger or older students feel singled out are ten a penny. Kim, a 30-year-old copywriter, had a horrible experience at a recent aerial yoga class. “A new teacher asked me to move closer to her. I said I was fine and explained I’d been doing the class for six months, but she insisted that she needed to ‘keep an eye on me’. She didn’t move any of the slighter girls who had never actually done the class before. It was mortifying.” And John, a 53-year-old secondary school teacher explains: “I’ve done yoga for more than 20 years, but at almost every class with a new teacher I will be the only person handed blocks and bricks for support, even though I don’t need them. And I’m often asked if I’m ‘okay’. It feels condescending, though I try to just laugh it off.” We make split-second and grossly unfair assumptions about people based on how they look, and yoga is no different. Even internationally acclaimed teacher Dana Falsetti has experienced “quite a bit of subtle discrimination” on account of her size. “When I take a class teachers assume I’m a beginner — none of them ever think I’m a teacher! But I recognise as a teacher — and someone living in this body — that these things are so ingrained, they don’t realise they are ostracising someone.” “The thing is, yoga is now a brand and, like any good capitalist brand, it needs an image in order to make you feel inadequate and want to buy stuff.” Ingrained indeed. I know that we are all distinct and individual and I know that yoga is not about what you do on the mat but about how you treat people off it. 
I’ve taught 3-month-old babies and wheelchair-bound veterans, and I know that all you need is the breath and that physical asanas can be adapted to help any person of any age, body type, or skill level to find their own mindful, embodied practice. But still, like Farbstein, if I spend too much time on Instagram I begin to feel insecure and wonder whether I should lose a bit of weight or be branding myself as a “yoga goddess”, despite being fully aware that it is insane and misses the point of yoga altogether. There is nothing wrong with self-improvement, or striving to be a better, healthier, stronger version of yourself. However, if you’re losing sight of self-acceptance, and focussing only on the body, you’re increasing your self-absorption and narcissism, and moving further away from your authentic self. It’s the internal qualities that make a yogi! We know that, right? Yes we do (I hope), but the thing is, yoga is now a brand and, like any good capitalist brand, it needs an image in order to make you feel inadequate and want to buy stuff. It’s hard to sell socks, after all. In the realms of wellness and spirituality we are firmly back to Naomi Wolf and The Beauty Myth, when we thought we’d come so far… I arrived on my yoga mat as a young teenager with awful body-image problems, and those classes (which were categorically not full of skinny Lycra-clad girls and in fact included my 50-year-old father) were a safe space for me to practise and (unwittingly) develop self-love. It took many more years for me to put that into practice, but the seeds were sown in, and grew from, yoga. If I were 12 now and coming to the practice today, I seriously wonder whether I would have the same experience. As the teachers from YogaWith comment: “Viewing Instagram images of uber-flexible girls in poses defying gravity one might question the therapeutic value of yoga.” While writing this feature I asked my Facebook friends if anyone practised yoga and didn’t have a traditional “yoga body” and would like to talk to me. 30-odd people replied saying they would, and nearly every single one of them has a slim body. It was another huge eye-opener for me in terms of our collective body dysmorphia and society’s view of the “yoga body”. The lessons? Curate your personal visual world wisely; demand more diverse representation and visibility from your external world; and finally, work keenly on your inner world. “We’re all so bogged down with superficial thoughts that we don’t even realise they are false,” says Dana. “Wake up and think critically! Increasing self-awareness for the individual is the first step to creating a more inclusive environment.” Lily Silverton is a London-based yoga teacher and writer. www.allbodyyoga.com
https://medium.com/refinery29/the-myth-of-the-yoga-body-8ac1710de4b1
[]
2020-07-20 18:56:01.002000+00:00
['Health', 'Yoga', 'Body Shaming', 'Wellness', 'Yoga Body']
The Definitive Guide to Medium’s Quotation Feature
Writers. We’re creative people. We’re our own bosses. No flimsy cubicle walls can hold our high-flying, freedom-loving spirits. We set our own hours, follow our own passions, and write what we know. And while we generally agree that we should write every day, if for no other reason than to prime the pump, we’re not too keen on rules in general. You can’t tell us what to do. And I like that about us. I think it’s admirable. Usually. But when it comes to Medium’s quotation feature, our independence is getting in the way of clarity. If you’re a regular reader, you know exactly what I mean. There’s a jarring lack of consistency in writers’ use of the tool. If you’re a regular writer, you’re probably thinking something like, “Make me. Just try and make me play by the rules. I’m an artist. Why don’t you go lecture ee cummings about capitalization, ya grammar Nazi you?” Oh dear. I’m afraid we’ve gotten off on the wrong foot. Let me try again.
https://medium.com/the-brave-writer/the-definitive-guide-to-mediums-quotation-feature-e9adc34ebd60
['K M Brown']
2019-10-05 22:23:06.133000+00:00
['Advice', 'Medium', 'Writing Tips', 'Creativity', 'Writing']
Git Ready: A Git Cheatsheet of Commands You Might Need Daily
I use Git every single day. So do most software developers. Honestly, Linus Torvalds’ little side project almost feels like a miracle. However, the tool is so powerful and extensive that it’s easy to get lost in all the possible commands it has. Hence, based on my own experience, here’s a compilation of answers to common questions about “how do you do X with Git” that I’ve encountered on a regular basis. Some of us may even use these solutions on a daily basis. Indeed, many of the commands addressed here will be rather simple, and often well-known by the majority of developers. However, I thought this could be beneficial as a one-stop-shop place for remembering that one command you forgot, as well as providing a good foundation for beginners. If instead you’d prefer to go over a practical deep dive of Git, you can check out this article. The Git Cheatsheet 🗒 S️toring changes without committing them This is a simple one, just run: git stash Then, to bring back these stored changes, making sure you’re on the same branch, you can run: git stash apply Getting rid of all uncommitted changes Sometimes you want to try something and it just doesn’t work out. To get rid of all changes you made since the last commit, just run: git checkout -- . To only erase changes from specific files or directories, . can be substituted by a list of files and/or directories you wish to erase changes from. Syncing your fork with the main repo When you fork a project, it’s important that you keep your fork up-to-date to avoid complicated merge conflicts when you make a pull request, or simply to make sure you have all the new features and security patches. As such, here’s how you sync your fork: Add a remote repository Get the address of the upstream (main) repo from where you forked the project. Then run the following, substituting the URL: git remote add upstream <upstream_repo_url> You can check that this worked by running git remote -v . 2. Sync the fork with the upstream repo To sync the fork, fetch the upstream repo: git fetch upstream Then, on the branch you wish to sync with (generally master ), run: git merge upstream/master Or git rebase upstream/master depending on your strategy of choice. Erase the last X commits Made some commits that you ended up needing to revert? You can do so in two ways: git reset HEAD~2 # undo the commits but keep the changes git reset --hard HEAD~2 # undo the commits and discard changes With the second option, it will be as if the commits never happened. You should replace the 2 with the number of commits you wish to go back from the latest commit (HEAD). Squash various commits into one (without rebase!) If you want to get rid of all your "fix typo" commits and join them all together into one, you can do so with: git reset --soft HEAD~2 && git commit -m "your message" Remember to replace the 2 with the number of commits you want to squash counting from the HEAD. Checkout the state of the project at a past commit To go back in time and see the state of your project at a given commit in the past, first run git log to see the commit history and select the commit you wish to go back to. Then, copy its hash and simply run git checkout <commit_hash> . This will leave you in “detached head” mode. To go back, just checkout the branch by name. Ignoring a file you already added to Git We’ve all been there — adding or committing something we shouldn’t have. 
To remove the file from Git tracking while keeping it in the system, just do:
git reset <file> && echo <file> >> .gitignore
(If the file has already been committed, use git rm --cached <file> instead of git reset, so Git stops tracking it while keeping it on disk.)
Adding to a commit after committing
If you want to change your commit message, or add a new file to it, you can use the --amend flag. To change the message, use:
git commit --amend -m "<new_message>"
And to add a new file to the last commit:
git add <file> && git commit --amend
Note that this "saves you the trouble of creating a new commit", but in fact does create a new commit under the hood. Hence, you should only be doing this if you haven't pushed the changes to the remote repo yet.
Removing a file from Git and pruning its entire history
If you ever push sensitive data to a remote repository (e.g. on GitHub), you'll not only need to remove the file from Git tracking, but also delete its entire history. You should also stop using that data if at all possible, such as in the case of API keys, passwords, etc. The process isn't the simplest, but GitHub has written a full-page tutorial about it, so I thought I should just link it here instead. "Removing sensitive data from a repository — GitHub".
Record merge conflict resolutions
To avoid having to resolve the exact same merge conflicts multiple times, you can enable Git's cache of merge conflict resolutions. This will store how a merge conflict was resolved and automatically resolve the same conflict if it comes up again:
git config --global rerere.enabled true
Read more about this in the Git docs.
Commits made on the wrong branch
If you made a commit on the wrong branch, you should be able to use our knowledge about erasing commits to solve the problem, like so:
git branch <new_branch> && git reset HEAD~2 --hard
This will create a new branch that keeps the commits and delete the specified number of commits from the current branch where you wrongly added them. If you actually want these commits on an existing branch rather than a new one, then you can do:
git checkout <desired_branch> && git merge <branch_with_commits>
git checkout <branch_with_commits> && git reset HEAD~2 --hard
If merging is not an option, you can use git cherry-pick to apply the last two commits onto the desired branch, like so:
git checkout <desired_branch>
git cherry-pick <branch_with_commits>~2..<branch_with_commits>
git checkout <branch_with_commits> && git reset HEAD~2 --hard
Changing a branch name
To change the name of a branch, use git branch -m. You can either change the name of the current branch:
git branch -m <new_name>
Or change the name of any branch:
git branch -m <old_name> <new_name>
Finding the commit with a bug
If you run into an issue that you know is unrelated to your commit, you'll need to determine which commit in the past introduced the problem. This is common with tests, for example, when they aren't passing due to a failure completely unrelated to your work. In this case, to find the "bad" commit, you can use git bisect. The way it works is as follows:
1. Start the process
git bisect start
2. Mark the current commit as "bad"
git bisect bad
3. Mark a commit in the past as "good"
Find a commit in the past, using git log for example, where things were as intended (i.e. good). Then, run:
git bisect good <commit_hash>
4. Get bisecting!
You should now get a message like this:
Bisecting: 2 revisions left to test after this (roughly 3 steps)
[6ca4a67aeb4b0d9835ecf15e44505c48f93642c9] my-branch
The numbers, hash, and branch name will naturally be different for you. Git is now binary-searching through the range of commits, checking out one at each step, until it narrows down the commit that introduced the problem.
You don’t need to run git checkout as that’s being handled for you. At every commit, you should then check if things are OK. If they aren’t, mark the commit as bad with git bisect bad . If they are, mark it as good with git bisect good .
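If checking each step by hand is tedious, git bisect can also do the checking for you with git bisect run, which marks each commit good or bad based on a command's exit code (0 means good, non-zero means bad). A sketch, assuming your project has a pytest-based test suite (the test path is an assumption):
git bisect start
git bisect bad
git bisect good <known_good_commit_hash>
# Let Git walk the history and run the test at every step:
git bisect run python -m pytest tests/test_feature.py
# Once the first bad commit is reported, return to where you started:
git bisect reset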
https://medium.com/swlh/git-ready-a-git-cheatsheet-of-commands-you-might-need-daily-8f4bfb7b79cf
['Yakko Majuri']
2020-11-13 08:04:38.172000+00:00
['Python', 'Technology', 'Software Engineering', 'Programming', 'Software Development']
The Programmer’s Paradox
How perfect does code need to be? Image by Reimund Bertrams from Pixabay In every project a programmer takes on, a tiny war is waged between time and perfection. Coding is a complex task that offers multiple ways to achieve the same end and is, counter-intuitively, both rigid in its function and subjective in its appraisal. What do I mean by that? Well, in JavaScript, if you leave out a closing parenthesis you'll likely break the code completely; the language is rigid in that sense. However, comparing how two different programmers choose to solve the problem of validating a form will result in a wildly subjective debate over approach, merits, efficiency, etc. So, it's important to recognize that each programmer has a distinct and personal definition of "perfect" when it comes to code.
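To ground that last point, here are two equally working ways to validate an email field, sketched in Python rather than JavaScript (the function names and rules are invented for the illustration):
import re

def is_valid_email_regex(value):
    # One programmer reaches for a regular expression.
    return re.fullmatch(r'[^@\s]+@[^@\s]+\.[^@\s]+', value) is not None

def is_valid_email_checks(value):
    # Another prefers explicit, readable checks.
    return ('@' in value
            and ' ' not in value
            and '.' in value.split('@')[-1])

# Both return True here; deciding which is "better" is exactly the subjective part.
print(is_valid_email_regex('reader@example.com'), is_valid_email_checks('reader@example.com'))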
https://medium.com/better-programming/the-programmers-paradox-52e2c062b400
['Ryan Nehring']
2019-11-08 20:47:15.719000+00:00
['Development', 'Technology', 'Programming', 'Design', 'Software Development']
The Biggest Hurdle for Young Entrepreneurs
The Biggest Hurdle for Young Entrepreneurs And what it takes to finally make the jump. Photo by John Cameron on Unsplash To be both young and a successful entrepreneur is to be an anomaly; to defy human nature at its very core. Most of the hurdles that make entrepreneurship at a young age so difficult are known and rather straightforward, such as the lack of capital, skills, and knowledge. However, these are merely stepping stones since there’s a wealth of knowledge available to help us get over them. Other hurdles aren’t so straightforward. They aren’t defined, quantifiable, or measurable. The one that I’m going to be alluding to in this article is the misalignment of the young entrepreneur’s incentive structure when compared to, say, a 35-year-old with a spouse and three kids. Because odds are, if you happen to be reading this article, you probably aren’t one of the anomalies that I referred to in the first paragraph… yet. Let me explain: Why We Get Into Entrepreneurship in the First Place Generally speaking, there are two types of people: (1) conventional thinkers, and (2) independent thinkers. Those that lean more towards the conventional-minded side are more likely to earn good grades in school, go off to college, and land a nice and secure job after graduating. This is the ideal path for them; they wouldn’t have it any other way. But you’re not a conventional thinker; you’re more independently-minded. And those that are independent-minded, among other things, have a higher propensity to create new things and challenge the status quo. The next thing we need to ask ourselves is why we’ve decided on entrepreneurship instead of following the beaten path. On one end of the spectrum, we have those that have arrived here by means of “via positiva.” On the other end of the spectrum, we have those that arrived here by means of “via negativa.” VIA POSITIVA: You were born with an entrepreneurial spirit or possess some otherworldly skill. Most of the young & successful entrepreneurs fall under this category. Action-takers by nature. VIA NEGATIVA: You long to become an entrepreneur because conventional education, work, etc. are slowly crushing your soul. Since you’re looking for answers in the article instead of taking action, then there’s a 90 percent chance you’ve arrived here by means of “via negativa.” Or, to put it another way: entrepreneurship doesn’t come naturally to you yet but you desperately need to find a way to make it work. If it makes you feel any better, this is the category that I fall into as well. Why Entrepreneurship Doesn’t Come Naturally to Twenty-Somethings It is human nature to strive for things that make us feel powerful or important. For instance, in elementary school, I vividly remember putting a lot of stock into what I placed for my gym class’s mile-run in the fall and spring. In adolescence, our energy coalesces towards popularity, being liked, and making ourselves look cool. And as we mature, our focus turns outwards towards the support of those that depend on us. Most of us can remember how painful the first transition was — that is, from being an innocent child to an incredibly awkward and judgmental adolescent. But perhaps the most confusing transition is the one from adolescence to true adulthood. Especially as an aspiring entrepreneur. Think about it. Just a few short years earlier, you gained approval through the quality of your grades, how your classmates perceived you, your dates, and your success as an athlete. 
You cared more about the façade you put up than anything else. If it were cool to take your future, career, and creative works seriously, you probably would have. But it wasn’t cool — just look at how much fun the “nerds” had! Even so, as somebody in their late teens to early-to-mid twenties, it still makes sense to put stock into those same vanity metrics that you cared about a few years ago. There’s still a logical application. After all, it’s highly likely that you’re still dating around, developing your inner circle of close friendships, and building your reputation. I’m not shaming you for caring about those things. But, there is a major drawback: It is hard as f*ck to care both about how you’re perceived by your peers ~AND~ do the things required to become a successful entrepreneur. Let me illustrate by pitting two people of different age groups against one another: The Married Man vs. the Youngster The 35-year-old entrepreneur is a married man with three kids and ten employees that depend on him. That is, the decisions he makes and the risks he takes for his business are predicated on how they can benefit them. He has almost entirely removed his ego from the equation. If the cost of implementing the highest ROI strategy for his business is that he might look stupid for a little while, then so be it — he’ll do whatever it takes to move the needle for his business. On the other hand, the young entrepreneur is grinding for himself. By and large, he is still the same self-centered kid he was a few years ago. Because he is more risk averse towards the things that put himself in a place of vulnerability, he will miss out on wonderful opportunities to grow his bottom line. Since he is optimizing for the wrong things (i.e., looking cool) through no fault of his own, he will greatly hinder his ability to grow his business. As we can see, there is a stark contrast in incentives between the married man and the youngster. In his motivation to support his wife, kids, and employees, the married man will do the things that are best for his business. Conversely, in being self-centered the youngster will miss out. Does this mean that the married man doesn’t care about how he’s perceived by others? Of course not — it would be asinine to think so. It’s just that he cares more about the bottom line of his business. The nerds that you went to high school with didn’t want to be unpopular, either — they just prioritized knowledge over other people’s opinions. So, long story short, a young entrepreneur’s incentive structure is terribly misaligned. The things that motivate them at this stage in their life aren’t conducive to success. For this reason alone, the pursuit will almost always feel like an uphill battle. The Takeaway One time I read that the worst way to catch a cat is by chasing after it. In this case, the young entrepreneur is both the chaser and the cat — in chasing himself, his goals are an ever-receding horizon. This is just another one of the weird ways the universe works. So, my advice to youngsters who want to launch their own business? Realize that you’re supposed to feel like you’re going to vomit 90 percent of the time. This is just the price you pay for going against the incentive structure that most people your age are motivated by. Find something other than yourself to grind for. Maybe you don’t have a spouse, kids, or a handful of employees yet. But there has to be ~something~ that can make the pain and discomfort worthwhile. Take failure off of the chin. 
The more often you fail, the more feedback loops and data points you have to work with. This is how you gain experience and wisdom. But most importantly, understand that this is the biggest hurdle you face on your way to success. In working up the courage to jump over it, you are already half of the way there. Learn to fight the inner resistance and the world will be yours.
https://medium.com/the-innovation/the-biggest-hurdle-for-young-entrepreneurs-d5975a4a3925
['Peter Davig']
2020-12-24 16:32:42.848000+00:00
['Self-awareness', 'Mindfulness', 'Business', 'Entrepreneurship', 'Life']
Here is Why Vitamin C is So Crucial to the Human Body
Here is Why Vitamin C is So Crucial to the Human Body C for crucial & collagen Photo from amoon ra on Unsplash Vitamin C, also known as ascorbic acid, is a water-soluble vitamin that is found in various foods. Water-soluble vitamins are not stored in the body and because of this, they must be consumed daily either through food or a supplement. Its main benefits are: Vitamin C is a powerful antioxidant that helps support the immune system and can reduce the risk of many diseases. Vitamin C helps lower the risk of heart disease. Vitamin C boosts collagen production which helps support skin health. Vitamin C is a powerful antioxidant that helps support the immune system and can reduce the risk of many diseases. As we naturally age, we become more susceptible to infections and diseases as our immune system weakens. The immune system has a harder time fighting off pathogens and this can lead to many problems. Antioxidants assist the immune system and thus getting enough antioxidants from food helps in keeping the immune system strong. Vitamin C acts as an antioxidant which helps protect your cells against free radicals. Free radicals are unstable molecules that have unpaired electrons and because of this, they steal electrons from other cells which causes widespread damage and can lead to disease. Consuming enough vitamin C helps supply the body with ample amounts of antioxidants which over the long run can help prevent many diseases. Antioxidants have been shown to prevent diseases such as cancer, Alzheimer’s disease, dementia, etc. It is commonly believed that intaking a vitamin C supplement every day can help treat or even prevent a cold. However many studies have shown that taking daily vitamin C doesn’t reduce the risk of getting a cold. On the other hand, research has shown that vitamin C shortens the length of a cold and decreases the severity of the illness. A 2013 review from the Cochrane Database of Systematic Reviews showed that people who took at least 200 mg of vitamin C regularly over the course of the trial recovered from the cold faster than participants who took a placebo. Adults and children who took vitamin C saw an 8% and 14% reduction respectively in the duration of their cold compared to the placebo group. This illustrates that while vitamin C can’t prevent a cold, it helps support the immune system by providing extra support which helps get over a cold faster. Vitamin C helps lower the risk of heart disease. Heart disease is the leading cause of death worldwide. As we age, heart disease becomes more prevalent with heart attacks and coronary blockages occurring mostly in the elderly. Atherosclerosis is the buildup of fats and cholesterol along the arterial wall. When this blockage becomes large enough to the point where it starts blocking a majority of the blood flow to the heart, a heart attack ensues. A major predisposing factor to atherosclerosis is LDL oxidation. Since vitamin C is a powerful antioxidant, it can help prevent this LDL oxidation which helps reduce the occurrence of atherosclerosis. An analysis of 13 studies found that taking at least 500 mg of vitamin C daily reduced LDL cholesterol by 7.9 mg/dL and triglycerides by 20.1 mg/dL. Additionally, high blood pressure is another predisposing factor for cardiovascular disease. High blood pressure puts too much strain and stress on the blood vessels which by making arteries less elastic can lead to a decrease in blood flow and heart disease. 
Vitamin C helps relax blood vessels which in turn lowers blood pressure and reduces the stress on the cardiovascular system. A research study showed that vitamin C supplements on average reduced systolic blood pressure by 4.9 mmHg and diastolic blood pressure by 1.7 mmHg. Vitamin C boosts collagen production which helps support skin health. Collagen is the most abundant protein in the human body making up around 75–80% of human skin. As we age, collagen production starts to decline which can cause premature aging, wrinkles, etc. This is where vitamin C can help. Vitamin C is the co-factor for prolyl hydroxylase & lysyl hydroxylase. Both of these hydroxylase enzymes catalyze the hydroxylation of proline & lysine residues of procollagen which helps the collagen molecule properly fold into its triple-helix structure. Thus, vitamin C is directly involved in collagen production which overall helps improve the skin. Since vitamin C is also an antioxidant, it helps fight off skin damage caused by UV rays and helps diminish the appearance of fine lines and wrinkles. Vitamin C helps in the growth and repair of tissue which helps keep the skin healthy and firm. It helps heal wounds primarily through the formation of collagen which helps build connective tissue.
https://medium.com/in-fitness-and-in-health/here-is-why-vitamin-c-is-so-crucial-to-the-human-body-6e76abf88160
['Samir Saeed']
2020-12-24 15:26:50.668000+00:00
['Health', 'Science', 'Healthcare', 'Fitness', 'Fitness Tips']
‘You’ve Done The Impossible!’ Declares Bitcoin.com Host to Richard Heart, Founder of HEX
‘You’ve Done The Impossible!’ Declares Bitcoin.com Host to Richard Heart, Founder of HEX Highly Informative, Stimulating: Bitcoin.com Podcast Host Dustin Plantholt Sits Down With Richard Heart, Founder of HEX The following are excerpts in their original context. For the full interview please check out Bitcoin.com Podcast Network. Plantholt: How did this idea come to you, and what is HEX? Heart: Well I’ve been in Bitcoin since early 2011; I’ve mined full blocks on my own, which had 50 Bitcoin block rewards at the time. I’m a serial entrepreneur. I retired in 2003; started traveling the world; had 150 employees; I’m a self made man who’s done a lot of things successfully, and HEX is another one of those things. If you want to replace banks you’ve got to replace all of their products. And what’s the most popular product, or one of them: The time deposit. There’s $7.2 Trillion in time deposits in the United States and China; there’s only $5 Trillion of printed money. So it’s a 50% larger market than cash, which is what Bitcoin was designed to replace. HEX addresses a larger market, and then by accident also does the cash part better than Bitcoin. If you want to replace banks you’ve got to replace all of their products, and what’s the most popular product, or one of them: The time deposit. HEX can do 2,000 transactions per second through zkSync.io; you can do anonymity through t.me/HEXnado, Zero Knowledge proofs; you also have 13 second blocks which means you can do a transaction in 13 seconds instead of the 10 minutes it takes Bitcoin — everything’s better. So it’s faster, more secure, cheaper, higher throughput, and has better price performance. HEX is up in 2020 right now like 70 or 80x versus Bitcoin. And it’s been as high as I believe 140x [vs. Bitcoin] this year. If you had bought on January 5th and held until now, you could have bought back 50 times the Bitcoin you started with and still have a sizeable amount of HEX left. It’s just a vastly superior product. Plantholt: What do you attribute that to? Is that pure speculation, or is that utility? Heart: The reason that HEX has superior properties is: 1) It was invented 10 years after Bitcoin, so we had the advantage of all that learning and saw the mistakes that they made; and 2) It’s just more ambitious to start. People don’t realize how simple the Bitcoin network is. It only has 2 functions. You can either mine inflation by doing hashing of things, which burns electricity; or you can send coins that you bought from some guy that mined the inflation. That’s it. There’s 2 buttons: Mine and Send. There isn’t any other button. When the miners get their Bitcoin rewards what do you think they do with them? They sell them and dump the price. In Bitcoin the miners take the rewards, pollute the environment, and sell the price down. Plantholt: So you’d also assume there’d be a lot of incentive for miners to just keep holding? Heart: Well, they don’t have a choice. The economic model behind mining is that it should not be profitable because it’s a commodity. In economics the cost to extract a resource trends towards the value of the resource as margins decrease. So when Bitcoin goes up in value, miners waste more money trying to acquire it; when Bitcoin goes down in value miners shut off their machines because they don’t want to lose money. So as long as it’s an efficient, commoditized system it shouldn’t be that profitable. And historically it hasn’t been. Anyone that just held coins has outperformed miners. 
Miners go bankrupt left and right, and always have. Plantholt: Are you still bullish though on Bitcoin long term? Heart: Sure! I’m bullish on the price. I think the price is going to go up, but the technology’s garbage. It’s not even secure. People think Bitcoin’s secure because it has a high hash rate, but they’re dumb and I can educate them: the bugs that cryptocurrencies have primarily are software bugs; the minority of bugs in cryptocurrency are hash rate bugs, and by the way it’s a protection racket as well. Who do you think you’re protecting yourself from when you pay a miner a fee? You’re protecting yourself from him and his buddies because they’re the only people that will attack you. Just like if you have a restaurant in a bad area, and some strong thugs come in and say, “It’d be a terrible thing if there was a fire here tonight. You should pay us to protect you.” It’s the same thing. The people that you’re paying are the people that would attack you. There’s no other party that’s sitting on a bunch of SHA-256 hash rate that you’re protecting yourself from. It’s a protection racket, and it’s gross. So in HEX we don’t pay miners block rewards. We don’t pay miners to pollute the environment. You open 1 transaction when you start your stake; 1 transaction when you end your stake. There’s no overhead in the meantime, and it costs pennies to a dollar. Plantholt: So it sounds like these were the problems you were looking to solve when you created HEX. Heart: Some of them. I wanted to design the world’s quickest appreciating asset. In order to do that I had to find an untapped market that no one else was addressing. We’re the world’s first [blockchain] time deposit. Is anyone else doing time deposits? Nope. It’s just us. How’s that worked out? The price in fiat is up 263x this year. Let me know when Bitcoin will show you a 263x return. It’s not happening. Not in your lifetime. Well, we did that. This year. And we’re set to do more. I wanted to design the world’s quickest appreciating asset. In order to do that I had to find an untapped market that no one else was addressing. People don’t realize how high cryptocurrencies can go. Bitcoin’s up 2,000,000x from 1 penny to $20,000. Ethereum went up from $0.15 to $1,500 — that’s a 10,000x in 2 ½ years. So Bitcoin did 2,000,000x, Ethereum did 10,000x, and now HEX is only up 263x. These are real numbers. Let me know when Bitcoin will show you a 263x return. It’s not happening… Well, we did that. This year. And we’re set to do more. This is the reality, and people are happy and hoping for some new all time high in Bitcoin when it’s up like ten percent. Who cares? Are you in crypto for ten percent? I’m in crypto for hundreds of percent, just like the good old days. Just like when I bought Bitcoin back when it was $30, held it down to $2. That sucked. But then it went up to $1,300. So lucky me right? Plantholt: So what’s this run been like for you? I mean, you were in it from the early days. You’re one of the OG’s. What has it been like for you to see that all of a sudden from carpenters to plumbers people start to talk, “Do you own many Bitcoin?” From around the world people now talk about it but in the early days it was just a couple hundred or couple thousand. Heart: Well it’s entirely different now. Back then we were deciding what the [Bitcoin] logo would look like. We were deciding what the ISO code for the Bitcoin symbol should be; should it tilt left, should it tilt right? And those were all people that cared about privacy. 
Today it’s entirely different. I believe 2% of all Bitcoin is owned by a company that only takes money from accredited investors in the United States, so what do you actually have? Plantholt: Why do you think that is? When they make an announcement like that, but then they put on limitations, why is that? Heart: They have no interest in you having control over your money; they have no interest in you having the freedom to do what you want to do with your own money. The powers that be have no interest in that. They want to entrench themselves, and stay in the middle, and rent-seek. It’s what they’re used to doing for decades. The Bitcoin that I knew and loved was the one that’s supposed to replace these guys, not empower them. You know, now when you buy Bitcoin the bankers own a lot more of it than you do; which is part of the reason I love HEX, the bankers don’t own it. The people own it. And maybe one day that will change, but today it’s beautiful. Plantholt: Going forward for HEX, what are you guys working on in the near-term, long-term, that you believe will ultimately keep driving the price? Heart: Well, you see, HEX is an interesting animal. When you talk to founders of other projects they beat you up with buzzwords and word salad, hopes and dreams, and partnerships, and all this other garbage. It’s not real. In HEX we do the opposite of that. We do not expect profit from the work of others. There is no roadmap. There is no central entity. The good things that have happened in HEX I can tell you about in hindsight. I don’t make forward-looking statements about what will happen. So I can tell you what has happened, and maybe that gives you some indication of what might continue to happen. Right now there’s an app on the phone, iOS and Android, called Staker.app. You can see in real time who’s buying, who’s selling, who’s staking, for how long, who’s making money, who’s losing money, who’s emergency end staking, where you rank versus other stakers; it’s got Leagues in it. That just came out a couple weeks ago. Staker.app for iOS and Android Two thousand transactions per second using Layer 2: we’re the first in the world that had a 250,000-person airdrop on Layer 2. No one else has ever done that. It’s called zkSync.io if you’d like to use that. We’ve got anonymity with t.me/hexnado if you want to further privatize your coins; they’re already pseudo-anonymous from the start, but if you want real privacy in addition to anonymity — you can Google what the difference is between those two words, privacy is just stronger. We’ve got a 3-letter dictionary word dot com that you can’t misspell; we’ve got the best logo; the best brand name; there’s ads all over the United Kingdom; taxis, buses, billboards, magazines. The Economist magazine has had HEX ads, Car and Driver has had HEX ads, and currently, I believe, Card Player magazine ads. What else? Price performance 263x in 301 days. Plantholt: You’re building something that ultimately becomes sustainable, does it not? Heart: It’s got superior features so we’ve got higher engagement; better game theory; more metrics that you can watch. In Bitcoin the only 2 things you can watch are hash rate and price. And that’s it. What else? Nothing, that’s it.
Pretty boring for people if you want them to stay engaged. In HEX we’ve got average stake length; emergency end stakes — when someone ejects and doesn’t do what they said they’d do and pulls out their money early — all of the stakers who were honest and did what they said they were going to do they get rewarded for that. We had a referral program [during Launch Phase]; we gave free coins to Bitcoin holders, and over $5 Billion worth of Bitcoin has minted their HEX. That’s a lot: over 300,000 Bitcoin. Plantholt: You’ve done the impossible! In a very short time period when you’re competing against goliath and then six-thousand-some [cryptos] out there, I mean that’s remarkable! Did that surprise you at all or did you think you’d be much further?
https://medium.com/datadriveninvestor/youve-done-the-impossible-declares-bitcoin-com-host-to-richard-heart-founder-of-hex-dd95a3526063
['Taylor Kennedy']
2020-12-08 20:18:15.142000+00:00
['Business', 'Startup', 'Ethereum', 'Bitcoin', 'Entrepreneurship']
According to Your Body, There Are Only Two Seasons
According to Your Body, There Are Only Two Seasons New research suggests human biology only knows winter and spring Image: Xuanyu Han/Getty Images Autumn leaves changing color. The first winter snow. Daffodil blossoms signaling the start of spring. The long, hot days of summer. At temperate latitudes in Europe and the Americas, nature’s four seasons are a big part of people’s lives. But it turns out human biology has a different schedule. In a recent study published in the scientific journal Nature Communications, Stanford geneticist Michael Snyder, PhD, looked at how people’s biological data changed over the course of the year. Armed with a vast trove of information — over 1,000 measurements from more than 100 people assessing genes, proteins, metabolic markers, immune system markers, and the microbiome — he discovered that instead of four distinct seasons, the body seems to undergo two shifts: one at the beginning of winter and the other in the middle of spring. Elemental spoke with Snyder about the recent study, what might explain the biological seasons, and what they mean for your health. This interview has been lightly edited for length and clarity. Elemental: What inspired you to look at seasonal changes in biology? Michael Snyder: I had recently been wondering, why do we think there’s four seasons? It’s kind of arbitrary. Maybe there’s 15 seasons, maybe there’s three, I don’t know. Why don’t we let the data tell us how many seasons there are, at least from the standpoint of human biology. Meaning, are there patterns in the data to tell you just how many seasons there really are? So the inspiration for this study was really the combination of those two things: always trying to understand people’s health patterns, and the concept that seasons are pretty arbitrary, when you think about it. What did you find? We’ve been profiling this group of 109 people for a number of years, and we had over a thousand measurements in total. We just looked for biological patterns in the data. First, we started with individual molecules, and we saw [patterns] that were known already. For example, hemoglobin A1C [a test that measures blood sugar levels] was known to peak in the spring. We also found a lot of new [patterns], as you might imagine, because we looked at so many molecules, so there’s a lot of stuff that hadn’t been reported. One was [the expression of] this circadian rhythm gene that’s called CIR1 that’s known to fluctuate during the day. But what’s interesting is that we found it actually showed a seasonal pattern and peaks in late April/early May. Then, there were changes in various cytokines [immune system proteins] that are involved in fighting off viral illnesses. We were also very interested in the microbiome, which had been studied a little bit, but again, not at the level we did, and we saw quite a few changes there as well. After we saw these individual molecule changes, we said, “If we take all these molecules together, do they fall into major patterns?” Turns out, they do. There are two major patterns [in these 109 people]: one is what you would expect, it’s late December/early January — a winter pattern, if you will. But then you might have thought the other pattern should be July or August when it’s really hot — late summer. But that wasn’t the case at all, actually. The pattern that came up was late April/early May. So that was, at least to me, a surprise. 
All of the people in the study are from Northern California, and obviously the seasons there are different than in New England or the Southeast. Do you think that the two-season pattern that you found would hold up elsewhere, or do you think it would be dictated by local weather patterns and temperature changes? I don’t know, but I think the approach we use now can be applied anywhere, we’ve just got to get the data. There’s no reason it has to be two; in some places it might be three seasons, in other places it could be 10. Who knows? I think it would be really fun to figure that out. What different kinds of molecules did you look at? I know that in prior studies, you’ve analyzed the genome, the metabolome, and the microbiome. Where did you see the biggest differences in this study? We measure as many molecules as possible. A number of them are clinical markers, but the bulk of them are RNA — your transcripts, proteins, and metabolites. We measure all those. It’s about 20,000 molecules overall, plus microbes. We saw tons and tons of microbes in people’s nasal cavity changing, plus some changing in the gut. I think that makes sense given people’s exercise and probably the food they eat [change with the seasons], so they should see some shifts in their gut microbiome. Also in their nasal cavity because, again, you’re sucking up whatever’s around you, and that’s going to change through the year. Fascinating. What do you think explains these seasonal patterns? Is it driven by the biology or the environment? The winter pattern is what you would expect, it’s viral infections and things like that showing up [and changing the biology]. But there’s some other cool stuff too, [particularly with the microbiome]. Bacteria associated with acne, for example, actually peak in the winter. In the late spring, early April, asthma and allergies [drive a lot of the molecular changes], again which you would expect. But there are also a lot of metabolic changes I didn’t really expect. Our explanation is that people are kind of dormant through the rainy season in California. That is to say, they exercise less than they would in summer. So what we think is that stuff builds up through, say, March and April. Then as people come out of that, they start exercising more, and their metabolic health and cardiovascular health improves. In hindsight, it makes a lot of sense. Can you talk a little more about the relationship between the environment and people’s health? The changes in the immune system and inflammation that you’re seeing, for example, is that solely driven by the increase in viral infections during the winter, or could that be a change that makes someone more susceptible to a viral infection? That’s a good question. I think it’s probably in part due to the viral infections, because we’re seeing cytokines go up. We’re only showing associations, so we can’t prove one versus the other, but there’s most likely an increase in viral infections. But it’s true that people are less healthy in winter, so that probably does make them a little more susceptible as well. They probably go together. The spring pattern is much more complex. It definitely has the allergy and asthma signature, but it also has all those metabolic markers, like hemoglobin A1C, which is associated with diabetes and insulin resistance. Some of the type 2 diabetes markers are high in the spring pattern; same with the cardiovascular disease markers. 
And again, I think it does relate to the external environment — in this case, we can correlate it with pollen. We haven’t published this yet, but we can definitely see correlations between external exposures and some of the internal metabolites, meaning we see things happening on the outside, like pollen, can correlate with metabolic changes inside of you. So we think that some markers correlate with the environment, and some correlate with people’s lifestyle. What are the implications of knowing our “health seasons”? I think the implications are twofold. One is, we don’t really use longitudinal data very well in medicine. One of my biggest gripes is that when you walk into a doctor’s office, they measure you and they compare you with what they’re expecting across the population. They’re not looking for trends, and I would argue that you really should be following people’s trends. For example, if your hemoglobin A1C is going up, you really want to follow that. But it’s nice to know what the seasonal effects are, too. So if something’s running a little bit high in late April/early May, you can say, ‘Well, that’s probably because they’ve been less active and it might be seasonal, but they probably can get it down as they head into summer.’ Now, if it’s running high in winter, then there’s probably something going off there and it’s good to have caught that. So you can take this into account as you’re interpreting people’s health. Or it could mean that you need to work a little bit harder, so you don’t have a higher peak of cardiovascular disease and metabolic disease markers in late April. Maybe people should be pushing themselves a little bit more during those rainy seasons to keep those markers down. That would be another way to interpret it. I think it helps interpret people’s health and what to do about it.
https://elemental.medium.com/according-to-your-body-there-are-only-two-seasons-f17b9e130a70
['Dana G Smith']
2020-12-26 04:54:35.773000+00:00
['Biology', 'Health', 'Seasons', 'Science', 'Body']
3 Meaningful Ways to Begin Your Business While Working a Full-Time Job
3 Meaningful Ways to Begin Your Business While Working a Full-Time Job #2 Create an unbreakable process that keeps you accountable Photo by ConvertKit on Unsplash According to the U.S. Bureau of Labor statistics, roughly 66% of businesses fail within 10 years. If this makes you wonder whether starting a business is worth it or not, I don’t blame you. Not only is starting a business risky business, but it’s tough to manage with everything else going on in life (like a pandemic). It’s stressful paying the bills while your side-business doesn’t pay you. Especially in the early years, there’s no guarantee your business will pay off in the end. I’m here to present a question: why start a business in the first place? Well it’s kind of fun, isn’t it? Walk into a room and talk about your business and you’ll be the most interesting person in that room. Actually, let’s back that up. Hopefully you aren’t in a room full of strangers. Let’s say you’re at a socially distanced gathering, and through your mask you mutter that you have a side-hustle. It doesn’t matter what kind it is, but having a business is sexy. It’s something to strive for. It’s a respectable endeavor to pursue while making ends meet at a regular day-job. It’s not always easy, especially when you see other driven people doing what you want to do. What do they have that you don’t have? You can do what they are doing. Here are three tried-and-true ways to capitalize on your business idea, without giving up your daily bread-maker.
https://medium.com/illumination-curated/3-meaningful-ways-to-begin-your-business-while-working-a-full-time-job-f1f6d63b5287
['Ryan Porter']
2020-11-30 09:49:40.582000+00:00
['Inspiration', 'Business', 'Productivity', 'Ideas', 'Entrepreneurship']
What to Do When Your People Are Leaving
There are few things more unsettling for people-driven businesses like tech companies, professional service firms and asset managers than a spike in attrition. High attrition can easily become a downward spiral, as the exit of key people causes others to question the firm’s health and the opportunities they’ll have. Often management teams haven’t experienced this problem before, and there’s a temptation to panic. This is the second-worst thing to do… after putting one’s head in the sand and assuming the problem will just get better. As with many types of crisis, incisive focus on the root causes is critical to success. This post is a guide to getting at these root causes, but these fundamentals also have broad applicability for retaining people in all times, good and bad. If you’re a founder/CEO in this situation, you’re likely to feel under extreme pressure and may not even feel like you have the luxury to step back to read this post and work through a systematic approach. If that’s the case: Give this post to your head of People or, if you don’t have one, to a trusted member of your team who can be your right-hand person in working through this crisis. Immediately prioritize having five discussions with team members below the leadership team level, this week, in which you: (a) convey that you value them; (b) truly understand from their perspective their broader career aspirations, and what’s working and not working for them in their roles; (c) begin to think with them about the best value proposition you can offer them, looking at every element except compensation; and (d) agree to continue the conversation, and agree with them that they will proactively come to you if they’re considering leaving, well before they get to the state of weighing concrete offers. Schedule a 90-minute leadership team meeting for next week about a holistic response to this crisis, and 30 minutes in advance of that meeting to prep with the individual you’ve assigned to read and reflect on this post. Nearly all crises of this kind are fixable, but fixes are rarely superficial. If you can truly fix the crisis, your company will get stronger. At a moment of stress, when too many people are leaving, there is a natural impulse to focus on “what’s going on at the company level that’s driving this.” While there may well be company-wide dynamics impacting attrition, it is important to first get more granular and look at causes at the level of individual people. Except when a group explicitly decamps together to join a competitor or start a new venture, people influence one another, but leave as individuals. It is useful to think about attrition as having three different kinds of causes: A. The person leaving simply has a better opportunity by their own yardstick of evaluation, which gives them substantially more of whatever it is they value. This could be responsibility, a certain kind of work, money, lifestyle, advancing a certain cause, etc. In this situation, even if their day-to-day experience were positive and even if their long-term confidence in the company were high (both of these things may or may not be true in any given case), it would be a relatively easy decision to accept the new opportunity. B. The person leaving is having a negative experience in their current work.
This might be (1) sufficiently unbearable that there’s a reflexive decision to leave; (2) bearable, but a factor that becomes “why put up with this” when faced with a choice to jump to another opportunity that’s equivalent or has an edge; or (3) not so much to deal with considered as an isolated incident, but a decision factor in the context of a view that the problem is endemic, and likely to be experienced over and over again C. The person leaving could potentially have a better value proposition by staying in their job, but because they are unsure about the company’s trajectory as a whole, they “discount” the future value of staying (e.g., they are worried that financial instability will put their job at risk, or they are worried that the company won’t grow enough to open up the relevant promotion opportunities, etc.). B3 and C bleed into each other in that both are about future expectations, but it feels useful to separate them out because they feel so different to the people involved These three types of attrition need to be addressed in very different ways. If there’s a lot of A, that requires focus on the question of whether there’s a mismatch between the kinds of people the company is hiring (especially in terms of what they value) and the company’s value proposition, or whether there’s simply a gap in the level of value the company can currently create for employees vs. what’s needed to be competitive. These structural problems take longer to resolve, but it’s important to recognize them as they will keep creating issues until they are fixed. B and C can potentially be acted upon quickly. Acting on B requires changing something in the context of the day-to-day work — including often the way the direct manager is engaging, or who that manager is. Acting on C hinges on reshaping expectations. Because of this, there’s a frequent trap in addressing C to overpromise at a time when credibility is low. In these circumstances, the promise doesn’t have much positive impact. In fact, the people most at risk of attrition are likely to be hypersensitive about any gaps between promises and what unfolds, so making promises is likely to further fray credibility. The only way to disentangle the different causes of attrition is to have real conversations with people who are leaving, conducted by someone they trust. If you’re worried that the fabric of trust in the company has been undermined, hire someone from outside who will immediately establish confidence — perhaps a solo consultant who will work by the hour or a credible HR professional between jobs. In these conversations, the interviewer should strive to understand reality from the point of view of the person departing and connect that back to the details of what’s going on inside the company. Thematic findings like “dissatisfaction with managers who are political, not developing people and not interested in understanding barriers to doing good work” aren’t useful — the interviewer needs to drive to actionable detail about where the problems are, at a level that enables top management to see what specific decisions they face (e.g., fire a toxic leader, move a manager into an individual contributor role). There’s a fine line regarding how to honor promises of confidentiality and at the same time get granular enough to see what’s really happening. 
I believe that it’s generally best to go into the gray area regarding attribution, respecting boundaries about specific things that a departing employee is clearly uncomfortable putting on the record. This makes it essential to be absolutely certain that there are never reprisals — and to deal decisively and harshly with anything that feels like a reprisal. As an interviewer, I have found that I can generally gain permission to put useful insights onto the record if I talk through carefully why this is valuable. While having these exit conversations, you should also prioritize parallel conversations with the people who remain. Even if you’ve hired an external person to do exit interviews, these conversations should be conducted by team members from the inside. The purpose of these conversations is: to convey that the individual is valued; to find out what the individual’s current experience is, what’s working and what isn’t, what their aspirations are, and how they are thinking about their career — on the inside and potentially on the outside; to figure out the best value proposition the company can offer them, in the context of the company’s actual needs; and to open up a channel of communication that raises the likelihood of getting advance warning if someone is considering leaving — ideally before they have an offer in hand — and getting the chance to influence the outcome. If the leadership team touches every single person, or at least every person critical to the company’s future performance — which can generally be done within a week or two in a company of up to 100 people if this is made a high enough priority — this creates the insights needed to build a map of key talent, understand where there are risks and what steps need to happen, and assign someone senior to stay close to the pulse with each individual who is particularly important or particularly at risk. During a time when a company is stressed, a daily stand-up meeting for top leaders usually makes sense, and that can be a forum to ensure that the team is reaching out to people at the critical times, not just to keep them but to make them ambassadors who will help retain others. As you “swarm the problem” and learn what’s driving attrition and what’s important to the people who remain, you’re likely to see that the drivers of attrition (A), (B) and (C) have all to some degree been at work. You can’t solve for everything at once, so decide where there’s the most leverage. For instance, for a professional services firm, the two fastest levers to pull in addressing retention at the person-by-person level are making project teams energizing and effective, and ensuring there’s an open dialogue on the project team about each key team member’s experiences, aspirations and issues. For a mid-size firm, it makes most sense to think about teams one by one, laddering up to actions at a firm-wide level if and when that makes sense, versus focusing primarily on firm-wide processes. If a firm can make eight of the top ten teams feel like they’re humming, that goes a good deal of the way toward fixing issues relating to team member experience in day-to-day work and ripples upward to increase confidence in the direction of the firm. Focusing on the team-by-team level also helps address the problem of top leaders being disconnected from the work. Ensuring confidence in the direction of the company as a whole certainly matters as well.
Any actions you take toward this goal will be most impactful if they build on solid foundations at the team-by-team and person-by-person levels. At a company-wide level, what matters most in the context of attrition is to make goals concrete, transparency high, and communication cycles short. Unvarnished communication about reversals makes communication about wins more credible. Don’t be afraid to be negative or to admit your fears. If people lack confidence in the company, often this will be expressed in terms like “we don’t really know what the strategy is” or “we’ve heard that X is our strategy, but we don’t really see this translating into the day to day.” This creates a temptation to create a grand reveal that answers all the open questions about strategy and the roadmap to execute strategy. To the extent that you have truly compelling answers to all these questions, by all means reveal them. However, in contexts like these the lack of clarity usually relates to genuine unknowns that can’t fully be resolved without taking actions that require time. Acknowledge this! The more that members of the team broadly see what top leaders see, including what makes certain questions impossible or unwise to resolve now, the more everyone will sit on the same side of the table. Most people can live with issues they feel they understand, and the energy released by relieving everyone of the need to speculate about what’s really happening and why can be channeled into productive use. Anyone who will leave because they learn something true about the company’s weaknesses and unknowns will probably leave anyway. If you’ve recently lost too many people, you’re probably hiring aggressively at the same time you focus on stopping the bleeding. These new hires can themselves be a powerful lever for improving retention. Be specific about what you want each new hire to experience in his or her first 90 days and build a systematic way to make sure these experiences happen. For instance, a firm doing project-based work could resolve that within their first 90 days, new hires should: Have a 1:1 with their project leader, after they are initially assigned to the work but before too much time has passed, in which they talk about how the work at hand connects to the broader firm mission and what the project team can do to advance the firm’s broader capabilities Participating in a business development meeting with a senior member of the firm, and having a chance to ask questions and learn from that experience in terms of “where work comes from” Writing a reflection on their aspirations for their first two years, in terms of things they want to learn, ways they want to grow, impact they’d like to be part of achieving, etc. — and having discussions about that reflection with at least two more experienced / more senior people Being part of a roundtable with a few peers and at least one member of the executive team that provides an opportunity to give feedback on what they’re observed about the firm in their first few months on the job These experiences set tone in a positive way, and new hires who have experiences like this will positively “infect” their more-tenured peers with a sense that things are going right. If experienced colleagues are drawn upon as mentors of the new hires, there will be positive leverage simply from the mentors seeing that their mentees are being treated so well. This undoubtedly sounds like a lot of work — and it is. 
The upside of taking this approach goes far beyond turning the tide on a spike in attrition. Engaging with people in these ways creates a fabric of trust, visibility into management issues that impact operations and customers, and a powerful sense of shared focus. After pushing through a crisis, a company has an opportunity to reset expectations and to benefit from more open dialogue and greater transparency. As I’ve shared in a “sister post” on How to Retain Talent — And How to Lose People In the Right Way, creating a culture in which people can talk openly about questions of staying and leaving has tremendous long-term value. An attrition crisis shouldn’t be wasted. If leaders act decisively in the ways I’ve outlined here and learn how to deserve better retention, a company can easily emerge stronger from such a crisis, with a more engaged and better-aligned team, and with a set of practices that will keep the culture healthy as the company grows. _______________________________________________________ Niko Canner is the founder of Incandescent. We’re discovering better ways to create, build, and run organizations. For more from our blog, On Human Enterprise, subscribe here.
https://medium.com/on-human-enterprise/what-to-do-when-your-people-are-leaving-da3cf381a0fa
['Niko Canner']
2017-02-23 19:01:13.397000+00:00
['People', 'Management', 'Leadership', 'Startup', 'Human Resources']
5 Easy Ways To Make Your Food More Sustainable
In the past few decades, the topic of climate change has become “popular”. There are protests around the world, powerful activists such as Greta Thunberg, and international agreements and efforts to make the world more sustainable. A quick Google search on veganism shows a 580 percent increase in the vegan population over the last five years. As much as we like to blame the rich and the big corporations, each person can make small changes that will contribute to our end goal: helping the earth.
1. QUIT PLASTIC WATER BOTTLES
A million plastic bottles are bought around the world every minute and the number will jump another 20% by 2021, creating an environmental crisis some campaigners predict will be as serious as climate change.
WHY? Most plastic bottles produced end up in landfills or in the ocean. Between 5 million and 13 million tonnes of plastic leaks into the world’s oceans each year to be ingested by sea birds, fish and other organisms, and by 2050 the ocean will contain more plastic by weight than fish.
WHAT TO DO INSTEAD? Use reusable water bottles.
2. BRING REUSABLE BAGS TO THE GROCERY STORE
Americans use 100 billion plastic bags a year, which require 12 million barrels of oil to manufacture.
WHY? Plastic bags start out as fossil fuels and end up as deadly waste in landfills and the ocean. Up to 80 percent of ocean plastic pollution enters the ocean from land. 100,000 marine animals are killed by plastic bags annually.
WHAT TO DO? Either save or buy a few reusable bags and bring them with you when you go shopping. The average American family takes home almost 1,500 plastic shopping bags a year.
3. DON’T USE TO-GO CUPS
WHY? Half a million disposable coffee cups are littered every day. This is creating an unsightly and damaging impact on our environment and also encourages more littering.
WHAT TO DO INSTEAD? Buy reusable coffee cups and bring them to the store! Preferably light and durable ones, and wash them by hand. You would have to bring your own cup for your daily coffee somewhere between 20 and 100 times for it to have the same environmental impact as a paper one. “Most single-use items get used for a few minutes and then get discarded. I think it’s possibly the most ridiculous misuse of resources that humans have ever come up with, defying economic logic on so many levels.” —David McLagan
4. EAT SEASONAL FOODS
WHY? One of the most salient benefits of eating seasonally is that you are effectively reducing your carbon footprint. It reduces the number of miles your food has to travel before it reaches your plate. In addition, it is more cost-efficient, because with out-of-season produce you are also paying for those added miles.
WHAT TO DO? Buy seasonal fruits and vegetables and buy from local farmers.
5. CELEBRATE “MEATLESS MONDAY”
Meatless Monday is an international movement to help reduce meat consumption to improve personal health and the health of the planet.
WHY? Some of the environmental effects that have been associated with meat production are pollution through fossil fuel usage, animal methane, effluent waste, and water and land consumption. Meat is considered one of the prime factors contributing to the current sixth mass extinction.
WHAT TO DO INSTEAD? Here are some meatless recipes you can try!
https://medium.com/climate-conscious/5-easy-ways-to-make-your-food-more-sustainable-e27e2bd2ecb9
['Julie Penn']
2020-05-18 06:13:01.111000+00:00
['Climate Change', 'Climate Action', 'Vegan', 'Sustainability', 'Food']
The Myth of the ‘Good Addict’ vs. the ‘Bad Addict’
The Myth of the ‘Good Addict’ vs. the ‘Bad Addict’ We do people a disservice when we assume their hesitance at the 12 steps is hesitance at recovery Photo: Linda D/Flickr Chances are you know something about the 12 steps of recovery. We have seen it in movies or on TV. The church basement, strong coffee, a person standing up and making a public declaration about how they have a problem. For many people, it works and it works really well. It offers support since there is the ability to socialize and be around other people who understand what it is like to use substances, and the desire to no longer use them. For some, however, the 12 steps may be inaccessible due to trauma, mental health, or other complexities that are far too often dismissed. I am an addiction therapist in St. Louis, Missouri, but as I tell everyone I work with, I have my own journey through addiction. My social work career has been devoted to working with individuals affected in some capacity by substance use. Addiction was something that was modeled for me early in life, and it’s what caused me to be in foster care for 15 years. Both of my parents were substance users, and I lost three members of my family to overdose. While 12-step programs work for some, they are not for everyone. You might already be saying, “Here he goes: He is going to bash the 12 steps.” The truth is, I have nothing bad to say about 12-step programs. The rooms of recovery have been there at many pivotal times in my life. There are components of the steps I use every day. I am not here to criticize anyone’s pathway to recovery or the tools they use to get there. This piece instead is meant to explore and further a conversation about how we view people’s ability to choose a pathway that is right for them. The myth of the “good addict” versus the “bad addict” I use this language very intentionally but recognize the increasing body of knowledge that points to the importance of person-centered language. This work is being led by Robert Ashford, MSW. His idea is that the term “addict” should not be used and should instead be replaced with “person who uses drugs.” Ashford and his team have extensively studied the use of language in reference to the recovery community. I encourage you to check out his research and learn more about the way we speak about substance use. For the sake of this piece, I will use the term most commonly used to describe the myth of the “good addict” versus the “bad addict.” According to many in society, a good addict is someone who is trying to no longer use substances. That person is doing everything in their power to get sober. They are going to meetings and making their doctor’s appointments. This is a person who is easily given praise and empathy because they are seen as realigning with society. On the other end of the spectrum is the “bad addict.” The bad addict is someone who is seen as not wanting to stop or simply cannot stop. This is supported by behaviors such as not attending support groups or following up on referrals or other appointments. They might be resistant to start on medication, or if they are on medication, they are not seen as compliant. So for this person, because they are not perceived to be making any progress toward becoming a “functioning member of society,” they are labeled as the “bad addict.” This myth is problematic on many levels because it simplifies the complex relationships people have with trauma, systems of injustice, and cycles of poverty, just to name a few. 
Substance use should never be thought of as simply a choice. Substance use is a multifaceted, complex mental health disorder that society sees through a moralistic lens. Choosing not to utilize the steps can often be seen as “proof” that a person is not in the appropriate stage of change and they are simply “not ready” to quit using. A shift toward healthier behaviors When a person is known to be a substance user, society seemingly strips that person of the ability to choose their own treatment, recovery, or access to harm reduction. Instead, these various interventions are boiled down to a one-size-fits-all template. The “this worked for me, so you need to do it the same way” mentality is society’s view of substance users and is bleeding into the recovery community. Many have adopted this attitude of “policing ourselves” in order to gain acceptance by mainstream society. But just as the 12 steps were a personal choice that some have made, it was just that—a personal choice. Choosing not to utilize the steps can often be seen as “proof” that a person is not in the appropriate stage of change and they are simply “not ready” to quit using. Beyond people being stripped of the ability to define a path that works for them, they are also limited in the choice of medications that are right for them. To be clear, only a handful of medications are used to treat opioid addiction and even fewer for amphetamine or alcohol use disorders. This also includes the ability to taper off the medication. I have heard from folks across the United States who complain about providers not honoring their desire to taper off of medications such as medication-assisted treatment (MAT). For many, the desire to taper is driven by internalized stigma within the various recovery communities. Although the attitude about suboxone, for example, is changing, for many it has not. Personally, I have been to recovery meetings where a person on suboxone is explicitly told not to share that fact in the meeting. Capability The 12 steps, as the name suggests, represents 12 individual steps that a person takes to shift away from unhealthy behavior in favor of healthier behavior. This work is typically done with the assistance of a sponsor who has their own lived experience of recovery. For a moment, I would ask that we consider whether all individuals are capable, or even interested, in moving beyond what is going to help them in their current situation. An all-or-nothing mentality that is often applied by some suggests that a person who does not buy into it is simply choosing to continue using. Again, this simplistic view fails to acknowledge the complex relationship people have with trauma and their ability to cope with that trauma. Disconnect with God Chances are, if a person has trauma associated with religion, they might find the 12 steps difficult. Mostly, I hear from people that the close association with God or a higher power is really hard for them to get over. It is not uncommon, however, when a person expresses this that it is seen as an excuse. You might even secretly mumble under your breath, “They are not ready to quit.” If you are that cynical, then you need to talk to someone. The truth is that the LGBTQ communities already have a complicated experience with faith-based communities. To add to that, individuals who use substances as part of their experience are pushed even further into the margins. 
With that in mind, is it that challenging to see why some individuals might not subscribe to anything that is perceived as having an affiliation with spirituality? Let us start by acknowledging that the person standing in front of us is the expert of their own life, health, and body. They know their own trauma far better than we will ever know. If they are telling you that it is traumatic for them, believe them. They have been in their bodies their entire life. While they might not use vocabulary that you are accustomed to, if you listen closely, they will tell you everything you need to know. Who are we to tell them that what they are saying won’t work? In your experience, the path you used worked, but that is not necessarily the gold standard. Consequently, it is easy for individuals to fall into black-or-white thinking. The all-or-nothing angle is symptomatic of our own rigidity that fed into our own substance use disorder, and we relay that to the folks with whom we are working. The problem is that drugs do not get to the core of the matter. Substances only address the immediate symptom relief. Drugs worked until they didn’t I am going to let you in on a little secret. Our mental health system in the United States is not working, and that is an understatement. Even if you have health insurance, it is not working. Often, the decision to use drugs as a coping skill is balked at, however, we are cheating the people we are working with when we invalidate their ability to choose how to relieve symptoms of mental health. It is an unhealthy coping skill, but they are methods for relieving pain, stress, and trauma nonetheless. So let’s start with the basic premise, as my friend Fred Rottnek, MD, always says, individuals make the best decision given the tools that they have available. The problem is that drugs do not get to the core of the matter. Substances only address the immediate symptom relief. From a medical and behavioral health standard, there is no medical benefit to using drugs. From a personal standard, however, it helps the person to alleviate an intense pain that can often not be acknowledged or verbalized. Substance use addresses the symptoms derived from unresolved or unrealized trauma and complicated bereavement such as anxiety or depression and allows a person to escape, if only for a moment. So in a treatment setting, or as a society, we are quick to tell people not to use drugs, but with what are we filling that void? What other tools are we giving to people? At some point, we must start to acknowledge that an individual’s ability to choose, regardless of our own belief, is something that goes to the very core of shifting toward healthier behaviors, to which I believe all people aspire. Practice makes perfect Thinking about my own experience and my social work practice, I see that coping is a skill that is built over time. While we can have conversations about genetics, social determinants, cycles of poverty, and systems theory, people learn on an individual level how to treat their symptoms. What I see daily for many people I encounter are people who never learned how to cope; that behavior was not modeled for them. In the absence of learning positive or healthy coping skills, unhealthy patterns emerged to a point that it became pathological. When we allow people to choose pathways that are right for them, we set them on a journey of discovering recovery and personal growth. Let them define what recovery is rather than parroting a definition that was force-fed to you. 
Once an individual defines it, walk along with them on their journey as they start to ease into new practices. We ask people who use drugs to change their attitudes and perceptions, but maybe the biggest shift needs to take place in ourselves.
https://humanparts.medium.com/people-who-use-drugs-cannot-possibly-make-good-choices-or-can-they-303e7a8085c9
['Aaron Matthew Laxton']
2020-01-14 19:36:34.650000+00:00
['Mental Health', 'Health', 'Addiction', 'Recovery', 'Addiction Recovery']
Palpable Anticipation
I can feel it
That edgy, vivid sensation accompanied by anticipation
Moving around inside my body
Leaving imperceptible traces of icy anxiety
Mixed with feverish excitement:
What is it going to be this time?
An incomprehensible event unfolding
Before my prominent hazel eyes?
Or perhaps an alarming situation
That I have no control over?
It may also be
An earthshaking moment filled with magic
A harmonious alignment of the frigid planets
With the inaccessible, nameless stars —
A euphoric union between all the elements
Of the multidimensional Universe;
Anticipation is suffocating
Yet liberating in its peculiar ways
It’s a flicker of faith seasoned with fear
And some wishful thinking added on top —
All complemented
By nothing else
But a spark of hope.
https://medium.com/know-thyself-heal-thyself/palpable-anticipation-93429aeb6f96
['𝘋𝘪𝘢𝘯𝘢 𝘊.']
2020-12-24 17:36:30.130000+00:00
['Creativity', 'Poetry', 'Poem', 'Energy', 'Writing']
Why Startups Need To Follow the Hard Product Path
Why Startups Need To Follow the Hard Product Path Four steps to a stronger company and a higher valuation When a startup attempts to build a product-based business out of an existing service model, there will always be the lure of leaning on the service to bring in revenue. But while that easy money may support you in the short term, it’s going to suffocate you in the long run. Thanks to the mainstreaming of cloud-based processing, mobile communication, and simplified digital commerce, we’ve arrived at an age where every single service you can imagine is being streamlined, restructured and offered as a product. As we ride out the innovation cycle of Software as a Service, we’re seeing a new cycle developing you could call Service as Software. Innovation-chasing companies now offer the hiring of labor through an app — from consumer-based services like shopping, oil changes, and personal investment advice to business services like hiring, legal, and even creative services like design. How a service model evolves into a product model The modernization and automation of these traditional services begins with the streamlining of how they’re engaged, delivered, and paid for. What happens next, for those especially innovative companies anyway, are changes to the execution of the service itself. If you can achieve the same results of a traditional service using a new process, and if those results can change the customer’s behavior and expectations, you can successfully shift from a service model to a product model. That means your company gets awarded all the trappings of a product play, including a sizable bump in valuation. Unfortunately, on the way to product manna, the temptation will always be there to keep making the service money. Yes, it’s lower margin. Yes, it goes against everything your startup is trying to change. But the business is always readily available, and it’s low-hassle money. You don’t have to educate your customers to solve their old problems in new ways, you just have to show up and collect your hourly rate. This is not a new problem. The very first startup I worked for, over 20 years ago, started as a technical services firm — a custom software factory. On the side, we developed frameworks that we used to cut our coding time by up to 80 percent. Eventually, we began selling those frameworks, and training for the customer on how to create their own software. This was a move to a higher-margin product model that required a lot less talent and cost to produce. But there was always the lure of multi-million dollar projects coming in, customers who just wanted to get out of the way and were willing to pay our old hourly rates for custom work — low-margin but guaranteed easy money. The kind of money that’s hard to say no to. So how do you make sure your evolution to a product model doesn’t devolve back into a low-margin, low-valuation, highly-labor-intensive service model? Step 1: Stick to your positioning The first thing to do is decide what your company is going to be. For example, Lyft never referred to itself as a “two-sided marketplace for taxi services.” It was always a “ride sharing” company. In fact, while early Uber was calling itself a black car taxi service, Lyft’s business model was borrowed from Sidecar, which was a platform that allowed people who were going in the same direction to pair up (actual “ride sharing”). As ride sharing continued to evolve into a dedicated driver and rider system, Lyft never lost sight of the original positioning. 
Uber launched their own ride-sharing service, Uber X, soon after. Uber Black is now the ride sharing equivalent of Uber’s original black car taxi service. If you’re going to change the way a service is executed, don’t position yourself as a watered-down or technology-enabled version of that same service. You can get away with it for a while. You’ll definitely make investors and employees happy as the money comes in. But soon you’ll be bifurcating, essentially running two companies at the same time. And one of those companies will wind up a small competitor in a field that you had started the other company to disrupt, not to compete against. All that time spent competing is time spent not disrupting. This happened to a fellow founder friend of mine. He found himself spending upwards of 80 percent of his team’s time filling service needs related to his product in the recruiting industry. In other words, instead of converting his customers from his service to his product, he was just using his own product to execute his old service. So he took drastic action, and shuttered his service arm entirely, telling his customers they either needed to switch over to the product, or they needed to find another service provider. Almost all of them went and found another service provider. But as crushing a blow as that was, he didn’t regret it, and now he’s going to succeed or fail building the business he set out to build. Step 2: Keep the wizards behind the curtain Success doesn’t happen overnight, and neither does the transition from service model to product model. If you switch over to a product model right away, you’re going to have to do a lot of educating, a lot of onboarding, and a lot of support. What’s worse is you’ll have no idea how much education, onboarding, and support you’ll end up doing, or how to do any of it efficiently. This is where you bring in the idea of managed services. Think of managed services this way: The customer tells you what they want to accomplish with your product, your managed services team are the wizards behind the curtain, with their hands on the keyboard, until the customer is ready to do it all on their own. But managed services isn’t merely a services arm of a product company, it’s a way to fake the product until you make the product. Managed services should exist to identify the gaps between the customer and customer success. The more experience your managed services team accumulates, the better position your company will be in to anticipate and automate the tasks they repeat the most, converting those human wizards into software wizards, which can get to a better customer experience more quickly. Managed services should not be white-glove, and it should not be offered past a defined and distinct onboarding phase. The trick is to figure out when that handoff should happen and how. Because while managed services should bring in some revenue, it shouldn’t be seen as a replacement for the volume of service revenue you used to bring in. As an added bonus, managed services will keep your customers from turning out complete crap results with their initial attempts at solving their old problem with your new solution. Step 3: Sell out of the death spiral At my last startup, we had a huge hiring problem. The way we were going about hiring was producing awful results, but we were so busy trying to fill positions that we never took the time to consider changing the hiring process. That’s a death spiral. 
If you’re revolutionizing the way a service is executed, you’re essentially asking customers to likewise consider changing the way they do things. This is a difficult ask, as customers would rather stay with something that isn’t working than risk moving to the unknown. Once you get your customers to see their own death spiral, they can’t unsee it. They might not rush to change their behavior immediately, but the seed is planted. Like Inception. Now, you might ask yourself: How long will a customer stay with a solution that’s no longer working? And the disheartening answer, from my experience anyway, is “As long as they can.” You have to sell the customer out of the service and into the product from the beginning. That’s hard to do when you’re offering the same exact service on the side. Step 4: Don’t chase bad product business for service money The last temptation of a service model is sneaky, because it’s often disguised as a product use case. Bigger customers with deeper pockets can request all kinds of enhancements, one-offs, and special favors that they’re more than willing to pay for to make your product fit their needs. This can happen even when the customer sees the value in your product, because they’re not ready to let go of the status quo. So they’ll look for compromise, and by compromise, they mean customization. This can be a big windfall if the changes fit your roadmap, but beware the customer that assumes any of these three killer personas: They don’t find value in the product, but see it as something to work around to get the service done. This usually happens when your product fits a limited use case for them. That’s not a problem for as long as it remains true. But the moment they want to go “off-menu,” they’ll expect your product to automatically conform to their needs. They see you as their own private development shop. They’ll request so many deviations from your product’s feature set that you’ll need to build a custom version just for them. They’ll go through on-boarding but never “get on board,” expecting your team’s hands at the keyboard forever. Like I said, these are difficult scenarios to identify ahead of time, but even more difficult to deal with. It all goes back to sticking with your position and your messaging. In fact, if you follow these steps from the beginning, you’ll have a much better chance of not becoming a clone of your service industry incumbents, but succeeding or failing with the product you actually set out to build. Hey! If you found this post actionable or insightful, please consider signing up for my newsletter at joeprocopio.com so you don’t miss any new posts. It’s short and to the point. This post was originally published in Built In.
https://jproco.medium.com/why-startups-need-to-follow-the-hard-product-path-9c251b6bed58
['Joe Procopio']
2020-09-10 11:04:42.672000+00:00
['Technology', 'Product Management', 'Business', 'Startup', 'Entrepreneurship']
Generate Word Documents from Templates in Vue.JS with TypeScript
In one of my Vue projects, I needed to generate Word documents from different templates. I did some quick R&D and came across several libraries and APIs to achieve this. None of them were free; you have to buy a subscription, which was very expensive for me. The available free libraries used an approach where you have to design the template in HTML first and then render it as a Word document. This approach did not fit my needs, as there were many templates and they could change frequently per the client’s requirements. Finally I came across a great library (which was the only such library I found on the Internet) to generate Word documents from a given template that includes tags, which are replaced with real data when generating the report. It was a JavaScript library, but my Vue project was in TypeScript. In this article, I am going to explain how to implement this feature easily in a Vue.JS project in TypeScript. If you are new to Vue with TypeScript, please follow this article explaining how to create a new Vue.JS project in TypeScript. The others can proceed. Installing necessary libraries For generating the Word document we use the docxtemplater library by Edgar Hipp. First of all, run the install commands in the console inside your project folder. In addition to the main library, we need the other libraries mentioned above in this process. The file-saver library is used to save the final output to the local machine. Writing definition files for TypeScript Some JavaScript libraries cannot be used directly in a TypeScript project if they do not include a type definition file for TypeScript. Type definition files usually appear as <libraryName>.d.ts, and you can check whether there is such a file in the installed library inside the node_modules folder of the project. Except for the docxtemplater library, the other three libraries do not include such definition files. Therefore, we have to write definition files for them separately on our own. These files are created, named according to the structure <libraryName>.d.ts, and placed inside the src folder: jszip.d.ts, jszip-utils.d.ts, and file-saver.d.ts. After creating these files in the src folder, you can use the libraries in any component. Submitting the templates First, you have to place your template document, which is composed of tags, inside the public folder of the Vue project. The tags are written inside curly brackets, e.g. {full_name}. For this explanation, I have created my sample template and placed it inside a separate folder inside the public folder (public → ReportTemplates → template-1.docx). Sample template (template-1.docx) Everything inside the {} is a tag, and tags are replaced using the JSON dataset provided when rendering. The tag name and the property name in the JSON object must match for the replacement to happen. {#students}……{/students} inside a row of a table represents a list named “students” in the JSON object, which we are going to provide when rendering. For this example, this is the JSON dataset we will provide. JSON Object Writing the component This is how my final component code appears. When I click the button, it downloads the final Word document in docx format with the hardcoded dataset in JSON format. Normally, you would call your back-end API and fetch this dataset for the report. In line 82, for the loadFile function’s first parameter, you can give the path of the template relative to the public folder. Or, as in line 81 (commented out), you can pass a URL instead.
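As a rough TypeScript sketch of the kind of logic involved (the function name generateDocument, the sample values, and the field names inside the students list are placeholders, and the real component differs in detail; newer docxtemplater versions pair with PizZip and an options-based constructor instead of loadZip):

```ts
// Sketch only: reconstructs the general pattern described above, not the original component.
// Assumes docxtemplater v3 with the loadZip/setData API and a JSZip 2.x-style synchronous
// constructor (JSZip 3 uses loadAsync; newer docxtemplater setups use PizZip).
import Docxtemplater from "docxtemplater";
import JSZip from "jszip";
import JSZipUtils from "jszip-utils";
import { saveAs } from "file-saver";

// Placeholder dataset matching the template tags; normally fetched from a back-end API.
// Property names must match the tags in template-1.docx; the student fields are assumptions.
const reportData = {
  full_name: "John Doe",
  students: [
    { name: "Alice", grade: "A" },
    { name: "Bob", grade: "B" },
  ],
};

function generateDocument(): void {
  // Path relative to the public folder; a full URL to a server or cloud storage also works.
  JSZipUtils.getBinaryContent(
    "./ReportTemplates/template-1.docx",
    (error: Error | null, content: ArrayBuffer) => {
      if (error) {
        throw error;
      }
      const zip = new JSZip(content); // wrap the binary template as a zip archive
      const doc = new Docxtemplater().loadZip(zip); // hand the zipped template to docxtemplater
      doc.setData(reportData); // map JSON properties onto the template tags
      doc.render(); // perform the tag replacements
      const blob = doc.getZip().generate({
        type: "blob",
        mimeType:
          "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
      });
      saveAs(blob, "MyDocument.docx"); // download the result via file-saver
    }
  );
}
```

In the component itself, this function would be wired to the download button’s click handler.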
The template URL can point to a server or to cloud storage as well. The jszip-utils library is used here to read the binary content of the template document and prepare it in a form that can be zipped, and the JSZip library creates the zipped content. docxtemplater requires the template as zipped content, which is why the other two libraries are part of the process. In line 89, you can see that I set the JSON dataset on the document; as mentioned earlier, this dataset can come from an API. In line 113, the saveAs function exported by the file-saver library is used to download the blob with a given file name (here it is MyDocument.docx). I have also put some useful comments in the code so that you can modify it as you wish. Final Output After submitting the template, creating the definition files, and writing the Vue component, we can download the finished Word document, MyDocument.docx. What else can we do with this library? Only Word documents (".docx") can be generated in the free version; the commercial version supports pptx and xlsx generation too. The free version supports the features below, which are sufficient to generate good-looking, impressive reports in your project: filling data into tables (as in the example we discussed), repeating a section in the Word document for a given list of objects (loops), conditional rendering of tags by enabling the angular parser option, and creating a loop-list. You can test these features in the Demo section of the official page. Conclusion I have explained how to use the docxtemplater library in a Vue.JS project with TypeScript. With this approach, frequent changes to the templates can be accommodated as long as the tag names stay the same. This is one of the most useful libraries I have come across so far, and I am sure it will be the same for you one day. If you have any issue or question about this, please do not hesitate to use the comment section. Thank you for reading & happy coding!
https://medium.com/javascript-in-plain-english/generate-word-documents-from-templates-in-vue-js-with-typescript-da0f15114f5d
['Udith Gayan Indrakantha']
2020-12-29 07:59:42.427000+00:00
['JavaScript', 'Front End Development', 'Vuejs', 'Web Development', 'Typescript']
Some Thoughts About the Cognitive Revolution We Live In
Rafael Garrido Rivas | Managing Partner | everis USA The thesis and final project for my Bachelor of Science in industrial engineering was about a speech recognition system (which actually only understood numbers) that I programmed in C++ using neural networks and Hidden Markov Models. As surprising as it may sound, it was actually pretty limited in scope. I trained it using a data set collected from my entire college dorm population, which consisted of approximately 150 students that patiently stopped by my room in their spare time to say a few numbers to my system. The training processing time was kind of exasperating… it took all night to train the system by running Linux on my Compaq 486. Eventually, the system became very accurate in recognizing numbers no matter who said them. At about the same time, Deep Blue beat Garry Kasparov, the chess world champion…. for the first time! The next 3 games, Garry won… the system was not actually “intelligent,” in the same way a calculator is not intelligent, it just followed rules. Not even in my best dreams, could I have imagined that some 20 years later the same machine learning principles I used in my numbers recognizer system would be at the epicenter of the incredible cognitive revolution we are immersed in today. The pace of technological change is unprecedented. After looking at the Tensorflow website this past weekend and browsing through all of the readily available assets that the community has created and shared so far, it seems to me that what took several months to build at the time, could easily be done in less than a week today, and could be deployed with a much better performance even on mobile devices, because where else would we want to use it these days? Our civilization seems to have strongly embraced the African proverb of — “If you want to go fast, go alone. If you want to go far, go together” — and that is great news for this endeavor. For those of us who are inclined to see the glass half full, AI-powered productivity growth is a promise to lift the global economy, by enabling companies to innovate and reach underserved markets more effectively with existing products, and by allowing for the creation of new products and services over the longer term. To understand what we currently refer to as AI, its capabilities, as well as to separate common myth from reality, it is essential to know that the machine learning type that has been dubbed “deep learning,” is still based on artificial neural networks which are rooted on how the human brain works. Researchers train neural networks by adjusting the activation functions and their respective weights to get the desired outputs. With a single network layer (as I had in my number recognizer) you can only identify simple patterns, but with multiple layers you can find patterns within patterns. Current neural network systems are typically composed of twenty to thirty layers. This heightened level of abstraction is the main reason for significant improvements in machine learning and AI. There are all sorts of AI variations (supervised, unsupervised, reinforced learning, transfer learning, etc), but I won’t get into that here. Machine learning relies on a bottom-up, data-based approach to recognize patterns. It utilizes the same approach that my children used to learn four different languages, one of immersion rather than grammar memorization. 
Simply said, instead of coding the logic to distinguish data, you simply show the system the data and tell it what is and what is not, such that the computer builds the appropriate program. Today, there are several factors contributing to this AI-boom: Availability of massive amounts of data, the “new oil” that fuels this revolution, that come in all shapes and forms (companies, social, sensors, etc) Algorithms of growing complexity are widely available, and often open-sourced by a very active AI community. Exponential computing capacity, with more efficient graphic processing and tensor units, aggregated in hyper scale cluster and available in the cloud. This poses infinite opportunities that could only potentially be limited by our narrow imagination. The challenge is therefore not a computational one, but rather one of evolutionary change in the way we think and work. There is no longer a Moore’s Law wall to hit. Any repetitive task for which there is large amounts of data to train the machines, can and will be automated. So… this is it? Not yet… What does the future have in store for us? Experts say that all of today’s booming AI use cases revolve around applications of what they call “narrow AI,” in which machine-learning techniques are being developed to solve very specific problems. The major challenge is to develop AI that can tackle general problems in much the same way that humans can, but such “general artificial intelligence,” seems to be decades away. In the meantime, all industries are leveraging this “narrow AI” and its data to gain new insights and create advances in their field. The internet is inundated with use cases for almost every industry, and below are a few examples that are useful to understand how AI is embedded in our daily lives without us even being aware of it: Any home assistant or phone speech recognition is powered by AI. Have you noticed how it becomes progressively better at understanding even small kids when there is background noise? Have you tried to perform a search in your photo library lately? Nobody tagged your photos with a dog in them and it is all because of AI playing behind scenes again. Facial-recognition algorithms, (your phone’s face ID for example) based on deep learning, have an accuracy rate of over 99 percent. AI can alert ICU doctors of a patient health’s likelihood to worsen minutes before their vital signs would, giving doctors invaluable extra time to react. Google identifies spam emails and translates web pages into over one hundred different languages using AI. As any other AI algorithms, its trial-and-error process improves as more data becomes available, and as a matter of fact, Google has so much data to improve its algorithms that it is hard to distinguish its translations from those of a linguist. The dream of any marketer is gradually becoming true. AI allows to go from segmentation to micro segmentation by making sense of a lot of company and social data and presenting pricing and promotion options that companies had never seen before. Companies can know their customers better and be very precise about what to offer to them, when to offer it and through what channels (where). Today, retailers routinely use these “next best product to buy” algorithms, and similar use cases will flourish in other industries to enrich customer journeys. Analyzing data from sensors. AI is being used to improve business performance through predictive maintenance. 
In logistics, AI can optimize the routing of delivery traffic, improving fuel efficiency and reducing delivery times. Data center growth is exploding and is driven by the expansion of cloud providers (hyperscalers); AI can aggregate and analyze data quickly and generate productive outputs, which operators can use to manage density associated with computing, networking, and storage, reducing power consumption, and increasing performance. Unsupervised learning asks the machines to look for patterns in the data, with interesting use cases being the identification of cyber-attacks, terrorist threats, or credit card fraud. Have you read Michael Lewis’ Moneyball? It describes how the Oakland Athletics used an analytics and evidence-based approach, instead of the judgment of sport scouts, to assemble a winning baseball team. Today, firms are using this approach for hiring, and advances in analytics and AI have significantly improved the power and accuracy of “people analytics.” Using analytics, some companies are complementing or replacing humans in the traditional interview process for recruiting people of just about any skill. Its use is widespread to recruit part time freelancers or match them against company needs in part time employment marketplaces that plague our gig economy. A data-driven approach to spotting talent allows companies to broaden their pool of candidates and source from universities they did not go to before for lack of time. Human resource decisions ranging from recruiting and training to evaluation and retention are increasingly being assisted or driven by data and machine-learning algorithms. What does this mean to your industry or business? What are the specific opportunities? What’s the value? What are the use cases? While no solution will typically be a “silver bullet” that makes a genuinely transformative impact on its own, each of them will make your model more difficult to copy, and make it better than your competitor’s. By now, I may have spooked some readers, since the societal implications are apparent. If machines can read X-ray or MRI images as well as or better than radiologists who have years of training and experience, do we need the radiologists? According to McKinsey, about half of current work activities (not jobs) can technically be automated, and while less than 5 percent of jobs have the potential for full automation, almost 30 percent of tasks in 60 percent of occupations could be computerized. Even white collar jobs are under the threat of automation. In reality, the distinction is not between manual and cognitive skills or blue-collar vs white-collar work, but rather whether a job has large elements of repetition, and massive amounts of data are available or can be collected to train the algorithms. Jobs that are routinely, repetitive, and predictable can be done by machines better, faster, and cheaper, and will probably be done by machines, sooner or later. In and of itself, this is neither good nor bad, it is a fact. In my humble opinion, AI can make us more human. Nobody likes doing repetitive tasks. In the same way calculators allowed people to devote their time to more meaningful and complex jobs or ATMs shifted the role of bank tellers from one of simply dispensing cash to one of customer service, AI will not replace human judgment, but will be a major complementary asset that will free humans to focus on tasks that really need them. 
Companies will be able to serve underserved populations and focus their experts' time on the jobs that really require their expertise. Automation will spur growth in the need for critical thinking, creativity, complex information processing, and social and emotional skills such as communication and empathy. The so-called "high touch" professions will probably be on the rise. While some jobs will be eliminated, others will change or be created, and hopefully the change will be for the better. A trend everyone seems to agree on is that the pace of change will be proportionate to wages, at least in the United States. The core competency we need in the future is the ability and desire to learn, and AI may even help tell us what we need to learn. To conclude, the consultant inside of me suggests a few guidelines for the best ways to leverage AI inside organizations: As with so many things in enterprise life, it boils down to leadership. Create a business-led, top-down technology road map. The C-suite needs to create and embrace an overarching vision for how technology can enhance the company's performance. This will gradually change the culture as well. Align business and technology leadership on the sequence of solutions to be developed. Make sure there is a common understanding of technology so the business can "pull" for the services and support of technology instead of having IT "push" solutions. Think about what business you are in, or could be in. Be the one who blurs your industry's boundaries. Don't define your competitors narrowly; define them broadly, or others will do that for you. Since data is fueling this revolution, capitalize on it; it is the source of your competitive advantage. This will entail having a data strategy and governance model to ensure data is reliable, accessible, and continuously enriched to make it more valuable. Prioritize the data domains that support an initial set of solutions. Rethink your IT platform accordingly — have a data platform at the core of your IT ecosystem and a development environment for producing software and analytics code. There is more value in well-managed data than in your core business platform, which quite frankly is becoming a commodity (with some businesses, like Allianz, open sourcing theirs) and just another source of valuable data for your data platform. Link legacy and digital applications to the data platform through application programming interfaces. Don't try to nail it the first time; instead, treat short and simple iterations as the best way to move forward. Embrace agile and its MVPs. Put measures in place to make sure that as you deploy new technologies you make value-creating adjustments to other areas' operating models, so your efforts are not isolated but spread across your value chain. Every technology solution should set up a new phase of operational changes. Consultants and third parties may help, but they do not scale; hire people with the skills to refine and extract value from this important resource. To overcome functional silos, combine a center of excellence (COE) that defines the approach, methods, and tools and trains the wider organization with agile cells that deliver the use cases inside the business areas. At everis, we have such centers in various locations, including Chile, Brazil, and Peru, adding value to our clients' business.
As companies across sectors are increasingly harnessing AI’s power in their operations, think of how you can best leverage your assets, staying true to your DNA, to embrace this cognitive revolution. Whether you have a lab, a venture, an outpost in Silicon Valley, incubators or accelerators around the world, do internal or external hackathons, or invest along with other VCs to get a peek into what’s next, or all of the previous combined, make sure that you leverage them all to jump onto the AI wagon sooner than later. This journey will take understanding, courage, conviction, and enthusiasm.
https://everisus.medium.com/some-thoughts-about-the-cognitive-revolution-we-live-in-3c2cf61a77b3
['Everis Us']
2019-02-25 15:58:32.146000+00:00
['AI', 'Technology', 'Tech', 'Artificial Intelligence', 'Cognitive Bias']
The Creative Life Is Not For The Faint Of Heart
Being a creative person sucks sometimes. “Running a start-up is like chewing glass and staring into the abyss.” -Elon Musk I know Elon was talking about running a start-up there, but I think the same principle can be applied to creatives in general. I think a lot of us get lost in the tales of massive success from writers like Jeff Goins and Michelle Schroeder. We get so lost that we never contemplate what life is actually like for THEM. Do they have any more security than we do? Did they hit a point in their career where all they had to do was basically hit a button to create money? I don’t know, but I do know this…
https://medium.com/the-mission/im-warning-you-the-creative-life-is-not-for-the-faint-of-heart-f68b327e7205
['Tom Kuegler']
2018-03-23 12:39:42.946000+00:00
['Tech', 'Life Lessons', 'Writing', 'Creativity', 'Life']
Top And Easy to use Open-Source AWS(Amazon Web Services) Tools
It’s a command-line tool providing scaffolding, workflow automation and best practices for developing and deploying your serverless architecture, and it is completely extensible via plugins. The Serverless Framework consists of an open-source CLI and a hosted dashboard; together they give you full serverless application lifecycle management. Develop, deploy, troubleshoot and secure your serverless applications with radically less overhead by using the Serverless Framework. 3. Awesome Kubernetes — 10,818 stars Awesome Kubernetes is a curated list of awesome Kubernetes sources. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications; it groups the containers that make up an application into logical units for easy management and discovery. The list includes articles, case studies, projects, and much more. 4. AWS CLI — 10,400 stars AWS CLI is a universal command-line interface for Amazon Web Services. The aws-cli package is built on Python, and the safest way to install the AWS CLI is to use pip in a virtualenv: $ python -m pip install awscli Before using the AWS CLI, you need to configure your AWS credentials. You can do this in several ways: a configuration command, environment variables, a shared credentials file, a config file, or an IAM role. The aws-cli package also includes a command-completion feature for Unix-like systems. This feature is not installed automatically, so you need to configure it manually; to learn more, read the AWS CLI Command completion topic. 5. Awesome AWS — 8,750 stars Awesome AWS is a curated list of awesome AWS libraries, open-source repositories, guides, blogs, and other resources. It contains:
https://medium.com/datadriveninvestor/top-and-easy-to-use-open-source-aws-amazon-web-services-tools-fcf91a96629c
['Mrinal Walia']
2020-12-11 06:48:12.251000+00:00
['Computer Science', 'Cloud Computing', 'Open Source', 'AWS', 'API']
It Will Take Years for People of Color to Recover From the Covid-19 Fallout
It Will Take Years for People of Color to Recover From the Covid-19 Fallout The pandemic lays bare structural inequities that need fixing across all aspect of American society A healthcare worker gives a Covid-19 test to a patient in the Covid-19 Unit at United Memorial Medical Center in Houston, Texas on July 2, 2020. Photo: Mark Felix/Getty Images Kimora “Kimmie” Lynum was “the type of kid that would brighten your day,” her cousin, Dejeon Cain, told a local news outlet in Florida. “She loved her mom, dad, grandma, uncles, and aunties.” Kimmie, who was nine years old, died last month of Covid-19, following a sudden spiking fever. Her family did not know she had the disease until it was too late, Cain said. Kimmie is one of an alarming number of people of color whose lives have been upended or cut short by the coronavirus. Black people are nearly three times more likely than whites to contract Covid-19 and more than twice as likely to die from the disease, according to a report published last week from the National Urban League. Beyond those raw statistics, people of color are suffering grossly disproportionate health and economic consequences from the Covid-19 pandemic — impacts that could take years to recover from. But for recovery to happen, the pandemic must serve as a wake-up call to the deep social and racial inequities that have left Black people, in particular, more vulnerable to the effects of the pandemic and less able to bounce back from this or any health or economic crisis. Nobody thinks it’ll be easy. “For generations, communities of color have faced vast disparities in education, in job opportunities, in income, in inherited family wealth, and in health care,” said Michelle Williams, dean of Harvard T.H. Chan School of Public Health. “The Covid-19 crisis has laid bare these issues.” With widespread Black Lives Matter protests playing out in the background, Williams and other public health experts are now contemplating the piled-on problems of the pandemic amid a time of great hope, and greater-than-ever need, for positive social upheaval. It’s all tied together, they say. “Sadly, the ill effects of this pandemic will be felt for many years after we’ve pushed back the virus,” Williams said. “Delays and deferral in medically managing noncommunicable disorders like hypertension and diabetes — conditions already disproportionally high in black communities — will only worsen.” What Covid-19 has laid bare In the United States, white men outlive Black men by about 4.5 years, on average, and white women outlive Black women by around 2.7 years. These disparities are rooted in well-known, systemic social stressors and inequities that leave Black Americans with higher rates of heart disease, diabetes, obesity, and other chronic conditions. “Being Black in America also means enduring psychological stresses linked to premature biological aging,” Williams said. “All of these health issues conspire to make viruses like Covid-19 that much more severe.” Being Black is a risk factor for poor health not because of underlying biology, but due to a host of disadvantages and disparities that start at birth. Black babies are 2.5 times more likely than white babies to die before their first birthday. Born into sickness Across the country, the heaviest industrial polluters are concentrated in areas with high minority populations. Air pollution causes asthma, heart attacks, and strokes, according to the Environmental Protection Agency, and leads to some 100,000 premature deaths a year in America. 
The health effects begin at birth and pile up with time, says Afif El-Hasan, a pediatrician in California who focuses on asthma. “People who’ve been living long term in areas of high pollution have reduced natural defenses in their bodies, and it makes it easier for Covid or any other infection to be invasive,” El-Hasan explained. Up to 40 million Americans face evictions in coming months, according to an analysis by the Aspen Institute; about 80% of them are Black or Latino. Recent research found anyone who spends a lifetime in a heavily polluted region of America is more likely to die from Covid-19 than people who’ve lived in clean air. There are short-term risks, too. Spikes in pollution this spring, when the EPA relaxed regulations, were shown to directly contribute to a rise in new Covid-19 infections and deaths. People don’t choose to live in bad air. Many Black people live where they do largely because of decades and centuries of policies by governments and businesses that forced them into social and geographic boxes that remain difficult to get out of. Denied homeownership and wealth The ripple effects of the pandemic extend far behind public health. About 36% of renters didn’t make rent on time in July, according to a survey by the platform Apartment List. Up to 40 million Americans face evictions in coming months, according to an analysis by the Aspen Institute; about 80% of them are Black or Latino. A surge in foreclosures would follow, the institute predicts, particularly among individual “mom and pop” property owners, who are the landlords of about 47% of all U.S. home rentals and in many cases don’t have the resources to pay their mortgages without the rental income. Today, about 71% of white Americans own a home, which has long been considered the #1 way to build wealth. The figure is 41% for Blacks and 45% for Hispanics. Lenders are 80% more likely to deny mortgages for Black applicants even today, according to a recent Zillow analysis. Mortgage denial rates are highest in neighborhoods that are predominantly Black, and in those areas, Black people are still denied disproportionately compared to whites. The resulting financial disadvantage, which carries from generation to generation and extends back to slavery, Jim Crow laws, and redlining that denied Black people mortgages starting in the 1930s, is profound. Typical household wealth, measured by the median (with half of households above and half below) is $17,150 for Black people, $171,000 for white people — 10 times as much. The wealth gap is inextricably linked to other systemic inequities, such as pollution exposure, lack of health care and, ultimately, a glaring health gap. Put simply: “The greater one’s income, the lower one’s likelihood of disease and premature death,” according to a report from the Urban Institute and the Center on Society and Health. Struggling for work Jason Hargrove, a Black bus driver in Detroit, expressed his anger in a Facebook post earlier this year after a woman on his bus coughed several times without covering her mouth. Two weeks later he died of Covid-19. Black people make up 13.4% of the U.S. population, but almost 30% of bus drivers are Black, as are nearly 20% of all food service workers, janitors, cashiers, and stockers, writes Rashawn Ray, an associate professor of sociology at the University of Maryland and a fellow at the Brookings Institution. 
Blacks and other people of color represent a disproportionate chunk of these “essential” and “frontline” workforces unable to work from home and tend to kids, making them exposed to the virus more than others, other research shows. “During a highly contagious pandemic like Covid-19, Black workers, and consequently their families, are overexposed,” Ray writes. “In this regard, staying home during a quarantine is a privilege.” Meanwhile, the pandemic continues to decimate the job market. As of July, unemployment stood at 9.2% for whites, 12% for Asians, 12.9% for Hispanics, and 14.6% for blacks. Black women have faced the largest job losses. Struggling with education A learning gap has long plagued U.S. K-12 schools, with Black and Hispanic students scoring, on average, two to three years behind white students on standardized tests, according to the McKinsey & Company, a research firm. The gap will likely widen if schools rely solely on online learning, which is less effective, especially for low-income students who don’t have access to high-speed internet or parents available to offer academic supervision, the company concludes in a recent report. Where classrooms and daycares do not open, essential workers are forced into impossible choices. Unlike schools, for-profit daycare centers are threatened with permanent closure, which will likely disproportionately affect people of color and the poor, and “also has important implications for income and educational inequality, racial equity, geographic equity, and a potentially significant decline in the number of mothers in the labor force,” according to an analysis by the Center for American Progress. Trying to keep the lights on People across the country are struggling to pay the bills right now. A survey of low-income households conducted in May by researchers at Indiana University found 13% were already unable to pay their energy bills the previous month, 9% had received an electricity utility shutoff notice, and 4% had had their electric utility service disconnected. “Energy insecurity is already a widespread problem in the U.S.,” the researchers write. “It disproportionately affects those at or below the poverty line, Black and Hispanic households, families with young children, people with disabilities, and those who use electronic medical devices.” They estimate that 800,000 low-income households may have recently had their electricity disconnected — based on data from this spring. The problem could get worse as the economy continues to struggle, the researchers say. And the impact could be deadly in summer heat, especially for older people, young children, people of color, and the poor. Going hungry Millions of Americans, particularly people of color live in “food deserts,” where a supermarket’s fresh produce and other healthy food are impractically far away, and local stores and fast-food chains are overloaded with junk food. The worst food deserts tend to be in dense, poor urban areas, contributing to “food insecurity,” defined as not having enough money to properly feed a family. Before the pandemic, 37 million Americans suffered food insecurity, including 11 million children. Black and Hispanic families are twice as likely to suffer food insecurity compared to white families, multiple organizations have found. The nonprofit charity Feeding America says the crisis is most acute in Navajo County, Arizona, where 43% of the population are Native American. 
By the end of April, food insecurity had doubled compared to previous levels, affecting more than 20% of U.S. households and 40% of households with mothers with children 12 and under, according to the Brookings Institution. Food deserts and food insecurity have been linked directly to increased rates of diabetes and other chronic diseases — risk factors for worse outcomes from Covid-19 — again, particularly among people of color. Lacking access to health care People without health insurance are less likely to visit a doctor for any reason, including to get tested for Covid-19. While just 8% of white Americans under age 65 are uninsured, the figures are 11% for Blacks, 19% for Hispanics, and 22% for Indigenous people, according to the Kaiser Family Foundation. Increasingly, health experts say the U.S. health care system is broken and in need of structural fixes to better serve marginalized communities, which will ultimately serve all Americans. “Covid-19 is a disease of communities and networks, a pathogen that floats along the infrastructures of human relations,” a small group of doctors and researchers wrote recently in the Journal of the American Medical Association. “Only by better strengthening networks and supporting all communities will anyone, and everyone, return to well-being.” They argue for everything from improving education of doctors to building community health centers that better serve local needs. “Having single-payer universal healthcare would be a tremendous help in terms of Covid specifically.” Others say the health inequities can only be properly remedied with a single-payer health care system, in which basic, essential health care is paid for by the government, similar to how Medicare works. That alone would not address all health inequities, but it would be a start. “Having single-payer universal health care would be a tremendous help in terms of Covid specifically,” Sabrina Strings, PhD, a sociologist who studies race, gender, and the sociology of medicine at the University of California, Irvine, said. “But because health disparities are fundamentally rooted in racism and sexism, single-payer health care is not going to eradicate that. Only a willingness to address the structural forms of oppression in our society will improve those kinds of inequalities.” So now what? Vigilance The very systemic nature of the underlying problems means change, if it comes, will require not just awareness by marginalized groups, but a sustained investment of desire and effort by all Americans, on a level that’s unprecedented, Strings contends. “There is a growing awareness of the many different types of ‘isms’ that are still plaguing America,” said Strings, author of Fearing the Black Body: The Racial Origins of Fat Phobia. “However, this is not the first generation to be awakened to this reality. I don’t have any evidence that the hippies of the 1960s and ’70s were less aware of these problems than Gen Z. So that gives me a little bit of pause. I want to feel hopeful, but I know we’ve been here before.” Williams, the Harvard health school dean, agrees the task is tall, requiring “radical, large-scale investment” in public health, housing, and environmental justice, among other efforts. But she remains hopeful that the Covid-19 crisis can be a springboard for systemic change. “If we remain vigilant about voting, if we hold our elected officials accountable, and if we each do our own part to help each other, hope can turn into reality,” Williams said. 
“It is up to us to keep pushing for social justice on scale.”
https://gen.medium.com/it-will-take-years-for-people-of-color-to-recover-from-the-covid-19-fallout-fd23282523cd
['Robert Roy Britt']
2020-08-19 05:31:02.540000+00:00
['Health', 'Racism', 'Coronavirus', 'Race', 'Healthcare']
Flexible entities with class-transformer
Hello dear readers! Introduction Every developer has worked, is working, or will work with an API to fetch remote data. The data comes back as some kind of model, entity or raw format, and we often run into problems with the data format and naming case. Problems For example, let's make a request to TMDB to get the latest movie (https://developers.themoviedb.org/):

{
  "title": "Movie name",
  "overview": "Movie overview",
  "release_date": "2020-11-02",
  "adult": false,
  "runtime": 5400
}

Next, we could use a plain JavaScript object or a class instance to store the data.

const response = await axios.get(
  'https://api.themoviedb.org/3/movie/latest',
  ...
)
const data = response.data;

// or

class LatestMovie {
  // Properties
  constructor(params) {
    // Init properties from params
  }
}

But both approaches have problems: a plain JavaScript object gives no information about the stored data and forces us to work with snake_case names (not the JS way), while the class approach requires a lot of boilerplate (describing the properties, initializing them in the constructor, and again dealing with snake_case inside the constructor). Solution We can solve this problem quickly and painlessly with class-transformer. This package gives us a declarative way to describe classes, rename properties, and transform property values when a class instance is created. So now we can rewrite our data class as:

import { Expose, Type, Transform, plainToClass } from 'class-transformer';

class LatestMovie {
  @Expose()
  public title: string;

  @Expose()
  public overview: string;

  @Expose({ name: 'release_date' })
  @Type(() => Date)
  public releaseDate: Date;

  @Expose({ name: 'adult' })
  public isAdult: boolean;

  @Expose()
  @Transform(value => `${value / 60} minutes`, { toClassOnly: true })
  public runtime: string;
}

const response = await axios.get(
  'https://api.themoviedb.org/3/movie/latest',
  ...
)
const latestMovie = plainToClass(LatestMovie, response.data);

As you can see, we don't have to write a constructor or describe the properties by hand. class-transformer does it for us through decorators: @Expose marks a property as transformable; @Type describes the type of a property (it can be a built-in type or another class-transformer class); @Transform describes a custom value transformation; plainToClass(Class, data) sets the data onto the class properties. Generalization Writing and importing the plainToClass function every time may not be convenient, so let's make the code more flexible and generic:

// axios.ts
import Axios, { AxiosInstance, AxiosRequestConfig, AxiosResponse } from 'axios';
import { plainToClass } from 'class-transformer';

function createAxiosInstance(): AxiosInstance {
  return Axios.create({
    baseURL: process.env.apiUrl,
    responseType: 'json',
    validateStatus(status) {
      return [200].includes(status)
    },
  })
}

export const axiosInstance: AxiosInstance = createAxiosInstance();

export async function request<T>(config: AxiosRequestConfig, Model: any): Promise<T> {
  const response = await axiosInstance.request<T>(config);
  return plainToClass<T, AxiosResponse['data']>(Model, response.data);
}

Now we can use it like this:

const movie = await request<LatestMovie>({
  method: 'get',
  url: 'https://api.themoviedb.org/3/movie/latest',
}, LatestMovie);

and the movie object will be an instance of the model class. That's all! Conclusion class-transformer gives us a declarative way to describe the models we need to transform before use. One model can also be a property of another model, and one model can extend another. So we end up with a JavaScript class that transforms its properties before we use them. Thanks for reading!
https://medium.com/js-dojo/flexible-entities-with-class-transformer-7f4f0fc43289
['Ildar Timerbaev']
2020-11-04 09:17:37.082000+00:00
['API', 'Typescript', 'Axios', 'JavaScript', 'Vuejs']
Great Little Last-Minute Editing Tips for Writers
Great Little Last-Minute Editing Tips for Writers is a skinny book that’s fat with great information for all writers — whether writing is your full-time job or a part of your professional responsibilities. Great Little Last-Minute Editing Tips for Writers is an addition to Carolyn Howard-Johnson’s earlier book, The Frugal Editor. The core of this little book is a list of what Howard-Johnson calls “Trip-You-Up Words” — those errors that creep into our writing either because we typed them wrong but spellchecker didn’t catch it, or we didn’t know we were making a mistake. What makes the book a fun read is Howard-Johnson’s style. This isn’t a dry glossary of definitions; it’s a conversation with an editor who genuinely wants your work — whether it’s a novel destined for the bestseller lists or an email to your mom — to be accurate and clear. In addition to explaining what the words that tend to trip us up mean, Howard-Johnson offers practical tips for testing yourself if you’re not sure you’re using the right word. While the target audience for Great Little Last-Minute Editing Tips for Writers is writers who are pitching their manuscripts to editors (which explains the mention of gatekeeper in the subtitle), anyone (including self-publishers, staff writers, and anyone who appreciates correct word usage) can use the information in this book. Get a copy for yourself and as a gift for the other writers in your life. This article was originally published on my site at CreateTeachInspire.com. You can reach me there or email me at jacquelyn@contacttcs.com.
https://medium.com/illumination/great-little-last-minute-editing-tips-for-writers-ee7e52c4870f
['Jacquelyn Lynn']
2020-12-24 12:50:11.692000+00:00
['Self Improvement', 'Self Publishing', 'Writing', 'Grammar', 'Creativity']
Working at the best point in React
React is a great framework and becomes more popular day by day. There are competitors such as Vue and Angular, and yes, both of them are great frameworks too with their own benefits compared to React, but I have no experience with either of them, so we are going to focus on React only. I use React because it was the first framework I met when I came to the front-end. I love building anything on the front-end, such as animations, components, and helpful tools that boost the effectiveness of my daily work as a front-end developer. I prefer not to use an external library or tool if I could build one myself, particularly when creating a component, because it helps you understand your code better, keeps your work original, and lets you work creatively without restriction. Thanks to Facebook, React came to the rescue by fulfilling my front-end needs, and it became even more powerful after hooks were officially introduced in 2019. I assume you have at least some experience with React, even if it is only initializing a React app, because in this article I will share how my team works with React in practice, including hooks as the new system in React. Enjoy! The language As far as I know, React only gives you two languages to pick from, namely JavaScript and TypeScript. Let's see a comparison between them (adapted from https://smthngsmwhr.wordpress.com/2013/02/25/javascript-and-friends-coffeescript-dart-and-typescript/). There are more characteristics I could show you, but let's focus on these two key ones. From this information alone it is actually 50:50 which one to choose. Why? If we are talking about a big project at a big company where you work as a software developer or engineer, then having scalable and maintainable code is very important; if you don't care about that too much, you may prefer the language with the faster compilation time, since perhaps your application is disposable. TypeScript helps JavaScript developers achieve maintainability through type checking, which ensures that all parts of the system fit together. On the other hand, TypeScript compiles more slowly than JavaScript because of that type checking. For that reason, comparing a non-typed language with a typed language on compilation time alone is not really fair; you should compare typed languages with other typed languages. If you have used a typed language such as Java before, you know how much harder it is to write than Python or JavaScript. But in the end, whether you realize it or not, far fewer bugs escape to runtime with a typed language, because they are caught at compile time. This is why a typed language is more maintainable: you don't want your clients to witness annoying bugs that could have been caught during compilation, right? Since we only have two options, and because of all these reasons, we chose TypeScript without a doubt as the main language for developing our React application, even though compilation is slower. In practice the type-checking time is tolerable, since it happens once at build time and not at runtime in production. From personal experience, there is no big difference between TypeScript and JavaScript in compilation time, nothing you would really notice, so don't worry about it. Many developers are adopting TypeScript for their applications for the same reason; in the end, TypeScript may even become more popular than JavaScript.
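To make the compile-time argument a bit more concrete, here is a tiny, made-up illustration (not from the original post) of the kind of mistake TypeScript rejects before the code ever runs, while plain JavaScript would only reveal it at runtime:

interface Movie {
  title: string;
  runtime: number; // minutes
}

function formatRuntime(movie: Movie): string {
  return `${movie.runtime} minutes`;
}

// TypeScript rejects the next call at compile time because 'runtim' is not a property of Movie.
// In plain JavaScript the same typo would only surface at runtime as "undefined minutes".
// formatRuntime({ title: 'Movie name', runtim: 90 });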
Programming Style and Common Techniques A lot of developers still use old techniques, particularly for array manipulation. Many sustainable companies with JavaScript or TypeScript codebases have already dumped the old techniques and try to adopt functional programming as much as possible when working with React. React itself is close to functional programming: no mutation, only constants; composition for the component hierarchy; and a function-oriented style now that React has hooks. To stay in sync with the project, we also need to adopt functional programming as much as possible. Here is how we did it. Use constants We always try to use constants wherever possible, because React should have no mutation at all. All variables are constants, and not only variables: functions are constants too. Array manipulation Manipulating an array can be done with a for-loop or a while-loop, but that is the traditional way and tends to get confusing when the manipulation is complex. Instead, we use the filter and map methods: filter keeps only the values we are interested in, and map turns each element into a new element (a small sketch of this pattern appears at the end of this section, just before the hooks walkthrough). In the component snippet, we have a list of values and want to show it nicely on the page, but first we need to limit the values, so we filter them and then map the filtered values. Pretty straightforward, right? There is another useful function for array manipulation, the reduce method, which extracts information from each element and accumulates it all into a single result. All of these array functions are higher-order functions, one of the core functional programming patterns. Magic syntax for object and array merging Merging in JavaScript/TypeScript looks a little different. Let's see!

const newCache = {
  investigationCaseFormData: {
    ...global.cache.investigationCaseFormData,
    referenceCase: newReferenceCase,
    referenceCaseId: newReferenceCaseId,
  },
}

This is an example of a cache in our application: if we want to edit the cache, we copy global.cache.investigationCaseFormData first and then edit it by adding the new referenceCase and referenceCaseId.

const newTableData = [
  ...tableData,
  ...newData,
]

This snippet comes from the Table component on the web; as you can see, we merged two different arrays. Composition This is another functional programming pattern, and React uses it for the whole component hierarchy. On our side, we as developers must support it by composing all the components we have. Don't worry about this: whether you notice it or not, you will definitely do it when working with React. Here is an example of composition; it is essentially the same as writing pure HTML.

<Title>
  <Box
    axis={Box.Axis.Vertical}
    crossAxis="flex-end"
  >
    <Text isBold={true} fontSize="6vw">TBCare</Text>
    <Text isBold={true} fontSize="6vw">PPTI</Text>
  </Box>
</Title>

Hooks: the perfect case to use them Since we use modern React with TypeScript as the language, let me share how we use hooks in the best way. Note that you need at least React 16.8 to use hooks.
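Before diving into the hooks, here is the promised sketch of the filter/map/reduce pattern described above. The original snippets are embedded as images in the post, so the array and the names here are invented purely for illustration:

// Hypothetical data: show only active students, build display labels, and count them.
interface Student {
  name: string;
  isActive: boolean;
}

const students: Student[] = [
  { name: 'Andi', isActive: true },
  { name: 'Budi', isActive: false },
  { name: 'Citra', isActive: true },
];

// filter keeps only the values we are interested in.
const activeStudents = students.filter(student => student.isActive);

// map turns each element into a new element (here, a display label).
const labels = activeStudents.map(student => student.name.toUpperCase());

// reduce accumulates information from each element into a single result.
const activeCount = students.reduce(
  (count, student) => (student.isActive ? count + 1 : count),
  0,
);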
There are 5 types of hooks that we used, as follows: useState — a function to define a simple state; useEffect — a function to call another function (callback-like) whenever there is an event (effect); useReducer — a function to define a complex state, kept separate from the component; custom hooks — hooks we created ourselves as development utilities; useContext — a function to get state from an ancestor component. A definition alone never paints a good picture, so let's see how we used each of them. useState Normally we use useState whenever it is just a normal state and there is nothing special to tackle. For example, we have a Field component, and inside it we need to save a value; whenever the user changes the input, we need to change the stored value as well. This is how it looks:

import React, { useState } from 'react';
import { Box } from 'components';

function Field({
  value = "",
  //... other props can't be shown
}: FieldProps) {
  const [innerValue, setInnerValue] = useState(value);

  // ... other stuffs

  return (
    <Box>
      // ... other components
      <StyledField
        value={innerValue}
        onChangeText={newValue => setInnerValue(newValue)}
        // ... other props can't be shown
      />
      // ... other components
    </Box>
  )
}

useState accepts one parameter, the initial value, and returns the value you just initialized along with a setter function for it. How does this work? We use innerValue on the input component (StyledField, a styled-components element); onChangeText passes a new value through newValue whenever there is new input from the user, and setInnerValue sets that newValue into innerValue. Finally, the Field component re-renders with the new value. As simple as that. useEffect Next is useEffect, used to watch for an event and do something about it; that is pretty much all it does. The event we are talking about here is a change in some watched variables. Let's see how we used it in the Field component.

import React, { useState, useEffect } from 'react';
import { Box } from 'components';

function Field({
  value = "",
  //... other props can't be shown
}: FieldProps) {
  const [innerValue, setInnerValue] = useState(value);

  useEffect(() => {
    setInnerValue(value)
  }, [value])

  // ... other stuffs

  return (
    <Box>
      // ... other components
      <StyledField
        value={innerValue}
        onChangeText={newValue => setInnerValue(newValue)}
        // ... other props can't be shown
      />
      // ... other components
    </Box>
  )
}

See the new thing that was just added? There is a useEffect there. Here is a short description of useEffect: the first parameter is the function that will be called when there is an event, and the second parameter is the list of variables watched for changes (if one of them changes, that is the event). We use this useEffect to see whether the Field's value from outside has changed, and if so to synchronize the two values (value and innerValue). It is like forcibly changing innerValue from the outside, regardless of what innerValue currently is inside Field. That is all there is to useEffect. useReducer and custom hooks Back to useState: I mentioned that useState is a simple technique for simple state. Some of you may wonder what complex state looks like. A state is complex when it can be set in several different ways, and fortunately we have one of those on the project: form state. Besides the Field component, we also need form state to validate the input. We wrapped useReducer in a custom hook; here is how the code looks, part by part.
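The snippets for this part are embedded as images in the original post, so here is a rough, simplified sketch of what a useReducer-based useFormState hook of this kind could look like. useFormState is the name used in the post, but the field shape, the action names and the validation rule below are placeholders of my own, not the project's actual code:

import { useReducer } from 'react';

// Hypothetical shape of a single form field.
interface FieldState {
  value: string;
  isValid: boolean;
}

type FormState = Record<string, FieldState>;

// Two ways of setting the state, picked with a switch-case in the manipulator.
type FormAction =
  | { type: 'SET_VALUE'; field: string; value: string }
  | { type: 'VALIDATE_ALL' };

function formReducer(state: FormState, action: FormAction): FormState {
  switch (action.type) {
    case 'SET_VALUE':
      // Update a new value on a specific field.
      return {
        ...state,
        [action.field]: { ...state[action.field], value: action.value },
      };
    case 'VALIDATE_ALL':
      // Update isValid on every field, here with a made-up non-empty rule.
      return Object.keys(state).reduce<FormState>((next, name) => {
        next[name] = { ...state[name], isValid: state[name].value.trim().length > 0 };
        return next;
      }, {});
    default:
      return state;
  }
}

// The custom hook: useReducer takes the manipulator function and the initial value.
export function useFormState(initialState: FormState) {
  const [formState, dispatch] = useReducer(formReducer, initialState);

  const setValue = (field: string, value: string) =>
    dispatch({ type: 'SET_VALUE', field, value });
  const validateAll = () => dispatch({ type: 'VALIDATE_ALL' });

  return { formState, setValue, validateAll };
}

A component would then call something like const { formState, setValue, validateAll } = useFormState({ fullName: { value: '', isValid: false } }) and wire setValue into each Field's change handler. The real hook in the post is richer, but the useReducer mechanics are the same.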
The embedded snippets show the custom hook (main part) and the manipulator that useReducer calls to update the state whenever there is a setter action. I know there is a lot going on there, so just focus on lines 118–134 of the first snippet: that is where we call useReducer, which accepts two parameters. The first parameter is the manipulator function that will be called when there is an action; the second parameter is the initial value. The manipulator itself is shown in the second snippet. Notice that we used a switch-case: the form state can be set in two different ways, and the switch-case picks one of them. The actions are: update a new value on a specific field, and update isValid on each field by validating all the fields. That is how we worked with useReducer. Not only that, we also wrapped it in a custom hook, useFormState, which acts as a form validator. Notice that we use the same prefix as the other hooks, namely use-; that is intentional, because React recommends this prefix as the convention for custom hooks. useContext Experienced React developers know how hard state management is between two components that sit far apart in the tree. For instance, if you want to share state from a grandparent component with a deeply nested child component, one option is to propagate it through props from the top component down the tree. That is a bad idea, because the intermediate components end up carrying state they do not care about and you lose cohesion. Some experts suggest Redux, but it is actually not quite the perfect tool for this problem in current React, which has hooks; let me show you why after explaining useContext. Thanks to React we now have useContext, which saves you from propagating state through props and lets your components keep high cohesion. In our project, we used useContext only once, to deliver global state to all components inside the tree. This is how we defined it with useContext:

import React from 'react';
import { ThemeProvider } from 'styled-components';

const theme: ThemeProps = {
  colors: {
    totallyWhite: '#FFFFFF',
    almostWhite: '#FAFAFA',
    gray: '#5F5F5F',
    mediumGray: '#C2C2C2',
    lightGray: '#EFEFEF',
    red: '#9A3838',
    green: '#0DCE66',
    black: '#454545',
  },
};

export default function App() {
  // Other stuffs we defined here
  return (
    <ThemeProvider theme={theme}>
      // other components
      <Router>
        <Home />
      </Router>
      // other components
    </ThemeProvider>
  );
}

Working with React can also mean a different paradigm for styling. On this project we don't use CSS or SCSS; we use styled-components, since it makes it easier to share common variables, and here we want to share a consistent set of colors. So here it is; the snippet is pretty straightforward, right? You define your theme and then put it on ThemeProvider. Keep in mind that there is a Router component between ThemeProvider and Home; we will come back to that. Also, this is only the first part, the provider. Here is the other part, the consumer:

import React, { useContext } from 'react';
import { ThemeContext } from 'styled-components';

export default function Home() {
  const { colors } = useContext(ThemeContext);

  return (
    <Box
      width="100%"
      axis={Box.Axis.Vertical}
    >
      <Content title="Statistik Kasus Per Wilayah">
        <Box
          width="100%"
          padding="48px 24px"
          mainAxis='center'
          crossAxis='center'
          borderRadius="3px"
          background={colors.totallyWhite}
        >
          // ... other components
        </Box>
      </Content>
    </Box>
  )
}

Simple, right? You get the colors from the context and then use them in a component; here we use them on the Box component for the white background.
Also, notice that in the provider code there is a Router component between ThemeProvider and Home, yet we did not have to involve the Router in propagating the state. The Router does not need to know anything about the colors; it only needs to know its own responsibility in the tree. That is exactly what useContext is for. Now you understand how useReducer works and how useContext can handle state management between components that are far apart. Redux is essentially useReducer + useContext, and since this particular problem can be handled by useContext alone, we don't need the full machinery of Redux to solve it. That is the real reason we don't use Redux on this project: we can do pretty much everything with hooks. Thanks again, Facebook! Takeaways Working with React is largely function-oriented, since React itself leans on functional programming, even though it is not completely functional because React still needs to modify the DOM (mutation). I strongly suggest you learn functional programming; it is a good step toward developing scalable and maintainable React applications. I hope that by reading all the way to this finish line you have a better sense of how to work well with React. Thank you!
https://nandhika.medium.com/working-at-the-best-point-in-react-40d11036190a
['Nandhika Prayoga']
2020-04-28 12:58:40.856000+00:00
['Functional Programming', 'JavaScript', 'Reactjs', 'React', 'Typescript']