Mindful Design: What the UX World Can Learn from Yoga
This article is reprinted from the September 2013 issue of UXPA Magazine. “Yoga will change your life.” I will never forget these words that my husband, who has practiced a tradition of meditation for 25 years, said to me when I told him I was signing up for yoga teacher training. My intention was not to become a yoga teacher, but to deepen my practice. I had practiced yoga at various times since I was a teenager, but it was only after I sought refuge from my hectic, stressful life as an executive at Google and mother of two that I realized the benefits of yoga extended well beyond gaining flexibility and avoiding injury. Indeed, deepening my understanding of yogic philosophy and adopting a daily mindfulness practice were transformative in ways beyond my expectations. Most importantly, I gained a perspective that guides my ability to tend to what is ahead of me. Soon, I started to appreciate how this perspective permeates everything I do in life. Mindfulness Defined Mindfulness is a way of paying attention to, and seeing clearly, whatever is happening in our lives. The attention paid is purposeful, in the moment, and without judgment. For those unfamiliar with mindfulness practices, consider what it feels like to be “not present,” perhaps when you’re on autopilot or multitasking. When we are “not present,” we fail to notice the good things about our lives, fail to hear what our bodies are telling us, or poison ourselves with toxic criticism. Mindfulness is the opposite of that: it’s about having the time and space to attend to what is ahead of us, despite the distractions competing for our attention and the past history that shapes how we think and perceive the world. This focused attention is a tremendous asset to designers throughout the design process and all its activities: from inspiration and ideation, to design and implementation. Being Mindful When Seeking Inspiration Design is for people; if you cannot understand people, you cannot design. During the early stages of design, designers often seek inspiration and stimulate innovation by building empathy with users. The act of combining empathy to understand a problem with creativity during the generation of insights and solutions is at the core of “design thinking.” By uncovering people’s latent needs, we can gain insight into ways our interactions with objects or surroundings can be made joyful. The methods used to gain empathy in user experience work are by now well established. Field research, contextual inquiry, and usability studies are frequently used to bring attention and awareness to the actions we otherwise take for granted. These unconscious but ordinary acts reveal subtle but crucial ways we adapt to a world not perfectly tailored for our needs. The designer’s work as observer, not participant or judge, epitomizes the work of an empathic mind, not an analytical mind. Empathic thinking is often easier said than done. Anthony Jack of Case Western Reserve University found that analytical thinking suppresses empathic thinking, and vice versa. There is a brain divide, so to speak, that prevents us from invoking the analytical mind and the empathic mind at the same time. As we are constantly surrounded by computers and immersed in company cultures increasingly focused on making decisions based on data, empathic thinking can become even harder to come by. 
Mindfulness practices such as yoga and meditation, which have been shown to increase empathy for others, offer a practical way for designers and team members to boost their ability to delve deeper into the mind, body, intuition, and feelings and integrate them into a creative expression that can be shared with the world. While mindfulness practices can be effective when engaging in design research and interacting with users, the best outcomes arise when there is a regular practice, as empathic thinking becomes a muscle that can be flexed when needed. Being Mindful When Ideating During the ideation phase, designers must embrace a divergent thinking mindset. The main goal during this phase is to generate as many ideas as possible, and we give ourselves permission to come up with a lot of bad ideas in order to generate a few good ones. Mindful design during this phase is about abandoning judgment and fear: letting go of judging ideas as “good” or “bad” while brainstorming, and letting go of the need to achieve the One Big Idea. Yoga teaches us a lot about how to be playful and abandon judgment and fear, and illustrates how mindfulness practices impact outcomes in the physical body. When we stay in the present moment, we stop comparing ourselves to others. Without the ego in the way, we are able to be with ourselves without judgment and can more effectively sink into the poses that stretch the body. When we allow ourselves to be playful and not worry about falling or reaching for a goal, we allow ourselves to experiment and try things we didn’t think we could do. When the mind is present, the shapes we make with our bodies are beautiful. If we stress ourselves to contort into various shapes, the shapes would not be beautiful, but alarming. Similarly with brainstorming, when we let go of our ego and let go of judgment and fear, we become more playful and creative. Stanford psychologists Philip Zimbardo and John Boyd found that people with an orientation toward the process of making — that is, staying focused on the act of creating, rather than the end product — develop more creative outcomes. “When we are concerned about the product of the process in which we are engaged, we worry about how it will be evaluated, judged, accepted, and rejected. Our ego is put on the line. Worries can then feed back and distort the process of creating new ideas, new visions, new products.” Mindfulness practices help us focus on the present. When we’re not worried about judgment of ideas, which comes later in the process, we can more easily relax and the creative mind can flow. As ideation becomes fun, joyful, and playful, ideas generated are similarly creative and fun. Thus it’s extremely important for designers to recognize what kind of mindset is appropriate for the stage of design they’re in. During brainstorming and ideation, where divergent thinking dominates, designers should adopt a playful, non-judging mindset. Being Mindful When Designing Solutions The thinking mind is not the creative mind. When designers are co-located with cross-functional team members to collaborate on a project, the team benefits from increased camaraderie, rapport, and trust built through frequent informal interactions. Yet, when designers are interacting with team members, conversation and negotiation invoke the thinking mind. 
In addition to co-locating designers with development teams, providing separate studio space for designers is an ideal way for designers to achieve the quiet contemplation necessary to connect with their creative minds, which is equally important. Designers need not only different physical space but also separate mental space to design. While engineering and marketing counterparts often seek ways to add more functionality and features into a product, designers need to strike the right balance between features and functionality without overwhelming the user. The secret of good design is knowing what to leave out. Thus mindful design during this phase is about achieving a Zen-like quality of not being particularly attached to anything, whether a feature, a specific design element or solution, or a desired outcome. Ironically, by distancing yourself from the outcome, it becomes more possible to create a great design. This notion of “non-attachment” is a fundamental yogic principle: it is a mindset where you do the best you can and what you think is right, but not allow your happiness to depend on the outcome. As a designer, when your happiness does not depend on whether a pet feature or design solution gets included or not, or whether it’s your idea or someone else’s idea that gets embraced, less is at stake, and the mind is free to be more creative, more open, and more apt to explore. Once ideation is over and it’s time to design, convergent thinking replaces divergent thinking, and attention shifts towards prototyping a few ideas to test. Mindfulness practices give us the strength to let go of the need to be perfect. Striving for perfection, after all, is about ego: perfectionism comes from the need to avoid shame and blame from creating a less-than-perfect solution. The rise of agile development practices has proven that it’s far better to invest time and energy into prototyping, testing, and iterating, than to take a waterfall approach in which plenty of time is invested in planning and designing a solution that might not actually be what works best. Be OK with testing a less than perfect design, but commit to gathering feedback and iterating to continuously improve. The Mind-Body Connection: Putting Into Practice Practicing yoga teaches you to notice what is happening in the body and respond to those cues. Subtle shifts in the mind can lead to changes in posture, energy flow, and the way one carries oneself in the body. Conversely, physical postures impact the mind as well; specific poses can induce surges of hormones that increase confidence, joy, assertiveness, etc. The next time you run a design meeting, consider doing some yoga: Help people shed their ego by having them be in the present moment. At the beginning of a design review or brainstorm, have everyone pause for a few moments to practice focusing their attention and awareness on the breath. It may seem daunting and odd to ask people to do this, but in my experience, having a few moments to pause and declutter the mind is always well received! Movement and cognition are highly related. Get people to move during meetings. Have them stand and gather around to review design mockups. Give them pens to scribble on printouts when they give feedback. When facilitating brainstorms, clearly delineate space for divergent thinking and allow all ideas to flow through, regardless of judgment. Help others overcome their fear of rejection by responding with “Yes” or “Yes, AND” instead of “Yes BUT” or “No”. 
Adopting a regular personal practice can help boost your design skills: Consider a daily practice of meditation to boost empathic thinking, adopt a playful attitude, and practice letting go of attachment and letting go of the fear of being judged by yourself and others. Meditation can be as simple or elaborate as you want it to be. Choose to meditate anywhere from 2 minutes to 1 hour per day. You can do this while lying down, walking, or sitting upright with the hips elevated above the knees. To help focus the mind, you can employ a variety of techniques: bring the attention back to the breath when the mind wanders; stare at a mandala, candle, or object of meditation; repeat a mantra. With practice, it becomes easier to quiet the mind and reach a calm, centered state. It is not uncommon for people to report increased creativity when they practice yoga. When there is more openness in the body, there is more openness in the mind. Do a few stretches to open the chest and shoulders at the beginning of design sprints or hackathons; this helps prime the body and mind for receiving new ideas. The hips and psoas are often tight from too much sitting and standing, resulting in having us in a constant “fight or flight” posture; open this part of the body to enter a playful state of mind. Forward folds are introspective poses that are helpful for getting to a place of quiet contemplation. We are most creative when we achieve a “relaxed but alert state”. A daily mindfulness practice, whether yoga or meditation or both, helps us practice putting the body and mind into such a state. Most important of all is to recognize that you can choose your intention and to actively make that choice. In yoga, we begin each practice by setting an intention for how we are “being” in the present moment. Set your intentions based on what matters most to you and make a commitment to align your worldly actions with your inner values. As you gain insight from meditation and reflection, your ability to act from your intentions blossoms. Similarly with design, be clear about what your intentions are with your offering, whether a product or service. Internalize your mission and values and let design be the expression of your intent. When your intentions are clear, so too are the fruits of your labor. References: 1. Anthony I. Jack, Abigail Dawson, Katelyn Begany, Regina L. Leckie, Kevin Barry, Angela Ciccia, Abraham Snyder. fMRI reveals reciprocal inhibition between social and physical cognitive domains. NeuroImage, 2012; DOI:10.1016/j.neuroimage.2012.10.061 2. Lutz A, Brefczynski-Lewis J, Johnstone T, et al. Regulation of the neural circuitry of emotion by compassion meditation: effects of meditative expertise.PLoS ONE. 2008; 3(3):e1897. 3. Jennifer S. Mascaro, James K. Rilling, Lobsang Tenzin Negi, and Charles L. Raison. Compassion meditation enhances empathic accuracy and related neural activity. Soc Cogn Affect Neurosci first published online September 5, 2012 doi:10.1093/scan/nss095 4. Zimbardo,P. G., & Boyd, J. N. (2008). The Time Paradox. New York: Free Press, Simon & Schuster.
https://medium.com/design-your-life/mindful-design-what-the-ux-world-can-learn-from-yoga-20146a763072
['Irene Au']
2016-08-17 16:53:02.364000+00:00
['Yoga', 'UX', 'Design', 'Creativity', 'Mindfulness']
How Coronavirus Re-Crowned Corona Beer as the World’s Leading Lager
Corona lager sales are up 40% — looks like I’m not the only one who thought of the joke. In July 2020, the city where I live eased lockdown restrictions. For the first time in months, I went to visit a friend. What did I take to his garden? A 4-pack of Corona lager. It seems like I’m not the only one who thought of this joke. Sales of the Mexican lager in the US surged by 28.8% in March 2020. Drink market analysts IWSR said this was likely due to “tongue-in-cheek” social media memes linking the brand to the virus. Meanwhile, in the UK, sales of Corona beer are up 40% this year — the second-biggest increase of any lager brand. Why this increase? It’s a familiar brand — Corona is the world’s biggest lager brand. Brand consultant Henry Farr explains: “When so much of people’s lives are uncertain, consumers will look to brands they are familiar with and trust.” It’s part of a wider trend. Sales of alcoholic beverages soared during the early stages of stay-at-home orders, and figures released in Britain show that’s continued through the year. Corona beer is leading the trend and is outperforming most of its competitors. No publicity is bad publicity. While not always strictly true, that seems to apply in this case. As stated above, market experts believe social media memes ironically linking Corona lager with the coronavirus were one of the reasons the beer saw such a large spike in sales compared to other lager brands. Sales went down at the start of the pandemic. It was never a given that the link between Corona lager and the virus would lead to more sales. In the first three months of 2020, the brand had its worst quarter in decades as fears around the virus started to increase and consumers spent less time in bars. Google search trends at the time showed that people linked the virus to the beer. What’s more, production of the beer halted for a period due to lockdown restrictions. Why do Corona beer and the coronavirus have similar names? It’s in the crown. “Corona” is both the Spanish and Latin for crown. On the label of Corona beer is a crown. The coronavirus is named for the crown-like spikes that are seen on the virus. The World Health Organisation advises that virus names not be linked to locations, people’s names, animal species, food, or anything else that could incite blame. That’s because, in the past, diseases were named after the places where they were discovered, and that led to stigma around those locations. If you’re still wondering — yes, it’s still safe to drink Corona beer (in moderation!). There’s no link to the virus other than the name.
https://medium.com/2-minute-madness/how-coronavirus-re-crowned-corona-beer-as-the-worlds-leading-lager-f5cbe2d6669f
['David Majister']
2020-12-26 14:27:13.139000+00:00
['Coronavirus', 'Marketing', 'Pandemic', 'Business', 'Linguistics']
Getting Organised to Quickly Prototype Boto3 Automation Tools in Jupyter Notebooks
A little bit of organisation goes a long way. I would like to share with you some organisational practices I have evolved that, in my experience, make it quicker to move from a Python/Boto3 prototype to a production automation tool. To demonstrate them I am going to walk through the creation of a Python class that uses the AWS Cost Explorer API via Boto3 to return the total cost of an account. Maybe this particular use case is useful to you, maybe not; in any event, it provides a good way to share some effective working practices. Prototype Using Multiple Jupyter Notebooks I am sure many of you are already using Jupyter Notebooks for prototyping automation tools. The way the output is shared dynamically in the notebook alongside your code makes it great for exploring functionality you may not be fully familiar with. Something that may be new to you is that you can run one notebook within another. The code to do this is:

%run another-notebook.ipynb

Now I am not suggesting you get carried away with this functionality, but I am suggesting that when developing a new automation tool you use a structure of three notebooks, namely: a notebook for your functions and/or classes; a notebook to test your program, with %run to include your classes/functions notebook; and a notebook to run your program, with %run to include your classes/functions notebook. You could call the notebooks “run”, “test” and “my-class”. Defining Our Demo Tool (Application) I consider myself a “toolmaker”, hence the use of the word tool rather than application, but I am sure you get what I mean. Our demonstration tool will accept: a session (you could be using access keys or assuming a role to access a target account), an end date, and the number of days to go back from the end date. The tool will provide the total cost for the account identified in the profile for the time period defined by the inputs. For this article, I have left it there. I have not extended the design to send the answer somewhere else. You could let your imagination run wild, providing your answer to the world through an API or SNS’ing it to a more select crowd. Write a Failing Test First, we need a use case to test against. I suggest using one month’s costs. You can get the total cost for the target account from Cost Explorer via the console. You will hardcode the date of the first day of the next month, the number of days in the month, and the result, i.e. the cost you get from Cost Explorer. Let’s say you choose the month of October (with 31 days); your code in the “test” notebook will look like this:

%run my-class.ipynb

ce_test_value = 123.00
assert accountCost(mysession, '2020-11-01', -31).cost \
    == ce_test_value, 'FAILED total cost test'

As intended, it will fail (in many ways). The value for ce_test_value is the cost you looked up via the console in Cost Explorer. Put a message in to make it clear it’s the assertion that failed, and which one it is, in case you end up with more than one. There are two main reasons that the current test is failing: the “my-class.ipynb” notebook containing the definition of the class “accountCost” does not exist, and you have not defined any way for the program to connect to a target account. Connecting to the Target AWS Account If you are running locally, I am assuming everyone is familiar with using access keys that are defined in a local ~/.aws/credentials file to access accounts. My suggestion when doing this is: don’t create the keys for the user you normally use to log in to the console. 
Rather, create a user specifically for the tool you are creating, create access keys for it, and use those access keys to create a named profile in your ~/.aws/credentials file, along the lines of the following:

[default]
aws_access_key_id=XXXXXXXXXXXXXXXX
aws_secret_access_key=aaaaaaAAAAAAbbbbbbBBBBBB

[my-named-profile]
aws_access_key_id=YYYYYYYYYYYYYY
aws_secret_access_key=cccccccCCCCCCddddddDDDDDD

This will mean you can define the profile you want to use explicitly in your program rather than relying on the default. You will be able to switch between accounts within your program, and if you later want to assume a role, moving to this will be less disruptive to your code. For the new user you have created you are going to have to define an access policy. I think sometimes “least privilege” seems like a lot of hard work to figure out exactly what actions you need to allow. It is easier than you maybe think to find out what actions you must allow. Simply create a blank policy, don’t allow anything, run your program, and generally the error message will tell you exactly what action you need to allow. In this case, I will put you out of your misery by giving you the policy you require, which you can attach as an inline policy. Here it is:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ce:GetCostAndUsage"
            ],
            "Resource": "*"
        }
    ]
}

Generally, I would prefer using an AWS-managed policy, but in the case of the Cost and Usage API, none of the obvious candidates does the job, and they are in fact overly permissive for this narrow use case. So now that we have a way to connect to the target AWS account, we can amend the “test” code to accommodate this as follows:

import boto3

# set the profile to be used
profile_name = 'my-named-profile'
my_session = boto3.Session(profile_name=profile_name)

%run my-class.ipynb

ce_test_value = 123.09
assert accountCost(my_session, '2020-11-01', -31).cost \
    == ce_test_value, 'FAILED total cost test'

The only thing left to do is to create the class notebook and define the class within it. Class to Get Your Account Costs from Cost Explorer Via Boto3 The class assumes inputs of: a session, an end date, and the number of days back. You can create the notebook “my-class” and add the following code:

import boto3
from dateutil import parser as p
from dateutil.relativedelta import relativedelta as rd

class accountCost:
    def __init__(self, session, mydate, mydays):
        # cost explorer requires the region to be us-east-1
        client = session.client('ce', region_name='us-east-1')
        # convert the input end date from a string
        e_date = p.parse(mydate)
        end_string = f'{e_date.year}-{e_date.month:02}-{e_date.day:02}'
        # calculate the start date (mydays is negative, so this goes back from the end date)
        s_date = e_date + rd(days=mydays)
        # convert the start date to a string
        start_string = f'{s_date.year}-{s_date.month:02}-{s_date.day:02}'
        # get cost from the cost explorer api
        response = client.get_cost_and_usage(
            TimePeriod={'Start': start_string, 'End': end_string},
            Granularity="DAILY",
            Metrics=["UnblendedCost"],
        )
        # total up the days between the start and end date
        total_cost = 0.0
        for day in response['ResultsByTime']:
            total_cost = total_cost + float(day['Total']['UnblendedCost']['Amount'])
        # return the result as a string with two decimal places
        self.cost = f'{total_cost:.2f}'

Parsing the date makes the solution a little more robust in that it will accept a variety of date formats, standardising them in the program. Putting it all Together Having saved the “my-class” notebook, you can go back to the “test” notebook, and when you run it you should get no errors. 
Note that you will need to take into account that, in order to accommodate the technique used to fix the float to two decimal places, your test value needs to be converted to a string (f'{ce_test_value}'). If, in fact, you get nothing, this means the assertion passed: the result of instantiating your object equals the value you retrieved from Cost Explorer using the console. Now you can create the “run” notebook, and yes, at this stage it will look suspiciously similar to the “test” notebook. Here is my version:

import boto3

# set the profile to be used
profile_name = 'my-named-profile'
my_session = boto3.Session(profile_name=profile_name)

%run my-class.ipynb

print(accountCost(my_session, '2020-11-01', -31).cost)

The benefit you have gained is that not only have you tested your class, but you can also try things out in the “test” notebook as you develop your prototype further, before incorporating them into your “run” notebook. I hope you found this approach useful in stimulating your own thoughts on how to approach your next tool-building exercise. Any thoughts or suggestions, please feel free to reach out to me.
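As a brief follow-on to the earlier point about assuming a role: because accountCost only needs a Boto3 session, swapping the named profile for an assumed role is a small change. Here is a minimal sketch (the role ARN and session name are placeholders, not values from the article; the role must allow ce:GetCostAndUsage):

import boto3

# hypothetical role ARN and session name; replace with a role that exists in
# your target account and that your credentials are allowed to assume
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/cost-explorer-readonly',
    RoleSessionName='account-cost-demo',
)['Credentials']

# build a session from the temporary credentials and reuse the class unchanged
my_session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

%run my-class.ipynb
print(accountCost(my_session, '2020-11-01', -31).cost)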
https://stuart-heginbotham.medium.com/getting-organised-to-more-quickly-prototype-boto3-automation-tools-in-jupyter-notebooks-8c1850e53047
['Stuart Heginbotham']
2020-12-11 14:38:21.493000+00:00
['Python', 'Boto3', 'AWS']
What is Production System in Artificial Intelligence?
A production system is based on a set of rules about behavior. These rules are a basic representation found helpful in expert systems, automated planning, and action selection, and they provide some form of artificial intelligence. In this article, we will talk about the production system in artificial intelligence in the following sequence: What is a Production System? Features of a Production System Control/Search Strategies Production System Rules Classes of Production System Advantages & Disadvantages Production System in AI: Example What is a Production System? A production system (or production rule system) is a computer program typically used to provide some form of artificial intelligence. It consists primarily of a set of rules about behavior, but it also includes the mechanism necessary to follow those rules as the system responds to states of the world. Components of a Production System The major components of a production system in artificial intelligence are: Global Database: The global database is the central data structure used by the production system. Set of Production Rules: The production rules operate on the global database. Each rule usually has a precondition that is either satisfied or not by the global database. If the precondition is satisfied, the rule can be applied. The application of the rule changes the database. A Control System: The control system chooses which applicable rule should be applied and ceases computation when a termination condition on the database is satisfied. If multiple rules could fire at the same time, the control system resolves the conflicts. Features of a Production System in Artificial Intelligence The main features of the production system include: 1. Simplicity: The structure of each sentence in a production system is unique and uniform, as they use the “IF-THEN” structure. This structure provides simplicity in knowledge representation and improves the readability of production rules. 2. Modularity: The production rules encode the knowledge in discrete pieces. Information can be treated as a collection of independent facts which may be added to or deleted from the system with essentially no deleterious side effects. 3. Modifiability: This means the facility for modifying rules. It allows production rules to be developed in a skeletal form first and then refined to suit a specific application. 4. Knowledge-intensive: The knowledge base of the production system stores pure knowledge. This part does not contain any control or programming information. Each production rule is normally written as an English sentence; the problem of semantics is solved by the very structure of the representation. Control/Search Strategies How would you decide which rule to apply while searching for a solution to a problem? There are certain requirements for a good control strategy that you need to keep in mind: The first requirement for a good control strategy is that it should cause motion. The second requirement is that it should be systematic. 
Finally, it must be efficient in order to find a good answer. Production System Rules Production system rules can be classified as deductive inference rules and abductive inference rules. You can represent the knowledge in a production system as a set of rules, along with a control system and a database. A rule can be written as: If (condition) Then (action). The production rules are also known as condition-action, antecedent-consequent, pattern-action, situation-response, or feedback-result pairs. Classes of Production System in Artificial Intelligence There are four major classes of production system in artificial intelligence: Monotonic Production System: a production system in which the application of a rule never prevents the later application of another rule that could also have been applied at the time the first rule was selected. Partially Commutative Production System: a production system in which, if the application of a sequence of rules transforms state X into state Y, then any allowable permutation of those rules also transforms state X into state Y. Theorem proving falls under the monotonic, partially commutative class. Non-Monotonic Production Systems: these are useful for solving ignorable problems. These systems are important from an implementation standpoint because they can be implemented without the ability to backtrack to previous states when it is discovered that an incorrect path was followed. This increases efficiency, since it is not necessary to keep track of the changes made in the search process. Commutative Systems: these are useful for problems in which changes occur but can be reversed, and in which the order of operations is not critical. Production systems that are not partially commutative are useful for many problems in which irreversible changes occur, such as chemical analysis. When dealing with such systems, the order in which operations are performed is very important, and hence correct decisions must be made at the first attempt itself. 
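To make the global database, production rules, and control system concrete, here is a minimal sketch in Python based on the water-jug problem that appears in the example at the end of this article (the rule set is slightly condensed, and the code is an illustration rather than part of the original article):

from collections import deque

# Global database: the state (liters in the 5-liter jug, liters in the 3-liter jug).
# Production rules: (name, precondition, action) triples over that state.
RULES = [
    ("fill 5L",     lambda x, y: x < 5,          lambda x, y: (5, y)),
    ("fill 3L",     lambda x, y: y < 3,          lambda x, y: (x, 3)),
    ("empty 5L",    lambda x, y: x > 0,          lambda x, y: (0, y)),
    ("empty 3L",    lambda x, y: y > 0,          lambda x, y: (x, 0)),
    ("pour 3L->5L", lambda x, y: y > 0 and x < 5,
                    lambda x, y: (min(5, x + y), y - (min(5, x + y) - x))),
    ("pour 5L->3L", lambda x, y: x > 0 and y < 3,
                    lambda x, y: (x - (min(3, x + y) - y), min(3, x + y))),
]

def control_system(start=(0, 0), goal=4):
    """Breadth-first control strategy: systematic, causes motion, and stops
    when the termination condition (goal liters in the 5-liter jug) holds."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if x == goal:                       # termination condition on the database
            return path
        for name, precondition, action in RULES:
            if precondition(x, y):          # the rule is applicable
                state = action(x, y)        # applying the rule changes the database
                if state not in seen:
                    seen.add(state)
                    queue.append((state, path + [name]))
    return None

print(control_system())   # prints a shortest sequence of rules (six steps)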
Advantages & Disadvantages Some of the advantages of the production system in artificial intelligence are: The system is highly modular, because individual rules can be added, removed, or modified independently. There is a separation of knowledge and control (the recognise-act cycle). There is a natural mapping onto state-space search, whether data-driven or goal-driven. The system uses pattern-directed control, which is more flexible than algorithmic control. It provides opportunities for heuristic control of the search. It is a good way to model the state-driven nature of intelligent machines. It is quite helpful in real-time environments and applications. Now, let’s have a look at some of the disadvantages: It describes the operations that can be performed in a search for a solution to the problem. There is an absence of learning, as a rule-based production system does not store the result of the problem for future use. The rules in the production system should not conflict with one another: when a new rule is added to the database, it should be ensured that it does not conflict with any existing rule. Production System in Artificial Intelligence: Example Problem Statement: We have two jugs of capacity 5 liters and 3 liters, and a tap with an endless supply of water. The objective is to obtain exactly 4 liters in the 5-liter jug in the minimum number of steps possible. The production rules are: Fill the 5-liter jug from the tap. Empty the 5-liter jug. Fill the 3-liter jug from the tap. Empty the 3-liter jug. Empty the 3-liter jug into the 5-liter jug. Empty the 5-liter jug into the 3-liter jug. Pour water from the 3-liter jug into the 5-liter jug. Pour water from the 5-liter jug into the 3-liter jug, but do not empty it. Solution: It is possible to have other solutions as well, but these are the shortest, and the first sequence should be chosen as it has the minimum number of steps. With this, we have come to the end of our article on the Production System in Artificial Intelligence. I hope you understood what a production system is and how it is used to control a global database. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, and Ethical Hacking, then you can refer to Edureka’s official site. Do look out for other articles in this series which will explain the various other aspects of Deep Learning.
https://medium.com/edureka/production-system-in-ai-7cc2b453aa47
['Sahiti Kappagantula']
2020-11-04 14:28:18.313000+00:00
['Production', 'Rules', 'Deep Learning', 'Artificial Intelligence', 'AI']
Quantifying hard retinal exudates using Growing Neural Gas algorithms
Just as in the previous posts, I’ll leave the rigorous maths for the companion notebook, and stick to explaining the general idea. Just like Self-Organising Feature Maps, GNGs are iterative algorithms. However, unlike SOFMs, they do not require any initial specification of the number of neurons — as the name suggests, GNGs are growing, and new neurons keep getting added as long as the algorithm is running. Each iteration begins by picking a data point from the training set. Because GNGs generalise very well to an arbitrary number of dimensions, it is common to speak of these in terms of 𝛿-length vectors v. The neuron nearest to v, called the best-performing unit (BPU) in analogy to SOFMs, is moved closer to v. All neurons directly connected to the BPU are also moved closer to v. Determine the second-best performing unit (SBPU). If the BPU and SBPU are connected, set the age of this connection to zero. If they are not connected, connect them. Then increment the age of all other edges emanating from the BPU. If an edge has an age larger than the maximum age Amax, delete the edge. If this results in ‘orphan neurons’ (neurons with no edges connecting them), these are also deleted. Every λ iterations, the neuron with the largest cumulative error (sum of distance from each data vector v over each iteration) is identified as the worst-performing unit (WPU). Insert a new neuron halfway between the WPU and its worst-performing neighbour and delete the original edge between the WPU and its worst-performing neighbour. Iterate until some boundary condition, such as maximum number of iterations is reached. It’s simple to understand how this algorithm works, but it’s worth spending some time on thinking about why it works. As you might have noticed from the examples above, this method creates a partitioning of the space where the data is distributed, and does so by approximating a Delaunay triangulation (indeed, in his original paper, Fritzke referred to the graph generated by a GNG as an ‘induced Delaunay triangulation’). The idea of a growing neural gas algorithm is that unlike a SOFM, which requires some idea of how many neurons are required to represent the data, GNG determines where the model has been performing worst so far, and refines that area. This eventually results in a model that grows not uniformly but rather to expand the size of the graph where it can no longer cover (quantise) the data with the given resolution (number of neurons). Using GNG to count clusters In the first introductory Part to competitive neural networks, I have already introduced a use case for GNGs, namely as quick and efficient vector quantisation algorithms that create decent approximations of images. In the following, we’ll be looking at something slightly different, namely counting distinct objects and quantifying their sizes. Hard and soft exudates on a fundoscopy image from the DIARETDB1 data set (Kauppi et al., 2007). The DIARETDB1 data set by the research group of Kauppi et al. at Lappeenranta University of Technology contains 89 digital fundoscopy images, that is, images of the fundus of the eye, of five healthy volunteers and 84 people with some degree of diabetic retinopathy. 
In diabetic retinopathy, a complication of diabetes that affects the small blood vessels of the retina, long term inadequate blood glucose control leads to vascular damage, microaneurysms and exudates, where lipids (causing bright yellow hard exudates) or blood (resulting in pale, diffuse yellow soft exudates) have accumulated on the fundus. In the following, we’ll be using GNG to quantify these abnormalities. The DIARETDB1 data set contains ROI (Region of Interest) masks, but those merely outline areas that show a particular clinical feature. Can we use Growing Neural Gas to count how many clusters of hard exudates are present in the regions of interest? You bet! Isolating the ROI using consensus masks: the consensus mask of at least two experts’ votes (bottom right) is generated from the original ROI annotations (bottom left). This mask is used to isolate the region of interest from the fundoscopy image (top left), resulting in a masked image (top right). We begin with some image processing, namely by refining the area of interest. Each image was labelled by four experts, which created a mask. We can threshold the mask so as to require consensus by a given number of experts, a trick widely used in annotated research imagery (scroll to the bottom if you’re unfamiliar with it!). Then, we use the relatively prominent bright yellow colour of hard exudates to convert them to data points the GNG can begin to characterise (for the nitty-gritty, do refer to the companion notebook, where some of the added tricks, including some morphological transforms, are explained).
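For readers who want to see the GNG loop described earlier in code, here is a minimal sketch (the parameter values are illustrative, and orphan-neuron removal is omitted for brevity; the companion notebook the author mentions remains the authoritative version). Counting the connected components of the returned edge list is one simple way to get the number of distinct exudate clusters:

import numpy as np

def grow_neural_gas(data, max_neurons=80, n_iter=20000, eps_b=0.05, eps_n=0.006,
                    a_max=50, lam=100, alpha=0.5, decay=0.995, seed=0):
    """Minimal GNG sketch. `data` is an (n_samples, n_dims) array, e.g. the
    coordinates of thresholded exudate pixels. Returns neuron positions and
    the surviving edges of the induced graph."""
    rng = np.random.default_rng(seed)
    w = [data[rng.integers(len(data))].astype(float) for _ in range(2)]  # two seed neurons
    err = [0.0, 0.0]
    edges = {}  # (i, j) with i < j -> age

    def neighbours(i):
        return [b if a == i else a for (a, b) in edges if i in (a, b)]

    for t in range(1, n_iter + 1):
        v = data[rng.integers(len(data))].astype(float)
        d2 = [float(np.sum((v - wi) ** 2)) for wi in w]
        order = np.argsort(d2)
        bpu, sbpu = int(order[0]), int(order[1])
        err[bpu] += d2[bpu]
        # move the BPU and its directly connected neighbours towards v
        w[bpu] += eps_b * (v - w[bpu])
        for j in neighbours(bpu):
            w[j] += eps_n * (v - w[j])
        # age every edge emanating from the BPU, then refresh/create the BPU-SBPU edge
        for e in list(edges):
            if bpu in e:
                edges[e] += 1
        edges[(min(bpu, sbpu), max(bpu, sbpu))] = 0
        edges = {e: age for e, age in edges.items() if age <= a_max}  # drop over-age edges
        # every lambda iterations, insert a neuron between the worst unit and its worst neighbour
        if t % lam == 0 and len(w) < max_neurons:
            wpu = int(np.argmax(err))
            nbrs = neighbours(wpu)
            if nbrs:
                q = max(nbrs, key=lambda j: err[j])
                w.append(0.5 * (w[wpu] + w[q]))
                k = len(w) - 1
                err[wpu] *= alpha
                err[q] *= alpha
                err.append(err[wpu])
                edges.pop((min(wpu, q), max(wpu, q)), None)
                edges[(min(wpu, k), max(wpu, k))] = 0
                edges[(min(q, k), max(q, k))] = 0
        err = [e * decay for e in err]  # gradually forget old errors
    return np.array(w), sorted(edges)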
https://medium.com/starschema-blog/growing-neural-gas-models-theory-and-practice-b63e5bbe058d
['Chris Von Csefalvay']
2019-01-30 19:31:16.403000+00:00
['Machine Learning', 'Neural Networks', 'Artificial Intelligence', 'Computer Vision', 'Data Science']
Goku: Building a scalable and high performant time series database system
By Rui Zhang | Engineer, Storage & Caching. Co-authored by Jinghan Xu, Jian Wang & Tian-Ying Chang, Engineers. At Pinterest, developers rely on Statsboard to monitor their systems and discover issues. A reliable and efficient monitoring system is very important for development velocity. Historically, we’ve used OpenTSDB to ingest and serve metrics data. However, as Pinterest grows, the number of services has also increased from hundreds to thousands, generating millions of data points every second, and growing. While OpenTSDB worked fine functionally, its performance degraded as Pinterest grew, causing operational overhead (e.g., serious GC issues and frequent HBase crashes). As a solution, we developed Goku — our in-house time series database with OpenTSDB-compliant APIs, written in C++, to support efficient data ingestion and expensive time series queries. Two-level sharding with Goku Time Series Data Model Time Series Data Goku follows OpenTSDB’s time series data model. A time series is composed of a key and a series of numeric data points over time. key = metric name + a set of tag key-value pairs. E.g., “tc.proc.stat.cpu.total.infra-goku-a-prod{host=infra-goku-a-prod-001,cell_id=aws-us-east-1}”. data point = key + value. A value is a timestamp and value pair. E.g., (1525724520, 174706.61), (1525724580, 173456.08). Time Series Query Each query consists of part or all of the following: metric name, filters, aggregators, downsampler, and rate option, in addition to start time and end time. 1) An example of a metric name is “tc.proc.stat.cpu.total.infra-goku-a-prod”. 2) Filters are applied against tag values to reduce the number of time series picked up in a query, or to group and aggregate on various tags. Examples of filters Goku supports include: Exact match, Wildcard, Or, Not or, Regex. 3) Aggregator specifies the mathematical way of merging multiple time series into a single one. Examples of aggregators that Goku supports include: Sum, Max/Min, Avg, Zimsum, Count, Dev. 4) Downsampler requires a time interval and an aggregator. The aggregator is used to compute a new data point across all of the data points in the specified interval. 5) Rate Option optionally calculates the rate of change. For details, see the OpenTSDB Data Model. Challenges Goku addresses many of the limitations of OpenTSDB, including: 1) Unnecessary scans: Goku replaces OpenTSDB’s inefficient scan with an inverted index engine. 2) Data size: A data point in OpenTSDB is 20 bytes. We adopted Gorilla compression to achieve 12x compression. 3) Single-machine aggregation: OpenTSDB reads data onto one server and aggregates it there, while Goku’s new query engine moves computation closer to the storage layer, which enables parallel processing on leaf nodes before aggregating partial results on the root node. 4) Serialization: OpenTSDB uses JSON, which is slow when there are too many data points to return; Goku uses Thrift binary instead. Architecture Storage Engine Goku employs Facebook’s Gorilla in-memory storage engine to store the most recent data, from the past 24 hours. Here is a brief introduction to the storage engine; if you want to know the details, please check the Gorilla paper and its GitHub repository. As illustrated above, in the storage engine, time series are divided into different shards called BucketMaps. Each time series is also divided into buckets whose duration can be configured (internally we use a 2-hour bucket). In each BucketMap, every time series is assigned one unique id and linked to one BucketTimeSeries object. 
The BucketTimeSeries holds the most recent modifiable buffer bucket and the storage ids of immutable data buckets in BucketStorage. After the configured bucket time, data in the BucketTimeSeries will be written to BucketStorage and become immutable. To achieve persistence, bucket data are written to disk as well. When Goku restarts, it will read data from disk into memory. We use NFS to store the data, which enables easy shard migration. Sharding & Routing We use a two-layer sharding strategy. First, we hash the metric name to determine which shard group a time series belongs to. We follow with a hash on the metric name + tag key-value sets to determine which shard in that group the time series is in. This strategy ensures data will be balanced across shards. Meanwhile, since each query only goes to one group, the fanout remains low, which reduces network overhead and tail latency. In addition, we can scale each shard group independently. Query Engine Inverted Index Goku supports querying by specifying tag keys and tag values. For example, if we want to know the CPU usage of one host, host1, we can send the query cpu.usage{host=host1}. In order to support these kinds of queries, we implemented an inverted index. (Internally it’s a hashmap from search term to a bitset.) The search term can be either a metric name like cpu.usage or a tag key-value pair like host=host1. Having this inverted index engine, we can quickly do AND, OR, NOT, WILDCARD and REGEX operations, which also removes many unnecessary lookups compared to OpenTSDB’s original scan-based querying. Aggregation After retrieving data from the storage engine comes the step of aggregation and construction of the final result. We initially tried OpenTSDB, using its built-in query engine. The performance degraded heavily because all the raw data needed to go over the network, and the short-lived objects caused a lot of GC. So we replicated OpenTSDB’s aggregation layer inside Goku. We also pushed the calculation as early as possible to minimize the data on the wire. A typical query flow is as follows: A query from the Statsboard client (Pinterest’s internal metric monitoring UI) goes to any proxy goku instance. The proxy goku fans out the query to the related goku instances within the same group, based on the sharding configuration. Each goku reads the inverted index to get the related time series ids and goes on to fetch their data. Each goku aggregates the data based on the query, i.e. its aggregator, downsampler, etc. The proxy goku does a second round of aggregation after gathering results from each goku and returns them to the client. Performance Compared with the previously used OpenTSDB/HBase solution, Goku performs much better in almost all aspects. Here is another latency graph focusing on high-cardinality queries before and after using Goku. What’s next Disk-based storage for long-term data Goku ultimately will support queries longer than one day. For longer-term queries, like one year, we don’t put as much emphasis on what happened at one second, but rather look at the overall trend. Therefore, we’ll do downsampling and compaction to merge hourly buckets into longer-term buckets, which reduces the data size and improves query performance. Goku Phase #2 — Disk based: Data includes index data and time series data Replication Currently we have two goku clusters doing double writes. This setting gives us high availability: when there are issues in one cluster, we can easily switch traffic to the other. However, because the two clusters are independent, it’s hard to ensure data consistency. 
E.g., if writes to one cluster succeed but fail on the other, the data will become inconsistent. Another drawback is that failover is always at cluster-level granularity. We’re working on log-based intra-cluster replication to support master-slave shards. This will improve read availability, preserve data consistency, and allow failover at shard-level granularity. Analytics Use Case Analytics is widely needed across all industries, and Pinterest is no exception. Questions like experiment results and ads campaign performance are being asked every minute. Currently we mainly use offline jobs and HBase for analytics purposes, which means no real-time data and a lot of unnecessary pre-aggregations. Because of the time series nature of the data, Goku could easily fit this use case and provide not only real-time data but also on-demand aggregation. We’ll continue exploring the use cases for Goku. If projects like this are of interest to you, check out our Careers page! Acknowledgements: Huge thanks to Brian Overstreet and Wei Zhu from the Visibility team, and to Paul Bindels and Chiyoung Seo, for helping roll out Goku and for their design advice.
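To make the two-level sharding idea described above concrete, here is a minimal sketch in Python (the hash function, group count, and shards-per-group values are illustrative placeholders, not Goku's actual configuration):

import hashlib

NUM_GROUPS = 4              # illustrative values only
SHARDS_PER_GROUP = 16

def stable_hash(s):
    # a stable hash so routing does not change between processes
    return int.from_bytes(hashlib.md5(s.encode()).digest()[:8], 'big')

def route(metric, tags):
    """Return (shard_group, shard) for a time series key."""
    # level 1: the metric name alone picks the shard group,
    # so a query on one metric fans out to a single group
    group = stable_hash(metric) % NUM_GROUPS
    # level 2: metric name + sorted tag key-value pairs pick the shard,
    # spreading the series of one metric evenly within that group
    key = metric + ''.join(f'{k}={v}' for k, v in sorted(tags.items()))
    shard = stable_hash(key) % SHARDS_PER_GROUP
    return group, shard

print(route('tc.proc.stat.cpu.total.infra-goku-a-prod',
            {'host': 'infra-goku-a-prod-001', 'cell_id': 'aws-us-east-1'}))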
https://medium.com/pinterest-engineering/goku-building-a-scalable-and-high-performant-time-series-database-system-a8ff5758a181
['Pinterest Engineering']
2018-09-14 21:37:53.010000+00:00
['Big Data', 'Engineering']
Create Interactive Dashboards with Panel & Python
Do you want to create flexible and powerful dashboards with pure Python? In this tutorial, I will go through creating a simple, interactive dashboard with Panel. We will use a Jupyter notebook to develop the dashboard and will serve it locally. Panel is an open-source Python library that lets you create custom interactive web apps and dashboards by connecting user-defined widgets to plots, images, tables, or text. Basic Interactions in Panel The easiest way to create interaction with any dataset or plot in Panel is to use the interact function. It automatically creates an interactive control and can also be a flexible and powerful way to create dashboards. Let us see a simple example using Panel’s interact. Here we create a function that returns the square of a number, and we call Panel to interact on the function:

import panel as pn

def f(x):
    return x * x

pn.interact(f, x=10)

Panel Interact. Now we have an interactive control where we can drag a slider, and the result changes as we change the x value. The Panel interact function is easy to use and works well with controls, data tables, visualizations, or any Panel widget. As this is high level, you do not see what is going on inside, and customizing the dashboard layout requires indexing. However, it is a clear and robust starting point. If you want to have more control over your dashboard and customize it entirely, you can use either reactive functions or callbacks in Panel. In our dashboard, we will use the more powerful depends function. Panel Components Before we go through creating the dashboard, let us see the three most essential components in Panel that we will use in this dashboard: Pane: A Pane allows you to display and arrange plots, media, text, or any other external objects. You have almost all Python plotting libraries’ functionality here, so you can use your favorite plotting library (Matplotlib, Bokeh, Vega/Altair, HoloViews, Plotly, Folium). You can also use markup and embed images with Pane. Widget: Panel provides a consistent, wide range of widgets. You have different types of widgets, including option selectors for single and multiple values (i.e., select, radio button, multi-select, etc.) and type-based selectors (i.e., slider, checkbox, dates, text, etc.). Panel: A Panel provides flexible and responsive customization of dashboard layouts. You can have either fixed-size or responsively resizing layouts when you are building dashboards. In Panel, you have four main types: Row — for horizontal layouts; Column — for vertical layouts; Tabs — for selectable tab layouts; GridSpec — for grid layouts. In the next section, I will create a dashboard with interactive controls and data visualizations, and customize the dashboard layout. Flood Dashboard with Panel In this example, we will use a global flooding dataset — Global Active Archive of Large Flood Events — from the flood observatory unit at Dartmouth. Let us read the data and explore it first before we begin constructing the dashboard.
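Before loading the flood data, here is a minimal sketch of how a widget, a depends-bound function, and a Column/Row layout fit together (the slider, function, and titles are illustrative placeholders rather than the actual flood dashboard):

import panel as pn
pn.extension()

# a widget: an integer slider for the year of interest (placeholder range)
year = pn.widgets.IntSlider(name='Year', start=1985, end=2020, value=2010)

# a reactive function: re-runs whenever the widget's value changes
@pn.depends(year)
def view(year):
    # a real dashboard would return a plot or table filtered to this year
    return f'### Floods reported in {year}'

# a layout: panes, widgets, and reactive functions arranged in rows and columns
dashboard = pn.Column('# Flood Dashboard', pn.Row(year, view))
dashboard.servable()   # or dashboard.show() to serve it locally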
https://medium.com/spatial-data-science/create-interactive-dashboards-with-panel-python-9ac13c84b227
[]
2020-12-15 16:58:57.154000+00:00
['Python', 'Dashboard', 'Data Visualization', 'Data Science', 'Panel']
Stop Filling Up All the Empty Space in Your Life
“Grace fills empty spaces.” -Simone Weil I’ve been on a road trip with my son and his girlfriend. It’s been amazing and I’m glad I have a relationship with a young adult son who wants to spend double-digit hours in a car with his mom. Like all good vacations, it’s been filled with car music and reminiscing conversations, packed with things to do — this trip complete with an evening concert at the Red Rocks Amphitheater in Colorado. Several bucket list items have been checked off and I am enjoying the foray from the beaten path. I’ve been overdue for some inspiration and a much needed time away from my daily routine. There is one part of my routine, however, that I miss and need daily. It’s my quiet time. I live in a season in my life where I have the luxury of extended periods of time and quiet — a luxury I never had working full time or caring for young children. I work for myself so I choose the times I want to interact and guard the times I set aside to be isolated and alone. On a typical day, I spend at least half of it in silence other than the tapping on my keyboard as I write like I am now or the barking beagle who spots a barn cat where she shouldn’t be. I do not turn on the television, radio, news shows, or background music. I need silence because I need to listen. Silence provides the gaps in life and the stillness my mind craves. The spaces matter too When I am bombarded with constant input my head feels full and stuffed from ear to ear. Thoughts are not my own (and certainly not my muse’s) and my thinking is muddled. The act of receiving good words to write is not about seeking them but listening to them. I listen, I pay attention to a sentence that shoots fully formed into my mind, I take it down and then listen for where it wants to go next. It’s margin. it’s empty space. It’s white space, whatever you want to call it, but what I’ve found is that grace fills those empty spaces. I have no editorial calendar. I have ideas and story starters, but in between the ideas and a finished written product that connects with readers is a vast open space that only grace can fill. When you think about it, grace is the filler for all we lack. It fills when relationships sustain hurt and a gap is created. Invited, grace can bridge the folded arms and hurt hearts and bring us back to each other. Grace fills the space between the love of a child and parent where words are difficult and perspectives are so polarized they seem almost insurmountable. Grace tethers us without words. It keeps us from floating away from each other. But grace requires space in which to manifest itself. It’s a lot like letting go. When we refuse to figure out our next step, it is revealed to us. I once heard a preacher explain theological grace as what fills the gap between God’s ideal for mankind (love without judgment, forgive without measure, and give without reluctance) and our feeble attempts to be like Him. Wherever you fall short, grace fills the gap — no matter how wide or deep or how far you feel you’ve fallen from the ideal. It is fluid, it is changing, it is versatile, connecting dots and hearts when language cannot match up. Grace is a love language that cannot be perfected, only received and practiced. Grace needs empty spaces to show up and so often we lack grace because we try to fill the gaps on our own. We pour in pride — I am right you are wrong, move toward me. We try to bridge with nagging, coercing, manipulating and controlling but the gap only widens, creating unbreachable chasms. 
We despair and forget to ask for grace. There is a reason Jesus always asked anyone who approached him with a variation of the question, “What do you want?” Not because he was making them beg, and not to embarrass them, but to reveal that on their own they could not undo the wrong, or tip justice in their favor, or fix a broken body. By asking for grace we reveal the truth that grace is always what we all need. I believe that the reason we are a noise-filled, driven society that constantly pumps words into our ears and images into our eyes is because the oldest trick of life still drives us. We’re too proud to sit with gaps. We believe, beyond the obvious evidence we face daily, that we can fix what’s broken, master our inadequacies, and control our lives. It’s a chimera we stubbornly cling to with boastful words and try to show up when what we need to do is shut up and sit down. Maybe the reason we’re so afraid of empty spaces, silence, unanswered questions, and blocks of unproductive time is because we still can’t receive grace. Let grace be enough Grace is bantered about in certain circles but unless we take time to allow it into our lives it’s just a thing we say without real application. If grace is enough to fill the empty spaces why do we stuff and cram so much into all areas of our lives? Words, opinions, activities — we surround ourselves with energies that in the long run drain instead of fill and lift. We see any moment of downtime as a lapse in productivity and productivity is just a fancy word for life on the hamster wheel. But it’s addictive and shiny at first and fills our need to think we are important and, well, important people don’t need grace. The closets are full, calendars crammed, mental space stuffed with nonstop stimulation and information. We’ve forgotten how to be still, our children never experience the freedom of boredom, we book from end to end and write in all the margins. Where can grace live? “I think we need to learn how to tolerate more empty spaces.” — Sarah Ban Breathnach Maybe we don’t really need grace if we can fill all the gaps on our own. The be more, hustle-to-get-there lifestyle doesn’t account for grace, does it? It screams you were made for more, set more goals, raise the bar. You don’t need grace when you’re the craftswoman of your own life. You do need grace when you’re allowing yourself to be transformed and shaped into the image of a holy God. Grace and space are prerequisites and ongoing requirements in the journey of transformation. God isn’t impressed with resumes. “Do not consider his appearance or his height, for I have rejected him. The Lord does not look at the things people look at. People look at the outward appearance, but the Lord looks at the heart.” (1 Samuel 16:7 NIV) Check your gaps, cracks, and spaces. Instead of trying to fill and caulk them, explore what it would be like to acknowledge them and then invite Grace to fill them.
https://medium.com/koinonia/stop-filling-up-all-the-empty-space-in-your-life-56c14a9be1b
['Mary Gallagher']
2019-06-25 01:27:09.895000+00:00
['Creativity', 'Writing Tips', 'Intentional Living', 'Self-awareness', 'Life Lessons']
Keeping the Heat On
When it comes to emissions and energy use, we have a tendency to point the finger at large companies and governments to make changes, often falling back on the argument that there is no point in us changing until they do. The importance of individual actions is typically overlooked in the climate conversation but the choices we make in areas like transport, diet and household energy use have enormous impacts. In order to facilitate true societal transformations, individuals need to be at the heart of it. How we use energy in the home contributes significantly to global emissions. In the UK, 40% of total emissions come from households and around half of household emissions are from electricity and heating. Although these choices are so important, they remain an area that lags behind. Switching to green energy providers, installing smart meters to monitor consumption, and upgrading current technologies are all ways to reduce our individual footprints. However, this is easier said than done. Many net-zero plans by towns, cities and countries rely on the assumption that buildings will be retrofitted. A 2015 study in Michigan focused on offering to retrofit 7,000 eligible households. The process was beneficial to energy efficiency and free, yet less than 1% of households accepted the offer. Even after significant efforts to inform households of the benefits and zero costs, acceptance of the offer only increased to 6%. These results are interesting because they introduce doubt about useful incentives to drive behaviour change. In some circumstances — like subsidies for electric vehicles in Norway — financial benefits act as a useful incentive, but it’s not always the most important factor. Another US energy efficiency study illustrates this point. Researchers tested the use of different messaging strategies on energy consumption over a year. Some households were sent messages that gave insights into how much money could be saved, others were sent messages about the impact of energy usage on the environment and children’s health. Those that received messages about monetary benefits consumed the same amount of energy. Meanwhile, households receiving messages about how the pollutants from energy consumption could lead to respiratory illness in kids reduced their consumption by 18 to 30%. Again, this legitimises the view that making climate change a personal and emotional issue helps to facilitate behavioural change. A similar approach was taken by research in the UK. In 2008, energy performance certificates (EPC) were introduced across the EU whenever a building was built, sold, or rented. Buildings were given grades (A-G) based upon a score between 1–100. Energy efficiency labels followed a colour-coded system that showed the current efficiency compared with potential, as well as the given grade. When the researchers analysed 16,000 homes, they found that many properties were clustered at the lowest score of certain categories rather than having a normal distribution as expected. For example, they noticed many buildings had a score of 55, the lowest possible score for the D grade. This was especially prominent in homes that had recently been on the market. They concluded that many sellers had made small energy efficiency changes to move from an E grade to a D, with the expectation this would be reflected in higher property value. In all these studies, it shows that better education is required to understand energy efficiency and this should certainly be a major focus. 
They also give psychological insights into how incentives drive behaviour. Whether that’s an immediate financial benefit, higher expected returns in the future from selling your house or learning how health can be improved. It’s very useful to know why people make the choices they make and what would entice them to change. Having said that, it’s especially difficult to influence people’s home energy use. It’s much easier to understand the impact of your behaviour when you drive or fly, or when you eat certain foods, compared with when you heat or cool a building. They are much more tangible. We can see the smoke from car exhaust pipes and the fuel tank being filled when we sit on the runway, and we can touch the foods we eat. With heating, we flick a switch or turn a dial, and the temperature changes via pipes we almost never see. To improve this, some have suggested providing showrooms to introduce consumers to smart heating, developing a database to show real-world energy performance and creating tools to compare up-front and running costs to better inform the public. Regardless, we can’t turn our back on individuals as changemakers. The control we have over our habits is enough to make a significant global change. Taking the initiative in these areas could open the door for larger changes at the corporate and government level to follow our lead.
https://medium.com/swlh/keeping-the-heat-on-51d32d79bfdb
['Marcus Arcanjo']
2020-04-05 18:33:30.375000+00:00
['Energy', 'World', 'Psychology', 'Climate Change', 'Environment']
5 Great MLOps Tools to Launch Your Next Machine Learning Model
5 Great Tools for MLOps 1. MLflow Photo from MLflow. With tools such as MLflow, data professionals can now automate sophisticated model tracking with ease. MLflow debuted at the 2018 Spark + AI Summit and is an open source project originally created by Databricks. MLflow allows data scientists to automate model development. Through MLflow, the optimal model can be selected with greater ease using a tracking server. Parameters, attributes, and performance metrics can all be logged to this server and can then be used to quickly query for models that fit particular criteria. Airflow and MLflow are quickly becoming industry staples for automating the implementation, integration, and development of machine learning models. Although MLflow is a powerful tool for sorting through logged models, it does little to answer the question of what models should be built. This is a more difficult question because, depending on your model, training may take a sizable amount of resources, hyperparameters could be unintuitive, or both. Even these problems can, in part, be automated away. 2. Pachyderm Photo from Pachyderm. Managing your data pipelines, models, and data sets is a complex process with a lot of moving parts. Pachyderm aims to simplify that process and make it both traceable and reproducible. Pachyderm is a data science platform that combines end-to-end pipelines with data lineage on Kubernetes. The platform works at enterprise scale, providing a foundation for any project. The process starts with data versioning combined with data pipelining, which results in data lineage and ends with deploying machine learning models. It tracks not only your data revisions but also the associated transformations. Furthermore, Pachyderm clarifies transformation dependencies as well as data lineage. It delivers version control for data using data pipelines that keep all your data up to date. 3. Kubeflow Photo from Kubeflow. Kubeflow is a machine learning platform that manages deployments of ML workflows on Kubernetes. The best part of Kubeflow is that it offers a scalable and portable solution. This platform works best for data scientists who wish to build and experiment with their data pipelines. Kubeflow is also great for deploying machine learning systems to different environments in order to carry out testing, development, and production-level service. Kubeflow was started by Google as an open source platform for running TensorFlow. It began as a way to run TensorFlow jobs via Kubernetes but has since expanded to become a multi-cloud, multi-architecture framework that runs entire ML pipelines. With Kubeflow, data scientists don't need to learn new platforms or concepts to deploy their applications or deal with networking certificates, etc. They can deploy their applications as simply as they would launch TensorBoard. 4. DataRobot Photo from DataRobot. DataRobot is a very useful AI automation tool that allows data scientists to automate the end-to-end process of building, deploying, and maintaining AI at scale. The framework is powered by open source algorithms that are available not only in the cloud but also on-premises. DataRobot allows users to build AI applications easily and quickly in just ten steps. The platform includes enablement models that focus on delivering value. DataRobot works not only for data scientists but also for non-technical people who wish to maintain AI without having to learn the traditional methods of data science. 
So, instead of having to spend loads of time developing or testing machine learning models, data scientists can now automate the process with DataRobot. The best part of this platform is its ubiquitous nature: you can access DataRobot anywhere, on any device, in multiple ways according to your business needs. 5. Algorithmia Photo from TechLeer. Lastly, one of the most popular MLOps tools is definitely Algorithmia. The framework helps productionize machine learning across a diverse set of IT architectures. The service enables the creation of applications that use community-contributed machine learning models. Besides that, Algorithmia offers access to advanced development of algorithmic intelligence. Currently, the platform has over 60,000 developers and 4,500 algorithms. Founded in 2014 by two Washington-based developers, Algorithmia currently employs 70 people and is growing rapidly. The platform not only allows you to deploy models from any framework or language but also to connect to most data sources. It is available on both cloud and on-premises infrastructures. Algorithmia enables users to continuously manage their machine learning lifecycles with testing, securing, and governing. The main goal is to achieve a frictionless route to deployment, serving, and management of machine learning models.
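To make the MLflow tracking workflow described above more concrete, here is a minimal sketch of logging parameters and metrics to a tracking server and then querying for runs that fit particular criteria. The experiment name, parameter names, and metric threshold are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of MLflow experiment tracking.
# Names and values are illustrative assumptions.
import mlflow

mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    # Log the hyperparameters used for this training run.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    # Log a performance metric once the model has been evaluated.
    mlflow.log_metric("val_accuracy", 0.91)

# Later, query the tracking store for runs that fit particular criteria.
best_runs = mlflow.search_runs(
    filter_string="metrics.val_accuracy > 0.9",
    order_by=["metrics.val_accuracy DESC"],
)
print(best_runs[["run_id", "metrics.val_accuracy"]].head())
```

By default the runs land in a local mlruns directory; pointing mlflow.set_tracking_uri() at a remote tracking server gives the shared, queryable store described above.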
https://medium.com/better-programming/5-great-mlops-tools-to-launch-your-next-machine-learning-model-3e403d0c97d3
[]
2020-10-14 16:09:05.304000+00:00
['Machine Learning', 'Big Data', 'DevOps', 'AI', 'Programming']
4 Lessons Running Taught Me About Myself
Photo by Andrea Leopardi on Unsplash “Pick ‘em up, put ‘em down.” That’s an oft-repeated line in the book “The Long Walk” by Stephen King, writing as Richard Bachman. For the unfamiliar, the book is set in a Hunger Games-esque society where all boys, from the age of 16, can enter into The Long Walk. There’s a generous prize for doing so, but the punishment for failure is death. How long is the walk? Until there’s only one walker left. Let me be clear, I’m under no illusions that my runs are comparable. After all, I can stop and go home whenever I like. That’s precisely what makes my runs challenging — by having no external motivation to keep going, the decision to push on or call it a day is entirely mine. So, I think of that line regularly: pick ‘em up, put ‘em down. Pick ‘em up, put ‘em down. One foot in front of the other. Again and again. I like this little mantra. First, it’s instructional — one foot in front of the other, repeat. Second, it’s easy — I can put one foot in front of the other with no thought, and hey, am I actually tired, or am I just a little bored and want to quit? Third, it’s a reminder that I’m in control. The commitments we make to ourselves are often the ones we’re the quickest to break. The new diet, waking up early, reading more books, all are rapidly discarded at the slightest hurdle — real or perceived. I’m guilty of this. I have broken many a promise to myself. Sometimes I wonder where I’d be today if I’d stuck with them, which can be demoralising or motivation to stick with what I’m doing now. How running is changing that I’m in the early stages of training for my first marathon. This is an interesting time for me because, on the one hand, I’ve got no choice but to train. On the other hand, it’s almost 10 months away, so the pressure to execute the plan perfectly isn’t really there. I’m operating on the basis that the earlier I start, the better off I’ll be. So, I do have a plan, and I’m following it. On my last run, as I was in my third and final mile for that day, I had a sudden realisation. Running wasn’t just giving me physical health benefits, it was teaching me who I am, too. 1. I like to quit. Motivation porn, self-help books, inspirational stories, all of these are around us 24/7 and as we consume them, it’s easy for us to believe that we, too, have what it takes to succeed. That like the author or speaker delivering their story of courage, we would also rise up to a challenge, or in the face of adversity, becoming who we are destined to be. Alas, no. In my case, when my legs start to ache, I slow down. Maybe I’ll walk. Is that the start of a stitch in my side? Better take it easy. Ugh, another half mile? I’ll just take a breather now. But those motivational books and videos can be helpful. I draw on some of the more powerful lessons and say to myself, “Each of these moments is an opportunity for me to improve or stagnate. If I push on, I grow.” So, with the ache growing in my quads, I keep going. Pick ‘em up, put ‘em down. 2. I have more to give. In his autobiography Can’t Hurt Me, David Goggins talks about The Governor. He explains that this is our brain’s way of keeping us comfortable, and it does it by telling us we’re maxed out, to stop, in order to avoid injury. But, he says, the Governor raises its head when we’re only at 40% of our capacity — or in other words, when we think we’ve done all we can, we haven’t even dug half of our well. This instantly became one of the most profound lessons for me. 
It seared itself into my brain, and I recall it almost every time I’m exercising. I was in the gym the very next day after reading it, performing an uncomfortable exercise called a banded hip ladder. You take a resistance band, step on it and pull the other end up to your chin. Then you take 10 large steps to the right, and 10 large steps to the left. Then 9, 8, 7, all the way down to 1. There is no rest until you’ve finished that final 1. My governor used to start complaining when I was on 8, so I’d rest, pause, and shake out my legs. Following Goggins’ advice, I simply told myself, “This is my governor. I have plenty left.” And what do you know? I plowed right through that ladder, going from 10 to 1 straight through with no breaks. I now do the same thing on my runs. 3. The blocks are mental. I’m aware that boredom plays a huge part in my desire to give up. For example, sometimes if I’m running a loop that requires me to do the same street twice, I’ll think, “Oh no, I can’t do that road again.” Or everything will feel like a huge effort. “I’ve still got a mile and a half left?” “I’m only halfway?! To hell with this.” Again, I have a choice in these moments: stagnate, or grow. Running makes this decision very stark because it’s literally in the moment. Of course, we’d all choose to grow over time. To read more books in a year. To move the needle a little bit. But in the actual moment, right now, keep going or give up, do you truly want to get better or do you want to wimp out? Given that our only real competition is ourselves, this question fires me up. I will not give up and lose to myself. I’ll keep going, thank you very much. And as the legs keep moving like determined pistons, I realise that I could do this extra distance all along — my mind was just telling me otherwise. 4. Growth is addictive. Except for injury, nobody gets home from a run wishing they hadn’t done it, feeling worse than before, or like they wasted their time. On the contrary, we feel great and accomplished. And for me, as my lungs feel cleansed and my legs feel thoroughly used, I check my stats. A little further than last time, or a little quicker, or a simple reflection on how fun it was. And then? Well, I start looking forward to the next run, of course. Pick ‘em up, put ‘em down. Repeat.
https://medium.com/runners-life/4-lessons-running-taught-me-about-myself-1f8e71d3f07b
['Richard White']
2020-12-14 19:22:24.475000+00:00
['Fitness', 'Running', 'Health', 'Inspiration', 'Motivation']
How Good Is Amazon Translate?
As part of my Medium article An Overview of Amazon Translate, one of the questions is how good is Amazon Translate? It isn’t possible without a lot of time to check all 55 supported languages, so I am focusing on English to Spanish; Spanish to English; English to French; and, French to English. I used one of the paragraphs in this article for this text. The paragraph reads: "Another example is real-time translation for support and help-desk services. Users are not limited by their knowledge of the language your corporation uses. A Spanish speaking user (for example) can type in Spanish, the text is translated to English using Amazon Translate and displayed to the support agent. The support agent can then respond in English and the text is then translated to Spanish. Any text communication could be processed using Amazon Translate before sending it to the desired audience." This text was translated to both French and Spanish and provided the reviewers. English to French The French text as translated by Amazon Translate reads: "Un autre exemple est la traduction en temps réel pour les services d'assistance et d'assistance. Les utilisateurs ne sont pas limités par leur connaissance de la langue utilisée par votre entreprise. Un utilisateur parlant espagnol (par exemple) peut taper en espagnol, le texte est traduit en anglais à l'aide d'Amazon Translate et affiché à l'agent de support. L'agent de support peut alors répondre en anglais et le texte est ensuite traduit en espagnol. Toute communication textuelle peut être traitée à l'aide d'Amazon Translate avant de l'envoyer à l'audience souhaitée." This text was provided to the translator to translate to English. This way we can compare how close the translation is to the original English text. The translated text was provided to the reviewer who provided this English translation: "Another example is translation in real time for the assistance services and assistance. Users are not limited by their knowledge of the language used by your business. A user that speaks Spanish (for example) can type in Spanish, the text is translated to English with the help of Amazon Translate and displayed to the support agent. The support agent can then respond in English and the text is then translated to Spanish. All text communication can be processed with the help of Amazon Translate before being sent to the desired audience." If we create a visual comparison of the original and translated English, we can see the differences. Comparing the English to French Translation If you read both the original and the translated versions, you will see they read about the same, despite the highlighted differences. The translator had the following observation about the English to French translation: “I find that the translation service did quite a good job. In terms of general understanding, it translated the meaning of the English passage well and a French speaker is able to understand the general idea of the paragraph and know what is being talked about. Certain technical terms (like assistance services which I’m assuming was meant to be Help Desk?) did not translate correctly. Usually in French we use the term “centre de soutien” or “centre d’assistance” is also used. Other than this, I found that the translation was quite accurate and easy to understand.” [1] This highlights an important point. When writing to translate into a different language, it is important to be clear and concise. 
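For readers who want to reproduce this kind of test, here is a minimal sketch of sending the opening sentence of the sample paragraph to Amazon Translate with the AWS SDK for Python (boto3). The region name is an illustrative assumption, and AWS credentials are assumed to be configured in the environment.

```python
# Minimal sketch of an English-to-French call to Amazon Translate via boto3.
# The region is an illustrative assumption; credentials come from the environment.
import boto3

translate = boto3.client("translate", region_name="us-east-1")

text = (
    "Another example is real-time translation for support and "
    "help-desk services."
)

response = translate.translate_text(
    Text=text,
    SourceLanguageCode="en",
    TargetLanguageCode="fr",
)
print(response["TranslatedText"])
```

The Custom Terminology feature mentioned below can be applied to the same call by passing a list of terminology names via the TerminologyNames parameter.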
It is also important to validate the translation of specific terms like “help-desk”, can translate to another language, but may not be as effective as using the term appropriate for that language (which is where the Custom Terminology feature of Amazon Translate can be valuable). French to English This test involves having the reviewer create a sample paragraph in their language and provide the English translation. The source language text was translated into English and compared against the provided translation. The provided French text is: Bonjour, j’ai acheté un abonnement à votre site web la semaine passé et j’ai quelques problèmes avec mon compte. Je ne me souviens pas de mon mot de passe. Existe-t-il un moyen de réinitialiser mon mot de passe et de récupérer accès à mon compte? After processing this text through Amazon Translate, the English text reads: "Hello, I bought a subscription to your website last week and I have some problems with my account. I don't remember my password. Is there a way to reset my password and recover access to my account?" The English version provided by the translator is: "Hello, I bought a subscription to your website last week and I have some problems with my account. I do not remember my password. Is there a way to reset my password and regain access to my account?" There is a very high degree of similarity between these two translations. The visual comparison showing the highlighted differences is here. Comparing the French to English Translation The differences between the two are almost insignificant. Using “don’t” instead of “do not”, and “recover” instead of “regain” are good examples of how the translation can be different, but the actual communication would be very efficient. English to Spanish After processing by Amazon Translate the Spanish text reads: Otro ejemplo es la traducción en tiempo real para servicios de soporte y asistencia. Los usuarios no están limitados por su conocimiento del idioma que usa su empresa. Un usuario que habla español (por ejemplo) puede escribir en español, el texto se traduce al inglés mediante Amazon Translate y se muestra al agente de soporte. El soporte puede responder en inglés y el texto se traduce al español. Cualquier comunicación de texto podría procesarse con amazon Translate antes de enviarla a la audiencia deseada. I sent the translated text to the translator for her to translate back to English. Her translation reads as: Another example is real-time translation for support and assistance services. Users are not limited by their knowledge of the language your company uses. For example, a Spanish speaking user can write in Spanish, the text is translated to English using Amazon Translate and shown to the support agent. The support agent can respond in English and the text is then translated into Spanish. Any text communication could be processed via Amazon Translate before sending it to the desired audience. If we put the two English paragraphs beside each other, we can highlight the words which differ. Differences between Amazon Translate and the Translator There are a few differences between the original English version and the translation from Spanish as indicated by the highlighted words. The translated text has 5 different words, 1 in the wrong place (“for example”), and one word which existed in the original text, but missing from the translation. Spanish to English This test involves having the translator create a sample paragraph in their language and provide the English translation. 
The source language text was translated into English and compared against the provided translation. The translator provided this Spanish text for processing through Amazon Translate: Como parte de sus sesiones temáticas, el CIDI definió temas estratégicos que incluyeron experiencias de alianzas público privadas; los desafíos y las oportunidades de alcanzar una educación de calidad; experiencias para reducir la pobreza y promover la inclusión social; la implementación del Marco de Sendai y del Plan Regional en la Reducción del Riesgo de Desastres; y el papel que puede jugar la OEA para apoyar a los Estados a alcanzar una energía asequible, confiable y sustentable. Asimismo, se exploraron alianzas con la Fundación Panamericana para el Desarrollo y la Fundación para las Américas. The English text after translation is: As part of its thematic sessions, CIDI defined strategic themes that included experiences of private public partnerships; challenges and opportunities to achieve quality education; experiences to reduce poverty and promote social inclusion; implementation of the Sendai Framework and the Regional Plan for Reducing the Disaster Risk; and the role that the OAS can play in supporting States to achieve affordable, reliable and sustainable energy. Partnerships were also explored with the Pan American Development Foundation and the Foundation for the Americas. I provided the English text back to Victoria for comparison with her translation. The translator’s version of the Spanish paragraph provided is: As part of its thematic meetings, CIDI identified strategic topics, which included experiences with public-private partnerships; challenges and opportunities in ensuring quality education; experiences in poverty reduction and fostering social inclusion; implementation of the Sendai Framework and the Regional Plan of Action for Disaster Risk Reduction; and the potential role of the OAS in assisting states in ensuring access to affordable, reliable, and sustainable energy. Partnerships were also explored with the Pan American Development Foundation and the Trust for the Americas. If we put the two English paragraphs beside each other, we can highlight the words which differ. Differences between Amazon Translate and the Translator It looks from the highlighted words like Amazon Translate didn’t do a great job at translating the sample paragraph from Spanish to English. There are 22 different words in the translated Spanish than what was provided by the translator. If you read the translator’s version first and then read the Amazon Translate version, you will see they are closer in meaning than it would appear. Conclusion Amazon Translate did pretty well in the translation samples we examined. This was by no means an exhaustive study of the capabilities of amazon Translate. The intent was to demonstrate that while there are syntactic differences in the translated text, the message appears to be presented. That being said, whether or not to use Amazon Translate for any specific project should depend on your unique situation. These factors should be considered in your decision: When you need near real-time translation, it is appropriate to use Amazon Translate. When 100% accuracy is less important than ”getting the point across”, Amazon Translate makes sense. For long, complex documents where accuracy is more important, the use of a professional translator is more appropriate. Text involving cultural specific language, events, or history, may be more appropriate for a professional translator. 
Text which is "short-lived", such as a chat session or e-mail, is suitable for Amazon Translate. It is important to consider that issues could arise because people may not always type what they mean. Text which is intended for storage in an archival system or used for reference by others may benefit from a professional translator. Be careful of redundant phrases. Be clear and concise. Validate the translation of specific terms like "help-desk": such a term can be translated into another language, but the result may not be as effective as using the term appropriate for that language. Acknowledgments Thank you to Chanelle Dupuis for her assistance with the English-French part of this evaluation. Ms. Dupuis is currently working toward her Ph.D. in French Studies at Brown University, and is fluent in French, Spanish, Portuguese, and English. Thank you to Victoria Martinez Adalid for her assistance with the English-Spanish part of the evaluation. Ms. Adalid is a professional English-Spanish translator. The contributions made by these professionals helped greatly with the content in this research. References [1]: Email conversation with Chanelle Dupuis, July 2020. A Deep Dive into Amazon Polly A Five Overview of AWS Transcribe Amazon Comprehend Amazon Polly: Bringing Audio to my Medium Articles Amazon Translate Overview Introduction to Amazon Translate Digital Training What is Machine Translation? Rule Based Machine Translation vs. Statistical Machine Translation About the Author Chris is a highly skilled Information Technology, AWS Cloud, Training, and Security professional bringing cloud, security, training, and process engineering leadership to simplify and deliver high-quality products. He is the co-author of seven books and author of more than 70 articles and book chapters in technical, management, and information security publications. His extensive technology, information security, and training experience make him a key resource who can help companies through technical challenges. Copyright This article is Copyright © 2020, Chris Hare.
https://medium.com/swlh/how-good-is-amazon-translate-8e9f08b41789
['Chris Hare']
2020-10-23 14:03:13.976000+00:00
['Translation', 'Machine Learning', 'AWS', 'Artificial Intelligence']
There Are Only Two Reasons Writers Struggle to Build an Audience
I used to write guest posts for a personal coach in Hawaii. We’d talk on the phone every week. “Touching base” is the buzzword, but those calls gave me great stories and anecdotes to add to the stuff I wrote for her. The pay was great, but you know what sucked? The crickets. Because once money becomes a non-issue, we want to feel pride. I don’t mean Maslow’s hierarchy of needs, because that’s been debunked years ago, but there’s a pecking order to things. You know? Top of the list is food and shelter. Which means money. But once we have enough money to exist? Other things come into play. Like pride in our work. There’s people who can do work they’re not proud of day after day, but I think it eventually eats your soul. Something inside you dies if you wake up every day and do work you feel absolutely no pride in. I was writing these great stories, but I was writing to crickets and it was ticking me off. Yes, it was nice to get paid. But I wanted to be read, too. One day I asked for her login to Google analytics. And then I got kind of mad at her. So I phoned. I asked her why I’m writing guest posts at a site that gets almost no conversion when she’s a writer for another site that converts at 10x the rate? She said because the site I’m writing for has 250K readers and the other site only has 30K. I laughed like a crazy person. She was the crazy person. Are you serious? Does it matter if they have 250K readers if you get 5 reads? Isn’t it smarter to write for the site with 30K readers, where you get 100 or more reads? Since I had login to Google anyway, I set up goal tracking. Know what else we discovered? All the people that became clients came from the little site. Not the big one. Win win. She got clients. I got people reading my writing. No writer wants to publish to crickets. When push comes to shove, there’s only 2 reasons writers struggle to build an audience. If you’re struggling to build an audience, it’s one or both of these. There Are Only Two Reasons Writers Struggle to Build an Audience Doesn’t matter where you’re writing. Doesn’t matter if it’s your blog or Medium, some other writing site or even guest posting. If you’re struggling to build an audience, it’s one of these two things. 1. Lack of reach People can’t read your post if they don’t see it. If a tree falls in the forest, you know? If you want to build an audience the first thing you need is reach. You can’t build an audience if people aren’t seeing your writing in the first place. They can’t read what they don’t even see. Circulation doesn’t promise reach Sometimes, we make the mistake of thinking lots of followers means good reach. Of course we think that — that’s how advertising has been sold forever. Like when magazines say “reach 250K readers” and stuff like that. On the internet, the number of readers/followers means nothing. Whether they’re responsive is far more important. That was the mistake my client made. She thought it was smarter to write for the site with bigger circulation. Except, they weren’t reading. When I dug in, I could see why. People had to “sign up” to be able to submit stories. It was just a giant aggregator and all those people weren’t there to read. They were there to submit. Wow, that sucks. Same happens here. There are publications with 100K readers or more. Some of them mass publish posts, which means you scroll off the homepage so fast you might as well not have been there. When a story published in those pubs does get reads, it’s usually because the writer has a strong following. 
Topic might be an issue, too People read what interests them. If you write about the mating habits of Abyssinian rabbits, the audience might not be very big. But on the other hand, they might be avid readers, which really helps. Hey — another weirdo that likes that thing I like! You know? If you write about self growth or relationships, there’s a bigger audience for your writing, but also more competition. Know what else kills? When you write about so many topics that some of them alienate people that would like some of your writing. For example, if you write about a topic that appeals to more men, but then you write feminist posts that tick off the men that would like your other posts. That’s probably a lame example. The point is that it’s worth thinking about how our topics intersect. It’s a bit of a juggle to find your own sweet spot. Mostly, the issue is that we’re afraid to let our freak flag fly. We’re worried about “fitting in” and not being perceived as strange. Which is a shame, because it’s the things that make you different than everyone else that make you stand out from the crowd. Titles, people. Titles! It’s easy to say a publication or site doesn’t get good response, but the real trouble might be that your titles just suck. Maybe they “are” seeing it, and you “are” getting reach, but you don’t know it. Because they don’t click. I mean, honestly. One day I saw two posts. One was called “The Tree. A Poem” (or something like that) Underneath it was a post called My Husband Left Me For Another Woman. She Can Have Him. I Just Want My Dress Back.” You want to put money on which got more clicks? I’m not voting for clickbait. But interest. You know? You at least have to be interesting. Make a promise your story will deliver on for gosh sakes. 2. Lack of connection This one is easiest to see at Medium, but it happens everywhere. People are seeing your stuff. You have reach. They might even be clicking. But then you lose them. And you’re not sure why. Well duh. Why do you think? Oh, they left because it was so good? Come on. On Medium, that shows up as a low read rate. Like, your piece has a 75% click rate, but 39% read rate. What does that tell you? Easy. The title got them. The writing lost them. Weak opening is the biggest reason you don’t connect If you watch comedy, you’ll know that sometimes a comedian is facing a tough audience. An audience that doesn’t laugh easy. On the internet, it’s always a tough audience. At least until you find your people. Your people will read anything you write. But finding your people? It’s hard. Even harder if you start weak or boring. Lots of writers sort of “build up” to the good part. That doesn’t work. You need to cut all the build up. Get right to the good part. Because if you don’t, most people won’t stick around long enough to get there. Did the writing deliver what the title promised? Most people have no clue what clickbait really is. They think interesting titles are clickbait. lol. Nope. Guess again. Real clickbait is when the title grabs your interest, but the article is a total letdown. Great title, crap read. That’s where clickbait got the name. It’s like — oh, you just wrote an awesome title to get the click, but the piece was total crap. Buzzfeed used to do that. But they figured out it was hurting them long term, because people got wise to it. Think of your title as an interesting synopsis of the content, ideally with a promise to the reader. Here’s what you get if you read this. And then keep the promise. 
Weak writing skills Having a great story and telling it well aren’t remotely the same. Some people think talking about writing skills means the kind your teacher approved of. lol. No. As the quote says — easy reading is damn hard writing. It’s not about prepositions and sentence structure or any of that. It’s about being interesting. It’s about grabbing interest and hanging on to it. People love to say we should write how we talk. Except most of us have some weird tics when we talk. We stuff in extra words. We drag things out. We get off track. Add fluff and filler. You can get away with that in person. People don’t turn around and leave if your story is boring. On the internet, they do. Slow pace, fluff words and rambling are the most common mistakes. Click. Gone. Hell, they already have 5 other tabs waiting. The solution isn’t to write more. It’s to read more. We’re kind of like sponges. We soak up stuff we’re repeatedly exposed to. If you read stuff that keeps you reading, eventually you’ll learn to improve your own writing. Can I tell you a dirty little secret? If you’re looking for an easy way to make money, writing isn’t it. Go sell something. I promise you, it’s easier. People who write? They write because they can’t not write. It’s in their blood. As Gaga said — we’re born this way. Truth is, if you’re struggling to build an audience, it’s probably a bit of both. It helps to know that. Yes, there are likely places you can become stronger. But don’t forget to look at how the places you’re writing factor in, too. If you can improve reach just a bit, and get a little better at your craft, the growth does happen. That’s a promise.
https://medium.com/the-partnered-pen/there-are-only-two-reasons-writers-struggle-to-build-an-audience-15868eea87c4
['Linda Caroll']
2020-12-04 18:47:05.261000+00:00
['Creativity', 'Writing Tips', 'Self', 'Advice', 'Writing']
Pattern Recognition With Machine Learning
Components of a Pattern Recognition System A pattern recognition system needs some input from the real world, which it perceives with sensors. Such a system can work with any type of data: images, videos, numbers, or text. Having received some information as input, the algorithm performs preprocessing: it segments something interesting from the background. For example, when you are given a group photo and a familiar face attracts your attention, that is preprocessing. Preprocessing is tightly connected with enhancement. By this term, researchers mean an increase in the ability of a human or a system to recognize patterns even when they are vague. Imagine you are still looking at the same group photo, but it is 20 years old. To make sure that the familiar face in the photo is really the person you know, you start comparing their hair, eyes, and mouth. This is when enhancement steps into the game. The next component is feature extraction. The algorithm uncovers characteristic traits that are shared by more than one data sample. The result of a pattern recognition system is either a class assignment (in the case of classification), a cluster assignment (in the case of clustering), or predicted values (in the case of regression).
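The components described above, from feature extraction through to a final class assignment, map naturally onto a small scikit-learn pipeline. This is a minimal sketch: the toy spam-detection data, the TF-IDF feature extractor, and the logistic regression classifier are illustrative assumptions, not part of the original article.

```python
# Minimal sketch of a pattern recognition pipeline:
# feature extraction (TF-IDF) followed by classification.
# The toy data and model choices are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Tiny labelled corpus standing in for real-world input.
texts = ["cheap pills buy now", "meeting moved to friday",
         "win money fast", "lunch at noon tomorrow"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

pipeline = Pipeline([
    ("features", TfidfVectorizer()),       # feature extraction
    ("classifier", LogisticRegression()),  # class assignment
])
pipeline.fit(texts, labels)

# The system's output is a class assignment for unseen input.
print(pipeline.predict(["buy cheap pills today"]))  # -> [1]
```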
https://medium.com/better-programming/pattern-recognition-with-machine-learning-49de621426b6
[]
2020-10-30 15:09:31.101000+00:00
['Machine Learning', 'Speech Recognition', 'Artificial Intelligence', 'AI', 'Programming']
5 Minute Writing Exercise Part 2
Have you ever tried writing in the dark? I like it because it helps me to not edit while I am writing. Writing and editing work best when done separately. When I was taking art classes in college, our professor had us start each class by doing two 5 minute drawing exercises. The first was to spend five minutes continuous contour line drawing. The second was blind continuous contour line drawing. Continuous contour line drawing means to draw the outside edge, the contour, of something without lifting the pencil. Blind continuous contour line drawing is the same but without looking at the page. Our professor explained how important it was to warm up before starting a drawing; how important it was to keep your hand moving. Writing exercises are just as important as drawing exercises. The goal is the same: keep your hand moving. Another important lesson I learned in drawing class was to first sketch my drawing without erasing. To erase when my sketch was complete. These writing exercises are teaching you to first draft your story without editing. To edit when your draft is complete. In lesson 1 you learned about lizard brain and monkey mind. In lesson two, we are going to (temporarily) blind monkey and let lizard roam free. In this lesson, I would like to challenge you to write in the dark. It is also acceptable to write in the dim. It just needs to be dark enough so that you cannot easily see the words you have written. It can be light enough to see the lines on your paper. The tools you will need for this exercise are the same as in lesson 1: Notebook Ink pen Timer Blindfold or dark room If you are very disciplined (I am not) you could forgo the blindfold and just close your eyes. My eyes always creep open seemingly of their own accord. So I sit in a darkened room or use a blindfold. It is important to remember that this exercise is about writing without editing. It isn’t about what you write. It is about how you write it. You may not even be able to read all of what you have written. I often cover words with other words getting to the end of the line and beginning a new line under the first is a challenge when you can’t see. It ends up being an illegible mess. Those line drawings, and especially the blind line drawings looked nothing like the subject being drawn. But the uninterrupted contour drawing isn’t about the picture. It’s about the process. This exercise is about keeping your hand moving. If you can’t see what you’ve written you can’t go back and erase, cross out, or in any way change it. You will get in the habit of writing this way (without simultaneously editing, you don’t need to get in the habit of writing in the dark!) Blind writing is about practicing. Practice makes perfect. Except, I do not believe any artist truly achieves perfection. Every successful artist continues to learn and grow, continues to hone their craft. I recommend doing three 5 minute blind writing exercises before you begin your writing project. Just as with lesson 1, between 5-minute sessions, get some circulation going. Clap your hands, jump up and down, sing lalalala, whatever you need to do to get oxygen flowing to your brain. After you have taken your three deep breaths, open your eyes and set your timer. You may want to set the timer for 6 or 7 minutes, depending on how long it will take you to get blindfolded or turn off the light. Next, all you have to do is begin writing. If you find it hard to start, you can start with a repeating phrase. 
“In the dark” or “I am in the dark” Write this phrase without looking over and over until something else pops up. This is what my blind writing exercise looked like this morning (what I could read of it, anyway): I am in the dark I am writing in the dark my nose itches this blindfold is weird. I used to believe I could make myself completely flat under the covers. I would hide from my mother and get very upset when she found me. I thought she must be a witch. In the dark in the dark My mind wanders in the dark. Tell me what you don’t know. what don’t I know?it is too dark. As you see from the example above, when I got stuck I simply went back to the phrase “in the dark.” What you can’t see are the lines that I wrote one on top of the other. Something about hide and seek, a closet, and Bloody Mary. Do blind writing two or three times and then do an “I am” exercise. And remember, these are warm-ups. It can be fun to focus on these 5-minute exercises, but you DO still have a writing project to get to. Again, please don’t throw away your exercises. We will come back to them for later exercises. If the darkness and writing take you to a painful place, please do not force yourself to continue. Take a break. Take your time. This is your rodeo. Let’s make it the best rodeo ever. One more quick note before you get started: I would love to hear how you are doing with the writing exercise/warm-up lessons. Feel free to leave a response here when you are finished. Are you ready? Close your eyes. Breathe deeply and slowly three times. Open your eyes. Set your timer. Blindfold your self or turn out the light. And…GO! Write for 5 minutes, get up, and jump around. Rinse, repeat.
https://medium.com/the-writers-bookcase/5-minute-writing-exercise-part-2-f29b7f9538c8
['Jonica Bradley']
2019-11-08 11:53:30.729000+00:00
['Writing', 'Work', 'Editing', 'Self', 'Creativity']
Three Types of Business Opportunities with Machine Learning
In practice, of course, a given ML application may fit more than one of the above impact categories. An ML algorithm that can analyze documents, for instance, might be reasonably said to increase employee efficiency by helping them process documents faster. But it might also be viewed as a breakthrough since it allows the company to offer new document analysis products it never could before. Or consider a machine learning algorithm that detects security threats (like guns or knives) in an image generated by a baggage scanner. It may reduce costs by reducing the need for a person to watch a screen constantly. It may increase efficiency by finding more threats. And it may represent a breakthrough by finding threats that humans may have never been able to detect. Furthermore, “breakthroughs” need not necessarily be associated with new products. What if the breakthrough enables a new kind of insight for the C-suite? Such reporting may lead to strategic choices that ultimately lead to increased revenue or lower costs, even if the immediate impact is not as clear. Business impact thinking is not meant to rigidly constrain the way you think about machine learning projects. Think of it instead as a useful guide. You should certainly be aiming in the direction of at least one business impact — but you should be flexible enough to know when you need to change the way you communicate your goals and frame your success. It’s not unheard of for a machine learning project to begin with a goal of reducing costs by automating away entire jobs. But this isn’t always easy. Even if 80% of an employee’s work can be eliminated, the remaining 20% may be stubbornly un-automatable. This doesn’t mean the project was a failure. If the humans can focus on their 20% of the task while being free of the other 80%, they can take on new business. This could represent a massive business efficiency gain — a huge win when the business impact is viewed properly.
https://medium.com/machine-learning-in-practice/three-types-of-business-opportunities-with-machine-learning-2a73c92f9056
['Robbie Allen']
2019-11-25 14:12:38.854000+00:00
['Machine Learning', 'Artificial Intelligence', 'Business', 'Entrepreneurship', 'Technology']
Flutter vs React Native vs Native: Deep Performance Comparison
Flutter vs React Native vs Native: Deep Performance Comparison Let’s compare FPS, CPU, Memory, and GPU performance of popular mobile development tools on everyday life tasks. The story behind the research inVerita and its mobile development team continuously dig into the performance of cross-platform mobile solutions available on the market to answer the question which technology is best Flutter or React Native(or Native) for your product, maybe even career, that’s how Flutter vs React Native vs Native Part I emerged. Yes, it was quite controversial as one can state we weren’t using React Native to perform multiple calculations daily — that might be the case — but in this case, CPU heavy tasks are better performed by Flutter or Native apps. That’s why in this article we decided to research the performance of UI which has a much bigger impact on a daily user of mobile apps. Measuring UI performance is complex and it requires an engineer to implement the same functionality in the same way across every platform. We went with a GameBench as a global testing tool to leave no doubts and make sure we stay objective (it doesn’t change the fact that we truly love Flutter in most aspects:) and still run lots of React Native and Native projects ). GameBench has a lot of space for improvements, but we managed to put every app into a single testing environment with its help which was our goal. Source code is open so please experiment and share your thoughts with us if you wish. UI animations mostly use different tools across different platforms so we narrowed everything to libraries supported by every platform (but one case) or at least we did everything we could to accomplish that. Test results can be different and depend on your approaches to the implementation, we believe that you, as a potentially true expert of specific technology can push your specific set of tools to the limits where it outperforms our numbers and we are happy if you do. Now, let’s have a look at the cases. Hardware info: For our testing purposes, we were using an affordable Xiaomi Redmi Note 5 and iPhone 6s. Repo link: Source code Use case 1 — List view benchmarking We implemented the same UI on Android and iOS with the use of Native, React Native, and Flutter. We also automated scroll velocity with the use of RecyclerView.SmoothScroller on Android. On iOS and React Native we used an approach with timer and programmatically scrolling to position. On Flutter, we used ScrollController to smoothly scroll over the list. In each case, we had 1000 items in the list view and the same scrolling time to reach the last list element. In each of these cases, we used image caching with different libs per platform. More details could be revealed in the source code. Third-party libraries used in this case: iOS Loading and caching images — Nuke Android Loading and caching images — Glide React Native Loading and caching images — React-native-fast-image Android — GPU tests results are not supported by the benchmark (unfortunately, with the devices we have, and we have many:)) ) Test results All tests have shown approximately the same FPS. Android Native uses half as much memory compared to Flutter and React Native. React Native requires the most significant CPU exploitation. The reason is the use of JSBridge between JS and Native code that incites the waste of resources on serialization and deserialization. Regarding battery exploitation, Android Native has the best outcome. React-native is lagging behind both Android and Flutter. 
Running continuous animations consumes more battery power on React Native. iPhone 6s test Test results FPS. React Native results are worse than those of Flutter and Swift. The reason is the inability to use JIT compilation on iOS. Memory. Flutter almost matches native in memory consumption but is still heavier on CPU. React Native falls far behind Flutter and native in this test. Difference between Flutter and Swift. Flutter actively uses the CPU where iOS Native actively uses the GPU. Reconciliation in Flutter increases the load on the CPU. Use case 2 — Heavy animations test Nowadays most phones running Android and iOS have powerful hardware. With typical business apps, no fps drops can be noticed in most cases. That's why we decided to do some tests with heavy animations, heavy enough to get fps drops. We used vector animations animated with Lottie on Android, iOS, and React Native, and adapted the same animations for use with Flare on Flutter. Testing animation with Lottie for Android, iOS, React Native, and Flare for Flutter. Lottie for Android Test results Android Android and React Native have similar performance. That is expected because Lottie for React Native uses native means (16–19% CPU, 30–29 FPS). Flutter's outcome is a surprise, though: it stumbled a bit in this test (12% CPU and 9 FPS). We discovered that removing one specific animation from the grid increases FPS by up to 40% on Flutter. We suppose Flare is heavier and not optimized for this kind of task, which is why Flutter got such an FPS drop. Blame this one: Lottie for Android. 3. Android requires the least amount of memory (205 Mb); React Native needs 280 Mb and Flutter requires 266 Mb. 4. Cold app start. According to this indicator, Flutter is the leader (2 seconds). For Android Native and React Native, it takes around 4 seconds. iOS iOS and React Native outcomes on this test are almost the same, as Lottie for React Native uses native means. Flare and Flutter keep on surprising. Flare definitely has a way to go :D iOS Native requires the least amount of memory (48 Mb). React Native needs 135 Mb and Flutter requires 117 Mb; Cold app start. According to this indicator, Flutter is the leader (2 seconds). For iOS and React Native it takes around 10 seconds; Take note: we used a different library for this case with Flutter, which is much heavier compared to those we used for other platforms, and it might be the reason for the fps drops. Use case 3 — Even heavier animations test with rotations, scaling and fade. In this test, we compared performance while animating 200 images. Scale, rotation, and fade animations are executed at the same time. 200 images Test results Android Native showed top performance and the most efficient memory consumption. Flutter showed just enough performance to work comfortably but with twice the memory expense compared to Native. React Native showed low performance in this case. Test results iOS iPhone 6s is powerful enough to avoid fps drops in all 3 cases. Native used fewer resources, relying mostly on the GPU. React Native used mostly CPU for rendering while Flutter used the GPU. React Native used a bit more memory. Summary For usual business apps with minor animations and shiny looks, technology does not matter at all. But if you are going to use heavy animations, keep in mind that Native has the most performance power to do it. Next come Flutter and React Native. 
We would definitely not recommend using React Native in a very CPU heavy operation, while Flutter is a great fit for such tasks from both CPU and Memory standpoint. The tool you pick depends on your specific product and business case. In case you are looking to develop a single-platform MVP — use native means, but keep in mind that Flutter apps can be built both for mobile, web and desktop environments and it feels like Flutter might become a King of cross-platform development in not too distant future, as even today Flutter created a very decent competition for native development tools, especially if your development budget is not too stretched but you are still looking for the decent performance of yours app across different platforms. We face the fact that there might be many factors impacting implementation and benchmarks of each technology, and many of you who might be true experts of a specific platform can squeeze much more out of the beloved set of tools. We tried to bring as much transparency into the process as we could by creating a single environment for each app to get tested and a single set of tools to measure the performance, and I hope you liked the result. Yet again our mobile and Flutter teams are happy to receive and carry all the burden of your feedback and suggestions. You can also check out this article here.
https://medium.com/swlh/flutter-vs-react-native-vs-native-deep-performance-comparison-990b90c11433
[]
2020-08-30 08:36:27.896000+00:00
['Flutter', 'Software Development', 'Programming', 'Startup', 'Mobile App Development']
Understanding Cosine Similarity And Its Application
Utilisation Cosine similarity has its place in several applications and algorithms. From the world of computer vision to data mining, comparing a similarity measurement between two vectors represented in a higher-dimensional space is widely useful. Let's go through a couple of scenarios and applications where the cosine similarity measure is leveraged. 1. Document Similarity Photo by Annie Spratt on Unsplash A scenario that requires identifying the similarity between pairs of documents is a good use case for cosine similarity as a quantification of the similarity between two objects. The similarity between two documents can be quantified by converting the words or phrases within the document or sentence into a vectorised form of representation. The vector representations of the documents can then be used within the cosine similarity formula to obtain a quantification of similarity. In this scenario, a cosine similarity of 1 implies that the two documents are exactly alike, and a cosine similarity of 0 would point to the conclusion that there are no similarities between the two documents. Here's an example: Document 1: Deep Learning can be hard Document 2: Deep Learning can be simple Step 1: First we obtain a vectorised representation of the texts Document 1: [1, 1, 1, 1, 1, 0] (let's refer to this as A) Document 2: [1, 1, 1, 1, 0, 1] (let's refer to this as B) Above we have two vectors (A and B) in a 6-dimensional vector space. Step 2: Find the cosine similarity cosine similarity (CS) = (A · B) / (||A|| ||B||) Calculate the dot product of A and B: 1·1 + 1·1 + 1·1 + 1·1 + 1·0 + 0·1 = 4 Calculate the magnitude of vector A: √(1² + 1² + 1² + 1² + 1² + 0²) = 2.2360679775 Calculate the magnitude of vector B: √(1² + 1² + 1² + 1² + 0² + 1²) = 2.2360679775 Calculate the cosine similarity: 4 / (2.2360679775 × 2.2360679775) = 0.80 (80% similarity between the sentences in both documents) Let's explore another application where cosine similarity can be utilised to determine a similarity measurement between two objects. 2. Pose Matching Gif from https://github.com/CMU-Perceptual-Computing-Lab/openpose Pose matching involves comparing poses made up of key points at joint locations. Pose estimation is a computer vision task, and it's typically solved using Deep Learning approaches such as Convolutional Pose Machine, Stacked Hourglass, PoseNet, etc. Pose estimation is the process by which the position and orientation of the vital body parts and joints of a body are derived from an image or sequence of images. In a scenario where there is a requirement to quantify the similarity between two poses in Image A and Image B, here is the process that would be taken:
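The worked example above translates directly into a few lines of NumPy. This is a minimal sketch: the helper function name is my own, and the two vectors are the document vectors A and B from the example.

```python
# Minimal sketch of the cosine similarity calculation from the worked
# example above (the helper name is my own).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # CS = (A · B) / (||A|| ||B||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_a = np.array([1, 1, 1, 1, 1, 0])  # "Deep Learning can be hard"
doc_b = np.array([1, 1, 1, 1, 0, 1])  # "Deep Learning can be simple"

print(cosine_similarity(doc_a, doc_b))  # -> 0.8
```

The same function applies unchanged to pose matching: each pose is flattened into a vector of key-point coordinates and the two vectors are compared in exactly this way.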
https://towardsdatascience.com/understanding-cosine-similarity-and-its-application-fd42f585296a
['Richmond Alake']
2020-09-15 03:16:51.844000+00:00
['Machine Learning', 'Artificial Intelligence', 'Education', 'Computer Vision', 'Data Science']
5 Things To Know About The Future Of Jobs
Photo by Alex Knight on Unsplash The COVID-19 global economic recession is deepening existing inequalities across global labour markets and reversing the gains in employment made since the Global Financial Crisis of a decade ago. Emerging technologies continue to reshape labour markets, and those trends have only accelerated with the onset of a new recession. Millions of workers worldwide are facing significant job uncertainty. Data from the International Labour Organization (ILO) has shown that during the first half of 2020, real unemployment figures jumped to an average of 6.6%, with an estimated loss of working hours equivalent to 495 million jobs in Q2 2020. The OECD predicts that unemployment rates could double by the end of the year. Now in its third edition, the Future of Jobs Report 2020 maps the jobs and skills of the future, tracking the pace of change and direction of travel. Here are some of the key findings: 1. COVID-19 has had a lasting effect The COVID-19 pandemic has accelerated the arrival of the future of work. The Future of Jobs Survey finds that 50% of employers will accelerate the automation of their work, while over 80% are set to expand the digitization of their work processes. That means that some jobs that have been lost will never come back, and those that do will require new ways of working and new skills. 2. Automation continues to increase The Future of Jobs Report projects that by 2025, the hours of work performed by machines and people will be equal. Around 85 million roles are set to be displaced by automation — primarily across manual or repetitive roles spanning both blue-collar and white-collar jobs — from assembly factory workers to accountants. 3. New jobs will emerge Despite the accelerated disruption to jobs, the report also predicts that 97 million new jobs of tomorrow will emerge by 2025. The most in-demand roles in future job markets include Data Analysts and Scientists, AI and Machine Learning Specialists, Robotics Engineers, Software and Application Developers, as well as Digital Transformation Specialists, Information Security Analysts and Internet of Things Specialists, which can be broadly grouped into the 10 emerging job clusters explored in the report. 4. The most in-demand skills are a mix of hard and soft skills The most in-demand skills of the future will include working with people, problem-solving and self-management skills such as resilience, stress tolerance and flexibility. This increase in required self-management skills is clear as workers face a range of pressures to adapt to new, more digital ways of working. Product Management, Digital Marketing and Software Development Lifecycle are among the core set of specialized skills required for emerging professions. Reskilling for the roles of the future will require a time investment ranging from three weeks to five months. 5. Human capital is increasingly important Employers are convinced of the value of building human capital — with 66% believing they will get a return on investment from training employees within a year. Data from the past five years shows that workers often don’t need the perfect skill set to transition into new roles. The scale of the challenge is significant, with employers looking to internally redeploy half of their workers. Meanwhile, some 40% of the average worker’s skills will need to be updated to meet the demands of future labour markets.
Employers are facing this challenge broadly on their own — only 21% can tap into government funding to deliver training programmes. The Future of Jobs Report 2020 is a call to action to accelerate a Reskilling Revolution across economies. It highlights the increasing urgency of supporting displaced and at-risk workers as they navigate paths towards the “jobs of tomorrow.” The current moment provides an opportunity for leaders in business, government, and public policy to focus common efforts on allowing workers to thrive in the new economy.
https://iamibrahim.medium.com/5-things-to-know-about-the-future-of-jobs-20430ea4c2d4
[]
2020-12-28 15:21:53.126000+00:00
['Entrepreneurship', 'Covid 19', 'Future', 'Startups', 'Jobs']
My Life Has Been Derailed by my Health. I Want to Get it Back.
My Life Has Been Derailed by my Health. I Want to Get it Back. But everything feels so slow. How I’ve spent the better part of three months, except less artfully. Photo by Yuris Alhumaydy on Unsplash The headache started on a Friday night. It was different from my usual headaches. More sudden, for one, and spread out from in front of my right ear to the back of my skull, whereas I typically get migraines centered over my eyes. Being an anxious person, I immediately assumed the worst, but the pain wasn’t bad enough for me to truly believe something was Really Wrong. I took some ibuprofen and went to bed. The pain was still there the next day, and the next. I went to the doctor, then I went to the doctor again. New symptoms started to emerge: the right side of my face felt a little numb. I was getting more headaches all over my head. My neck and shoulders hurt. Theories were thrown around: Your ear is too clogged with wax, and it’s causing a headache; your Eustachian tubes are blocked, that’s what’s making your face numb; maybe you have a specific kind of migraine, try this medication. Nothing worked and no diagnosis felt right. Then I started getting pain on my left side too, in the same area, in front of my ear. My headaches were daily and often got in the way of functioning normally. I ended up in the emergency room at one point, where I had a CT scan. They found an unspecified “anomaly.” I thought that must be the source of the pain, but then I had an MRI. Great news, your brain looks fine! There’s nothing abnormal! You’re not dying! Then why do I feel so awful?
https://medium.com/swlh/my-life-has-been-derailed-by-my-health-i-want-to-get-it-back-19b973aaec1a
['Grace Moore']
2019-08-04 01:03:42.815000+00:00
['Mental Health', 'Health', 'Chronic Pain', 'Chronic Illness', 'Life']
How to build a Computer Vision Game in Python?
I grew up in the (19)90’s. Before the internet, smartphones, next generation game consoles. Hell, before augmented and virtual reality. I know, I’m old… At least I got to experience the physical world. Snap! (No, we didn’t have Snapchat either…). We would just communicate an approximate location and a time to come back, and we were ready for takeoff. That was fun. But you know what else was fun? A Master System II. Brief history of gaming consoles A long, long, looooong time ago, gaming consoles looked like this: Games were cartridges that you had to insert in the console. Well… Not all of them. One game came pre-installed (thank God they didn’t open an inquiry for abuse of dominant position). A game that captured hours of my life. Alex Kidd in Miracle World This game was a masterpiece. Indeed a true miracle world. You were playing a small character, Alex, on a mission to defeat an illegitimate tyrant (plot). No impeachment, just tons of different levels. What was particularly amazing in this game was the diversity of its gameplay. You were jumping around breaking bricks, collecting money, swimming under water trying to avoid the fish and octopus’ tentacles. Why not take a boat? Well you could buy one. Or a helicopter, a motorbike, a flying cape… Instagram life before Instagram! Let’s play Rock Paper Scissors… Alex Kidd was afraid of nothing. Except rock, paper and scissors… At the end of almost every level, you had to play a game of Rock Paper Scissors. I have never played more intense games of RPS than in Alex Kidd. But that was before. … on live video… Technology is amazing. While I reminisce hours of games played on a TV that weighed probably more than me, “nowadays kids” are playing in virtual worlds, without cables. Let’s meet in the middle. The program I created uses computer vision and deep learning to play a game of rock paper scissors on a live video stream (in this case from a webcam). Sounds impressive… It’s not. In fact, it is really simple! … using OpenCV and Python OpenCV is a very complete computer vision library that pretty much lets you do everything you need. Plenty of tutorials are available online to get you started (I can recommend some if needed — hit me up in the comments). That’s what I used to capture and process the images. I used Tensorflow and Keras for the deep learning part. The workflow What do you need to play this game? A hand. And that’s it. The program simply needs to recognize the gesture from the hand, compare it against the opponent’s and output the result (win, loss, draw). 1 — Identifying the hand You could come up with a hand tracking algorithm, but I like simplicity. So I created a region of interest, which is where the user has to put their hand and do the gesture. It is less flexible, but it is “idiot proof” and has the merit of simplifying the code. We have narrowed down where the action is going to take place. But we still need to “capture the action” (meaning identify the hand). For that, there are several different approaches. The first one that I tried was to use a histogram of oriented gradients. If you don’t know what that is, no worries, this is not the approach I ended up with. I just wanted to brag a little. I used a simple “background subtraction”. If you subtract the background of an image you are left with the foreground. And this is what we want here: the hand is (supposed to be) the only thing moving in the region of interest.
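To make the “background subtraction inside a region of interest” idea concrete, here is a minimal OpenCV sketch. It is not the author’s code (that lives on his GitHub); the ROI coordinates, blur kernel and threshold value are illustrative assumptions:

import cv2

cap = cv2.VideoCapture(0)              # webcam stream
x1, y1, x2, y2 = 100, 100, 350, 350    # region of interest (illustrative coordinates)
background = None                      # the first ROI frame will serve as the background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.cvtColor(frame[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    roi = cv2.GaussianBlur(roi, (7, 7), 0)

    if background is None:
        background = roi               # hypothesis: nothing moves in the ROI at start-up
        continue

    # Foreground = |current ROI - background|, thresholded to a binary hand mask
    diff = cv2.absdiff(roi, background)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("frame", frame)
    cv2.imshow("hand mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()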
2 — Recognizing the gesture Once we have isolated the hand, we can start thinking of strategies to recognize what gesture it is making. I decided to use deep learning, because it is cool to use deep learning for everything and anything. And it is also incredibly easy. Count with me: I recorded a few hundred images of each hand gesture (20 seconds), augmented that dataset (20 seconds) and trained the model (5 minutes). Deep learning in under 6 minutes! NB: data augmentation consists of randomly modifying the images I created to create new, slightly different ones (rotated by some angle, zoomed in or out by some percentage, etc.). You can do that directly when training the model, but I like to do it separately so 1/ I can visualize the modifications that I’ve generated, 2/ keep track of the pictures (for reproducibility) and 3/ memory space was not an issue here. 3 — Output the results Once the program has recognized what gesture the person is doing, you just need to implement the rules (paper beats rock, rock beats scissors, scissors beats paper). I hard-coded the logic because it is much simpler. Because nobody wanted to play with me, I simulated an opponent. At the end, this is what you get. Look at how much fun I’m having!! How does it work? Let’s break it down: 1 — The program takes the first image coming from the stream and considers it the background (we hypothesize that only the hand will be moving in the region of interest); 2 — Then takes each frame and subtracts the background (this approach also works for diverse skin tones); 3 — Takes the result of 2/ and feeds it to the deep learning model in order to identify the gesture; 4 — Compares the gesture to the opponent’s according to the rules; 5 — Outputs the image and results information (win, draw, loss). How can it be improved? If that game doesn’t get me a job as a developer at Blizzard, I don’t know what will! But if I wanted to spend more time and really improve it, I would: 1 — Use a hand tracking system to remove the need for a preset region of interest; 2 — Improve the deep learning model (design, but also the training data — including additional hand gestures); 3 — Improve the opponent (either by creating a reinforcement learning agent or by creating more logic like “if the player played two X in a row, play Y” — the best strategy is to play randomly, but it is more exciting to build an opponent that seemingly has a strategy that you try to guess); 4 — Improve the graphics and effects of the play screen (count down for each new game, celebratory confetti when winning, etc.); 5 — Other ideas you have! Have fun! If you made it this far, congratulations, you defeated all the bosses (Boredominus, NotFuninator and ObviousItIs). As a reward, you can find the code on my github. Please feel free to reach out if you have any questions, but the code is very simple so I’m sure you’ll be fine! Ok, here’s your real reward: play Alex Kidd in Miracle World on PC for free. See you on the other side!
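As an illustration of steps 4 and 5 (the hard-coded rules and the simulated opponent), a minimal sketch of the game logic could look like this; it is an assumption about how it might be written, not the author’s implementation:

import random

# Hard-coded rules: each gesture maps to the gesture it beats.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(player, opponent):
    """Return 'win', 'loss' or 'draw' for the player's gesture."""
    if player == opponent:
        return "draw"
    return "win" if BEATS[player] == opponent else "loss"

opponent_move = random.choice(list(BEATS))   # simulated opponent plays randomly
print(judge("paper", opponent_move))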
https://medium.com/analytics-vidhya/how-to-build-a-computer-vision-game-in-python-a6b064ce72be
['Thomas Taieb']
2019-10-12 06:11:30.341000+00:00
['Machine Learning', 'Python', 'Gaming', 'Deep Learning', 'Computer Vision']
Aphantasia: My Mind’s Eye is Blind, But I Dream in Ideas
My son Nicholas was thirteen when I read Temple Grandin’s autobiography Thinking in Pictures and learned two things that changed my life. The first was that Nick was autistic. He didn’t have ADHD, which he’d been misdiagnosed with at age six. He didn’t have bipolar disorder, which he’d been misdiagnosed with at age nine, after three months in a mental hospital. I put down Thinking in Pictures and knew my kid had autism. And that he was going to be okay. And I also learned that there was something very, very strange about me. Grandin talks in her book about thinking in pictures to the extent that words are her second language. I THINK IN PICTURES. Words are like a second language to me. I translate both spoken and written words into full-color movies, complete with sound, which run like a VCR tape in my head. When somebody speaks to me, his words are instantly translated into pictures. Language-based thinkers often find this phenomenon difficult to understand, but in my job as an equipment designer for the livestock industry, visual thinking is a tremendous advantage. — Temple Grandin I am a language-based thinker, alright. I’m so language-based that I don’t think visually at all. I don’t have a mind’s eye. Or, I do, but it doesn’t see. It — I don’t know. It conceives. My mind’s eye ideates; it’s not a movie projector. The realization of just how different that is from other people was mind-blowing. I’m not huge on labels, but there’s a newish term for the kind of extreme language-based thinking I’m talking about. Aphantasia. I don’t like it, because ‘a’ means lacking and ‘phantasia’ means imagination and that’s not right. On lots of levels. I don’t lack an imagination. It just operates non-visually. When I close my eyes and try to remember my mother, for example, there are strong sensory memories and there are words. And feelings. But I don’t get a picture. So, I asked my husband, and he thought I was being ridiculous. You know what your mother looked like. I do, of course. I look like her. Only she was blonder and I have brown eyes. I know that I’m built like her, my face has the same shape. I can bring up very brief flashes of very specific details. A pair of jeans she liked to wear that had belonged to my brother. The way her pale blue eyes sometimes shifted back and forth rapidly, a little tic that she didn’t know she had. I remember them so strongly that they feel like they’re just on the edge of visual memory, but I can’t bring them all the way up. I’ll remember something very, very specific like that — the yellow rose with the pink edges that my daughter and my sister and I saw at the rose garden in Portland a couple of weeks ago — and for a few seconds it’s almost there. I’ll almost see the inside of the gift shop. I’ll certainly smell it, overpoweringly rosy. And I can feel the silk scarves between my fingers. Taste that rose-smell on the back of my tongue. But I can’t see it. I can’t see my mother. It doesn’t upset me, because I haven’t lost the ability. I have never been able to visualize anything. So I don’t miss it. Other sensory memories are stronger. I can still hear her voice, for instance. I remember exactly what she smelled like. She wore Charlie Perfume all her life, and face powder always makes me remember her. I made shepherd’s pie last night and it tasted just like the recipe she used to make it for me every year on my birthday. It doesn’t matter that I don’t actually have the recipe. I just know. I do make visual connections.
For some reason, I constantly see women who remind me of my sister. I’m not sure if that’s some under-developed part of my brain trying to fire off or what. But it’s always some small detail, not an overall appearance. The shape of the bridge of a nose or the way someone holds themselves will make me see her in someone who otherwise doesn’t look anything like her. I don’t even dream in pictures. This is the hardest thing to explain. A friend once asked me if I just dream of the words. Like do I just see them scroll past? Well. No. That would be a picture of the words, wouldn’t it? It’s not auditory, either. It’s not like someone reading to me or narrating my dream. Although I wake up with a memory of the words. It’s more like I dream in story. My dreams are fully formed, and I often remember them vividly. Like the one where I opened my washing machine and my great-grandmother’s head was sitting on the agitator. We had a conversation about life. I didn’t see it. I just — knew it. The idea of it. It wasn’t a nightmare. I wasn’t scared. Maybe I would have been, if I’d seen my Nana’s head on a washing machine agitator instead of just having the idea of it. After I watched a documentary about a woman whose teenage son murdered her toddler daughter, I spent two nights having intense dreams about being caught in impossible situations. It isn’t uncommon for me to have dreams that continue on for several nights. I often dream of being chased. It’s my one recurring nightmare. And in dreams, I know that I’m riding a bicycle that won’t go fast enough or I’m running down a hall that keeps stretching. I can’t see those things, any more than I can see what’s happening when I read a book. But I feel them. I wake up knowing they happened, the same way I know that I went to Wal-Mart yesterday. Maybe because I don’t dream in pictures, I tend to remember my dreams more than anyone else I know. I wonder sometimes if my brain is like a hard drive. Because I’m not filling it up with video all night long, there’s more space to record. Strangely, the only time I ever dream in pictures is if I fall back to sleep in the morning, after a night where I didn’t sleep well. Very rarely, I’ll have a few minutes of crazy, visual dreams that completely freak me out. They unsettle me enough that I try not to let it happen. I have a hard time connecting names with faces. I read, and adored, Bone Gap by Laura Ruby a couple of years ago. It’s about a boy who has face blindness. I don’t have face blindness, but I recognized myself in that character. I have a terrible time connecting a face to a name. It’s embarrassing. People I absolutely should recognize, whom I’ve had significant experiences with, I can’t recognize visually if enough time has passed for me to have lost the connection between their faces and their names. The second I hear their names, everything snaps into place like a rubber band. Because I’m a language-based thinker and their name, not their face, is what I need to make the connection. I have lots of information about the people I’ve come in contact with on file in my mind. There’s nothing really wrong with my memory. It’s just that I don’t have a visual file. So I know what they look like and I have words like brunette, dark eyes, freckles, glasses, tall, round face — but lots of people fit that description. I need their name to unlock the file. My husband, on the other hand, can see an 80-year-old in a movie and instantly place them as a child actor. It’s like his superpower.
He never forgets a face or a name. We moved last year to the little town where he grew up. He moved away after he graduated from high school, but there are dozens of people still here who he hasn’t seen in thirty years and he’s in his glory, seeing people on the street that he instantly recognizes and waiting for them to place him. Just like the boy in Bone Gap, if someone has a very distinguishing feature, I have an easier time. For instance, when my daughter starts with a new soccer team there are always a few girls I can remember more easily. They stand out for some reason. Maybe one girl has red hair. Or one is much taller than the rest. Maybe someone wears a knee brace or has a birthmark. A couple might be friends with my daughter and I know them already. I’m able to attach their names to them fairly quickly. And there are always some girls I never can distinguish from each other. My brain just sees a gaggle of brownish ponytails, all the same height and build, wearing the same uniform. They’re too similar to each other in appearance for me to make a visual connection between them and their names unless I really get to know them. I have to use their uniform number to know who they are, sometimes for years. My creative work is language based. I’m a novelist, which requires me to create a world for other people. For the most part, I don’t actually have to work too hard to write description. I’ve certainly read enough of them, so maybe that helps. I’ve just learned how to do it. I know what I want a setting to feel like and that’s my starting point. I focus on what I want my reader to feel. I don’t struggle to see things when they’re in front of me, of course. And I know what things look like. If I want to describe something specific, sometimes I’ll look up photographs. The one problem I have is with remembering which details I’ve included in my work. I have to write them down. I keep a sort of story bible where I can keep those notes. Otherwise, I’m likely to forget the details I’ve written. Both of my daughters are artists. So is my sister, my aunt, my grandmother. I am spectacularly non-artistic. I think I have a decent eye for things like design and style, but I struggle to create anything. Mostly, because I don’t have the patience for it. I can’t picture the end result. It’s impossible to create something when you can’t imagine what you’re creating. Or at least it feels impossible. Stories are different. Grandin talks about visual and language-based thinkers. Maybe the flip side of not being able to think visually is that I’ve got a hyper-developed ability to be a language-based thinker. My mother taught me to read when I was three, because my need for stories was insatiable and she couldn’t keep up. I learned quickly. I went into school with standard five-year-old skills — except that I could already read. I have a very vivid memory of standing on my grandmother’s back patio when I was eight or nine years old and having an epiphany. I could read anything. All of the words in the whole world were mine. They belonged to me. Even weird, long scientific words that I didn’t understand, I could sound out. I could read them and they were mine. I was filled with such an intense feeling of my own personal power. There’s nothing wrong with my imagination. It works just fine, thank you. I sometimes think it’s too big for my body, actually. I am often overwhelmed by ideas. Bowled over by them. I get so excited about them — mine, other people’s, it doesn’t even matter. 
Because I have no problem talking about my ideas, I can tell you all about how I think they’ll play out. How I imagine they’ll culminate. And, oh man, if you give me the chance, I’ll go on and on about your ideas, too. I have a much harder time figuring out all the little steps between here and there. I get lost in them. I can usually manage the next step. If I’m lucky. I have to have a kind of crazy number of strategies and systems in place to keep me on track, or I would never (I mean never, ever) finish anything. It’s hard for me to even realize sometimes that I’ve gone off track. I’m sometimes taken by surprise when other people are shocked by my productivity. It isn’t that I’m such a go-getter. It’s that I have two modes. Either I don’t get anything done or I get everything done. Because either I’m using my systems that bypass the fact that I can’t visualize anything, or I’m not. I usually chalk that up to being extremely right-brained, but maybe I’m extremely right-brained because I can’t actually visualize my path. You know how Olympic athletes talk about visualizing themselves going through their event, step-by-step, and winning? I can’t do something like that. At least, not literally. I’m good with concepts and ideas, but I can’t close my eyes and meditate on an actual picture of myself doing anything. In fact, meditation is almost painfully impossible for me. At least the kind where you close your eyes and visualize — I don’t know, your happy place or a quiet meadow or something. I only see black and my brain starts swimming with thoughts and ideas that have nothing to focus them. They batter me. If I’m going to focus, I need to open my eyes and actually see something. Because I can’t visualize, I actually have to do it. I can learn by reading. And I can learn by doing. But I can’t learn by listening at all, unless I also take extensive notes. Sometimes my husband will try to explain something to me. Give me directions, maybe. Or tell me how to do something he’d like me to do for him. I can get maybe two turns in and then I just shake my head and hold up my hand to make him stop. It’s useless to go any further. I won’t remember. And if I let him keep going, it will only confuse me. If I’m going to remember something, I have to write it down. I don’t usually have to go back to my notes, interestingly. But I have to actually write them down. Somehow the physical act of writing them transcribes them into the language center of my brain, and then they’re there and I can access them. We moved to Pennsylvania in November and it took me months to not need my GPS to get home from the grocery store or to take my daughter to school. It got to the point of ridiculousness and I know that people thought I was doing it on purpose. And I get it. It seemed like there was no possible way I couldn’t find my way home from the grocery store — it’s half a mile. But I couldn’t. I turned the wrong damned way every single time. I struggled with the sudden change from desert sun to lake-effect gloom and spent a lot of that winter as a passenger. Plus, I didn’t have the few visual cues I’d gotten used to after years of living in the same place. It wasn’t until I pulled out of my funk and started driving myself that I was able to finally figure out my way around. We live in a visual world. And I guess it’s a little weird to be wired in such a way that I just don’t function very well within it. Here are some things that I’ve found that help me navigate our visual world a little more easily.
My kids think it’s hilarious that I’m so low tech. I need almost everything to be analog if it’s going to work for me. I can’t visualize how something will work out, so I have to write it down, sketch it out, plan it first. With a pencil on paper. Old school. Step-by-step directions work best for me. If I want to learn how to do something, I seek out someone who teaches in a way that doesn’t skip steps, expecting me to be able to make a leap that my non-visual mind might completely miss. I capitalize on the things that I’m good at. I’ve built an entire life, including a thriving career, around being a language-based thinker. Temple Grandin, incidentally, did the same thing with her extreme visual thinking. She turned her ability to intensely visualize into a career building livestock equipment that revolutionized that industry.
https://shauntagrimes.medium.com/aphantasia-my-minds-eye-is-blind-84d98be6d249
['Shaunta Grimes']
2019-09-16 16:33:00.308000+00:00
['Health', 'Life', 'Mental Health', 'Self', 'Life Lessons']
You Wash Your Hands. Can Washing Your Nose Help Fight Covid-19?
You Wash Your Hands. Can Washing Your Nose Help Fight Covid-19? Learn an easy technique that scientists are testing in the fight against coronavirus. Executive summary: Soap is effective in killing the coronavirus. You wash your hands to stay safe. Maybe washing your nose, where the virus lives, is something we should be doing. This essay explores that idea, shows how to do it, and reviews some science behind it. Credibility info: I am a head and neck surgeon, and I have a degree in physics. My understanding of these topics is above average. I use the techniques described below at least once daily. The rationale The CDC recommends hand washing as one good way of protecting yourself from the coronavirus. They recommend plain soap and water! The CDC says to use hand sanitizer if you don’t have access to soap and water! The reason is the same as the reason that you put dishwashing soap in a greasy frying pan: soap breaks apart the grease. The coronavirus’s shell is a “lipid layer,” which is grease. Soap breaks apart the shell of the virus and promptly kills it. When the virus gains entry into your body, it’s usually through the nose, and the virus lives and replicates in the back of the nose and behind the nose. Why not wash that part of your body with soap, too?! Interlude: the basics of how to wash your nose Many people who suffer from sinusitis know that they get some relief by doing “nasal rinses,” which is washing the internal nose with salt water. The salt water is put in a small plastic squeeze bottle. You squeeze the water into one nostril, and the water drains out the other nostril. Take 60 seconds to watch this video from a company that makes a convenient nasal rinse bottle. I know, it looks bizarre, but the technique is thousands of years old, and it’s not uncomfortable, even the very first time you do it. You can pick up that rinse bottle at any drug store. Now back to the science The idea of washing the inside of the nose with soap made so much sense, that researchers at Vanderbilt University began a clinical trial at the National Institutes of Health to see whether washing the nose could provide some benefit to coronavirus patients. In the clinical trial, the soap is Johnson’s Baby Shampoo, one-half teaspoon, mixed in the plastic nasal rinse bottle. Participants in the trial use the nasal wash the same way you saw in the NeilMed video. Where does the water go, and how does it get out the other side?! First, let’s get oriented to this diagram: This person is looking to the right. You see the nose, lips, and chin. The palate separates your mouth from your nasal cavity. If you press up, with your tongue, on the roof of your mouth, you can feel your palate. The back part of the palate is called the “soft” palate, because it can move, and the very back end of the soft palate is the uvula, the little thingie that hangs down in the middle, when you look in the back of your mouth. You have two separate sides inside your nose, left side and right side, generally outlined in black in the diagram. Behind the nasal cavity area, outlined in green, is a room where the air from the two nasal sides can mix together — there’s no divider between the two sides back there. Picture the green room as a big closet, with two separate hallways opening into the closet side-by-side. When you start pouring water in one side of your nose, the water will flow back toward the closet, and start to fill the closet behind the nose. 
When the water level gets high enough, a bit above the floor of the closet, the water will start to flow back out of the closet, down the hallway that is not dumping the water in, and the water comes out of the other side of your nose. The “closet” behind the nasal cavities is called the nasopharynx, pronounced “nay-zoh-fair-inks.” In this diagram, I have shaded it in blue. We think that the virus typically first enters the body through the nose. Further, it is in the nasopharynx, and in the back of the nasal cavities, that the virus is thought to hole up and replicate when you first get infected. That’s why we put the Q-tip in soooo far when you get a coronavirus test: we want to get the Q-tip way back to sample fluid in the nasopharynx. That’s also why we’re washing this area of your body with soap, in addition to washing your hands. An improvement on the technique We want the soapy water to touch all parts of your nasopharynx. So, instead of leaning over the sink as you saw in the video, do your nasal rinses with your head tilted up from a neutral standing position, about ten or fifteen degrees up, as if you’re tilting your head a bit to see something on the ceiling. Why does tilting up work, you ask? The explanation is nerd heaven, and I’ll give it to you, along with a few other improvements, in the optional-to-read bottom section of this essay. More on mixing up the washing solution The NeilMed video talks about using distilled water (cleaner than tap water), and putting NeilMed’s powdered salt preparation into the bottle. Your body’s fluids are salty; the inside of your nose likes it better if your washing solution is salty. Here’s what I do: get a gallon of distilled water and put in eight teaspoons of table salt. Make sure it mixes completely. Use that water for your nasal rinses. If you want to do it one rinse bottle at a time, it’s one-half teaspoon of salt in the rinse bottle. NeilMed suggests heating the water a bit. Room temperature is about 72 and your body is about 98. I’ve tried heating and not, and I don’t notice much difference. Don’t make the water hot and hurt yourself! The clinical trial suggests one-half teaspoon of Johnson’s Baby Shampoo for each bottle. That works out to about two pumps of a typical 27-ounce baby shampoo pump dispenser. You can measure it for yourself. After you pump in the shampoo, put your thumb on the top of the bottle and swirl it around to mix the shampoo in. Get the water swirling like a little tornado. Don’t shake it: you’ll just foam things up. If you swirl for only a couple of seconds, and then hold the bottle up to a light, you’ll see strings of shampoo still swirling and dissolving. You have to swirl for about thirty seconds to get the shampoo to dissolve all the way. (Interestingly, if you heat the water to ninety degrees, the shampoo dissolves in about five seconds.) At first, I found the baby shampoo a touch irritating. So for a week or so I only used one-quarter teaspoon of shampoo — one pump — instead of the recommended one-half teaspoon. Now it doesn’t bother at all. More on doing the rinse You can lean over the sink to wash and not spill a drop, even on your first try. But if you do it with your head tilted back ten degrees, all of the water will run down your shirt. You can do it in the shower, or fold a thick bath towel and push it under your chin and in front of your neck to catch the drain-off. Don’t squeeze the bottle hard. You don’t need a high flow rate. Just squeeze slowly, and let the chore take longer. 
That way, you’ll spend more time with the soap solution bathing your tissues.
https://medium.com/carre4/you-wash-your-hands-can-washing-your-nose-help-fight-covid-19-48398536e14c
['Steven Denenberg']
2020-12-23 17:54:25.302000+00:00
['Health', 'Life', 'Covid 19', 'Self Improvement', 'Science']
I’ve Gone from Skinny, Sick and Nearly Dead to Vibrant and Energetic
I’ve Gone from Skinny, Sick and Nearly Dead to Vibrant and Energetic Here are the 6 energy lessons I learned that you can implement. Image supplied by author In my mid-20s I was stuck in bed for six months straight. The doctors had no idea what was wrong with me. When I finally recovered from whatever it was, I went back to eating junk food and consuming large amounts of alcohol. I had to have a nap before going out on Saturday night. Even with a nap I’d start falling asleep at the nightclub while my friends danced the night away. Nothing made sense. In 2015 I had a near-miss with cancer. Again, the doctors had no idea what was going on. The tumor, the size of a golf ball, lodged in my guts and scared the shit out of me. I decided to choose health and energy. Now, this wasn’t some desire to eat healthy food and brag about it on Instagram by taking glorious filtered photos of every bowl of food I ate, inspired by Buddha, to make people feel terrible about their own lives. What I wanted above all else was to have energy again. The kind of energy you have as a kid that allows you to run around the playground for hours as if it’s nothing. Here’s what I did to go from anorexic, sick, and nearly dead to full of life and energy again. Take a Walk like You’re Living in Japan In 2011, I became incredibly frustrated with life. I’d tried everything to get my energy back and stop the destructive thoughts floating around my head. The only thing I intuitively did was go for daily walks. They didn’t help… at first. One night, before I left the startup behind that I loved very much (forever), I grabbed my old school iPod and went for a walk. I started listening to what became the first dose of self-help I’d ever consume. The walk was supposed to last 30 minutes. I ended up walking late into the night, past midnight. I walked down the main road. I walked down the side of the highway. I was yelling loudly and chanting like the audiobook told me to. People came out of their homes half asleep and told me to shut up. I couldn’t hear them. I kept walking. These late night walks became a thing. I used them to walk and listen to self-help stories. It wasn’t a success habit. It was a survival mechanism. Walking led me to new insights. Walks became a way to escape my reality and reflect. Walking became an activity to listen to someone else’s wisdom. Walking led me to the insights that saved my life. Danish philosopher Søren Kierkegaard said: “Every day, I walk myself into a state of well-being & walk away from every illness. I have walked myself into my best thoughts, and I know of no thought so burdensome that one cannot walk away from it. But by sitting still, & the more one sits still, the closer one comes to feeling ill. Thus if one just keeps on walking, everything will be all right.” Writer Kaki Okumura says Japanese people don’t go to the gym and exercise or do super cool CrossFit. They just walk. Walking created a different kind of life-changing magic for me. Use walks to think. Use walks to escape the voice inside your head that won’t shut up. Experiment with a Whole Food Plant-Based Diet The three keys to this diet are: Natural sugar (honey/dates) instead of refined sugar Less salt No oil (including Olive Oil) I originally tried a vegan diet and realized most of my friends who were doing the same were kidding themselves. They’d think they were healthy, and they’d then go and eat a bucket of fried chips with high-fructose tomato sauce from Lord of the Fries and lie to themselves some more.
They still felt sick, and many of them remained overweight. I learned that the food you put into your body creates your energy levels. A whole food plant-based diet allows you to recreate your childlike energy levels. Your taste buds change Many people try and switch their diet and find it hard to do. Things don’t taste the same. I had the same problem. I learned that you have to allow your taste buds time to adapt. When your tastebuds are jazzed up on years of sugary drinks and fried food, a sudden change to healthy food tastes bland. Now I’ve been off processed food for so long, a mango tastes better than any type of ice cream ever could. Even carrots taste sweet. Restaurants are accommodating People close to me have been reluctant to try a whole food plant-based diet because they go out to restaurants a lot. They’re worried there will be no menu options. This is going to blow your mind: you can just ask the kitchen to meet your dietary requirements. You may think asking the kitchen to cook you something that is whole food/plant-based would be annoying. You know what chefs tell me? They love it when they get requests from customers like me. It forces them to think on the fly and come up with entirely new ways of doing things. An ‘ask’ to cook a dish a different way is a chef’s dream. It gives them an excuse to improvise rather than follow the rules of the menu. A lot of restaurants will help you stick to your diet. All you have to do is ask. You can even ring before your booking and negotiate what options they can cook for you if you want certainty. Eat at home Salt and sugar are how restaurants make food taste good. It’s every chef’s secret weapon. If you want to eat better, all you have to do is eat at home. When you eat out you have no idea what’s in your food. When you eat at home you control the ingredients. And you save lots of money too. One meal out buys you like three home-cooked meals. Eat for energy, not for pleasure This was a subtle change to how I approached food. Your default way to eat is to eat for pleasure. You get pleasure through the taste of your food. A simple way to change your eating habits is to change why you eat. If you select food based on how it makes you feel, everything changes. You select food based on the energy it can give you, not the short-term pleasure of taste. Get More Sleep than You Need I was a serial under-sleeper. Getting enough rest will change your energy levels. I tried adding 30 minutes more to however much sleep I was getting. I prioritized rest rather than let whatever time I had left over determine the length of time spent sleeping. You need more sleep than you think. You may be awake for 16 hours, but how energetic are you during those hours? It’s better to have hours where you feel alive, than lots of hours awake where you feel tired and like a zombie. Give up Alcohol at All Costs Alcohol dehydrates you. If there’s one thing you don’t want to do it’s be dehydrated. Alcohol is a great way to screw up your body. It silently ruins your insides and messes with your mental state. I was a tired, depressed little boy on alcohol. Marketing has lied to us and told us to drink alcohol to relax. The truth is alcohol is poison. There are much better ways to relax and seek pleasure than alcohol which destroys your energy levels. Drinking Water Stops You Being Dehydrated (Which Takes Away Your Energy) 75% of Americans are dehydrated. I, too, didn’t drink enough water. When you’re dehydrated you lack energy and get headaches. 
All I did was replace canned drinks and alcohol with water. Water is the liquid of life. Drink more of it. Plug Everyday Energy Leaks In 2011, I was full of energy leaks. I was like a walking, talking Titanic with a hole in my head, leaking energy all over the place. Anger, frustration, shame, revenge, and things you can’t control in life will rob you of your precious energy. I started focusing on what I could control. I chose to forgive those who wronged me even though they didn’t deserve it, so I could gain back my mental clarity and peace of mind again. It’s not worth giving up your energy for petty things.
https://medium.com/the-ascent/ive-gone-from-skinny-sick-and-nearly-dead-to-vibrant-and-energetic-244d1c019920
['Tim Denning']
2020-11-29 00:12:55.632000+00:00
['Health', 'Fitness', 'Self Improvement', 'Productivity', 'Food']
AWS Lambda Event Validation — from Zero to Hero
So, you’ve started your serverless journey. It’s new and exciting and there’s lots to learn. You begin with your first AWS Lambda function. Everything looks fine and it just works: your Lambda gets an input event and produces output. However, problems tend to arise when unhandled exceptions and failures are encountered. These prove rather expensive when not dealt with properly, as they can cause unexpected bugs, security issues and costly Lambda retries. In this blog, we’ll discuss how to parse event schemas correctly and how to handle event validation exceptions. I’ll focus on Python, but these guidelines and tips are applicable to any other programming language. Problems? What problems? Let’s observe the Lambda handler below. The Lambda receives the event parameter, which is a Python dictionary. If at this stage you access the dictionary without checking its validity, for the majority of Lambda invocations you will be fine. However, in some cases, the event dictionary might not have the ‘input’ key, the value might not be a list, or the list might not have at least 2 items (we access index #1). An exception will be raised, and it won’t be caught. The first problem is that when an exception isn’t caught in a Lambda, AWS triggers retries (5 times by default), which will fail again and again. Since you pay for execution time, this can really add up. The second problem arises in cases where an exception isn’t thrown, but the values are invalid. Your program could suffer from “minor” side effects, like compromised program integrity, undefined or invalid behavior bugs and even security issues. The third problem is that events are updated or changed by services regularly (especially AWS services) and the event dictionary can contain values which your Lambda didn’t expect. Your code will fail, and that’s ok, but it should fail in the “right” way. What if I told you that you can solve all three problems by combining validation and input constraint checks with one simple library?
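The handler referred to above is not reproduced in this excerpt, but the shape of the fix can be sketched as follows. This is an illustration only: the schema, the field names and the choice of pydantic (v1-style syntax) are assumptions, not necessarily the library the article goes on to recommend.

from typing import List
from pydantic import BaseModel, Field, ValidationError

class InputEvent(BaseModel):
    # Require 'input' to be a list of strings with at least two items,
    # so accessing index #1 is always safe after validation.
    input: List[str] = Field(..., min_items=2)

def handler(event, context):
    try:
        parsed = InputEvent(**event)  # raises ValidationError on malformed events
    except ValidationError as exc:
        # Fail fast with a clear client error instead of an uncaught exception that triggers retries
        return {"statusCode": 400, "body": f"invalid event: {exc}"}
    return {"statusCode": 200, "body": parsed.input[1]}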
https://medium.com/cyberark-engineering/aws-lambda-event-validation-from-zero-to-hero-2ca950acd2ea
['Ran Isenberg']
2020-08-09 06:10:26.992000+00:00
['AWS Lambda', 'Python', 'Validation', 'Software', 'AWS']
Using pyspark with Jupyter on a local computer
Installing Spark on Linux This manual was tested on version 2.2.0 but should work on all versions. I’m assuming you already have Python and Java installed. In order to install Spark on your machine, follow the next steps: Download the tar.gz file from the Apache website (I’m assuming you are downloading to /opt): wget https://www.apache.org/dyn/closer.lua/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz Extract the file and create a soft link to the folder: tar -xvzf spark-2.2.1-bin-hadoop2.7.tgz ln -s spark-2.2.1-bin-hadoop2.7 spark Verify the py4j version (we’ll need it to connect Spark and Jupyter): ls -1 /opt/spark/python/lib/py4j* | awk -F "-" '{print $2}' The output I got was 0.10.4; we will use it later as <<PY4J_VERSION>>. Verify the Python path you use: which python The output I got was /home/nimrod/miniconda/envs/compass/bin/python2; we’ll use this as <<PYTHON_HOME>>. After completing the above, create a kernel json file: mkdir -p ~/.local/share/jupyter/kernels/spark2-local Edit the file: vim ~/.local/share/jupyter/kernels/spark2-local/kernel.json And add the following content (don’t forget to replace the two placeholders): { "display_name": "spark2-local-compass", "language": "python", "argv": [ "<<PYTHON_HOME>>", "-m", "IPython.kernel", "-f", "{connection_file}" ], "env": { "SPARK_HOME": "/opt/spark/", "PYTHONPATH": "/home/Nimrod/dev/theGarage/:/opt/spark/python/:/opt/spark/python/lib/py4j-<<PY4J_VERSION>>-src.zip", "PYTHONSTARTUP": "/opt/spark/python/pyspark/shell.py", "PYSPARK_SUBMIT_ARGS": "--master local[*] --driver-memory 3g --executor-memory 2g pyspark-shell", "PYSPARK_PYTHON": "<<PYTHON_HOME>>" } } Now, you should be able to observe the new kernel listed in jupyter kernelspec list or in the jupyter UI under the new notebook types. Example of the new kernel in the Jupyter UI The current problem with the above is that using the --master local[*] argument works with Derby as the local DB, which results in a situation where you can’t open multiple notebooks under the same directory. For most users this is not a really big issue, but since we started to work with the Data science Cookiecutter the logical structure of the file system puts all the notebooks under the same directory. This will cause an issue every time we want to work simultaneously on multiple notebooks. I looked for a solution for a very long time, and finally Amit Wolfenfeld found one quite quickly. The first step is to install postgresql and make sure it runs! In order to allow pySpark to use postgresql we need the JDBC drivers; download them from here and save them into /opt/spark/jars/. Next, change the user to postgres (sudo su postgres) and run psql: CREATE USER hive; ALTER ROLE hive WITH PASSWORD 'mypassword'; CREATE DATABASE hive_metastore; GRANT ALL PRIVILEGES ON DATABASE hive_metastore TO hive; \q After you run the commands, make sure to restart the postgresql service.
The last step is to create a file under the config directory in Spark (assuming you followed my suggested paths above); the command should be: vim /opt/spark/conf/hive-site.xml Add the following content to the file: <configuration> <property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:postgresql://localhost:5432/hive_metastore</value> </property> <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>org.postgresql.Driver</value> </property> <property> <name>javax.jdo.option.ConnectionUserName</name> <value>hive</value> </property> <property> <name>javax.jdo.option.ConnectionPassword</name> <value>mypassword</value> </property> </configuration> That’s it! Now you can start as many pyspark notebooks as you want.
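As a quick sanity check (not part of the original guide), a notebook running the new kernel should already have a spark session created by pyspark’s shell.py; assuming the Hive metastore configuration above is picked up, something like the following should run without errors:

spark.range(5).show()                                     # trivial DataFrame on the local[*] master
spark.sql("CREATE DATABASE IF NOT EXISTS sanity_check")   # exercises the postgres-backed Hive metastore
print(spark.catalog.listDatabases())                      # the new database should appear in the list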
https://towardsdatascience.com/using-pyspark-with-jupyter-on-a-local-computer-edca6ae64bb6
['Nimrod Milo']
2017-12-21 16:19:53.188000+00:00
['Spark', 'Python', 'Data Engineering']
Dystopian ≠ Realistic
Pessimism’s easier to sell these days. Boomers are the last generation to have witnessed an extended period of growth in quality of life. Millennials have watched their world be overtaken by wave after wave of technology (some of it helpful), information (some of it accurate), and political warfare (some of it lethal), all accompanied by levels of income inequality not seen since medieval times, against the lowering final curtain of the climate apocalypse. Small wonder they are more inclined to trust those who describe a future in which our creations have, like Dr. Frankenstein’s, become our nemesis. Those of us who have lived somewhat longer know how many ways our world is cleaner, safer, and more just than what we had not all that many years ago. Yet we also, worn down by the increasingly brazen predations — and dictatorial ambitions — of a “president” who mocks the very concept of good breeding, may be tempted to succumb to cynicism and despair. Why not climb onto the bandwagon of dystopia? Is that not the only realistic way to view the coming times? Some say yes, but this pandemic scares me, and I want to fight it any way I can. No, not the COVID-45 pandemic. I’m talking about the mental viruses, the pandemic of malignant myth that has been spreading largely unchecked now for several decades. We are meaning makers. That more than any other thing defines us. Human beings are the stories that we tell ourselves and one another. Science fiction gave us tales of people solving problems. Typically, the hero was presented with a challenge so new that his scientific gadgetry was useless and he had to fall back on the first and greatest of all human tools: his mind. More specifically, his imagination. His ability to think outside the box. To survive, he had to overcome his old assumptions. But he was from a race of problem solvers, dreamers who had beaten all the odds to rise beyond the limits of a world oppressed by gravity and air pollution. Homo Sapiens took on all comers, bug-eyed monsters from beyond the Milky Way with unknown weapons and technologies. We were boldly going where no one had gone before, confident that whatever might be out there, we were equal to the challenge. Our adversary now is more insidious than Asimov’s or Roddenberry’s villains. It is dystopia itself, the absolutely unsupportable conviction that my greatest enemy is you. Instead of telling one another stories of how people work together, now we practice sneering cynically at jokes whose punchlines all remind us that what matters is the bottom line. We have bought into the notion that the game is zero-sum — no one can win unless somebody loses. We’ve been sold a bill of goods that tells us might makes right, and only fools look out for anyone but number one. We’ve been conned into believing that the problems of humanity cannot be solved because the problem IS humanity. They tell us we have no choice but to compete for all of the resources necessary just to stay alive, because there’s not enough of everything to go around. Bull. Shit. We aren’t the problem. We are the solution. We are — all of us — resources, value generators. We are capable of feeding all of us, clothing all of us, providing healthcare, education, and opportunities for all of us. The only scarcity is artificial, the scarcity produced by 90% of the planet’s resources being held by 1% of its people. That’s who wants us to believe the reason we can’t have nice things is because our neighbors had too many babies.
That’s who wants us to give up our dreams of real prosperity for everyone. And that’s who’s being served by writers who waste their imagination telling stories proving why we shouldn’t bother using ours. If we want to build a better future, we must first imagine one. Better yet, imagine many. My MFA thesis is going to be about a man who builds a single-structure habitation for 3000 people. He invests in them, providing them with food and education, helping them start businesses, enabling a community to grow and thrive together. No glassy-eyed collectivists, but individuals who know their neighbors are their friends. That first arcology becomes the model for many more, which are the means by which humanity prepares for the inevitable climate disaster, and after cleansing centuries emerges with new wisdom and a new conception of what “human” beings are. That’s my story, and I’m sticking to it. Maybe you can write a better one. I hope you will. But I believe that writers are the ones who can and must reclaim our future. And the only way to do it is to tell some better stories.
https://medium.com/technology-hits/dystopian-realistic-1ddb066bfc13
['Edward Robson']
2020-12-13 18:29:59.688000+00:00
['Society', 'Philosophy', 'Humanity', 'Science Fiction', 'Writing']
The Infection Connection: Vitamin D, Cold Exposure, and the Immune Response.
The Infection Connection: Vitamin D, Cold Exposure, and the Immune Response. Why our flu is seasonal and what we can do about it Why do we get sick in winter? Because our immune systems are compromised by Vitamin D deficiency. by Thomas P Seager, PhD and A.J. Kay Everyone knows that winter is “flu season,” and most people think we are predisposed to sickness because the outdoor temperatures get cold. But cold temperatures are not directly responsible for the annual swath of infections. In fact, the opposite is true. Exposure to cold is a huge boost to the human immune system. A study conducted jointly by the US and Canadian Armies illustrates this phenomenon by describing how deliberate cold exposure can double the number of natural killer (NK) cells in the bloodstream (Figure 1, Castellani et al, 2002). Natural Killer cells are a critical type of white blood cell that act as our bodies’ ‘first responders’ to all kinds of cell damage, including that from viruses, bacterial infections, and even cancer. Figure 1, Castellani et al, 2002 Given the immune system’s ramped-up response to cold exposure, the question remains ‘What is the underlying mechanism for the “seasonal stimulus” (Cannel et al. 2006) on the rates of influenza infection?’ According to Dr. John Campbell, a physician in the United Kingdom with a popular video blog about coronavirus, the answer is ‘Vitamin D deficiency’. The best available science agrees with him. Current dietary recommendations for Vitamin D were established back in the 1960s at levels just above those required to prevent rickets (Papadimitriou, 2017). At that time, when combined with the then-typical lifestyle rife with sun exposure and a diet of whole, unprocessed foods, the recommendation of a lower supplemental dose was sufficient for most people. Today, our lives are different. We live primarily indoors, wear sunscreen when we do go outside, and our highly processed diets are insufficient to provide us with dietary levels of Vitamin D that we actually need — especially in the winter. And we do need them. Increased levels of Vitamin D confer a myriad of benefits (Holick, 2007) including: - reversal of metabolic syndrome - increased bone density - prevention of Type 1 diabetes - protection against common cancers, and - reduced risk of infectious disease The evidence with regard to infectious disease is particularly strong — and timely (Adams & Hewison, 2008). For example, Figure 2 below highlights the inverse relationship between influenza deaths and levels of sun (UVB) exposure in Norway (Juzeniene et al., 2010). The more sun people get (and more Vitamin D is synthesized by their bodies), the fewer people die of the flu. Juzeniene et al., 2010 Even more compelling is the fact that populations in tropical latitudes show no such seasonality in flu infections — because they don’t experience the same variance in levels of sunshine. Figure 3: Temperate latitudes show seasonal variation in viral infection rates that correspond to changes in solar irradiance. Tropical latitudes do not, because the solar variation is minimal (Cannel et al 2006). At Northern latitudes, when the sun is at its lowest angles in the sky and the days are shortest, the UVB solar radiation that reaches the surface of the Earth is insufficient to initiate Vitamin D synthesis in the skin. Consequently, most people in the United States suffer from a Vitamin D deficiency in their bloodstream, especially during winter months. 
To make matters (immunologically) worse, the colder temperatures that prevail in winter cause people to cover more of their skin and stay indoors more often, which deprives them of the immune-boosting exposure to cold winter weather. Our modern environments have created a comfortable existence away from the prophylactic powers of the natural environment by encouraging indoor habitation during the winter months. In addition, clustering people in close quarters with warm, moist air — the conditions we contrive in order to maximize comfort and minimize exposure to the cold of winter — also maximizes the survivability and transmissibility of the influenza virus. It's no wonder we get sick in the winter: we are deprived of the Vitamin D that our immune systems depend on to stay active, we limit our exposure to the immune-boosting cold that the winter months offer, and then we enclose ourselves indoors, giving pathogens access to countless vulnerable hosts. Unfortunately, our susceptibility extends to all kinds of pathogens, not just the flu. Deaths from the flu are not primarily attributable to the flu itself, but to secondary infections like bronchitis or pneumonia, or to systemic complications due to underlying disease processes. One of the great clinical advantages of cold exposure is that the immune-boosting benefits extend to the opportunistic infections that can accompany influenza infection (Gruber-Bzura 2018). Given the simplicity and affordability of Vitamin D supplements and deliberate cold exposure, there's no reason why we should have to experience a "flu season" at all. Getting enough Vitamin D in the winter months, as well as practicing intentional and repeated cold exposure, can give your immune system the boost it needs to minimize your susceptibility to bacterial and viral infections — as well as other types of cell damage — year-round.
https://medium.com/morozko-method/the-infection-connection-vitamin-d-cold-exposure-and-the-immune-response-51a0dc76a685
['Morozko Forge']
2020-05-03 13:24:30.179000+00:00
['Health', 'Infectious Disease', 'Cold Water Immersion', 'Cryotherapy', 'Wellness']
The Literal Pain Of Working From Home
PHOTOGRAPHED BY JESSICA GARCIA. By: WHIZY KIM Complaining about work is normal. In fact, it’s an utterly ordinary thing to do even if you generally enjoy your job. We might groan about a tough project on a tight deadline, an incompetent coworker or an unempathetic manager, low pay, and long hours. But there’s a kind of work-related reality that hasn’t been grumbled about enough: the physical pain of working a desk job, which has only intensified during these long months of working from home. Some of this silence might have to do with the fact that we feel lucky to be able to work from home at all. You may wonder if it’s insensitive to be talking about the pain from working a cushy white-collar job while other workers are actively facing danger. The great COVID work-from-home era has, however, only highlighted how unhealthy our work habits and setups are. We’ve read the many detailed breakdowns of why sitting for prolonged periods of time is unhealthy, but it’s another thing to actually combat that on a daily basis when you have a million tasks to complete and barely enough time to scarf down lunch. Our homes never asked to be offices and many of our cramped apartments were ill-prepared to serve as a workspace, but we’ve made do, whether by purchasing new office furniture out of pocket or by using the bar cart as a desk. We’ve noted that the typical stiffness from a long day of work has evolved into pulsing, radiating pain and wondered, Is this normal? Can I just keep ignoring it and hope it’ll go away? Why do I feel like such a cave-dwelling, vitamin D-deprived fleshbag these days? What follows is not medical advice of any kind. It’s simply a space to vent and share the very real ways sitting and working can lead to chronic pain that you struggle with long-term. As two weeks turned into two months turned into almost a whole year, the makeshift nature of our WFH setups has underscored the fact that crouching over a desk and squinting at a screen for over eight hours a day is no way to live. Ahead, R29 readers share their range of WFH ailments, and what they’ve done to address them. Tamar, 27 Boston “I have a sore lower back and back spasms. The pain feels like I have a metal spike inserted into my lower back. A few weeks ago while bending over to pick something up off the floor, I felt a spasm followed by pain,” Tamar says. “It lasted for a few days and I couldn’t bend down, drive, or sit or stand comfortably. I had to cancel a meeting that was supposed to be in-person, and I had difficulty sitting at my desk to type up reports.” Tamar has switched up her work locations quite a bit while remote. “I used to work at the dining room table, then switched to the couch,” she says. “After my back pain, I started sitting in an office chair that I already owned. I try to minimize sitting for long periods of time on the couch or at a dining table. I use a heating/ice pack if I start to feel any pain. I also learned a few exercises from a friend who is a physical therapist.” “[Before WFH], I already owned a cheap Target desk that I’ve used occasionally with a dining chair. After experiencing the back pain, I went to my parents’ house and brought back a pristine, barely-used desk chair from high school. I had to buy risers for my desk out-of-pocket because the chair didn’t fit under the desk. I use a random book to prop up my laptop so that it’s at a good typing and eye level.” Lori, 38 New York “I’ve always had a tight neck and shoulders from the desk job I’ve worked the last 15 years. 
Since I started working from home, that’s gotten worse, and I’ve started having tension headaches almost daily around my eyes and the base of my neck,” says Lori. “The largest issue is the pain I’m experiencing in my right hip. At first, it only bothered me later in the day, but now it hurts while I’m sitting at my desk, throughout the day, and on weekends. It’s kind of a dull ache that never goes away. I find myself stretching out my workday so I can take longer breaks while still getting my work done, which makes the day drag on. It’s also affecting my quality of sleep. I’m waking up if I’ve been on my right side too long.” “I work at a table I have set up in my spare bedroom,” she continues. “I went into my office and snagged my desk chair and a small rolling table — not sure they know about the table, but they know where I live and I’m not going anywhere! Since we might be home until next fall, I’m considering buying a standing desk that I’ll pay for out of pocket.” “I work for the state, so even if we weren’t in a bad budget situation, they wouldn’t pay for that kind of thing,” Lori says. “I was at my sister’s for almost a week and noticed the pain was almost gone even after sleeping on a couch and carrying around a heavy baby, so it’s definitely my desk setup. I need to change it or I worry I’ll do permanent damage. I’m an active person — biking, snowboarding, hiking — and don’t want this to prevent me from doing the things I love outside of work.” “I’ve gone to my chiropractor for adjustments and we did a session of pilates on the machine to stretch my hips,” she shares. “I’ve been given physical therapy exercises too, but I’m not great at doing them at home on my own. I also use over-the-counter pain meds, apply heat, and CBD cream.” Max, 32 Mexico “I get headaches and dry eyes in the afternoon but keep working until night — not daily, but when I need to work on a project instead of routine work,” says Max. “I have stomach issues because sometimes I forget to eat if I’m really inspired or busy.” She notes that she already suffers from gastritis, esophagitis, and colitis. Max’s also noticed that her neck is stiff as soon as she wakes up these days, but it becomes worse around late afternoon. “Most of this doesn’t really affect my work, but when the headaches get too intense, I need to stop earlier,” she says. She usually works from her home office. “It’s a corner desk that came with the house we’re renting, smaller than I’d like but functional,” Max says. “My husband is making me a new one but hasn’t had the time.” She does admit that sometimes she works from her sofa and “a very few times from bed.” “For the stomach issues, I try to eat better and also take my medication. For the headaches, I sometimes take something for the pain, but not frequently since my stomach is very sensitive. For the neck and back I do nothing, really — I sometimes massage my neck and shoulders during the day or in the shower.” Alexandra, 31 Pennsylvania “I have pain in my neck, mid back, low back, hips,” says Alexandra. She says she suffers from sciatica and piriformis syndrome. “Sitting all the time has really messed me up. Being at home means I move around less and don’t take breaks to walk for coffee or lunch — my office is downtown so I’d usually take advantage of that.” “I’ve always had neck and shoulder issues, but never lower back — and now my lower back is a mess from lack of proper posture and a proper office chair. I didn’t want to splurge,” she says. 
“My eyes also get incredibly tired and dry because it’s all screens, all the time. I’m an attorney, and my days used to be broken up with walking to court or other offices for depositions, and now it’s all on Zoom. I also find that just sitting on my pelvic area hurts.” “I do switch up locations depending on how I’m feeling,” Alexandra says, but adds that she’s currently 19 weeks pregnant. “I go from my desk to the bed to the couch and back to my desk again. I go to a chiropractor once a week and I try to stretch through the day.” “I paid out of pocket sometime in June or July for a desk that folds up, so it can be stored out of the way when needed. It’s currently in our guest room, which will be a nursery,” she shares. “I did expense an extra monitor so I could double screen with my laptop, but that was recent.” Christy, 37 Georgia “I have neck pain and stiffness, as well as lower and mid back pain,” says Christy. “My coccyx bone also hurts despite having a super thick chair cushion. My eyes also get dry and hurt from staring at the computer screen all day.” “I recently moved to a new home and now have an entire room devoted to my home office. I bought a new desk and a new office chair, but the chair isn’t really comfortable — and I bought all of it out of pocket,” she says. “I’ve been to the chiropractor, take ibuprofen daily, and went to my doctor recently and received steroid injections and a steroid pack for inflammation. She also gave me a prescription for low-dose Flexeril,” Christy says. “I also have fibromyalgia, chronic fatigue syndrome, and Ehlers-Danlos syndrome, so I already have a lot of chronic joint pain and medical issues. But I’m very thankful to be able to work from home full-time.” Kaajal, 24 New York “I have lower back pain and stiff hips. The hip pain has been the most challenging, as it’s often painful to even sit. Sometimes the hip pain radiates into my butt and lower back, making it difficult to walk,” says Kaajal. “I work mostly on my kitchen table, as this is where I’ve set up a second monitor but will move to a desk in my childhood bedroom, or the bed when I feel like I need to lie down,” she says. “I chose the kitchen table because I felt too isolated being in my room all day, and there’s an island in the kitchen with bar stools that can function as a standing desk. I’ve been working mostly there and trying to stand two to three hours a day to help with the pain.” Kaajal was both surprised and not surprised by the pain. “I was aware that sitting for extended periods of time shortens hamstrings and hip flexors, which can contribute to lower back pain, but as someone who is young, active, and in good health, I didn’t expect this to impact me as much as it did.” “I didn’t have the money to splurge on a high-end desk chair that I may not be able to use when I move back to a New York City apartment, so instead I got an ergonomic back pillow to attach to my chair, which has been helping me sit up straighter,” she shares. “I also found walking and being active helps so I wake up at 7 a.m. to walk for 60 to 90 minutes — I’m trying to get 10,000 steps. I also do 20 to 30 minutes of stretching or yoga or foam rolling at night while watching TV, just to work the kinks out.” “Unfortunately, my company has confirmed no stipend or reimbursement for office furniture and supplies,” she says. Tami, 23 Nigeria Tami has been working from home since April when her area went into lockdown. 
“After I lost my job, I started freelancing and found a remote job, so I’ve been at a makeshift desk/office at my parents’ since then,” she says. “I fell down a flight of stairs three years ago, and my back has never been the same,” she continues. “The combination of on/off back pain from that and sitting at a desk for hours on end has probably made my condition worse. I feel the pain mostly in my lower back, and it almost feels like it’s coming from my spine. There are days when my back will seize and I’ll be unable to walk upright, sometimes having to crawl just to get to the bathroom.” She says this particular pain has eased a little bit lately. “My dad also bought me an ergonomic chair, which has helped. I feel back pain most acutely after two hours at a desk, so I have a set of alarms on my phone that remind me to stretch,” she says. “I also have recurring knee pain from a childhood illness, and having my knees hang over the chair but not quite reaching the floor has been causing discomfort. The pain feels like my knee is being pulled out of its socket, and my knees start aching after a couple of hours. I elevate my leg on a stool to help with this.” “I work in my parents’ study most often and switch between the sofa and bed on slower days,” she continues. “I’m currently renovating the basement into an apartment, so I’m designating a proper work area there as well as investing in proper work furniture.” She says she wasn’t really surprised by the worsening WFH-related pain. “I thought I would be better about taking breaks,” she says. “I have pretty bad posture from all the phone usage and watching Netflix in bed, but with work, I’m even worse because once I get ‘in the zone’ I can be at my desk for up to three hours.” “There have been quite a few mentions of home office stipends at different team meetings, but there are some constraints that have made management put it off,” Tami says. “A lot of my friends are in school or back in physical offices, so they’re a lot more physically active than I am,” she says, noting that because of this, it’s harder for them to understand her pain. “With those that I can complain about this to, the ones who get it, they usually just remind me that I’m a workaholic — and everyone else is good at taking breaks.” *** Originally published at http://www.refinery29.com
https://medium.com/refinery29/the-literal-pain-of-working-from-home-96cb2183cc8c
[]
2020-12-16 23:30:26.499000+00:00
['Health', 'Working From Home', 'Jobs', 'Wellness', 'Covid 19']
How to Master the Art of Storytelling as a Data Scientist?
How to tell a story? We hear and see a lot of stories around us, and most stories have three things in common: characters, conflict, and a conclusion. For a data scientist, there are three fundamental parts of a story:
Identifying the problem: First of all, it's necessary to identify the problem for your audience. You need to tell them how you are going to collect your data, what the different sources for it are, and whether there is ready-made data you can use to experiment with. Alongside that, you need to make sure that your data is not skewed or biased. You must also have documented strategies to remove bias and other anomalies from the collected data (a minimal sketch of one such check appears after this piece). "Errors using inadequate data are much less than those using no data at all." — Charles Babbage It is necessary for you to outline a clear line of action for the audience at this stage so that they can actually understand both the problem and a roadmap for the solution.
Presenting the solution: After you've identified and explained the problem to your audience and stakeholders, you'll need to present the solution. Your audience may ask you different questions about the data collection process, the exploration, or how you modeled the problem, depending on their level of competency. "Invite your Data Science team to ask questions and assume any system, rule, or way of doing things is open to further consideration." — Damian Mingle If they are technical, they will be more focused on the engineering side of things. However, if they are non-technical, they might be interested in a solution that is cheaper, quicker, and easier to understand.
Impact of the solution: One of the most important parts of storytelling as a data scientist is the ability to relate your solution to its final impact. A solution can have many kinds of impact. It may be a set of predictions that saves revenue for a company, or it can be finding an optimal path for a user to reach a certain place. It can also be an analysis of a climate change issue using Machine Learning that helps in the conservation of the environment. "Hiding within those mounds of data is knowledge that could change the life of a patient, or change the world." — Atul Butte Ultimately, it depends on the type of problem that you are solving and the impact it can have on the lives of the different actors in the scenario. Actors are the people who are directly or indirectly impacted by your solution in one or more ways.
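As a minimal, hypothetical illustration of the skew check mentioned above (the class name and labels are invented for illustration; the article itself prescribes no particular tooling), a few lines of Java can report how records are distributed across labels before any modeling claims are made:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LabelBalanceCheck {
    // Returns the fraction of records belonging to each label, so obvious skew
    // can be surfaced to the audience as part of the "identifying the problem" step.
    static Map<String, Double> labelShares(List<String> labels) {
        double total = labels.size();
        return labels.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
                .entrySet().stream()
                .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue() / total));
    }

    public static void main(String[] args) {
        // Hypothetical labels standing in for a freshly collected dataset.
        List<String> labels = List.of("churned", "retained", "retained", "retained", "retained");
        labelShares(labels).forEach((label, share) ->
                System.out.printf("%s: %.0f%% of records%n", label, share * 100));
    }
}
```

A heavily lopsided split, like the 20/80 one printed here, is exactly the kind of anomaly worth flagging (and documenting a mitigation strategy for) before the story moves on to the solution.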
https://medium.com/towards-artificial-intelligence/how-to-master-the-art-of-storytelling-as-a-data-scientist-1a72eda10f54
['Saeed Ahmad']
2020-12-24 15:57:01.542000+00:00
['Data Science', 'Machine Learning', 'Artificial Intelligence', 'AI', 'Careers']
A Look at the Long-Lasting Java and Big Data Relationship (With a List of Resources Data Scientists Can Use for Java Learning)
Photo by ev on Unsplash Data science is one of the hottest (if not the hottest) jobs of the 21st century. The number of CS students and business science majors who want to know how to analyze data is growing at a wild rate. Right now, "Intro to Data Science" is the fastest-growing class at Berkeley. At Harvard, "Introduction to Statistics" was another hot pick among undergraduates — a change stimulated by the growth of big data and data science. We all understand that there's no end in sight when it comes to data production. Since the 2000s, we have been creating terabytes of data, contributing to the worldwide data deluge. In 2021, the need for people who can make sense of all this accessible information is more acute than ever. That's why the demand for data scientists has spiked dramatically. If you are a computer science enthusiast eager to brand yourself as a big data analyst, you might be confused about what the right starting point is. In this post, I'll explain why I believe that learning Java is one of the most reasonable decisions a data scientist can make and share some helpful resources to fuel your learning.
Data Science Is Here to Stay: 10 Reasons to Learn Big Data
Unfortunately, if you come by a tech forum or data science-related Reddit thread, it's painfully common to hear claims like "Data science will become obsolete in 20 years". I'd say there's no empirical evidence of this happening anytime soon — rather, as BD and data analytics advance, new applications of these technologies emerge. Here are ten applications for big data that can be an excellent motivation to start learning it, even if you work in a field with no direct connection to engineering or computer science.
- Targeting customers. Brands and corporations have long discovered the power of BD and aim to make the most out of the information customers share on websites and social media. As for the political world, big data emerged as a killer weapon in reaching out to voters and promoting senate or office candidates.
- Optimizing corporate internal processes. A growing number of company and talent managers rely on big data to work productively. They use tracking tools and sensors to get access to employee efficiency insights and rely on ML and BD algorithms to make sense of this information.
- Personal life and socialization. The power of big data in online dating has been a hot topic throughout the last decade. Apps like Tinder, OkCupid, and eHarmony proved that it's possible to break matchmaking down to a series of algorithms and predictable scenarios. In the future, the impact of BD in the dating market will likely be even more widespread, helping love-seekers fulfill desires they never knew they had.
- Healthcare and effective treatment. There is a sea of BD applications in the healthcare sector — from leveraging the power of sensors and trackers in wellness to improving the precision of diagnosis and laying the groundwork to facilitate life-or-death decision-making for physicians.
- Increasing the relevance of science and the efficiency of academic research. Top research institutions like CERN heavily invest in data centers for a reason — the insights data analysts provide come in handy in making accurate predictions, identifying research areas relevant to the general public, and broadening a scientist's perspective.
- Improving the performance of athletes. Big data tools have been officially implemented in tennis and soccer to make sure referees don't make blind rulings on a player's mistakes. The NFL uses big data as well to help team managers make calculated decisions regarding scouting, running stadiums, or interacting with fans. Team managers and coaches, too, rely on BD and data analytics to plan athlete training and make sure they don't harm players with excessive or strenuous training.
- Optimizing living conditions. Big data is a frontrunner in improving the quality of urban life. City councils rely on BD tools to monitor the flow of traffic and predict road congestion. Electricity and water consumption sensors help communities use resources efficiently and spend less of the taxpayers' money on maintaining a comfortable living environment in smart cities.
- Trading and finance. Big data brought about a revolution in the world of trading. Right now, most equity trading processes rely on ML algorithms — these help track stock market fluctuations, predict the variations of stock prices, and allow investors to make smart, data-backed decisions. Other than that, big data is widely used to discover promising investment and trading opportunities.
- Education. The usage of big data at schools and universities is progressively becoming the new normal. Smart progress tracking systems (like the one implemented at the University of Tasmania in Australia) allow students and professors to keep track of classwork, collect behavioral insights to help learners develop an effective study method, and help teachers fine-tune their performance in class.
- Entertainment and media. Netflix and Spotify are leading the way in big data implementation in entertainment. The latter relies on Hadoop (a set of Java-based tools) to collect and process user insights. The ability to analyze user data comes in handy, as it allows creating curated track feeds and promotes higher audience engagement.
Case For Java in Big Data
There's no tip-toeing around the fact that Python and R are the standard languages of modern big data. I won't deny that most BD tools have APIs for Python and R, so knowing Java is rarely indispensable for a data scientist. However, there are a ton of Big Data use cases when Java should be one of the languages in your tech stack. You should learn Java for big data if:
- You want to implement a theoretical model developed in Python. In most teams, Java is a preferred programming language for writing production code that allows you to use and scale BD algorithms.
- You want to integrate your project with enterprise tools. In the world of enterprise tools, Java is huge. There are plenty of tools that use the language — so, if you want to integrate your big data with any of those, learning the basics of Java will spare you a ton of stress.
- You want to scale BD projects. Java helps data scientists process more data, support a higher prediction load, and scale complex ecosystems.
- You want to adapt existing enterprise-grade tools to a particular use case.
Why Data Scientists Use Java
Java isn't the newest and hottest language on the market — so it makes sense to wonder why it still has so much impact in Big Data, despite the appearance of newer, more concise technologies. Personally, I (and many of my peers) am drawn to Java both in application and big data development for the following reasons:
- Broad user base. Simply put, Java is popular among my clients, so knowing how to leverage its tools lands me jobs I'd otherwise get "passed" on.
- A lot of learning tools. There are a lot of books, video tutorials, and learning platforms for learning Java. Compared to newer languages, I feel like Java learners have a clearer sense of direction and can create an effective study method relatively easily. Thus, learning Java is worth it even if you will not be using it as a primary language in day-to-day BD tasks.
- Java is the base for the majority of big data tools — Hadoop, Spark, Storm, Mahout, and more. Since the Hadoop ecosystem is so widely used in BD, some developers go as far as to say that "Java IS Big Data".
- Scala is a relative of Java. The backbone of Apache Spark, Scala is essentially a language designed on top of the JVM. That's why learning Java helps developers smooth the transition to Scala (for most it's still rough, however) and become confident Spark users.
- Java is flexible, allowing developers to build a practically limitless tech stack on top of it.
I also believe that Java gets bonus points thanks to its support of scalability and multithreading.
Closer Look At Java-Based Big Data Tools: Hadoop, Spark, and more
Hadoop
Hadoop is a framework that helps data scientists process large datasets. Companies use the tool to aggregate all external data in one system, then group and categorize it. These are the tool's main features:
- Failover support: ensures safe data transfer between slave machines in case one of them shuts down.
- Scalability: each new machine can easily become part of the Hadoop ecosystem.
- Low intensity on hardware: compared to other large-scale BD solutions, Hadoop can run on lower-tier machines, allowing company managers and data scientists to cut hardware costs.
- Local data processing: saves bandwidth and increases the speed of information processing.
Is there a flipside? Plenty: Hadoop is hard to learn and to implement, so a growing number of data scientists prefer to move on to other tools (only 11% of Gartner survey respondents said that they plan to invest in Hadoop). Having said that, the demand for Hadoop is still outmatching the supply. At the time of writing, there are nearly 2,500 Hadoop developer job openings on Indeed. The salaries of Hadoop engineers are worth considering as well — according to ZipRecruiter, the national average is $125,000.
Spark
Spark is a multi-purpose tool data scientists use for just about everything: stream processing, machine learning analytics, and many other processes. In flexibility, speed, and the smoothness of the learning curve, the framework is a huge cut above Hadoop. It's worth noting that Spark is built in Scala, not Java (there's a Java API you can use to be fully comfortable). Even if you set your sights on learning Scala, the good news is that there are plenty of similarities between Java and Scala — I outlined the main ones below.
- Both languages are based on the JVM.
- Commonly used Java IDEs (e.g. Eclipse, IntelliJ) support Scala.
- Both are OOP languages (with Scala going a step further and extending its tools to functional programming as well).
- Developers can reuse Java libraries in Scala and vice versa.
Storm
Storm is another handy tool used to process real-time data streams. The framework approaches streaming similarly to the way Hadoop handles batch processing. Storm has a wide range of applications in big data: ETL, continuous computation, machine learning, and many more. Main features of the framework:
- Flexibility
- Fault-tolerance
- Scalability
- Ease of setup
To understand the range of Storm adoption, it's enough to take a look at some of its adopters: Twitter, Spotify, Alibaba, and many more. "Spotify serves streaming music to over 10 million subscribers and 40 million active users. Storm powers a wide range of real-time features at Spotify, including music recommendation, monitoring, analytics, and ad targeting. Together with Kafka, memcached, Cassandra, and netty-zmtp based messaging, Storm enables us to build low-latency fault-tolerant distributed systems with ease." — Spotify team on using Storm
Learning Java For Big Data: Where to Start
If you can't wait to start learning Java to improve your versatility as a data scientist, it's helpful to have a resource deck for reference. While I am not a huge fan of using multiple learning tools at once, I put together a deck of useful books, courses, video tutorials, and forum threads for those eager to learn Java and use it in BD.
Best Books for Learning Java:
- Introduction to Java Programming and Data Structures — gives a concise overview of algorithms, data structures, networking, and almost every other Java concept. It's one of the fullest and most useful programming resources I have ever read.
- Spring in Action — although Spring isn't Java itself, developers deal with it in most daily tasks. Reading this guide will help you get a clear and up-to-date understanding of Spring programming and save you a ton of workplace stress.
- Head First Java — often used as a textbook in programming classes, it's a top choice for students since the book mirrors most university curriculums.
- Effective Java.
- Clean Code: A Handbook of Agile Software Craftsmanship — it's not a Java textbook per se, but it's beneficial for getting to know best coding practices.
Best Courses for Learning Java:
- Codegym: a well-structured, engaging Java course that covers both the fundamentals of syntax and its more advanced aspects (parallelism and multithreading).
- MOOC course on Java programming. The MOOC format makes for thought-provoking discussions among developers, and a lot of Java learners still have a ton of luck with this course, so I am putting it on the list.
- Java For Complete Beginners — in a little over 16 hours, this Udemy course gives a solid grasp of the fundamentals of the programming language.
Reddit threads:
- r/java
- r/learnjava
- r/hadoop
- r/learnprogramming
- r/javahelp
Conclusion
Describing the evolution of big data, Pearl Zhu said: "We are moving slowly into an era where big data is the starting point, not the end." The growth rate of the field, indeed, suggests that data scientists will be at the core of every project in the future. So, programmers want to make sure that, when the time comes to jump on the BD wagon, they have the skills necessary to stay relevant. The good news is, Java is a programming language a data scientist will not regret learning. Its widespread use to support the big data ecosystem, dominance in writing production code, and popularity at the enterprise level all suggest that Java is here to stay — that's why more developers should start learning it as soon as possible.
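To make the Spark Java API mentioned above concrete, here is a minimal, hypothetical word-count sketch. It assumes Spark 2.x or later on the classpath; the class name and file paths are placeholders rather than anything prescribed by the article.

```java
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

import java.util.Arrays;

public class WordCount {
    public static void main(String[] args) {
        // Start a local Spark session; on a real cluster the master URL
        // would come from the deployment configuration instead.
        SparkSession spark = SparkSession.builder()
                .appName("WordCount")
                .master("local[*]")
                .getOrCreate();

        // Read a text file into an RDD of lines (the path is a placeholder).
        JavaRDD<String> lines = spark.read().textFile("input.txt").javaRDD();

        // Split lines into words, map each word to (word, 1), and sum the counts per word.
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);

        // Write one part file per partition to the output directory (also a placeholder).
        counts.saveAsTextFile("word-counts");
        spark.stop();
    }
}
```

Running with master("local[*]") keeps the sketch self-contained on a laptop, which is enough to see how the lambda-based Java API maps onto Spark's transformations before moving to a cluster.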
https://towardsdatascience.com/a-look-at-the-long-lasting-java-and-big-data-relationship-with-a-list-of-resources-data-123d41668836
['John Selawsky']
2020-12-28 17:56:02.496000+00:00
['Big Data', 'Java', 'Coding', 'Programming', 'Big Data Analytics']
A Weary Nightingale
My heart sings like a weary nightingale
with a rhythm I can never refine,
but I carry within a mournful tale
and yearning for dreams that were never mine.
By the mountains, in search of peace to grieve,
I seek respite from the years filled with scorn,
yet no one had a reason to believe
I could ever fulfill what I had sworn.
But it is the calling of the sublime
that leads me to search for rapturous heights,
even against the bitter foe of time,
I will not lower where I set my sights.
But against all odds, can my soul ascend
or am I stuck with wings I cannot mend?
https://medium.com/song-of-the-lark/a-weary-nightingale-717ab2fae547
['Lark Morrigan']
2020-12-17 19:53:13.938000+00:00
['Mental Health', 'Spirituality', 'Future', 'Poetry', 'Sonnet']
Evangelicalism Makes People “Stupid.” That’s Why They Won’t Care About Trump’s Taxes.
I don’t think that anyone can claim that evangelicals today have true political power. As The Atlantic said back in July, evangelicals don’t have much to show for their betrayal of Christ as they steadfastly hold onto Trump. They took a gamble and they lost, or to be fair, they won a cruel prize of chaos. Regardless of anything they say, is anybody truly happy in 2020? Do everyday people feel a happiness or peace about politics, the economy, racism, policing, or COVID-19? To say there is a great deal of unrest is, of course, an understatement. But it’s not just the Left that’s been left unhappy. Evangelicals are unhappy too. They think they’re fighting in some holy war against evils like socialism. They turn on Netflix and fear they’re losing a cultural war. Trump’s poor leadership has impacted us all. Further divided us. I daresay it’s even served as the devastatingly perfect experiment for some of the issues discussed in the recent docudrama, The Social Dilemma. Director Jeff Orlowski says the internet is undermining democracy, and that lies spread six times faster online than truth. As it turns out, Trump’s presidency and evangelicalism’s blind spots have converged at this frighteningly ideal point with social media where the facts no longer matter because everybody’s perception of the facts have been clouded by what they want those facts to mean. I wish like hell that I could tell you the solution here is easy, but it’s not. There’s a saying that “You can’t fix stupid but you can vote it out.” Even that is too simplistic, though. Americans are wrestling with mental illness, social media influence, and frankly, stupidity, in a way we’ve never done before. Problems in the world are nothing new. Racism, inequality, and all sorts of political unrest are not issues exclusive to 2020. We shouldn’t be surprised by struggles and unrest, but we should recognize no generation gets through life unscathed. And when we think about American history, not to mention world history, we need to remind ourselves that we are in it. We are making history every damn day. Our actions and errors have consequences with the power to reverberate for generations. It reminds me of the Billy Joel song, “We Didn’t Start the Fire.” After all, we didn’t start the fire. So much of what is happening right now was put into motion long before any of us were old enough to make our own choices. Like me, some of you were raised in an evangelical bubble that stunted your cognitive growth. Maybe it made you more gullible or even “stupid” about certain things. Ironically, there’s no shame (or sin) in being stupid. It’s very human and natural to take the path of least resistance and believe what we’ve been taught when we were very young. And in addition to stunting critical thinking, evangelicalism has had a hand in teaching folks to respond, well, positively to authoritarian parenting and leadership. Of course, authoritarian parenting isn’t limited to evangelicals, but it’s quite common among them and Christians in general. As research professor Peter Gray, Ph.D. writes: “People with an authoritarian mindset believe, first and foremost, in obedience to authority. So, of course, obedience is high on their list of ideal traits for a child; but obedience is also high on their list of ideal traits for people in general. Leaders, especially strong, confident leaders, are to be followed. Authoritarians also tend toward simplistic ways of thinking; things are black or white, right or wrong. 
If something is right for one person, it should be right for everyone and everyone should see it as right. They don’t tolerate ambiguity and have little taste for subtlety or dissenting opinions. To an authoritarian, the way to solve problems is to find a powerful, confident leader—a sort of superhero who claims in unambiguous language that he can solve your problems—and then follow that person.” I see this weird sort of lust for authoritarian rule a lot, lately. Arguments supporting Trump aren’t debates at all. They’re generally just screaming matches to “get with the program.” For the Trump-supporting, authoritarian-loving person, the President’s critics are just grasping at straws to bring up taxes. They paint any reasonable and valid criticism against Trump as biased or “fake news,” because those are the only options when you lack appropriate critical thinking skills. While it’s no one’s fault to be raised this way, it is up to each individual to not remain stunted in their ability to think critically. Furthermore, we have a responsibility to ensure that younger generations are equipped with the tools they need to think for themselves and acquire the critical thinking skills they will need to make this world a healthier place. This is a big ask. This is going to take work. There’s a reason why evangelicals so often complain about indoctrination from the Left, like this “lovely” rant my friend ran into on social media about evil… teachers: Screenshot shared to Facebook Image text: teachers!! Teachers are thoroughly indoctrinated by the enemies of America. This started a long time ago, 1930s, when atheist Marxists like John Dewey took over the TEACHER COLLEGES and warped them with lies, taught them to use methods of teaching reading that just confused children (“Look-Say”), and turned the classroom into a psych lab to eliminate the independent thinkers. I have the documentation to prove every word. Why TEACHERS? Because they have the souls of children in their hands. Teachers are “God” to young people, a source of truth, strength, love, and courage -- teachers mold the character as well as give skills for the future. So, teachers are a primary target for being unwitting useful idiots in the grinding down of America. The Marxists knew they could not conquer us with bullets, so they had to be sneaky, sly, and dirty and deceive teachers -- who are mostly women, and women are more easily deceived because their hearts are designed to trust. I don’t know how to fix it. How do we undo the lies that have affected so many? That have caused so many bright, talented people to abandon the living water of liberty and jump into a stinking sewer? I wish God Himself would come down and tell these young folks the truth, but I guess for now, all they get is the “bigots” down the street. This is the sort of stupidity we’re fighting against, people. Evangelical Trump supporters believe they are independent thinkers. Adherence to the narrative that Trump is good all the time and that the Left is out to lead their children astray runs deep and it’s rooted in their fight against critical thinking. As The Social Dilemma points out, social media has allowed us to dig deep into our perception of reality without any regard for its accuracy, or if that perception is even good for us. Only one thing is certain right now. We’ve got a helluva lot of work to do. And Trump’s taxes aren’t enough to break his thrall over evangelicals. Americans have to learn how to be critical thinkers first.
https://medium.com/honestly-yours/evangelicalism-makes-people-stupid-thats-why-they-won-t-care-about-trump-s-taxes-e1c294a08009
['Shannon Ashley']
2020-09-28 19:07:56.878000+00:00
['Education', 'Psychology', 'Christianity', 'Society', 'Politics']
A Doooodle A Day: 9 Months, 2 Sketchbooks and 40K Words Later…
It's time to look back and appreciate all the effort I invested in this personal project and, of course, my accomplishments so far.
Start — "Try it for a month"
After I transitioned from engineering to design, I always heard people (from the design world) talk about storytelling: tell compelling stories to pitch clients; tell stories of users when designing a website; and tell stories when communicating with teams. What did they mean by "tell stories"? Well, I never thought that stories could live in places other than movies, novels and storybooks. Therefore, I found it very confusing when someone asked me to communicate with others by "telling stories" — "Huh? Are you asking me to write a novel or something?" Maybe I'm just slow-minded, but I struggled hard for so long and just couldn't grasp the gist. This was probably the original motive that eventually led me to this project. I dug up books, learning about stories and techniques to make good ones. I joined Toastmasters, a public speaking club that lets me practice communication skills, including storytelling. They have given me good clues on storytelling in general — provide clear main points and context, talk about them in a way (mostly narrative) that your audience can follow, and, depending on the occasion, make your stories more exciting by having special setups, adding twists and turns, and using role play or other techniques. I thought it would be fun if I could do something, as a routine, to keep the practice going. A Learning A Day, a blog I'm subscribed to, was a big inspiration. For years, Rohan Rajiv, the blogger, has been posting daily thoughts on business, technology and a variety of other topics. I decided to do the same — start with daily writing. After all, writing helps organize thoughts and plots, which are important for story crafting. I was able to write for two consecutive days before giving up. I found out that, as a non-native English speaker, this was really hard for me: On one hand, I didn't know what to write about; on the other hand, I just didn't have the desire to write in English every day (I don't even do much writing in Chinese! It's my native language). My daily writing project went nowhere. I felt defeated and sick of writing just for the sake of writing. After that, I took a long break. Then, another idea struck me. I've loved doodling ever since I was a kid. I used to copy characters from comics, draw my own characters, and make up stories among them. I never went deep into it. Over time, doodling had degraded into a hobby that I only occasionally thought of. I thought, hey, how about daily doodles with writing — doodle interesting things, thoughts, or whatever, and write about behind-the-doodle stories? Doodles can be a great tool to help me visualize what to write. So the experiment began, with a name, "A Doooodle A Day", a sketchbook, and a black pen. I cleaned up my abandoned Instagram account and started posting there every day. I was afraid of not being able to do it long enough, so I stuck a note onto the sketchbook cover — "Try it for a month". I knew from past experience that, if I took this project too seriously and stressed myself out, I would not be able to do it for very long. So, I'm very flexible about the project workload: If I have plenty of time, I will doodle more carefully and write more; if not, a simple doodle and one line will do. At the beginning of the project, action was most important, not necessarily how well I drew or how long I wrote. I was satisfied as long as I did it.
I still stick to this principle to this day. Every day, I captured at least one interesting topic, doodled it and did some writing in English. 30 days passed without me noticing. I won!!! My first doodle on 07/19/2015 (view story): Wedding ring shopping kick-off (view story): Another month passed, then another and another. Finished my first sketchbook. I was on fire. More months passed.
Level up
Starting out, I didn't have any particular topics or themes to doodle, mostly just capturing interesting things in life. After my first sketchbook was done, I felt like writing about other topics, such as observations and thoughts on design, work, tech and others. Of course, life happenings were still important, but I began only capturing the ones that were interesting enough and worth memorizing. My topic coverage expanded, a lot — I even used it for book reading notes. Now, this is still the direction that I'm going with. Thoughts on a housing kit (view story) Thoughts on Tesla's new Model 3 (view story) Since writing is my major focus, typically I'm not willing to spend more than 30 minutes on the "doodle" part. But, once in a while, I like to take some time to draw a topic I love. I feel great when I flip pages and see my old cool doodles. Millennium Falcon from Star Wars (view story) A doodle to celebrate Vincent van Gogh's birthday (view story) Interesting learnings on how a perfect food shot is done (view story) Am I ever out of topics? You bet. My brain is drained pretty often. If I have nothing to draw, I browse news to find interesting topics. Going to new places and attending interesting events, like museums, also feeds juice to my brain.
Connect the "dots"
My diligent work in the past 9 months has brought me close to 300 doodles, as well as more than 40K words in writing. I can tell that I'm getting better at both doodling and English writing. Most importantly, when I approach a topic, I can better organize my thoughts and articulate them. Steve Jobs said: "You can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future." Right after I started A Doooodle A Day, I couldn't help but think more than once: Is this thingy really gonna be helpful to me? The answer was always: "I don't know. I'll see how it goes". Recently, I've been working on designing an enterprise application. At one point, I needed to write how each potential user type would use the application in high-level idealistic scenarios. Only after I put together draft scenarios of 3000+ words (and on target) in one day did I realize how much my doodle project has helped me — I couldn't even have imagined I could write that much English that fast. My doodles about life happenings made their mark, too. I picked some memorable ones and printed them into magnets and cute photo books. How nice that I can keep them in such tangible ways! By the way, I highly recommend Social Print Studio and Chatbooks for their excellent products! Magnets: Photo books: Other than that, I doodled as I read books. After reading two books, "Articulating Design Decisions" and "Essentialism", I had a nice collection of 15 doodles. I strongly recommend Articulating Design Decisions to any designer who wants to take their communication skills to the next level, and recommend Essentialism to people who are looking to achieve more in life. Book reading doodles for "Essentialism":
On to the next sketchbook
A few days ago, I started my third sketchbook.
One of the reasons why I'm able to keep "A Doooodle A Day" going is that I've found a method that works for me: doodling + writing. I feel very lucky. So far, it is one of the few things I can proudly say that I do persistently. Most others ended up nowhere. When I was a kid, I learned to play electric piano and took formal painting lessons, but it didn't take long before I stopped. Last year, one of my New Year's Resolutions was to do 20 push-ups every day; well, I did it for maybe a week and totally forgot about it after that. I tried keeping a diary, but I was never able to write day by day for more than a week, even though I started one many times. I had many personal projects that I was not able to keep on doing. Sometimes, I feel really sad for not being able to continue those projects. What if I had finished some of them, or mastered some of the skills I started learning but stopped? If those "what-ifs" were not just "what-ifs", what big accomplishments would I have achieved already! However, another thought hits me: Shouldn't we just keep exploring, experimenting and looking for what we love and can do? That's real life, right? A Doooodle A Day has been so much fun. For now, it's my way of leaving some marks in my life. Follow my A Doooodle A Day at instagram.com/yingyingzux
https://medium.com/the-100-day-project/a-doooodle-a-day-9-months-2-sketchbooks-and-40k-words-later-ea1f417085b2
['Yingying Zhang']
2016-04-13 16:22:59.582000+00:00
['Writing', 'Storytelling', 'Art']
Can’t Sleep? This Tech Could Put You in Sync with the Sun
Can’t Sleep? This Tech Could Put You in Sync with the Sun A new device claims to go beyond sleep tracking to reset your circadian rhythm. I’m dreaming too much. That’s what Fares Siddiqui, cofounder of the company Circadia, tells me after its sleep tracker spends several nights perched at my bedside. When I first saw the long stretches of REM sleep — the stage of sleep when dreaming happens — in my data, I romanticized the results. I dream big, I thought. But Siddiqui says the pattern is a result of being either sleep-deprived or anxious. Oh. Circadia is a startup focused on circadian rhythms, and the promise that if you can understand and control your daily patterns, you’ll sleep better: “a sleep lab on your bedside table,” pledges its marketing material. Most sleep trackers — devices on your wrist, on your mattress, or at your bedside — track your tossing and turning along with functions like your heart rate to tell you how much and how well you’ve slept. But typically, they don’t tell you what to do about it. Siddiqui’s company, funded by healthy Kickstarter and Indiegogo campaigns, is developing a connected tracker, lamp, and app. It aims to set itself apart from the current wave of sleep trackers by offering both information on your own personal rhythm, and customized advice. “We want to tell you what time it is inside of your body,” Siddiqui says. He became passionate about the topic after dealing with his own insomnia and learning about NASA’s light experiments to help astronauts’ circadian rhythms. Your circadian rhythm, your body’s natural 24-hour cycle, affects everything from sleep and jet lag to hormones and how well your drugs work. But different people’s internal clocks may run a little ahead or behind — maybe it’s midnight in your body when the clock says it’s only 10 pm. For me, I’m hoping some circadian insight can help me feel more refreshed in the mornings. Other circadian-curious people might need to adjust to jet lag or shift work, or identify bad bedtime habits that are keeping them awake. To learn about my own circadian biology, I let a premarket version of Circadia’s tracking device watch me sleep, I breakfasted by the glow of its lamp, and I gave personal details to its sleep-coaching app. I got an intriguing glimpse into the functioning of my body. But when it came to understanding the significance of my personal patterns, I was mostly left in the dark. Surfing the wavelengths In December, if all goes as planned, you’ll be able to buy Circadia’s $129 sleep tracker, which will be integrated with its lamp and app. For now, a rudimentary version of the app is free, and the lamp sells as a standalone for $79 — but it is a very handsome lamp, a sleek cylinder of blue light that morphs to red when you flip it over. Those wavelengths are intended to reinforce my 24-hour rhythm, helping me sleep at night and be more alert during the day. When I lie down in bed, I’m supposed to leave that dim red light on for half an hour (even though my eyes are closed) to help myself fall asleep. The instructions also say 30 minutes of blue light in the morning will alleviate grogginess, so I eat a few breakfasts with the lamp lighting up my raisin bran. Groggy mornings? Flip the lamp for blue light, which may help you wake up. Courtesy of Circadia I can’t tell if it makes me feel more awake, but according to Sabra Abbott, a neurologist at Northwestern University Feinberg School of Medicine, the ability of blue light to promote wakefulness and adjust our body clocks is well established. 
Blue wavelengths in sunlight naturally help our brains calibrate our clocks by preventing production of melatonin, a hormone that makes us sleepy. That’s why experts tell us not to stare at our phones, which emit blue light, at night in bed. Abbott uses blue-wavelength light therapy and melatonin to treat patients with circadian disorders, who might naturally fall asleep very late or wake long before dawn. Light therapy is powerful enough that those with circadian disorders should be cautious with its timing, she adds. Someone whose clock is so shifted that they only fall asleep near dawn, for example, could make things worse by using blue light in the morning. “We want to tell you what time it is inside of your body,” Siddiqui says. That said, she doesn’t know of any reason to turn on a red lamp while you’re falling asleep. “It’s not so much the presence of red light that’s helpful, but the absence of blue light,” she says. Siddiqui says red light prevents melatonin suppression, which is true — but it’s no better than being in the dark. If you really wanted to take advantage of red light (and didn’t mind the creepiness factor), you could do all your evening activities by red light only. But Circadia’s red light, by design, is too dim for that. I got yellow-zoned The second part of Circadia’s setup is a sleep tracker, an elegantly designed hand-sized disc that snaps magnetically onto a stand. Siddiqui says it scans my body with radar looking for tiny movements to infer my heart rate and breathing. From that, it figures out which parts of the night I spent in wakefulness, light sleep, deep sleep, or REM sleep. In the company’s own comparison testing, Circadia outperformed wearable devices like the Fitbit. I follow directions to set it roughly an arm’s length away from my bed and aim it at my torso. After some fussing on my phone to connect the tracker and app over wifi (so much for avoiding blue light), I hit “start” and lie down. A radar-based device tracks heart rate and breathing. Courtesy of Circadia The first night, I feel self-conscious with the tracker staring at me. In the morning the app — a beta version that isn’t yet publicly available — says it took me 43 minutes to fall asleep. Even after I get used to the tracker, the app seems to chastise me every morning, displaying a circle about half-filled with yellow and a middling “sleep index” score. The app also shows a timeline of my night that seems generally correct: It takes me a while to fall asleep. I sleep deeply at first, then shift into REM sleep in the early morning. As my husband gets ready for work, I alternate between dreaming and dozing. Siddiqui says that later versions of the app will tell users how their circadian clocks align with the outside world. It will deliver personal recommendations for using the lamp and for changes to habits and sleep environments, so that people can recalibrate their internal clocks, sleep better, or combat jet lag. In the spring, users will also be able to sign up for advice from a human sleep coach or therapist. For now, my only feedback comes from Siddiqui, who notices me waking up often. He also tells me that while an average person spends about a quarter of the night in REM sleep, for me it topped 40 percent on some nights, and 57 percent on one especially dreamy night. My body may be trying to catch up on missed rest by sacrificing deep and light sleep for extra REM. But Abbott doesn’t think I should read too much into my results. 
Sleep tracking is an imperfect way to tell the time on someone’s internal clock; the best way is to measure melatonin production. In its most recent lab tests, Circadia was about 67 percent accurate at telling what sleep stage a person was in. So far, those lab tests have included only a small number of young, healthy males — not anyone with an actual sleep disorder. Besides, people spend different amounts of time in certain sleep stages for many reasons, including normal variation and drug side effects — antidepressants reduce REM, for example. Circadia claims 1 in 3 people have rhythms that are out of sync, but Abbott says this is hard to know. Everyone falls somewhere on a spectrum from early bird to night owl, she says, which isn’t a disorder unless it interferes with life. But being told by an app that their sleep is abnormal might make people needlessly anxious.
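For readers curious about the arithmetic behind those stage figures: the REM percentages Siddiqui quotes are just time-in-stage fractions computed over a night of per-epoch labels. The sketch below is a minimal illustration in TypeScript, not Circadia's actual data format or algorithm; the 30-second epochs, the stage names, and the hypothetical night are all assumptions.

```typescript
// Minimal sketch: summarize a night of per-epoch sleep-stage labels.
// Epoch length (30 s) and stage names are assumptions, not Circadia's real format.
type Stage = "wake" | "light" | "deep" | "rem";

interface NightSummary {
  minutesAwake: number;
  percentOfSleep: { light: number; deep: number; rem: number };
}

function summarizeNight(epochs: Stage[], epochSeconds = 30): NightSummary {
  const counts = { wake: 0, light: 0, deep: 0, rem: 0 };
  for (const stage of epochs) counts[stage] += 1;

  const sleepEpochs = counts.light + counts.deep + counts.rem;
  const pct = (n: number) => (sleepEpochs > 0 ? Math.round((100 * n) / sleepEpochs) : 0);

  return {
    minutesAwake: (counts.wake * epochSeconds) / 60,
    percentOfSleep: { light: pct(counts.light), deep: pct(counts.deep), rem: pct(counts.rem) },
  };
}

// A hypothetical, REM-heavy night of 30-second epochs (8 hours = 960 epochs).
const night: Stage[] = [
  ...Array<Stage>(86).fill("wake"),   // roughly the 43 minutes it took to fall asleep
  ...Array<Stage>(330).fill("light"),
  ...Array<Stage>(170).fill("deep"),
  ...Array<Stage>(374).fill("rem"),
];

console.log(summarizeNight(night)); // REM comes out above 40% of sleep time here
```

On this made-up night, REM lands in the low 40s as a percentage of sleep time, roughly the pattern described above; whether such a number means anything is, as Abbott notes, another question.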
https://medium.com/neodotlife/cant-sleep-this-tech-could-put-you-in-sync-with-the-sun-d8125c4a0a3c
['Elizabeth Preston']
2020-07-13 20:35:40.711000+00:00
['Wellness', 'Self Improvement', 'Technology', 'Sleep', 'Health']
The Seduction of Perfect Memory
Did you forget anything recently? Your keys maybe. Or someone’s name. You’re in good company. A typical person forgets about 4 things a day. Most are minor lapses, but then there are the big ones: a third of us have forgotten a partner’s birthday, and 1 in 5 dads has forgotten to pick up the kids from school. A couple of years ago, I experienced one of the big ones. I forgot to call my Dad on his birthday. We live in different countries, so that call is really important. He expected me to call. And I didn’t. If I close my eyes I can see the disappointment on his face when he went to sleep that night, wondering why his eldest child had forgotten about him on his birthday. Makes me nauseous just thinking about it. The thing is that I had been making constant mental notes about his birthday in the days before, but on the day itself my brain just took a holiday. There was a complete erasure of the most dominant thought in my mind. This act of forgetting, no matter how unintended, feels like a betrayal of the bond between father and son.
https://medium.com/descripter/the-seduction-of-perfect-memory-4921159d2fa5
['Craig Brett']
2020-11-11 23:21:57.494000+00:00
['Life', 'Technology', 'Future', 'Self', 'Science']
Running and Blisters: Coping
Fitness Running and Blisters: Coping Most runners have had a painful foot blister. Today we look at risk reduction and management. I walk a lot. And occasionally do some running (typically alternating with walking). Today, I have a small blister on the underside of my foot. Being ever curious, I began to explore the available literature about foot blister risk reduction and management. Have you ever had a blister that ruined a run or a race? Or that just nagged at you at a low level for a couple of days? What causes a foot blister? If you have formed a small pocket of fluid on some part of your body, you may have had a blister. These bubbles vary in size and have many causes. For example, you may get one after infection, trauma, or even an insect bite. Its location can influence the effects on your quality of life. For example, foot blisters may lead to challenges in walking, standing, or exercising. Let’s turn to some of the causes of blisters. For blisters on the foot, friction is often the causative agent. Walk or stand for hours, and you put pressure on various parts of your feet, including the heels, toes, and soles. Poorly fitted shoes may result in fluid-filled bubbles, whether the shoes are too tight or too loose. The resultant friction causes a fluid buildup just beneath the upper layer of the skin. Other potential causes include: an allergic reaction, chemical exposure (for example, to cosmetics or detergents), chickenpox, eczema, frostbite, and infection (fungal, bacterial, or herpes). How do you know if you have a blister? Usually, blisters on the feet that result from friction will spontaneously resolve after a few days. If they don’t heal over time or in response to treatment, please see a valued healthcare professional, particularly if you have severe pain, fever, nausea, or chills. You may have an infection. A doctor or other healthcare professional may drain the blister using a sterile needle. If she suspects an infection, the fluid can be examined to get to the root causes. Got any home remedies? Why yes. First, don’t pick at the blister or burst it. If you do so, you may cause an infection. Many blisters that are left alone will eventually harden and go away. However, in the interval, you may experience discomfort, depending on the size and location of the lesion. Instead of bursting it, Healthline suggests these steps to properly drain a blister at home: Wash your hands with warm water and antibacterial soap. Using a cotton swab, disinfect a needle with rubbing alcohol. Clean the blister with antiseptic. Take the needle and make a small puncture in the blister. Allow fluid to completely drain from the blister. Apply antibacterial ointment or cream to the blister. Cover the blister with a bandage or gauze. Clean and reapply antibacterial ointment daily. Keep the blister covered until it heals. Blister risk-reduction If you are in a race, check in at a medical station. Make sure you have properly fitted shoes. If your feet rub against a certain area of your shoe, wearing an insole may provide extra padding and reduce friction. Leave a bit of room in the toebox, and learn what amount of lacing is optimal for you. Check your socks.
Many prefer ones made specifically for running, including WrightSocks or synthetic-fiber (non-cotton) brands such as CoolMax to reduce moisture. Running socks are often shaped better for those who jog or run. Try some smooth socks with no seams or even double-layer socks. Keep your feet dry. Some apply foot powder, while others wear moisture-wicking socks designed for sports. Keep your calluses. Don’t remove them with an emery board, for example. Consider tape or pads. You may wish to try moleskin or athletic tape over high-risk areas that are prone to blisters. If you try this approach, avoid wrinkles, and don’t apply it too tightly. Use lubricant. Substances such as BodyGlide or vaseline on high-risk areas may help, but keep the amount small. We don’t want your foot slip-sliding around. If powders, lotions, or soaps trigger blisters, stop using them. If a medical condition triggers the blisters, consult a valued healthcare professional.
https://medium.com/beingwell/running-and-blisters-coping-7073f8320730
['Michael Hunter Md']
2020-12-26 17:18:40.958000+00:00
['Wellness', 'Fitness', 'Lifestyle', 'Health', 'Exercise']
Letter to My Body
To my chronically ill Body, For too long, my love for you was only skin deep. I saw you purely as an aesthetic object — a poor one at that — and you were valued only as far as my self-criticism allowed. I am ashamed to say that for most of my life, I haven’t been kind to you. I have not always given you the love you deserve. In fact, I have been downright cruel to you, in thoughts, words, and deeds. I have spent so much of my life at war with you, disgusted that you don’t contort yourself into molds that don’t belong to you. At various times, I have cursed the fat on my stomach, hips, thighs, and arms; the hormonal blemishes on my face; the stretch marks and cellulite constellations spreading under my skin; the frizz and waves in my hair; my seeming lack of symmetry. For too many years, I hated mirrors because when I looked in one, I couldn’t help but tally all the ways in which you disappoint me — all the ways in which I disappoint myself. So I avoided my reflection, staking claim to a strange disassociation from you. I ignored you — and you were so easy to ignore you unless something was wrong. But you were determined to be heard: to show me the unnecessary ugliness I felt about you and myself, you asked me to pay attention every time I got a skull-crushing migraine, every time my allergies made it difficult to breathe, every time I had a painful IBS episode, every time the neuropathy in my hands stung, every time I felt discombobulated by hidden depression and anxiety. Then the biggest “something wrong” happened three years ago: trigeminal neuralgia (TN). There was no way for me to prepare for how my life would change, especially how TN would change my relationship to you. Forgive me, Body, because I didn’t see all that you have done for me and all that you do for me. I didn’t accept you for what you are. It took undeniable chronic illness for all that to change. You see, Body, chronic illness showed me that you are a proud warrior. You are magnificent, strong, and resilient in ways I was too blind to before. Every second of every day, you fight for me. Using your inherent intelligence, you are constantly regenerating yourself. You keep entire galaxies of cells and systems alive within your borders. You do your best to protect me from the relentless pain TN assaults my body with, sometimes while also fighting the other wrongs you’ve been afflicted by. And when it all becomes too much, you tell me you’re tired — so, so tired — and that you need rest. With all the compassion you can muster, you remind me that it’s okay to rest. Today, I look in the mirror and see my home. The home that I was born into and the one I will someday die in. I’m more able to find the beauty and humor in your imperfections because I see how you are ever-changing. You are never just one thing at any given time. There is freedom in that, even within the walls of pain. You may be a defective model, but you’re my defective model. It’s taken me too long, but I will never again take this truth for granted: you will always look out for me. I know you will never stop reminding me to look after myself, to focus on feeling good, to trust the wisdom etched in your cellular makeup, to love who I am in this moment and in the next. I will build shrines to you daily. We are both works-in-progress, but that doesn’t mean we’re unworthy of respect and worship. There is no more war between us; there is just the peaceful flow of gently co-existing, of supporting and loving each other for all the days of this life. 
All there’s left to say is thank you, thank you, thank you. Love, always, from your ride-or-die tenant. Note: A version of this piece was first published on my blog on August 5th, 2020.
https://medium.com/curious/letter-to-my-body-a780932b23c3
['Nisha Kumar Kulkarni']
2020-08-24 18:13:17.405000+00:00
['Health', 'Chronic Illness', 'Mental Health', 'Self Love', 'Body Image']
The Race To Find A Cure For Aging
The Race To Find A Cure For Aging Learn about three pioneers working to turn back the clock We want to look & feel young again, and every year we spend hundreds of billions of dollars on beauty serums, cosmetic surgery, and exotic supplements in the hopes of appearing more vibrant, healthy, and desirable. All of those products, procedures & pills only cover up the symptoms of aging — they do nothing to address the cause. While medicine does help us to live longer, at best it has only slowed the ravages of time, and an aging population is driving demand for alternatives to the gradual decline into senescence. Aging, once thought to be inevitable, is being challenged. For the first time in history, biomedical innovators are starting to view it through a disease model, and not as an inevitability of life — and medical science is working to find a cure. Here are three stories of people from different walks of life who share a singular goal — they’re actively working to extend their own lifespans, and sharing what they’ve learned on how to achieve it: David Sinclair & NAD+ Dr. David Sinclair says the solution is to get your NAD+ levels up — and he’s offering detailed, practical advice on how to do it. In lengthy interviews with Joe Rogan & Rich Roll, as well as his recent book, he discusses the health benefits of intermittent fasting, limiting sugar & red meat, and eating plenty of vegetables — but for Sinclair, that’s only the beginning. Sinclair is an award-winning Australian biologist, professor of genetics, and Founding Director of the Paul F. Glenn Laboratory for the Biological Mechanisms of Aging at Harvard University. His team of 30+ scientists is deeply engaged in studying the mechanisms involved with aging & senescence, and treatments to potentially reverse them. One of the promising life-extension candidates they’ve identified is Metformin — an inexpensive blood sugar medication that may extend the human lifespan by as much as 10%. In addition to Metformin, Sinclair is bullish on the prospects of NMN (nicotinamide mononucleotide) for life extension. This vitamin B-3 derivative converts easily into NAD+ inside your cells, which is claimed to improve cellular function and offer rejuvenating effects seen in human clinical trials. Sinclair claims to have reversed aging in lab mice, and also claims to have “knocked more than two decades off his biological age”, as well as boasting online that he has the lung capacity, cholesterol and blood pressure of a “young adult” and the “heart rate of an athlete.” If he’s right, aging can be reversed with NAD+-boosting supplements — and that’s a big step toward a cure for aging and the diseases that come with it. Elizabeth Parrish & Telomeres Others, like Elizabeth Parrish, the CEO of BioViva Sciences, have taken a different route: Parrish underwent experimental gene therapy to lengthen her telomeres & reduce muscle wasting back in 2016, and claims her health has improved since the treatment. According to Wikipedia, “independent testing by SpectraCell Laboratories had revealed Elizabeth Parrish’s leukocyte telomere length had been extended from 6.71kb to 7.33kb” — but in 2018, she reported further lengthening in her telomeres up to 8.12kb, along with an overall growth in muscle mass. A telomere is a region of repetitive nucleotide sequences at the end of each chromosome that protects it from damage — and telomeres get shorter as we age, leading to a variety of aging-related diseases.
The initial 10% increase of Parrish’s telomeres has been roughly compared to her cells becoming 20 years younger. However, critics such as Dr. Bradley Johnson at the University of Pennsylvania have questioned her results, stating, “Telomere length measurements typically have low precision with variation in measurements of around 10 percent, which is in the range of the reported telomere lengthening apparently experienced by Elizabeth Parrish.” Jim Green in 2007 (on left) and in 2019 (on right) Jim Green & TA-65 Meet Jim Green, patient zero in a “one man experiment in radical anti-aging”. He lacks the Sinclair team’s funding and can’t bioengineer retroviral delivery systems like the Parrish team, but what he lacks in budget he makes up for in courage, innovation & perseverance. A few years back, Jim decided to tackle aging head-on and started doing intense research into published scientific papers on aging, cellular senescence, and supplements, which led him to a rigorous health regimen that he claims has literally reversed his aging. Jim’s published a collection of links and notes to all of his papers online, and from talking with him personally several times I can tell you that he’s been more than diligent about his research. Josh Mitteldorf also interviewed him recently, and in that interview Jim talked at length about his use of first a nutraceutical called TA-65 and later Astragalus Root Extract as a telomerase activator to “give new life” to old cells. Jim has taken the hard road: consuming copious amounts of Astragalus extract along with countless other supplements, and following a daily exercise routine that’s visibly reversed most signs of his aging — including seeing his gray hair regain its youthful color (no, he doesn’t dye it; that’s natural). Conclusion Rather than trying to hide the signs of aging with makeup or plastic surgery, innovators like Sinclair, Parrish & Green have taken action to turn back the clock in the hopes of not only living longer — but also living better. Sinclair has spoken numerous times about aging leading to a “tragic loss of human capital & potential that up until now we’ve taken for granted”, but if the research that these innovators are pursuing bears fruit, then it may no longer be our inevitable fate. Whatever the results of their experiments may ultimately be, their research alone is a testament to our shared desire to stop the sands of time from passing & make the most of every moment that we have.
https://medium.com/discourse/the-race-to-find-a-cure-for-aging-98676b0318dc
['Tim Ventura']
2019-12-26 18:18:13.683000+00:00
['Health', 'Futurism', 'Aging', 'Science', 'Medicine']
What You Should Know About High Blood Pressure
What You Should Know About High Blood Pressure Despite a new understanding of the largely preventable disease, deaths from the ‘silent killer’ are steadily climbing Photo: annebaek/Getty Images High blood pressure is often called the “silent killer” because its first symptoms are typically serious: a heart attack or stroke. Deaths related to the disease, also called hypertension, are on the rise in the United States at a time when the scientific understanding of the condition — and the very definition of it — is changing dramatically. Hypertension’s death rate, adjusted for age, increased by 45% from 1999 to 2017, a new study in the Journal of the American Medical Association (JAMA) finds. And total deaths from heart disease, stroke, and diabetes — for which hypertension is a significant risk factor — are rising as the population grows and ages. Collectively, these four so-called cardiometabolic diseases make up the single leading cause of preventable death in this country. Between 1999 and 2011, advances in diagnosis and treatment contributed to a decline in death rates for cardiometabolic diseases. But they are no longer enough to combat the rise, the researchers say in the new study — arguing that the focus must now shift more to prevention. “Our findings make it clear that we are losing ground in the battle against cardiovascular disease,” says study leader Sadiya Khan, a cardiologist at Northwestern Medicine and assistant professor at Northwestern University Feinberg School of Medicine. The root causes of high blood pressure, Khan says, are physical inactivity, poor diet, and the obesity epidemic — factors that feed off each other and which have become part of life as we know it in a world of desk jobs, extensive screen time, and junk food diets. Redefining “high blood pressure” With hypertension, blood pushes too hard against vessel walls. There are two measures: Systolic pressure, the upper number, is the peak of blood pressure reached when the heart muscle contracts. It’s a measure of how hard the heart works. As arteries become hardened or constricted, the pressure increases and the heart struggles to nourish the body. Diastolic pressure, the lower number, is the lowest pressure reached in the arteries as the heart rests between beats. Similarly, a higher diastolic reading indicates less efficient arteries. In 2017, the American College of Cardiology and the American Heart Association announced new, lower numbers for the hypertension threshold: 130/80 versus the previous minimum of 140/90. In other words, the new guidelines put you in the high blood pressure category if your numbers are 130/80 and higher. The shift, based on a review of hundreds of studies and clinical trials, was profound, resulting in nearly half the country being put on watch. The number of U.S. adults with high blood pressure instantly jumped from 32% to 46%. Another important shift in thinking occurred last year. Health experts had long thought that it was normal for blood pressure to rise as a person ages. But a study in JAMA, drawing from data on 1,252 people who’d had their blood pressure checked every two years from 1948 to 2005, found that regardless of age, blood pressure tends to be stable, unless the top number creeps up to around 120 to 125. At that apparent threshold, some sort of “vascular remodeling” seems to happen, says the study’s senior author, Susan Cheng, a physician in the division of cardiovascular medicine at Brigham and Women’s Hospital. 
The arteries adapt until they “may eventually reach a point at which they give way to the pressures, the arterial walls stiffen in their efforts to compensate, a tipping point is reached, and blood pressure starts to rise,” Cheng explains. The rise can then be rapid, the study found, to 140/90 and beyond. And the rate of rise is the same for everyone, regardless of how old they are when they reach the tipping point. Cheng attributes the rapid rise to the same old things: poor diet, lack of physical activity, and the everyday stresses of modern life. She notes that in remote tribes in the Amazon, where people live without any of these risk factors, blood pressures remain mostly low and stable as people age. The earlier a person’s rising blood pressure is noticed and addressed, “the more reversible it may be,” Cheng says, stressing however that this possibility hasn’t been proven yet by research. Both numbers matter now Meanwhile, health care professionals have long told people to be concerned mostly with the upper blood-pressure number. You can now worry about both. In what’s billed as the largest study of its kind, researchers recently examined more than 36 million blood-pressure readings from 1.3 million people, along with their health outcomes over time, detailing the results in July in the New England Journal of Medicine. Each measurement, when higher, predicts an increased risk of heart attack or stroke, the researchers say. Lead author Alexander Flint, a stroke specialist and adjunct researcher at Kaiser Permanente, gives this example from the results: A modestly elevated systolic measurement of 136 conveys an additional 1.9% risk of heart attack or stroke. A similarly modestly elevated diastolic reading of 81 also conveys an additional 1.9% risk. A blood pressure of 160/96 packs increased risks of 4.8% and 3.6%, respectively. These risks might seem small, but that’s because they were calculated only for the general adult population. Among older people or people who smoke or have diabetes along with high blood pressure, “the risks caused by hypertension are much higher,” the study found. Broadly, other research reveals that for every 20 points of systolic pressure increase or 10 points of diastolic pressure increase, the risk of death from heart disease and stroke doubles. “The take-home message is that both blood pressure numbers — both the top [systolic] and bottom [diastolic] values — matter when it comes to diagnosing and treating hypertension,” Flint says. What you can do Experts at Harvard, the Mayo Clinic, and elsewhere offer much the same advice for preventing or treating high blood pressure: engage in moderate physical activity, eat lots of fruits and vegetables, maintain a healthy weight, cut down on salt and booze, avoid processed meat, and don’t smoke. Deep breathing and other relaxation techniques have been shown to lower blood pressure, too. A set of studies earlier this year suggests a daily cup (eight ounces) of blueberries can help lower blood pressure and improve heart health. Flint suggests something else: “There are no valuable generalizations about which specific medications or interventions are best for managing hypertension,” he says. “The single most important thing that a person with high blood pressure can do is to have an ongoing therapeutic relationship with a primary care provider.” Treatment must be individualized, Flint says.
“Of all of the factors assessed in general physical exam, particularly in a routine screening visit in an otherwise healthy person, blood pressure is one of the most important data points,” he says. “We can’t feel high blood pressure, so we need to get our blood pressure measured periodically in order to determine our risks.” And the risks of hypertension “can be managed over time with medications and other interventions,” he said.
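The rough rule quoted above, that cardiovascular death risk doubles for every 20-point rise in systolic pressure or 10-point rise in diastolic pressure, can be written as a simple exponential. The sketch below only illustrates that arithmetic and is not a clinical tool: the 115/75 reference reading is my assumption (the article gives no baseline), and real risk depends on age, smoking, diabetes, and more.

```typescript
// Illustration of the "risk doubles per +20 systolic / +10 diastolic" rule of thumb.
// The 115/75 reference reading is an assumption for the example; not medical advice.
function relativeRisk(systolic: number, diastolic: number, refSys = 115, refDia = 75): number {
  const fromSystolic = Math.pow(2, (systolic - refSys) / 20);
  const fromDiastolic = Math.pow(2, (diastolic - refDia) / 10);
  // Take the larger of the two, since either elevation on its own raises risk.
  return Math.max(fromSystolic, fromDiastolic);
}

console.log(relativeRisk(130, 80).toFixed(2)); // ~1.68x at the new hypertension threshold
console.log(relativeRisk(160, 96).toFixed(2)); // ~4.76x at a clearly elevated reading
```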
https://elemental.medium.com/what-you-should-know-about-high-blood-pressure-bdb349f0807f
['Robert Roy Britt']
2019-09-04 11:01:01.896000+00:00
['Health', 'Hypertension', 'Death', 'Blood Pressure', 'Science']
You’re more creative than you think
For most of my life I denounced my personality as analytical and logical, without a creative bone in my body. I suppose I put this blind acceptance down to my inability to produce anything remotely lifelike in art lessons, but I guess I will never know. What I do know, though, is that creativity is about more than just the ability to draw. What I also know is that I AM creative, but it manifests itself differently than it does in, say, my artist housemate. It has taken me starting a career as a copywriter at the ripe old age of 25 to realise that being incapable of drawing does not mean that I am uncreative. I no longer panic when I have to come up with my own ideas, writing off my ability to come up with anything interesting or unique. And neither will you, if you remember these 5 things: 1. Be open to inspiration from the most unexpected places I am literally now that person who writes down ideas as soon as they come to my mind, no matter where I am. We all know that sitting outside under a tree is where the best ideas are supposed to come from, but in reality every single place you go can inspire creativity — if you let it. It is also not cheating to take inspiration from the people that you admire. Obviously, I am not condoning copying in any way, but even Picasso had his mentors, probably. Taking inspiration from experiences you have and leaders in your field is by far the best way to grow. So: · If you notice someone else doing something amazingly, let it inspire you. · If you suddenly wonder why an animal is acting a certain way, keep wondering — and take notes. · If you get into an argument with a stranger, be inspired by their different way of thinking. 2. Creativity can be learned As Steve Jobs once said: “Creativity is just connecting things.” We all make sense of things differently, so embracing that fact is a simple way to think creatively. My train of thought will be hugely different from yours, even when we both look at the same object, so we will gain two very different, equally creative ideas from it. We are constantly learning, so the potential for new ideas is always ongoing! Even if you have two awful ideas, the result of their connection might be incredible! So, practise connecting the things around you in weird and wonderful ways, and creativity will become second nature. 3. Create in the morning This might go against the routine of the night owl, as I know that many people work better at other times, but there is actually a scientific reason behind the early-birds-are-more-creative philosophy. It has something to do with the fact that our prefrontal cortex (creative brain) is most active in the morning, and that its activity deteriorates throughout the day. I use the first few hours of every day to come up with the most difficult ideas that will need the most creativity, because that finite amount of prefrontal cortex activity I have during the day makes it much easier to connect my ideas in creative ways early on. 4. The more you come up with new ideas, the better they are It took me about 2 months to come up with an idea for something to write on here. I knew I wanted to write, but I never felt particularly taken by any idea, and was also waiting for my lightbulb moment. I tried random noun generators (as suggested by someone) but soon gave up on that idea when I reminded myself that “I am just not creative”. 
But, when I finally decided to just write something, anything, (and stopped thinking it had to be the next Macbeth), I found that each idea led to another, and suddenly I was actually finding some creativity inside me! The best thing I have discovered is that creativity is like a muscle, the more you use it the stronger it becomes. I had always had an ongoing list of ideas, but none of them seemed great enough to spend time on, until I actually tried. Which brings me on to my most important point: 5. Your ideas are GOOD Perhaps they aren’t, obviously I have no idea. But they are most likely better than you think they are. Just stop thinking and start DOING. Whether you want to paint, write or invent new tech, creativity won’t happen if your imposter syndrome stops you even trying. And, if the ideas aren’t that good, you will soon find out, and then you can move onto the next one! And the next one will be bigger and better, guaranteed. So, go get creative! We don’t all have to be Big-C creatives, with Nobel prize winning ideas. In fact, we can’t all be, because then the award would mean nothing. But even the most analytical of mindsets can unlock their creativity by being constantly inquisitive, connecting unique ideas together and actually starting to create.
https://medium.com/bulletproof-writers/youre-more-creative-than-you-think-335b0e126bba
['Felicity Thompson']
2020-09-24 04:56:14.997000+00:00
['Creativity', 'Creative Writing', 'Mind', 'Thinking', 'Productivity']
Will human-centered design leave the human designers behind?
A.I. is sort of a peanut butter you can spread across [multiple industries]. With a precise idea of the conditions this thing I’m designing will see in real life, I can design it better. — Maurice Conti DESIGN THINKING IN THE MACHINE AGE Thanks to design thinking advocates, designers have been taking over all sorts of founding and managerial roles at quite a few Big Tech players over the last couple of decades. At the same time, the advent of machine learning didn’t succeed in challenging the design trade per se. Or did it? With artificial intelligence taking over every other industry at this sweeping pace, an array of lingering questions is arising in design and tech communities alike. How do we tame the machines to produce compelling graphics that induce human emotion and thought? How do we keep amateur non-designers from feeding AI biased data? Will the exquisite Adobe software finally evolve into robotic tools that take over from the human hand? Feelings and thoughts are very mixed, so let me stake out a personal stance in this blog. Ai-Da: the first ever robot artist who can draw without any human input. DITCHING HUMAN DESIGNERS? In the mid-twentieth century, probably the most celebrated commercial designer of the time, Paul Rand, noticed that the majority of ‘professionals’ in the ad business are ‘…not even discriminating enough to distinguish between good and bad, between trendy and original, nor can they always recognize talent or specialized skills. In the field of design theirs is the dichotomy of being privileged but not necessarily being qualified — after all, design is not their business’. Some fifty years on, the situation has worsened immensely. The abundance of tools and design assets available online is striking, giving everyone with some basic Photoshop skills a chance to create graphics of dubious quality. Ditch the years of design training / self-learning — all you need is a Macbook and Adobe Suite (side note — you’d better get a Wacom too). Now, with robot-designers, this design education gap will get completely out of control. On the flip side though, machines have to be heavily trained, and preferably by top-notch designers, to make sure the output is adequate. Sounds like a pricey endeavour to me; however, it could be another career pathway for graphic design graduates, and a consolation to those thinking that humans will be kicked out of the profession entirely. I do believe that this sort of facilitator / teacher role can prove to be most viable, if not exciting, for evolving artistic trades. Collaborating and sharing knowledge while building #ML tools — I’m down for that. AI TOOLS — DESIGNERS’ LITTLE HELPERS In the wake of the recent news of the world’s first machine-artist, a reasonable angle is to look at AI as just another tool for artists, like the camera or the drum machine. Creators are adept at playing with devices and all sorts of collaboration, which is a great way to get those juices flowing, ain’t it so? More so, with AI extensively taking over the niche of affordable freebie design tools, it seems logical to exploit the ‘little helper’ further to speed up the flow and automate some tedious tasks like preparing, sorting or unifying design assets. Because no one likes to spend hours cropping / retouching hundreds of jpegs.
From instant pattern variations and legit UI tools to pretty basic logo generators, to more exquisite design tools like Adobe Sensei and Intelligent Alerts — all these are great time savers loved by designers and businesses. Robotic intelligence is also a great way to help make design decisions, bringing complex data analysis to the table so you can iterate faster and consider multiple options otherwise not available to the human eye. WHAT’S INHERENTLY WRONG WITH AI DESIGN-GENERATORS? Well, to my mind, they’re entirely missing the ‘metaphor behind the design’ point. And if the task of evoking a meaningful emotion that resonates with a human audience seems plausible for future robo-generations, that of designing a poster that provokes the human brain to ‘close the Gestalt’ will hardly ever be. And of course there is barely a machine (at least, for now) capable of constructing a logo that would satisfy this timeless criterion by Sir Paul Rand: ‘A logo is less important than the product it signifies; what it means is more important than what it looks like’. Another danger comes from placing human bias and design illiteracy into the machine brain. Design amateurs should steer clear, and it’s the responsibility of the forward-thinking design community to rely on high-class design educators when feeding artistic data to the robobrain. ‘Lack of humility and originality … the absence of restraint, the equation of simplicity with shallowness, complexity with depth of understanding, and obscurity with innovation, distinguishes the quality of work of these times’ — applied to our new reality, this means: do not let engineers with bad taste ever approach the machine. Can robodesigners be that exquisite? More complex AI solutions are still pricey, so the majority of design newbies hoping to get some exceptional result with the help of ‘advanced tech’ still have access only to elementary online generators like Logobank. My fellow designers, have you noticed how basic and shallow Logobank’s graphics are? You throw in some yellows and bananas, and it spits out a perfectly aligned ‘Juice Bar’ in a yellow circle-container logo. Come on, we all know that first-level associations never work in design! FANCY THE AIRBNB LOGO REDESIGNED BY A MACHINE? Sounds dodgy, no? It is human talent that will be in charge of designing machines and machine learning applications, while others will make use of advanced Photoshop and Illustrator tools to cut down the tiresome work. Over time, machines will surely learn design principles and techniques, but can they learn human emotion? According to A.I. Superpowers: China, Silicon Valley, and the New World Order, one thing the machines can’t do is ‘building empathy, compassion, and trust — all of which require human-to-human connection’. Only humans can truly make a product that serves its customer in a meaningful way. As Paul Rand aptly puts it:
https://medium.com/hackernoon/will-human-centered-design-leave-the-human-designers-behind-1e8d778b52d9
['Olya Green']
2019-06-07 11:26:01.119000+00:00
['Design Thinking', 'Design', 'Human Centered Design', 'AI', 'Human Design']
‘Healthy Eating’ Means So Much More Than It Used To
‘Healthy Eating’ Means So Much More Than It Used To Why staying at home has radically changed how we think about food This story is part of How to Eat in the New Normal, a weeklong series about how the Covid-19 pandemic is changing the way we eat, with expert advice for making food choices that help you stay healthy and happy. Fun fact: Radish greens make great pesto. Toss them in the blender with some oil and garlic and whatever nuts and/or cheese you can scrounge up and you’ve got yourself a killer sauce. Three months ago, I thought of “radish greens” as just the leafy top part you throw away. Actually, I didn’t think about them at all. I barely thought about radishes (a garnish, at best), let alone their greens, and now I know a million ways to make them both delicious. Just as this pandemic has taught me the difference between sanitizing and disinfecting, and how to turn a tank top into a face mask, it has also taught me to appreciate radishes — and everything else in my fridge — like never before. There’s no such thing as a garnish now, and nothing gets thrown away. I’ve always been a creative cook, and fairly conscientious about food waste. I’ve never experienced true food insecurity, but I know what it’s like to obsess over food: I spent my twenties learning to turn $25 into a week of meals. I also spent my twenties (and my teens, and most of my childhood) trying to lose weight by eating smaller meals, or no-carb meals, or meals with just three Points because that’s all I had left for the day. This would inevitably end in a frenzied binge, wherein I’d spend months eating all the things I wasn’t allowed — not really enjoying it, but simply cramming it in before I came to my senses and started the next plan. It wasn’t until I finally landed on the sofa of an anti-diet dietician that I actually came to my senses and learned the difference between obsessing over food and being mindful of it. In short, obsession is a fear response. It’s what happens when that primal part of your brain is triggered by a sense of scarcity — whether it’s artificial (say, no-carb diet) or genuine (an aisle of empty pasta shelves). Mindfulness is what happens when you pause and look around the rest of the grocery store, seeing everything that is available: the potatoes, the cold cuts, the humble radishes and their secretly delicious greens. Mindfulness is considering what you really want and need, and what you already have, before impulse buying a giant bag of whatever. I’ve been a mindful eater for a long time. But it wasn’t until this pandemic that I learned just how deeply I could appreciate — and relish — what I had. Jars, for example. Like most millennial white ladies, I’ve gone through several whimsical-glassware phases, lugging home clanky bags of Ball Mason jars, which I’d use for exactly one cocktail, then stick in a cabinet for seven years. Last week, I was scraping the last inch of salsa onto a defrosted hunk of cod and half an onion — all of which had to be used immediately, before they went bad — when I noticed the raised letters on the glass jar in my hand: “Mason.” I thought of all the $6 Mason jars I’d schlepped home on the subway over the years. I thought of all the $4 jars of salsa I’d thrown away, often with a spoonful of salsa left in them.
I screamed across the apartment at my bewildered husband: “THE JARS WERE HERE THE WHOLE TIME.” I’ll probably go to hell for being such a wasteful asshat, and that’s fair. But at least I now know that a spoonful of salsa can turn old, frozen fish into a truly delicious dinner, especially if you cook it with a little sautéed onion. I also know that Mason jars can be used to freeze vegetable stock, and that vegetable stock can be easily made from onion skins, and pretty much any other produce scrap. And somehow, that knowledge is a balm. Like many others, I find myself staying up late to simmer stock or bake bread, and not just because it’s hard to buy right now, but because it feels really, really good. It’s comfort food — and that’s okay. It’s what we’re supposed to do, in fact. Just ask a dietician. “This is a really uncertain and anxious time,” says Julie Dillon, RD. “It’s really normal and wonderful that we’re leaning towards food to cope — baking and cooking, maybe trying recipes that have been passed down. I think that’s just a wonderful way for us to connect to our ancestors and different generations and be able to kind of get grounded and calm ourselves.” It’s okay to eat the food too, by the way. Comfort eating is instinctive, not disordered. Just as scarcity triggers fear and hoarding, satisfaction triggers relaxation and ease. “Within a few bites, food sends our mood into a calmer state,” Dillon points out. “It’s really effective. Why is that a bad thing?” It’s a radical concept, truly. Even after years of therapy and nutritional coaching, I still struggled to really get it. A lot of us entered These Times with the understanding that bread is a bad food and comfort eating is a bad habit, period. We live in a world that bombards us every day with that messaging — or, we used to, anyway. Now, everything is different. Now, I think of “bad” food as food that has actually gone bad. Now, flour is a precious commodity — especially the plain, white, all-purpose kind. Now I feel like the luckiest duck when eggs are back in stock, though I know how to substitute them if necessary. Before, I cooked creatively for fun — and usually only on the odd weekend when nothing else was planned. During the week my husband and I didn’t even eat dinner together, just because our schedules were so different. We’d throw together separate meals when we got home, or we’d order in. Before, choosing takeout felt almost like a chore, with the two of us sending harried texts back and forth at the end of the day, until one of us finally just said, “Yeah sure, sushi, whatever. I’m in the middle of something.” That’s the behavior I’m ashamed of now — the “whatever” attitude, and the audacity of being annoyed at having to choose. That is the shittiest symptom of privilege. Finding pleasure and comfort in your food isn’t bad. Failing to appreciate food and all the ways it nourishes us — that’s the only bad eating habit in my book. Now, we get delivery once a week. We get excited discussing our options, like it’s a special occasion (because it is). “Pizza Fridays!” we cheer, to no one but each other. We choose one of our favorite local joints, hoping they’re still open and still will be a few months from now. We leave a huge tip — which is the only appropriate amount to tip someone running through the epicenter of a pandemic to deliver you a pizza.
We ask them to leave it on the stairs, say thank you a million times, and wait for them to leave before we dash down to grab it. Then we run back up, leave our shoes in the hall, and put the pizza box on the stove. Then we wash our hands like crazy, pour wine into actual wine glasses, and sit down at the actual table. We toast, and we dig in. Everything has changed, and most of it changed for the worse. But not this. We have fewer options in our lives now, and many more unknowns. But we always know exactly what we do have, and we treasure it: There are six eggs in the fridge, enough flour to make bread tonight, and tomorrow we’ll have leftover radish greens pesto on toast. We have more than enough. If you can, consider making a donation to Feeding America to support those facing hunger, or to Frontline Foods to help local restaurants send meals to health care workers.
https://elemental.medium.com/healthy-eating-means-so-much-more-than-it-used-to-d19b9cb74c9b
['Kelsey Miller']
2020-04-27 18:07:29.497000+00:00
['Health', 'Eat In The New Normal', 'Coronavirus', 'Food', 'Life']
Inspiration is absolutely everywhere.
Inspiration is absolutely everywhere. But you have to train your brain to look for it everywhere. It’s a necessary skill for travel writing. You don’t fly across the globe to get one story. If you want to make a living, you scrutinize every detail of the trip, the destination, and the people you encounter, always looking for another story to add to the stack. Today my inspiration is Thursday Night Football. Are you taking three shots at every goal you set? (Hmm, maybe that’s my title.) Once I saw it clearly, the story began to write itself. What’s inspiring you to write today?
https://medium.com/everything-shortform/inspiration-is-absolutely-everywhere-ae71c9e23f42
['Melinda Crow']
2020-12-18 03:44:30.322000+00:00
['Ideas', 'Writing', 'Creativity', 'Football', 'Goals']
To Improve Your Writing Skills, Find Your Voice
A decade ago I was given my first major magazine assignment. The twist is that it wasn’t actually MY assignment. I had been hired to rewrite an article that had been done by someone else. Poorly, it appears. I don’t know who this inadequate writer was. Possibly William Faulkner. Regardless, the copy filed by the writer was dull and lifeless, and my editor told me that my job was to make it “voicier.” My assignment was to breathe life into the words by imbuing them with my own ESSENCE, which tends to be clear and direct and gleefully profane. The irony is that my own revision of the story ALSO never saw the light of day. But I must have made it voicier somehow, because I kept getting work from that editor, even assignments for articles I was allowed to write from scratch! You can make your own writing voicier, too — the voice you use for your first novel, for the next email you write today, for your 8 millionth tweet, anything. Given that many of you are stuck inside right now, with nowhere else to go, you might even have the time to do it. But before I hand you your bluebook exam, I need to explain to you what voice is. Your voice is you on the page. It’s the sum total of your influences and your life experiences — all built into words. It’s the expression of your identity, or perhaps the identity of the person you would LIKE to be. This is a blank page, after all. Lemme break down the formula for you. Take in all the influences A voice can come in an infinite number of guises. Let’s listen to a few of them right now. We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly, affects all indirectly. Never again can we afford to live with the narrow, provincial “outside agitator” idea. Anyone who lives inside the United States can never be considered an outsider anywhere within its bounds. That’s from Martin Luther King Jr.’s Letter from a Birmingham Jail. But let’s say you’d never read those words before and someone asked you to identify their source. I bet you could guess. That passage above is so distinctly King that it may as well be a fingerprint. It possesses all of the man’s public qualities: it’s honest, ambitious, thoughtful, and decidedly righteous. You can hear King saying these words in his actual voice, which was itself a magnificent instrument. Now let’s listen to another critical voice from history. I speak, naturally, of @dril: A friend of mine once told me that @dril is written as the inner monologue of the internet. I think he’s a funny guy who’s stoned out of his brain ALL THE TIME, but our two opinions need not be mutually exclusive. The average person online is both arrogant and defensive as a matter of natural reflex. It makes sense that someone took those qualities and deliberately ratcheted them up to absurd levels, complete with garbled syntax and all. You gotta hand it to him. Is the real @dril like this? I’m gonna say no. But he looked at the deeply insecure lunacy of everyone else online and shrewdly created a voice that reflected it. There are more voices like these all around you. They come in the form of writers, musicians, filmmakers, family members, even advertisers. Listen to them. Listen to Candice Millard. Listen to Alexandra Petri. Listen to Fountains of Wayne, who nailed Jersey better than all the more famous Jersey musicians ever did. Listen to your favorite English teacher. These are your influences. They supply you with information, opinions, and STYLE. 
For example, I spent the bulk of my adolescence listening to Sam Kinison comedy albums. Explains why I write in all caps so often. I also listened to Metallica, watched The Hard Way 700 times, re-read every Great Brain book over and over, and took in a zillion other influences. Those voices made a boisterous jumble inside my brain, each of them informing the other. This is legal, by the way. You’re allowed to be influenced by Jesus AND by Slayer at the same time. They can’t arrest you for that. Except in Georgia. Try to absorb as many influences as you can, and seek out influences that other people aren’t necessarily into. You’re living through a generation of movies all made by people who were REALLY into Star Wars, myself included. It shows, and not necessarily for the better. You got the time now to go deeper down the rabbit hole. The more diverse your influences are, the more diverse your own voice will be. Live your experiences and use them creatively The old writing edict is to write what you know. I remember being told that in Creative Writing class in college. The end result was 20 students all turning in the same short story about a kid who goes to a house party and has a shitty time. We were not the most imaginative lot. Not terribly popular, either. The goal shouldn’t be to write what you know, but to USE what you know. So you know what it’s like to be somewhere where you’re SUPPOSED to be enjoying yourself, and you know the odd shame you feel when you are not: when the night isn’t living up to the perhaps unreasonable standards you had for it. Okay, now set that party in Nazi Germany. See now? You used what you knew, but you gave it FLAIR. You gave the reader a reason to give a crap. In order to write well, you gotta live. You gotta be out there seeing and doing and learning new things. That’s not easy to do at the present moment, but let’s imagine a world where everything gets un-fucked (a stretch, I know) and you’re finally able to venture out your door once again. Take advantage. When Philipp Meyer was writing his epic novel The Son, he learned how to tan hides AND he drank buffalo blood. Why? Because that’s what his characters did. He had to personally experience what they experienced so that he KNEW what living through those experiences were like. That way, he could then put them into proper words. You know what it’s like to be in love. You know what it’s like to feel lonely. You know what it’s like to have a friend let you down. You know what it’s like to be on the receiving end of violence. These are universal experiences, but in your context they are PAINFULLY specific. You can use all that. And then you can use more. You can live more, whether it’s for the sake of research or for the sake of living itself. You will absorb everything around you in a way that literally no other person will, because you will absorb it at a particular time in your life that only you are living in. Think deeply about how those moments make you feel. Even if you’re uncomfortable with how they make you feel. ESPECIALLY so. Allow those influences and experiences to guide you Now the challenge is to get those thoughts down into words. They don’t have to be pretty. They don’t have to be perfect. What matters is that they are CLEAR, that you are able to get the person reading your copy to understand how you feel and — this is where the magic happens — FEEL what you felt. Scroll back up and read King again: Whatever affects one directly, affects all indirectly. 
Tell me you can’t feel those words right now. They are both profound and direct. They require no obscure vocabulary to get the idea across. King lived and learned enough to have ideas, to know how to get those ideas across, AND to separate which ideas of his were worthwhile from those that were useless. Think hard about your ideas. Save a good one, and then think about what that idea would look and sound like if it were something you wanted to see and hear, whether it’s in the form of a short story, or a take, or a TV show script, or a song, or even a tweet. Be your own audience for a moment. Laugh at your own jokes when they land the right way in your cortex. It’s okay. You’re finding your voice and you’re gaining a strange, wonderful awareness of how that voice will sound to distant ears. Once you’ve given your voice shape, it has remarkable power. I’m gonna let Carl Sagan explain how: One glance at (a book) and you hear the voice of another person — perhaps someone dead for thousands of years. Across the millennia, the author is speaking, clearly and silently, inside your head, directly to you. Writing is perhaps the greatest of human inventions, binding together people, citizens of distant epochs, who never knew one another. Books break the shackles of time, proof that humans can work magic. You can work that magic, too. All you gotta do is listen: first to others, then to yourself. Drew’s third novel, Point B, is available starting today. Right now. Go buy it.
https://forge.medium.com/how-to-find-your-voice-as-a-writer-43b283b67a0c
['Drew Magary']
2020-04-22 15:50:40.580000+00:00
['Creativity', 'Writing', 'Social Media', 'Inspiration', 'Create']
5 elements of high converting ecommerce product pages
A product page is where e-commerce customers spend most of their time and make the final decision about completing a purchase. If you are running PPC campaigns, the product page might well be the first impression a prospective customer has of your online business. The importance of a good product page is immense — it is the page where you showcase your products and give customers a reason to make the final buying decision. The structure of your product page can make or break your sales, and in this article we’re covering the 5 elements that all high converting product pages have in common. Here’s what we cover: the 5 elements (Trust; High-quality images/videos; Optimized product copy; Clean design, no distractions; Flawless checkout) and a conclusion. With these 5 elements, you are more likely to turn your eCommerce visitors into paying customers. 1. Trust One of the key elements of your product page should be building trust. Showing that your product page is trustworthy by displaying trust signals can play a decisive role in completing the purchase. Conversely, not having enough trust signals on your product page can lead to cart abandonment. Trust signals on product pages include: Social symbols — icons of your social plug-ins; Payment assurance — multiple payment options and third-party badges and certifications; Reviews and ratings — a snapshot of the general rating, and feedback from previous buyers; Contact and information — contact information and the possibility to get in touch with the store; Trust badges — badges that show the payment methods are secure — the most popular are Norton Secured, McAfee Secure, and Verified by Visa; Return policy — offering free returns within a certain time period, or a money-back guarantee badge. Increase trust on your product page by: Including reviews Including 2 to 10 reviews of the product from your customers builds trust. Some stores offer the chance to win a discount code if you leave a review of the product. Do not hide the identity of the reviewer — adding full names and pictures will bring more transparency. Display the average score calculated from the reviews for visitors who do not have time to read each separate review. Offering multiple payment options Your online store should provide the most popular payment method among your target group. However, adding multiple payment methods increases your store’s credibility. The bigger the choice of payment methods you offer, the higher the trust of your prospective customers will be. Another option is to offer a “pay later” possibility, where the customer pays for the products after receiving them. Do not forget to include icons of the accepted payment methods as well. Displaying contact information Try including icons with links to your social media channels and contact information at the bottom or top of your product page. Adding a chatbot to your page that gives viewers the possibility to immediately ask questions is even more effective. The online store Asos builds trust by offering free shipping and a 60-day return policy. It also includes social icons in the footer. The website Bol.com, which lists multiple providers of the same products, builds trust in the platform by highlighting a free 30-day return policy, 24/7 customer care service, and free shipping. It also includes reviews on each product page. 2. High-quality images/videos The visual representation of the products in your online store is the most important aspect of a product page and can be a deal-breaker.
Unlike in a physical store, a customer is unable to hold the product; therefore it has to be presented to them through pictures and videos from different angles. As the popularity of video keeps increasing, it is highly advisable to include both high-quality pictures and videos to highlight the best of your products. Nike already incorporates videos on their product pages. When using multiple pictures, make sure all of them have the same size and height-width ratio. A recommended size for e-commerce pictures is 2048px x 2048px. Include one (or all) of these practices to improve your visual presentation: Provide a 360° view of your product Offer your customers a more interactive, 3D experience by providing a 360° view of your product. Most commonly used for showcasing jewelry, this is especially handy when you want to show the details of your product. Include a model wearing the product Again, people cannot try your product before buying it; therefore try to show them how it looks when worn. The most successful online stores include the measurements of the model wearing the item. Use video to show how to use the product Adding a video of how to use the product or how to style it has a persuasive effect on your potential customers. Brilliant Earth offers a detailed look at their rings with a 3D feature, while Boohoo includes pictures of the model from different angles, along with the clothing size that he or she is wearing. Effects uses video to show viewers how they can use the product (and, indirectly, why they need to buy it). 3. Optimized product copy Creating fun, engaging, and optimized product copy can make an online store stand out among competitors. To be successful at product copy, you need to know your audience, have a strong understanding of the brand, and be familiar with SEO best practices. Improve your copy by: Speaking to your target audience Knowing your target audience dictates the tone of voice, which words to use, and sometimes even how funny your product description becomes. You can use trendy abbreviations when targeting Gen Z but avoid them completely when writing copy to attract boomers. Talking about value The copy should answer the question "Why does someone need this product?" and present the advantages of having it instead of just its features. The online store Twelfth South created copy that speaks to millennials by taking their buying behavior into consideration. Known for being technologically savvy, millennials tend to buy the latest version of mobile phones and are loyal Apple customers. When it comes to innovative headlines, Firebox knows how to catch a buyer's attention by creating unique and fun product names. Showcasing a product's value comes almost naturally to the online store Chi Chi London. In this product description they explained where their customer could wear the dress, instead of talking about its features. 4. Clean design, no distractions The design of your product page should be attractive and easy to navigate. To improve your website design: Use tabbed navigation To avoid cluttering your page with text while still presenting more information, create tabs or a drop-down menu with links to more details about the product. Create clear and contrasting CTAs Make your call to action pop by making the button clear and in a different color compared to the surrounding text. Make it mobile-friendly Design your product page to be not only responsive on mobile devices but also clear and easy to navigate in a smaller format. 
The online store Watches of Switzerland keeps its product in the spotlight by using tabbed navigation, while the Jackie Smith store opted for a more colorful and eye-catching design, paying attention to its colorful calls to action. Warby Parker's product page is a perfect example of how a clean, well-designed e-commerce website should look on a mobile device. 5. Flawless checkout The checkout process should be simple, transparent, and fast. If your cart abandonment rate is high, consider improving your checkout process by: Including a basket summary Include a review of the products in the shopping cart and the option to remove some of them before moving to the next step. Displaying a progress indicator An overview of the steps needed to complete the purchase makes the customer aware of what comes next and indicates how soon the process will be over. Offering guest checkout Offer a guest checkout for customers who are in a rush or do not want to create an account. H&M offers a review of the products at the beginning of the checkout process; Etsy displays a progress indicator in the top-right corner during checkout; and Chanel facilitates the checkout process by offering a guest checkout. Conclusion Building trust, using high-quality images and videos to show your products, and engaging your visitors with fun and useful copy can increase the number of prospective customers, while the clean design of your product page and a simple checkout process should lead them to complete the purchase. By keeping your target audience in mind, you will be able to personalize each of these elements and use them to connect with your ideal customer.
https://medium.com/analytics-for-humans/5-elements-of-high-converting-ecommerce-product-pages-bdcebc4a6377
['Mike Wagaba']
2020-12-22 19:04:59.250000+00:00
['Marketing', 'Content Marketing', 'Analytics', 'Entrepreneurship', 'Business']
The Facebook Front
I joined Facebook in 2006 when a lifelong, trustworthy friend sent me a “friend request” over email. Simon wants to be your friend, the message conveyed. I was confused. “What? We’re already friends,” I thought. “What is this?” Simon was a bit older than me, well-rounded, in that he was both extremely well-educated and artistic — a man of immense talent and wisdom. He was always articulate, and certainly not the type of person to spread chain mail, debunked hoaxes, or questionable links. As a gay man who disliked the option of online dating services, he often relayed his weariness of “things on the internet.” He especially disliked having an inbox overflowing with spam. “Inkjets and penises,” he’d repeat in a sing-song way, his exaggerated southern lilt lending the phrase extra comedic effect. “That’s all the mail I get anymore. Spam. Always the same stuff. Folks trying to sell me ink jets and penises!” I decided to follow Simon’s link and check out “the Facebook” (back when Facebook still used “the” in front of its name), and found it was actually pretty cool. It had the look of LinkedIn, but the feel of a more casual, fun climate — nothing stuffy or professional. It definitely had the potential to be a huge time-waster with its endless rabbit holes and simple, mind-numbing games. But it also had the special power of doing the whole ‘six degrees of separation’ thing for you, because it would suggest other “friends” who were already in your network, but whom you might not have known had an online presence anywhere. As you invited more people to be your Facebook friend, more people joined and got hooked, and in turn, invited even more people. It was a whole new concept for connecting—and finding people. The potential was astronomical. The very first post on my “wall” was from Simon. I remember getting the notification and the ridiculousness of feeling important for a moment. The message read, “Welcome to the face book! This can be kind of addictive!” Right he was. Simon’s no longer living, but every year that post shows up in my memories and makes me smile. Image by Simon Steinberger from Pixabay In 2006 when I joined “the face book,” I’d just given birth to my third child. I’d been fanatical about taking photos of all three kids, but had no efficient way of picking only the best ones, sorting them, compiling albums, and then sharing with family and friends spread out across the country. Online photo services were not always user-friendly at the time, or they came and went. Nothing seemed to stick around. We’d only discovered YouTube and Blogger about a year prior. We found those to be great outlets for sharing news or home videos, and in return, I could enjoy videos of my baby nieces four states away, taking their first steps—all from the comfort of my own home. This was the next best thing… Only, there wasn’t one platform where you could share all of it together, in the same place. I couldn’t embed YouTube videos on Blogger, for example. Adding good quality photos to a blog post wasn’t the one-step insert like it is now. Nonetheless, we embraced and enjoyed this new age of technology and sharing. It was something we couldn’t have even dreamed of having in the 80s (which, in hindsight, was definitely a good thing). Around the time social media really exploded — about a year or two after Simon invited me to join—I noticed that a lot of folks’ portrayal of life in general seemed to explode. Towards the end of the decade, a trend was evident: the #humblebrag was exploding in popularity. 
The more you scrolled, the more you saw people across social media who were overwhelmingly happy, like, all the time! Nothing bad ever happened to anyone! Everyone loved their super-satisfying jobs, their happy relationships, their immaculate homes, their loving families, their perfect pregnancies, their perfect kids, their perfect parties, their perfect vacations (which always looked expensive and lavish). It was enough to make you feel like a nobody, like your life simply paled in comparison. This was, at least, one area of life where my psychology degree (and common sense) came in handy. I realized it was pointless to ever feel bad about my current station in life, because deep down, I knew I was doing just fine. And because I saw most of these Facebook friends in real life, I knew that they weren't necessarily doing so fine — at least not all the time. It didn't make sense to believe everyone else out there was being real and honest when they showcased nothing but Brady Bunch bliss all the time. It defied all logic. What were they not sharing? What were they hiding? I realized Facebook was becoming merely a convenient "front," a rose-tinted lens, and I started calling it "the Facebook front." As Facebook became less real, it became less fun. As we Americans became more obsessed with living our lives on social media, studies started being released regarding the impact of social media on mental health. Time and again, a direct correlation was found between heavy social media consumption and depression or anxiety. In fact, social media was often identified as a cause, or at least an accelerant, of depression and anxiety. What became evident was that most people were merely showcasing the highlight reels of their lives, and even then, those were frequently edited, filtered, or exaggerated — if not totally fabricated. It was all smoke and mirrors. It became apparent that, by using images and words that portrayed only the things they could use as #humblebrag fodder, people were carefully crafting their own personal brands. They were choosing this over honesty, and in the process, withholding the unfiltered, imperfect images and stories that would have more authentically reflected everyone's reality. We've all been complicit at some point. I try to keep myself in check, but fail miserably at times. I do occasionally keep it real by posting exactly how shitty things are—no filter. Surprisingly, those are the posts that resonate the most with others. Those are the posts that seem to help foster empathy. Those are the posts where people who don't usually write anything on social media suddenly feel led to speak up and comment, "me too." Trying to juggle social media when you use it for your work is almost impossible. Especially if you're a writer, actor, artist, or anyone more in the public eye, or even if you're just job-seeking, in many cases, you need to have a digital footprint everywhere—Instagram, Twitter, Facebook, LinkedIn, etc. Since each platform uses different algorithms and methods for sharing the most relevant content in order to give each unique user the most personally-tailored experience, you have to approach each platform differently. Which is time-consuming to figure out, and exhausting to apply. As one example, through trial and error (and time), I've learned that Facebook users are less likely to share the work of anyone who self-promotes, whereas Twitter users fully embrace the shameless plug approach. 
Almost all traffic to pieces I’ve written that exist behind a paywall have come exclusively from Twitter links, shares, and retweets. (Plus—selfish perk — Twitter offers what Facebook doesn’t: the ability to communicate directly with your favorite celebs who actually run their own accounts. A few of my original tweets and links to my work have been retweeted by some pretty awesome folks — Rosie O’Donnell, John Cusack, Patricia Arquette, Jacob Tobia, and others. That’s my only #humblebrag here, I promise.) But when it comes to keeping up with community or local news, Facebook is far better, because it’s so much smaller. And also because anyone can form discussion or interest groups around any topic under the sun. Communities can be built that don’t otherwise exist. Facebook is superior for this. Though an excellent case can be made for mastering just one form of social media, sticking with just one, and doing it well, I like the variety that different social media platforms offer. When I’m in a hurry, I prefer Twitter because of the fast pace and brevity. When I need a happy escape, I go to Instagram or TikTok. When I want to catch up with my closest friends, I go to Facebook, or I pick up the phone and make a call, or text. (And these are all great options when you have social anxiety, and the thought of going out in public is simply too much some days.) Image by ijmaki from Pixabay I’ve also found it necessary to try and balance social media use with not using it, like, going “off the grid” completely for periods of time, for restorative purposes (or because maybe you just don’t have the time, energy, or interest). But the downfall there is you miss out on info that’s personally relevant to you. I swear, every time I take a Facebook break, I seem to miss about 5 major illness or surgery events, 4 birth or death announcements, 3 major life changes, 2 career updates, and 1 person who suddenly uprooted and moved to Russia. Like it or not, the majority of people live life on social media now. Of course, all of social media is a delicate balancing act; you can’t post only negative stuff all the time because that’s just depressing for everyone. I’ve found that what works best is a mix of everything — everything that’s real. Glimpses of hope, especially when presented along with authentic pain, struggle, or suffering — things we all experience more often than we’d like to admit—can especially resonate with many. Which can go a really long way in cultivating and nurturing empathy. And empathy, I believe, is the one ingredient that’s largely absent in our society right now. We can all afford kindness. Even occasionally. Especially on social media.
https://medium.com/swlh/the-facebook-front-a47db708e156
['Martie Sirois']
2019-10-03 10:42:40.723000+00:00
['Society', 'Mental Health', 'Culture', 'Self', 'Social Media']
You Probably Are a Terrible Writer, but Don’t Let That Stop You
What if You ARE a terrible writer? Sometimes there is worth in assuming the worst. At least insofar as it can be clarifying. Apart from the question of being terrible, I had to consider what it would mean if I was correct in my suspected terribleness. This self-examination led me to conclude that ultimately, I did not care. You shouldn’t either. It turns out that rejection is not so bad. Or at least, it is extremely survivable and not as bad as you think. If you are rejected for a publication, you can write more and submit again. If you self-publish your work and it does not gain an audience, you can write and publish more until you do find an audience. In either case, your family and friends will still love you, and your prospective readers will still be out there. The worst that will happen if your work is terrible is that no one will read it. That’s it. Nothing else will happen. No one will cut you off. There will be no grassroots campaign to keep you from your keyboard. If you are a terrible writer, you can either give up or get better. I vote for getting better. The only way to accomplish this is to write more. Also, you have to start letting people read your work. If you have a chance to submit a story, do so. If you have an article you want to publish, do it. Even if you are terrible, hiding your words from the world won’t help. Writing and publishing more will. The internet can help you improve as a writer because there are nearly endless opportunities to get your work out there. Your work will either be viewed, or it won’t. This equates to real-world, real-time feedback. If you keep at it, you can use this to drive your progress. But please stop keeping all of your written work to yourself. This is the only action that will keep you from improving. Now, let me tell you something you’ve been afraid to hear: You are a terrible writer. See how easily you survived? Now, please write and share your work. I know I will.
https://medium.com/bulletproof-writers/you-probably-are-a-terrible-writer-but-dont-let-that-stop-you-76854ea98bbe
['Adam Rains']
2020-11-30 21:04:17.199000+00:00
['Creativity', 'Creativity Tips', 'Writing Tips', 'Writing']
7 Things You Need To Do To Have Consistently Incredible Evenings
1 || Put your phone to bed early in the evening We've all heard of the effects of using screens before bed, but the warnings usually focus on eye strain and blue light — not on how damaging it can be to be followed around by a pestering technological device 24/7. While most studies point to the time spent on technology as irrelevant, it's important to note how much time you spend thinking about things related to technology. In the evenings, how often do you spend time thinking about that work project you have due tomorrow, that assignment you need to wrap up, that text you're expecting, or the emails you have to check? When you're thinking about those things as much as you do, you might as well go ahead and check your email, finish that assignment, start your workday, or send a text yourself. "But with no iPhone to keep my mind wired, I was able to tune into my body and fall asleep according to its needs. Every single night of the experiment, I conked out within 10 minutes of getting in bed. And I didn't make the connection at the time, but my stories were all written well before their deadlines that week." — Amanda Montell, "The Benefits of Having an iPhone-Free Bedroom" Try plugging in your phone/laptop in another room. 2 || Don't start projects too late Put work away and don't start something you know you won't be able to finish. One thing I've found, now that I have more regular hours for my job, is how detrimental it can be to my evenings to haphazardly start work — especially harder projects that aren't as urgent as I make them out to be. If there's something big you need to get done, don't forget to check and see if it could be done later in the week, before work one day, or earlier in the afternoon. "If you do nothing else, plan each day of your life with intention, purpose, and passion."― Jeff Sanders One trick I've found useful, one that's used by the likes of Jeff Sanders, podcaster behind The 5 am Miracle show, is setting an end time for the workday. Basically, barring an unexpected event, at a certain time you stop working. Whether that's 6 pm or 4 pm, you don't do work after that point. You can also change how you define that. Maybe work for your real job isn't allowed after 4 pm, and you only allow yourself to write your novel or your blog. However you want to define that, stick to your plan. You may be thinking, though, "I won't have time for my work" or "I'll never finish my projects." "Regret for the things we did can be tempered by time; it is regret for the things we did not do that is inconsolable." — Sydney J. Harris, journalist While that might be the case and you might need to allow more time to get work done, setting a boundary will actually allow you to work more efficiently in the time you've allotted for it. Whatever you do, don't start new work too late in the day. Aim to start early, finish early, and have time to do what you need to do to relax and settle down in the evenings. 3 || Respect a boundary between work and play One really hard thing, closely tied to projects started too late, is maintaining a healthy boundary between work and play. I have vivid memories of growing up, watching TV with my family, and all of us being on devices, doing work, school, or a personal project that would have been much more enjoyed and efficiently completed without a distracting show in the background. 
"Work refers to the effort someone makes that has value to the person or society, or a sustained physical or mental effort to overcome obstacles and achieve an objective or result. Play can be described as any activity someone finds enjoyable and interesting and is valuable in itself for that reason." — Montessori Child Development Center Not everything you do needs to be a side hustle. Some things can be done just for fun. One thing that really changed my life for the better towards the end of high school and on into college was the realization that I could write and not worry about doing it for the money. While I write a blog here as part of my work and am paid for other writing projects, I write fiction for fun — and freeing myself of the hustling aspect has helped me to enjoy myself so much more. It's also allowed my evening writing time to be spent much more restfully. 4 || Look over/create your plan for the next day Many people point to willpower as something that can become a problem in the morning. If you happen to wake up tired and groggier than usual, having a preset plan to rely on can really save the day. And while you can save some of the planning for the morning of, assessing your current mood and anything that's come up/come to your mind during your sleep, you can already have a list of things to do, or a calendar with your pre-arranged commitments filled in. "If you don't know where you are going, you'll end up someplace else."― Yogi Berra Also, if you happen to wake up a little out of it or otherwise unprepared for the day, you have a plan that you can rely on to get you started — something you've thought about beforehand that you can launch into as soon as you wake up. Even if you prefer to plan in the morning, you can at least already have an idea of what your day is going to look like, and leave it to your sleeping self to think of where all the pieces should go. 5 || Enjoy yourself I know, I know — maybe it's obvious, but in a world that's so focused on extreme productivity, going to bed early, and setting yourself apart, it's difficult to remember that we are designed to relax, reset, and recharge in ways that differ depending on our personality. Whatever your relaxer of choice is, make sure you make time for it in your evening. That will do you a lot more good than spending an unfocused hour on work or being distracted by email while trying to play with your kids. "While accomplishing your dreams, don't forget to enjoy life too." — Unknown Whether relaxing is watching a film once a week with your wife, playing a game with your kids before bed, reading a novel, watching a comedy sketch, or talking to an old friend on the phone, find something that fills your soul and gives you the opposite of stress in your life. It's worth making time for. 6 || Be realistic with what you can make happen Let's be real: if you get home from your day job or other vocation-related commitment at 5 pm, and you aim to go to bed at 9 pm every evening, you only have four hours. That's one hour for dinner, one hour for your spouse, one hour for reading and relaxing and getting ready for bed, and one hour for something else. "You can do anything, but not everything." — David Allen You can't realistically spend four hours rigorously writing your novel with that kind of schedule. If you can, try to give yourself thirty minutes and really focus on it. 
That will probably yield much better results and be a much more regular occurrence in your schedule because it fits fairly easily in the time you have. Some things you're going to have to reserve for the morning, others for the weekend, and still more for when you have fewer work hours or are on a break or vacation of some sort. Being realistic isn't always fun, but it'll yield the best results, involve the fewest steps, and give you realistic increments of time to do what you need to do. 7 || Reset for the next day This involves more than just planning for tomorrow. This can be laying out your clothes for the next day, prepping your gym bag so you'll face less resistance to working out the following day, doing your laundry, cleaning the kitchen, whatever you need to do to create a fresh start for the next day. [Read: 7 Things You Need To Do To Have Consistently Incredible Mornings] Whatever you need to do to make tomorrow amazing, make sure you squeeze that into your evening routine. Some go as far as to get a light dimmer for their light switches that will turn on lights in the morning, an Alexa with a routine set to wake you up, or some other system that will alert you to the time and encourage you to start the day. While in the end, tomorrow lies in the hands of tomorrow, there's no reason you can't start preparing for it the day before. Have a great evening!
https://medium.com/live-your-life-on-purpose/7-things-you-need-to-do-to-have-consistently-incredible-evenings-4774e8dedaab
['Katie E. Lawrence']
2020-12-22 03:02:13.073000+00:00
['Health', 'Life', 'Technology', 'Self Improvement', 'Productivity']
Here’s How I Use Natural Language Processing In Stock Price Analysis
Here's How I Use Natural Language Processing In Stock Price Analysis Using NLP and Granger causality to analyze the relationship between the sentiment of a written article and a stock's price Image by Trist'n Joseph Being able to accurately predict the stock market is like being able to see into the future. Stock market prediction refers to the act of attempting to determine the future value of a company's stock that is traded on an exchange. However, being able to accurately predict the stock market is like being able to ride a purple unicorn; that is, it probably is not possible. There are far too many factors that can affect a stock's price, and building a model that includes all of these factors will likely produce poor predictions in the long run. Because of this, I do not think that the goal should be to accurately predict the stock market. Rather, it should be to determine and understand the factors which have the greatest influence on the stock market's fluctuations. Image by Trist'n Joseph It is quite common to hear that a stock's price is nothing more than a random walk and thus cannot be predicted. Whether this is true or not, I believe that it is possible for key external events to have a significant impact on a stock's price. Understanding and investigating these events could lead to understanding a stock's movement better; even understanding just 1% more of these factors could imply significant returns. A potential issue, however, is that it is not usually immediately clear what information is relevant to a particular problem. This kind of information can be referred to as alternative data. In the world of finance, alternative data refers to information beyond the typical company filings, earnings calls, or fundamental data sets. The use of alternative data in the finance industry is rapidly growing as investors are looking for new signals which can give them an edge over their competitors. In other words, analysts might just be able to ride a purple unicorn if they can effectively utilize an appropriate set of alternative data. Image by Trist'n Joseph Since June 2020, I have been working on a project to understand the power of natural language processing (NLP) in the context of the stock market's movements. My motivation for this comes from the fact that the stock market is a forward-looking instrument. This means that it does not reflect a financial market's current situation, but it reflects an investor's outlook on that financial market. If investors generally believe that Apple will be successful in the future, they will invest in Apple now and the stock's price will increase over time (granted that Apple is successful, and holding all other factors constant). This project has definitely been a work in progress, and I have previously published two articles where I outlined my findings as they developed. I suggest looking at those articles for greater context on this one (links in the reference section). As a summary, I collected articles that were published about Apple and Exxon Mobil, calculated the daily sentiment of those articles, and then used these values (along with stock volume data) to predict the stock price for the following day. I found that by using this method I was able to somewhat predict both the stock price and the day-to-day absolute value price change of the stock.
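The article stops short of code, but the workflow it describes (score each day's articles for sentiment, then ask whether sentiment helps explain the next day's price movement) can be sketched roughly as follows. The file names, column names, the choice of VADER for sentiment, and the Granger-causality test setup are my own illustrative assumptions, not the author's actual pipeline.

```python
# Rough sketch of the described workflow, under the assumptions stated above.
import pandas as pd
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical inputs: one row per article (date, text) and one row per trading day (date, close, volume).
articles = pd.read_csv("apple_articles.csv")
prices = pd.read_csv("apple_prices.csv")

# Score every article, then average into one sentiment value per day.
sia = SentimentIntensityAnalyzer()
articles["sentiment"] = articles["text"].apply(lambda t: sia.polarity_scores(t)["compound"])
daily_sentiment = articles.groupby("date")["sentiment"].mean().reset_index()

# Align sentiment with daily returns, then test whether sentiment Granger-causes returns.
df = prices.merge(daily_sentiment, on="date").sort_values("date")
df["return"] = df["close"].pct_change()
grangercausalitytests(df[["return", "sentiment"]].dropna(), maxlag=3)
```

A low p-value at some lag would only suggest that past sentiment adds predictive information beyond past returns; it would not, on its own, establish a tradable signal.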
https://medium.com/ai-in-plain-english/heres-how-i-use-natural-language-processing-in-stock-price-analysis-a58c4b160e8c
["Trist'N Joseph"]
2020-12-18 19:19:58.097000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Stock Market']
Notes from the field of visual health mapping
Helping doctors get to know patients and make them memorable As I start to work directly with more doctors through this process — namely Sunjya Schweig of the California Center for Functional Medicine and some local VA doctors I’m currently collaborating with — I’m learning more about how this process helps them. Aside from the obvious (things like making appointments more efficient and saving them time digging through the medical record), I’ve recently heard that the visuals make patients’ stories more tangible and memorable: “It makes my intake easier, and even more interesting to me is the way the patient’s case is much better retained in my memory. For the two cases who came in during the last couple weeks with visualizations, I can still remember the story in detail. This is not the case for my normal patients where everything blends together and I don’t retain the details as well when not looking at the chart.” –Dr. Schweig This makes sense. The visuals provide an additional modality that helps doctors commit to memory what they’ve learned about the patient. I’ll quote this article from Psychology Today: “A large body of research indicates that visual cues help us to better retrieve and remember information. The research outcomes on visual learning make complete sense when you consider that our brain is mainly an image processor (much of our sensory cortex is devoted to vision), not a word processor. In fact, the part of the brain used to process words is quite small in comparison to the part that processes visual images.” In addition to helping doctors remember their patients better, it also helps them better understand their patients ‘as people’: “I’m finding these really helpful for getting to know patients better, especially with regard to work and social history.” –VA Psychiatrist For the three veterans I’ve worked with so far, it’s been helpful to map out their work history. I don’t do this with most people, but it felt significant to their story. Two of them had long-term work-related chemical exposures that seemed potentially related to their cognitive and memory loss symptoms. Two also had impressive careers that involved building and making things; they love working with their hands and solving problems, and have continued to do so into retirement. For both of them, their health has interfered with their ability to do this work, which has left them feeling depressed and unfulfilled. I made sure to articulate this in their story and pass this along to their doctors. As doctors try to help mitigate the pain of illness, it’s helpful for them to understand not only physical sensations but psychological impacts and losses; that way, they can hopefully be more effective in helping to improve patients’ quality of life. It is always a pleasure to learn from the people who have entrusted me with their health histories. Thanks to them, and also to you for reading. I’d love to know what you think — feel free to leave me a comment below or learn more at pictalhealth.com.
https://medium.com/pictal-health/notes-from-the-field-of-visual-health-mapping-d9995b91b842
['Katie Mccurdy']
2019-07-23 13:52:43.112000+00:00
['Health', 'Healthcare', 'Patient Experience', 'Healthcare Innovations', 'Design']
How to level up your UI design skills
There was a point when I had a breakdown over my UI design skills. I was working on multiple projects back then, mostly early-stage startups. I knew that my designs were technically fine and at a level that could be successfully used elsewhere. In this article, I will pinpoint a few ideas that I wish I had known back then for how you can level up your UI design skills. Words that make a difference Due to the high pace I had to deal with, as well as the start-up clients that sometimes seemed not to be very passionate about their new businesses, I mainly opted for popular and hackneyed typefaces. This isn't necessarily a bad idea, as long as your choices are backed by concrete reasons. Yes, there are lots of attractive products that use popular typeface choices like Lato, Roboto or San Francisco, but this choice should always be thought through. Typography is undoubtedly the most distinctive layer of UI design. Don't be afraid to experiment with various typefaces or try different pairing ideas. Even a relatively small change like this may dramatically impact the whole perception of your designs. It sounds quite obvious, but always try to use real content. Don't have access to world-class copywriters? Fake it till you make it. TheHub — Website by Martin Strba The devil is in the details Another aspect that made my designs look ordinary was essentially the lack of branding. There were a few factors that contributed to this. The main ones were the lack of budget for proper branding and the lack of vision. No, not every start-up needs $150k brand guidelines from day one, especially for an MVP, but without any "spice", your designs will fail to stand out. What can you do without any clients' preferences or brand guidelines? Make sure that the typeface and colour scheme match your product's personality. An app for booking doctors' appointments won't look good with orange colours and a quirky typeface. The same goes for an app for crypto traders; it won't look good with pastel colours and an elegant, serif typeface. You can consider using a complementary icon set or a pattern that will add a personal touch. The ability to adapt Low contrast is an increasingly popular design trend, and not only on Dribbble. It sacrifices readability for aesthetics, which strains our eyes even more and makes our designs less accessible to users. Insufficient contrast degrades the user experience along with discoverability and confidence. Have you ever used your phone outside? It's obviously a rhetorical question, but designers often forget about this context. It's fairly simple to design for high-resolution retina displays and validate our designs on our newest iPhone in an office with bright light all over the place. We often forget that not every user has the newest device, nor will they always be using our apps in a closed room. Low-contrast text is nearly impossible to read outside. Another aspect to keep in mind is designing for dark mode. Negative contrast polarity is growing in popularity. Not only does it save our battery but it increases the readability as well. Our designs should also be adaptable to dark colour schemes. Design unification Although this could be a separate article or even a series of articles, consistency and cooperation with developers are crucial to your workflow. It's fairly easy to lose track of all the components you're using within your designs and forget to communicate certain parts of your work well enough. 
That's why you should always try to document your work in a way that's accessible to every team member. You can do it in many ways. Start by documenting all the design tokens you're using. Make sure that every component you're using in your design has all of its states defined. Create a page that will be the one source of truth for your developers when it comes to the typography, colours, icons and grids. This will not only save them time, but will also allow you to think of the project more holistically. Constraints can actually make you more creative than you might have thought. Focus is also a design skill We are humans, not robots. Focusing on one thing at a time is now more difficult than ever before. Constant notifications on your phone and computer are becoming bright moments of pseudo pleasure. At the same time, you need to be familiar with lots of UX patterns and ideas. Not only does this speed up your work, but it also lets you improve the detail in your designs. My tool of choice is time blocking. I divide parts of my day into smaller chunks that will be spent on one type of task. I personally prefer using 2h blocks, but you should experiment with various time frames. You can mark your time in an application like Google Calendar or the default calendar app on Mac, but a paper version will also do the job. Improving your skills takes a lot of practice and patience. Levelling up these skills step by step will allow you to produce high-quality work. Never forget to listen to feedback from others, not only from designers. Think of your projects in a broader way; they are digital products, not just collections of screens.
https://uxdesign.cc/how-to-level-up-your-ui-design-skills-696f00e30ef9
['Jakub Wojnar-Płeszka']
2020-10-15 18:04:43.999000+00:00
['Creativity', 'UI', 'UI Design', 'Productivity', 'Product Design']
Trump Administration Admits They Can’t Control the Pandemic
Trump Administration Admits They Can’t Control the Pandemic The president needs a vaccine to bail out his incompetence. America, your long national nightmare isn’t quite over. Not yet. White House Chief of Staff Mark Meadows’s admission that the Trump administration is powerless to control the pandemic and merely waiting for a vaccine is nothing short of unconditional surrender. Welcome to Trump’s America where walls are never built and problems are kicked down the road until someone else solves them. Imagine if John F. Kennedy had said, “We choose not to go to the Moon in this decade because it is hard.” Science is complicated and sciencing takes a lot of brainpower, whether it be designing a rocketship or curing a disease. Given that degree of difficulty, Trump decided to make America wait again. To be fair, he valiantly fought his way through his own Coronavirus contraction and has returned to throwing superspreader events with thousands of acolytes in attendance. MAGA hats are recommended but masks are optional. He and Mike Pence soldier on with their 2020 campaigning, despite Mike Pence’s chief of staff Marc Short, Pence’s close aide Zach Bauer, and three others testing positive for the virus. With Pence needing to stump for Trump in Minnesota and the Carolinas this week, he’s ignored CDC guidelines, leading Chief of Staff Mark Meadows to admit what many of us already suspected. We are not going to control the pandemic. We are going to control the fact that we get vaccines, therapeutics and other mitigation areas. The Trump approach is akin to the basketball adage ‘You can’t stop him. You can only hope to contain him.’ Except instead of contain, Trump’s team is running a let-him-score-at-will defensive formation, while awaiting a medical Hail Mary. I’m mixing my sports metaphors almost as fast as this administration mixes its messages. At last week’s presidential debate, Donald Trump claimed the US is ‘rounding the curve’ in its fight against COVID-19 and a vaccine is weeks away, or very, very soon, or by the end of the year, or by April 2021. Regardless, someday in the future, we will have a vaccine, but the important part is the US is rounding the curve. This is important because one day after he said that, the US recorded the ‘highest number of Coronavirus cases in one day since the pandemic began.’ Geometry was never my strong suit, but this curve we’re on looks suspiciously like a spike. I’m being harsh on Chief of Staff Meadows and the rest of the Trump administration. You can’t hope to control a pandemic with masks, social distancing, and a thoughtful sense of community spirit. That would be, what’s the word? No, not malarkey. That would be samfundssind! Yes, that’s it. Samfundssind isn’t the result of the three espressos I shotgunned as I sat down to write, but it is a Danish term defined as “putting the concern of society higher than one’s own interests.” Social mindedness has been all the rage in Denmark in 2020. Back on March 11th, Danish Prime Minister Mette Frederiksen held a press conference, stating the following: We have to stand together by keeping our distance. We need community spirit. We need help. I would like to thank… all who have so far shown that this is exactly what we have in Denmark — samfundssind. Unlike many in the MAGA crowd who insist you’re going to have to pry their mask from their cold, dead conspiracy theory, Danes responded ‘enthusiastically.’ They complied with government guidelines like a bunch of grown-ups! 
This resulted in Denmark being one of the first European countries to re-open schools while keeping COVID-19 deaths below 700 and maintaining 'relatively low' infection rates. They believe that by valuing society over the individual, all the individuals benefit, and it's working! Seems the only thing rotten in the state of Denmark is the stench of infection wafting across the Atlantic from the US. There have been 223,378 new Coronavirus cases reported in the US since Thursday night's debate. Despite medical experts stressing that masks help reduce the risk of infection, the idea of asking a Republican-leaning American to make sacrifices of anything other than possibly their lives at a Trump rally, and signing a liability waiver before they do it, is unthinkable in Trump's America. Personal accountability is something for those people who need to stop leeching off the welfare system and pull themselves up by their economic bootstraps. A real American can't be told to wear a mask for the health of the country. A real American can't even be asked to think about eating a few more vegetables at dinner. America is the land of optional masks, all-meat platters, and a healthcare system so exorbitantly expensive that no human can possibly explain its pricing. A cure will happen, but perhaps not until Trump turns over his tax returns. But before you dismiss Trump as a candidate of broken promises, remember he gave corporations and rich Americans their tax cuts, he pulled the US out of the Paris Climate Agreement, he stacked the Supreme Court with right-leaning justices, and he slashed environmental regulations. According to Gallup polling, this is enough to keep 56% of the country happy, thinking they are better off now than they were four years ago. You can scoff at Trump and Pence for appearing to wave the white flag in their battle against COVID-19, but many maskless Americans breathe easier knowing those two are in charge. Furthermore, many see Trump as sympathizing with their current social and economic plights, even if he does it from a palatial suite in Mar-a-Lago. When they attend one of Trump's mask-optional rallies, they pack in tightly, their unprotected faces shining statements of freedom that proclaim, "Give me liberty or give me death," for which they've already signed a waiver.
https://thebrianabbey.medium.com/trump-administration-admits-they-cant-control-the-pandemic-656f1da010c7
['Brian Abbey']
2020-10-26 15:24:58.880000+00:00
['Election 2020', 'Politics', 'Health', 'Coronavirus', 'Culture']
Don’t Miss Out! A Biologist Explains Antibodies through Rap Music
Don’t Miss Out! A Biologist Explains Antibodies through Rap Music A scientist shows us how to make science fun Antibodies are all the rage these days. As Covid-19 vaccines come to market, many want to understand how immunity works. Scientists and health writers struggle to find ways to make these topics accessible and exciting. This morning my Twitter feed blessed me with an outstanding video from Raven Baxter AKA Raven the Science Maven. Her video provides a fantastic explanation of immunology. The best part is she teaches us through rap music. She walks us through the various types of antibodies and explains B cells as “B cells know the haters when they see them, so it’s fight night.” And how can you not love a lyric like “NK natural killers makin’ haters going night night”? Baxter is an American molecular biologist and science communicator. She is a doctoral student at the University of Buffalo and the founder of STEMbassy, an organization dedicated to high-level science and technology discussions in politics, culture, and social issues. She is also the founder of Black In Science Communication. We need more scientific leaders using social media tools and creative approaches to educate the public. Science is fun! Thank you for reminding us Raven the Maven.
https://medium.com/beingwell/dont-miss-out-a-biologist-explains-antibodies-through-rap-music-9f60427586e1
['Dr Jeff Livingston']
2020-11-28 16:53:57.164000+00:00
['Science', 'STEM', 'Education', 'Health', 'Immune System']
Welcoming Dash 1.0.0
It's hard to believe that just two years ago, we released Dash. The last two years have been a whirlwind—we've made over 100 releases containing bug fixes and new features, we've introduced 10 brand new chart types, we've published 5 new first-class component libraries, and I've personally spent over 130 hours helping folks out on the Dash community forum. Dash's active community forum with 19 active discussions over the last 24 hours. To everyone in the community, thank you. Thanks for keeping the community productive, friendly, and accessible. Thank you for your feedback, candor, and patience. We couldn't have gotten to 1.0.0 without you ❤️ Why 1.0.0? At Plotly, most of our projects adhere to semver, which means that in the 1.x.x series we won't be making any breaking changes to Dash. In this sense, it's a big release. 1.0.0 commits us to a promise that the library is stable and continues to be production-ready, and that no breaking changes will be added to Dash for the near future. In another sense, it's not a big release — Dash has been production-ready since day one. With the community and commercial uptake, we've been very careful about breaking changes for the past two years. With 1.0.0, we're taking the opportunity to make this commitment official and to streamline the interface with a few breaking changes for the days ahead. While 1.0.0 is significant symbolically, we haven't changed anything at the core of Dash. We're proud of this — Dash's architectural foundations are solid. As we continually refined Dash over the past 2 years, talking and working with Dash users, we developed new features that superseded old ones, discovered which settings were awkward or confusing to new users, and overall accumulated a lot of ideas for how to improve the experience of building Dash apps. With 1.0.0, we're resolving some of the friction that users experienced with Dash by streamlining Dash Core Components and changing some parameter names and configuration settings to be friendlier and more semantic. Essentially we're taking all that we've learned from our community and making Dash more intuitive and powerful for years to come. For a full list of what's changed, and what breaking changes you'll need to reconcile in order to upgrade, see the Dash 1.0.0 Migration Guide.
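For readers who haven't tried Dash, the basic shape of a Dash 1.x app has stayed the same across these releases: a layout built from components, plus callbacks that wire inputs to outputs. The snippet below is a generic minimal sketch for illustration, not code taken from the release notes.

```python
# Minimal Dash 1.x app: a layout plus one callback.
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)

app.layout = html.Div([
    dcc.Input(id="name", value="Dash 1.0.0", type="text"),
    html.H3(id="greeting"),
])

@app.callback(Output("greeting", "children"), [Input("name", "value")])
def greet(value):
    # Re-runs whenever the text input changes.
    return "Hello, {}!".format(value)

if __name__ == "__main__":
    app.run_server(debug=True)
```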
https://medium.com/plotly/welcoming-dash-1-0-0-f3af4b84bae
[]
2019-06-27 17:22:10.223000+00:00
['Python', 'Plotly', 'Data Science', 'Data Visualization', 'Dash']
What is the “Collective Prisoner’s Dilemma”?
What is the "Collective Prisoner's Dilemma"? New research looks at brain function during individual and group decision-making scenarios to better understand the role of personality, brain sync, and morality. How we make decisions together when we are looking to maximize the benefit to all is an important area of research, and important for the survival of our species. We're passing the point where we can act as if we are competing on a playing field with infinite resources. Planetary prisoners Nations are approaching full interdependence as they expand to take up as much space as they can on the global landscape. Space is another frontier, but on the surface of the planet, we're shoulder-to-shoulder. Our situation is what game theorists call the "repeated Prisoner's Dilemma" (PD). In the single-case PD, co-conspirators are captured, and then separated for interrogation. If they keep the faith (cooperate), they each get a moderate reward, and go free. If they stab each other in the back (defect), they both get punished. If one defects, and the other cooperates, the defector gets a sweet deal, and the other… well, you know. Go directly to jail, do not pass Go, do not collect $200. The Prisoner's Dilemma Illustrated, Source: Christopher X Jon Jensen (CXJJensen) & Greg Riestenberg, WikiMedia Commons Open Source This paradigm is used as a way of studying how we make decisions, part of "game theory". Do nations cooperate, or defect? Are short-term strategies, which may favor defection, good in the long run? The long view Research on the best long-term strategies for the repeated PD supports ongoing cooperation as the winning approach. Cooperation on balance works out best for all, even with some defections. Illustrating this, in a repeated PD study of 94 participants playing 400 ten-round games, researchers found that 40 percent of players were "resilient cooperators". Based on this data, modelling predicted that "a sufficiently large minority of resilient cooperators can permanently stabilize unravelling among a majority of rational players." Individual factors Two against one, wearing fNIRS headgear, Source: Zhang et al., 2020 In their paper Group decision-making behavior in social dilemmas: Inter-brain synchrony and the predictive role of personality traits, Zhang, Jia, Zheng and Liu (2020) set up a variant of the PD, pitting two players against one (who was actually one of the researchers) in order to study how Big 5 personality factors (Openness to Experience, Conscientiousness, Extraversion, Agreeableness, Neuroticism) and brain activity correlate with decision-making. They recruited 54 participants who did not previously know one another into 27 pairs. Before the PD games, they completed a measure of Big 5 personality traits. While they were playing, participants' brain activity was captured with fNIRS (functional near-infrared spectroscopy), with a focus on key areas identified from prior research. It's easier than fMRI for studies like this as players don't need to be in a scanner. Reward matrix for HIR and LIR, Cooperation and Defection, Source: Zhang et al., 2020 There were two parts of each game. In the first, participants decided to cooperate or defect without conferring (the individual decision-making stage, IDM). In the second, they discussed their strategy (the group decision-making stage, GDM). This way they compared self-interest with shared interest. 
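To make the "long view" point concrete, here is a toy simulation of a repeated Prisoner's Dilemma pitting a simple cooperative strategy (tit-for-tat) against constant defection. The payoff values (T=5, R=3, P=1, S=0) are standard textbook numbers chosen for illustration; they are not the reward matrix used in the Zhang et al. study.

```python
# Toy repeated Prisoner's Dilemma with an illustrative, textbook payoff matrix.
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    """Play `rounds` iterations and return the two total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy only sees the opponent's past moves
        move_b = strategy_b(history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

tit_for_tat = lambda opp: C if not opp else opp[-1]   # cooperate first, then mirror the opponent
always_defect = lambda opp: D

print(play(tit_for_tat, tit_for_tat))      # (30, 30): sustained mutual cooperation pays best overall
print(play(tit_for_tat, always_defect))    # (9, 14): the defector wins the pairing, but both score far less
```

Over many rounds and many pairings, the cooperative population outscores the defecting one, which is the intuition behind the "resilient cooperators" result cited above.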
Researchers also set high-reward (high incentive reward, HIR) and low-reward (low incentive reward, LIR) scenarios in order to see how the stakes affected the findings. Brain activity was measured throughout, along with reaction times and outcomes. Findings Overall, there was more cooperation than defection with lower stakes, regardless of whether the decision was individual (IDM) or group (GDM). Players were faster at deciding what to do when cooperating; deciding to defect slowed them down. Cooperation was more common in group decision-making compared with individual decision-making. The brain's right inferior frontal gyrus (rIFG), thought to be a key component in the human mirror neuron system (involved with empathy and attunement), was more active during GDM. Inter-brain synchrony (IBS) was higher during GDM, seen in increased rIFG coherence. Players' brains were more entrained together at those times. Their IFGs were also more synced up when the reward was higher. Sync (IBS) spiked in the high-reward/group decision-making condition, suggesting that the mirror neuron system is most engaged when our fates are intertwined and the stakes are high. IBS was elevated in the right dorsolateral prefrontal cortex (rDLPFC) during GDM. The rDLPFC is key in executive function, mediating cognitive control and moral decision-making. Greater effort may be required to make the best decision for all in exercising joint top-down control when the immediate impulse may be to go for the quick win. Personality tracked with these findings. Extroversion and agreeableness correlated with IBS. Extroversion tracked with rIFG IBS during cooperative decisions, and agreeableness with lIFG. Agreeableness and extroversion correlated with rIFG IBS when deciding to cooperate in the HIR task. On the other hand, IBS in the DLPFC was lower when pairs chose defection, reflective of "being of two minds" about what to do. For consideration This work is preliminary, extending earlier work on group decision-making showing that we tend to be more rational when we think together about what is best for all. Personality plays a role — agreeableness and extroversion are connected with greater brain sync between players during cooperation with group decisions. Prior work has shown that agreeableness, extroversion and conscientiousness are correlated with cooperative choices, on average. The roles of two important brain areas, the IFG and the DLPFC, were highlighted by this study. Given the IFG's role in the mirror neuron system, it makes sense that group intelligence is reflected in sync (or lack thereof) between these areas. When people are working together collectively, our brains must be entrained, figuratively constituting the "hive mind" (which is fascinating from a neuroscience point of view [1]). Are we selfish by default, or cooperative by default? Personality and upbringing make a difference, as does the need for rational group decision-making. People higher in "dark traits" are less likely to cooperate unless they are higher in empathy ("dark empaths") or if there is clear self-interest and they are not so narcissistic, sadistic or psychopathic that they can't act in their own self-interest when it also serves others' needs. For people average or low on dark traits and average or higher on empathy, cooperation may be more or less an obviously good idea, depending on various factors including personality and upbringing. 
In terms of working out how to share the planet most effectively, making the cooperative long-term decisions which game theory suggests are the best strategy for future generations[2], it's important to combine education with rational decision-making, use tools to strengthen the factors which enhance cooperation for individuals and groups, and set up reward systems which increase the odds of choosing the best overall long-term strategies. Amplifying the voices of — and providing support and resources to — "resilient cooperators", who can shift toward positive outcomes even as a significant minority, may be a winning strategy. Understanding who these people are, and determining if they are best suited for leadership positions, could be part of a rational strategy to secure a better future for humanity. Notes 1. The DLPFC is interesting because it mediates top-down cognitive control, especially when moral values are implicated in group decisions. In defection, there is a lack of sync between players' DLPFCs. The time required to decide to defect is longer than the time required to cooperate, as we morally grapple with the decision to betray rather than the more prosocial move to cooperate. The DLPFC, and closely related areas, are involved in many key functions. For instance, prior work suggests that we take in collective knowledge via the dorsomedial prefrontal cortex (DMPFC). Our tendency to pay attention to our own needs, a factor involved in weighing group decisions which may not immediately gratify, is mediated by the DMPFC as well, a key area in the brain's activity at rest, or default mode network. We are programmed by our culture, and tend to focus on our own needs for basic survival — yet in order to survive in the long run, we have to function collectively. When it comes to social relations, the ventromedial prefrontal cortex (VMPFC) comes online. Whether or not you'll be friends three months after meeting someone can be predicted based on VMPFC activity at the time, both individual and in sync. Other work has pointed to the key role of the DLPFC in radicals, such as people who hold terrorist beliefs, and connects with the willingness to act on — even die for — those beliefs. 2. Science-fiction out-take: Perhaps decision-makers can be hooked up to fNIRS and related systems during negotiations. Real-time brain activity can be used to gauge the degree of true synchrony during group decision-making tasks, and stakeholders (such as voters, the media, etc.) can pay attention to meaningful neuroscience activity. I, for one, would like to see streaming graphics on candidates' brain activity in order to inform my decision-making, at least.
https://medium.com/age-of-awareness/brain-activity-personality-and-group-decision-making-83c662b893b0
['Grant H Brenner']
2020-10-29 04:21:17.328000+00:00
['Neuroscience', 'Psychiatry', 'Psychology', 'Decision Making', 'Personality']
Retinal Inspired Neural Network Structure
If you've ever trained a GAN (Generative Adversarial Network) or an image classification neural network, you know just how data hungry it can be, especially when you want to work with higher-resolution images. The other day, I tried running a GAN on a couple hundred images. At a resolution of just 256x256, the compute time was close to an hour. And I've got a fairly decent graphics card: a GeForce RTX 2070 Super. So that got me thinking: How in the world do animals process so much visual information on the go? Think about your own eyes for a minute. Photo by Paul Skorupskas on Unsplash We focus on a point in the center, while everything else is out of focus. If you could look at the back of your eyeball under a microscope, you'd see around 130 million rods (think light/dark) and 7 million cones (think color). But the cones, which give you your sharpest, most detailed vision, are packed smack dab in the center of your eye. Photo by salvatore ventura on Unsplash Having all of those sensors in the center allows you to see high definition at the focal point, while still being able to see if anything of interest is in the periphery. So I wondered if a model based on nature could help reduce the high GPU requirements in any way. And it turns out, yes it can, and in a much more significant way than I could have imagined. The Problem To start, let's consider a typical GAN and say we were to feed it 10,000 images at 4,096 x 4,096 resolution. If you pull out your handy calculator, you'll see that is just shy of 17 million points of data in grayscale per image, and 3 times that in RGB. Compare that to the 256x256 images I was processing earlier, which total around 65,000 points of data per picture. So we're talking about an increase of 256 times the size for the larger images. But that doesn't take into consideration that training a neural network requires performing operations on humongous matrices. The time it takes to calculate the product of two matrices grows at just a measly* n³!!! (https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm) *dash of sarcasm That turns our 256 times increase in data into nearly 17 million times the processing time! See the problem? So any way to reduce the amount of data needing to be processed will yield dividends. Plus, it's not like every part of an image is important for determining what you are looking at. With a quick glance at a dog, you know you're looking at your furry companion and not a hungry crocodile. Photo by Roberto Nickson on Unsplash But you didn't have to focus on the wall, floor, fur, etc. It was all there, but just not as clear. So how could we turn a simple glance into digital code? I developed a method to do just this. The RNN-i Structure Meet the Retinal Inspired Neural Network Input method, or RNN-i, for short. First, we want to choose a focal area size. For this example, we'll use 4x4. That will be at the center of our focus. We will pass all of those data points into our first row of "glance" data. Since it is 4x4, that will be 16 data points in our first row. Second, we will want to take an area of 8x8 (two times the width and height), centered on the same focal point. But we will process each 2x2 part to take the mean average. The mean average is just (x1+x2+x3+x4)/4. That turns those four pixels into one averaged pixel. Then we continue doing this with each 2x2 subsection in the 8x8 section. This gives us, again, 16 points of data, which we now enter into our second row.
Third, we continue with this process, but with 16x16 as our area and 4x4 subsections to be averaged. RNN-i Grid Fourth, we can keep doing this as many times as we like, until our entire image is "in view". Each time we double the width and height, we only add one row of 16 pieces of data. You might be asking now, "Wouldn't that be confusing since each row of data is starting with a new top left corner?" Maybe for us. But the cool thing about neural networks is that they can adapt to whatever way you feed the data, as long as you stay consistent. The order doesn't matter, as long as you always send the data in the same order every time. The Result So just how much data can this method save? Taking our 4,096 x 4,096 pixel image example from earlier, instead of entering close to 17 million data points, we will be entering only 176 points of data per "look" with this method. Photo by Ben White on Unsplash Game-changer. Now, with so few data points, you would expect to lose a lot of accuracy. But I tested this method out on the Fashion-MNIST dataset. And, in this experiment, accuracy dropped only from 88.3% on whole images to 81.4% with a single glance at the center, using this approach. Here is a link to the GitHub paper and code: https://github.com/therealjjj77/RNNi How can we increase accuracy? With just one look, we are already doing a fantastic job of categorizing the images. But suppose we have a self-driving car, and 80% survival is just not the kind of odds we want to leave in the hands of AI. Then we will need a more comprehensive solution. RNN-Brn So you're driving on the road and, out of the corner of your eye, you see a truck plowing toward you. Within a few short instants, your eyes dart to see the massive object coming your way and determine how you can avoid it or if you have time to stop before entering its path. Photo by Ernesto Leon on Unsplash In this case, you needed multiple looks to understand the object and make some determination about it. A certain part of your brain (think neural network) that has been trained to identify peripheral threats gave you a "heads up" that you should look in that direction and see what's going on. Likewise, an RNN-i will need a processing neural network layered on top of it to analyze the output and the certainty of classification, decide below what certainty threshold a second or third look at the object is required, and, lastly, determine where the new focal center should be relative to the first. Utilizing a second look at the object may greatly increase accuracy, but it only requires an additional 176 points of data. We're still not even scratching the surface of what a whole 4,096 x 4,096 image view would cost in time and data! So that is where a Retinal Inspired Neural Network Bidirectional Relay Node (or RNN-Brn for short) comes in. This will allow us to, if necessary, loop back through our trained RNN-i and see if a second look in another part of the image improves the certainty of classification. I might come back to continue this at a later date with a follow-up article, depending on how interested people are in this topic. Leave a comment if you have any questions or thoughts you'd like to share.
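To make the glance construction described above concrete, here is a minimal sketch in Python/NumPy. It reflects my own reading of the steps in this article rather than the code in the linked repository, and the function name, the zero-padding at the image edges, and the default ring count are assumptions made for illustration.

```python
# A minimal sketch of the retinal-style "glance" sampling described above.
# Assumes a grayscale image as a 2D NumPy array and a 4x4 focal patch;
# edge handling (zero-padding) and all names are illustrative choices.
import numpy as np

def rnn_i_glance(image, cy, cx, focal=4, rings=10):
    """Return a (rings + 1, focal * focal) array: the focal patch plus
    progressively coarser, mean-pooled areas around the same center."""
    rows = []
    size = focal
    for _ in range(rings + 1):
        half = size // 2
        # Clip the window so it stays inside the image.
        y0, y1 = max(cy - half, 0), min(cy + half, image.shape[0])
        x0, x1 = max(cx - half, 0), min(cx + half, image.shape[1])
        patch = image[y0:y1, x0:x1].astype(float)
        # Zero-pad if the window ran off an edge, so pooling stays regular.
        patch = np.pad(patch, ((0, size - patch.shape[0]),
                               (0, size - patch.shape[1])))
        block = size // focal          # pixels averaged per output cell
        pooled = patch.reshape(focal, block, focal, block).mean(axis=(1, 3))
        rows.append(pooled.ravel())    # 16 values per row when focal is 4x4
        size *= 2                      # next area doubles the width and height
    return np.stack(rows)

# A 4,096 x 4,096 image collapses to (rings + 1) * 16 values per glance.
img = np.random.rand(4096, 4096)
glance = rnn_i_glance(img, cy=2048, cx=2048, focal=4, rings=10)
print(glance.shape)  # (11, 16) -> 176 values, matching the count above
```

Under these assumptions, a second look of the kind the RNN-Brn idea calls for is just another call to the same function with a new center, costing another 176 values rather than a reprocessing of the full image.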
https://medium.com/analytics-vidhya/retinal-inspired-neural-network-structure-79a3fed50cc2
['Jeremiah Johnson']
2020-12-07 15:14:32.932000+00:00
['Neural Networks', 'Artificial Intelligence', 'Image Classification', 'Generative Adversarial', 'AI']
After You Drop Your Balls
Is it finally happening? I can hear you asking after reading that headline. Is Zach finally going to join the league of excellent Medium writers who write about their sex lives? No. I don't have a sex life, and the title is a metaphor. Moving on. Over the years, I've become a master of pushing through — being tired, being exhausted, not feeling particularly well, but still getting up and doing whatever I have to do. And there are some days when that just doesn't happen. I get so used to feeling my usual vaguely crappy self that it's easy to forget what it's like to be well and truly sick. Like all-out fever and chills sick, complete with a giant anthropomorphic frog telling me that I should drink more water. At least, that's what I think was going on. The details are kind of fuzzy.
https://zachjpayne.medium.com/after-you-drop-your-balls-39829022f8ec
['Zach J. Payne']
2019-04-05 01:09:54.416000+00:00
['Life', 'Life Lessons', 'Productivity', 'Self', 'Creativity']
Activating the Vagus Nerve Might Lower Your Covid-19 Risk
Activating the Vagus Nerve Might Lower Your Covid-19 Risk While physical distancing and masks are crucial, social interaction could calm the immune system and turn down inflammation Like other apes, humans are social animals. We evolved to live in codependent communities, and we do poorly if deprived of interpersonal contact. Everyone has a different threshold for social interaction. But nearly all of us tend to become distressed when cut off from others, and our immune system responds to this distress by ramping up its defenses. A new study in the journal Neuroscience & Biobehavioral Reviews finds that social isolation is associated with a rise in inflammation-promoting molecules, including some that are implicated in severe Covid-19. And past research has linked loneliness to poor cellular immune health and increased viral loads during an infection. All of these cellular and immune changes are worrying in the context of SARS-CoV-2. Inflammation is a unifying feature of illness, and out-of-control inflammation seems to be a common feature of severe Covid-19. “People [who have Covid-19] are not dying because of a high viral load, they’re dying because of a high cytokine load,” says Stephen Porges, PhD, a distinguished university scientist at Indiana University. Cytokines are immune cells that can turn up or down inflammation. In many cases of severe Covid-19, pro-inflammatory cytokines surge, and the resulting inflammation causes organ damage and death. “Our nervous system requires social interaction. Without that information, our bodies can’t calm down.” Porges says that this now infamous “cytokine storm” can build up for a number of reasons. Medical conditions such as obesity and diabetes — both of which are established risk factors for severe Covid-19 — tend to raise a person’s cytokine load, he says. But so does social isolation. “Our nervous system requires social interaction,” Porges says. “Without that information, our bodies can’t calm down.” In some of his work published during the pandemic, he’s made the case that safe and appropriately distanced social engagement is an underappreciated element of Covid-19 prevention and care. In one recent paper, he and his coauthors point out that corticosteroids — drugs designed to turn down the body’s production of pro-inflammatory molecules — have become a mainstay of Covid-19 treatment. But for the most part, public-health officials have done little to promote safe social interaction as a method of calming the immune system and turning down inflammation. This is where the vagus nerve enters the picture, and particularly an underappreciated branch of the nerve that exerts a calming effect through positive social interaction. The vagus nerve and social interaction When people experience anxiety or distress, including the type that stems from social isolation, what they’re actually experiencing is a swell of sympathetic nervous system activity. This system speeds up the heart, slows digestion, and causes a number of other physiological changes that are known collectively as the body’s stress response. Stress and sympathetic nervous system activity are normal and healthy in moderation. “But if the autonomic nervous system is always in fight-or-flight mode — if it never goes to relaxation — then there’s a loss of balance,” says Peter Payne, a researcher affiliated with the Department of Microbiology and Immunology at Dartmouth College. 
This loss of balance underlies the immune system dysregulation and runaway inflammation that are associated with chronic stress, and that also seem to play a role in cases of severe Covid-19. Payne says that the parasympathetic nervous system (PNS) provides the counterweight to this stressed-out state. Sometimes called the “rest and digest” system, PNS activity is associated with feelings of calmness and relaxation. “It’s a very positive and recuperative state,” he says. The vagus nerve — which is actually a network of nerves that links the brain and immune system to the heart, the gut, and other organs — governs PNS activity. When the vagus nerve is active, Payne says, it acts like a brake on stress and all of its immune-stoking effects in the body. There are many ways to activate the vagus nerve and its stress-lowering powers. Payne mentions deep breathing, yoga, meditation, and other relaxation techniques that entail “shutting down” or stepping back from life’s stressors. But the vagus nerve has two main branches — one of which seems to have developed in humans much more recently than the other. Payne says that this newer branch seems closely tied to positive social interaction, and it may exert a more robust calming effect on the nervous system. “In evolutionary terms, activating this branch is a much more advanced way of dealing with distress and balancing the nervous system,” he says. How to activate the vagus nerve when close contact is risky Porges says that this newer branch of the vagus nerve is connected to the muscles of the face, head, and throat. This may explain why social interaction — smiling, laughing, talking, listening, emoting — all seem to switch on the vagus nerve and its calming influence. “What we think of as talking is a form of co-regulation,” he says. “You’re projecting your autonomic state through your tone of voice, and you’re also receiving that from the person you’re talking with.” “If we don’t have social interaction, the threat of the pandemic is exacerbated.” While social contact should not take precedence over social distancing, he says that in-person interaction is optimal, and seems safe if people are outside and appropriately distanced and masked. If that’s not possible he says that video-based calls — Zoom, FaceTime, and so on — are good surrogates because they preserve most modes of normal interpersonal exchange. Plus, you can see people’s whole faces. If a video call isn’t possible, a phone call is a good stand-in. “Intonation of voice is an important safety cue we share with one another,” he says. “Think about a mother calming a crying baby, or how a dog responds to its owner’s tone of voice.” Texting or emails are better than nothing, but inferior to a call. Apart from not being able to see or hear the other person, texting can involve a lot of waiting and even elements of anxiety. “You can get nervous if you don’t get a response,” he points out. So much of modern medicine is focused on interventions — whether those take the form of a drug, a surgery, or an apparatus like a ventilator. All of those measures have their place. But Porges says that we should also utilize “the body’s own resources” to combat Covid-19, and that includes its ability to calm itself and its inflammatory activity through social interaction. “The take-home message is that we need to connect as much as we can in any way we know how to,” he adds. “If we don’t have social interaction, the threat of the pandemic is exacerbated.”
https://elemental.medium.com/activating-the-vagus-nerve-might-lower-your-covid-19-risk-e08ed0ce7a04
['Markham Heid']
2020-11-25 06:31:59.389000+00:00
['Coronavirus', 'Health', 'The Nuance', 'Pandemic', 'Covid 19']
This Was The Most Traumatic Experience of My Life
On Friday night, May 31st, 2013, I found myself in the middle of a tornado. Ever since I was a child the weather experts always warned about being in a car on elevated roads when a tornado is bearing down on you. That Friday night in 2013, I was highly weather-aware and was confident that the tornado that formed near Yukon Oklahoma was heading north and away from me. I even called my parents that live in Tucson to tell them not to worry about me. About the time I got off the phone with them the storm took a sharp turn south. I got a call from my friend Nikki warning me to run. I had decided earlier not to run away from the storm and suddenly I was faced with the realization that the apartment that I live in would not survive a direct hit from the tornado the news was describing. The weather forecaster said, “This is a groundscraper. You must be either underground, or you must get out of the way.” So, I got in my car with plans to head south, effectively trying to outrace the tornado. I immediately regretted leaving the safety of my apartment. Before I even got out of my apartment complex a wild and unpredictable wind buffeted my little Nissan car. I considered heading west instead, but I had no idea what was that way and I knew south was away from the tornado. So, I decided to go south to the I-35 highway. I figured that I could speed away from the storm. Unfortunately, I-35 was a parking lot. Apparently, everyone else heard the same weatherman tell us to flee the beast and now we were locked in a traffic jam. I was maybe half a mile away from my apartment when I hear the cross-streets of where I live mentioned as the path of the tornado and it was heading directly toward me! As the storm intensified, I pulled over to the shoulder of the bridge I was on to get close to a concrete wall. Rain was pelting my little car horizontally as the wind strengthened. Then about 100 yards in front of me I saw a power flash as wind ripped the lines out of Moore street lights, one after another. I looked up to see large black debris floating above me about 30 feet off the ground. Twenty yards in front of me a large heavy construction barricade was effortlessly lifted into the sky and thrown across the highway. My heart felt like it was about to come out of my chest. My car started rocking and I began to hear what everyone describes as the signature sound of a tornado — a high pitched banshee howl that is similar to the sound of a freight train. So, after decades of hearing that you should not be on an elevated roadway when a tornado was chasing you, there I was… Trapped. Tornado overhead. On an elevated roadway. I had never been more frightened, but I knew if I didn’t do something my life was in jeopardy. So, I decided to take my little Nissan car off-road. I jumped an embankment, crossed a side road and went over a curb to get my car up against the wall of the Central Church of Moore. The storm was throwing branches and debris on top of the car and I still didn’t feel safe, so I got out of the car and ran to a doorway that was covered and halfway below ground. By that time a steady stream of people who were trapped on the highway had followed me, parked behind me on the lawn, and were huddling with me in the covered doorway. People who were staying in the church heard the commotion and let us in. They were there to help with cleanup after the devastating May 20th tornado in Moore. It was an ironic and fortuitous occurrence. 
In most cases, the church would have been empty on a Friday night and we would have had to take our chances with the tornado outside. I would like to thank the staff at that church for helping me, and what may have been another 100 people who followed me off the highway, find shelter during the storm. Amazingly, my car, my apartment, and I made it through mostly unscathed. You can see the tracks of the tornadoes in the image above. In addition to these tracks, there were reports of tornadoes all around us for several hours that evening, and I'm convinced that I experienced a mesocyclone as soon as I left my apartment too. I can tell you that the following day was clear and beautiful, and that I had a focus and desire to achieve my goals more than I ever had before that dark night. Hello, my name is Dirk Hooper. I have a deep passion for writing that has led me to win a few awards. I've had work published at Huffington Post, Slate Magazine, Business Insider, Quartz, The Sporting News, and much more. In addition to writing, I'm a professional photographer and artist, a consultant for adult marketing and branding, and an audio talent. My love for words extends to reading as well. Let's connect! You'll see stories on writing, motivation, entertainment, life, business, marketing, art, kink, and poetry on my Medium profile.
https://medium.com/swlh/this-was-the-most-traumatic-experience-of-my-life-888b53182302
['Dirk Hooper']
2019-09-28 07:57:22.493000+00:00
['Weather', 'Life', 'Storytelling', 'Life Lessons', 'Tornado']
What It Means for Conversational AI to Be “Conversational”
HAL is a well-known fictional conversational AI that lacked empathy for its users. About the Author: Dr. Ender Ricart is a Principal UX Researcher at LivePerson, a company at the forefront of conversational AI applications for customer service. The content of this article is informed by insights from in-depth qualitative research on customer experience with conversational AI. In the research I have performed on conversational AI, people tell me that they neither want nor expect an AI to be human-like. In spite of what they say, in practice, I observe people applying the same fundamentals of linguistic interaction with conversational AIs as they do with people. Interactions with conversational AI are, thus far, designed to emulate (or simulate) human-to-human conversation, and, therefore, trigger people to apply fundamental principles of communication. If and when an AI does not behave in accordance with these principles or ignores them entirely, it leads to confusion and frustration for its conversational partner. In this article, I am going to unpack two fundamental principles of communication and their applications to chatbots: Forming a Shared Symbolic Cloud — How we can successfully communicate about things and build shared understandings. Maxims of Cooperative Communication — How we manipulate what we say and how we say things to communicate meaning. Insights in this article draw from my training in cultural and linguistic anthropology and qualitative research I performed at LivePerson on people's interactions with agents and chatbots in customer service. After reading this article, you should have a better idea about the complexity of communication and how it comes to bear on people's expectations and frustrations when interacting with conversational AI. 1. Forming a Shared Symbolic Cloud — How we can successfully communicate about things and build shared understandings. In communication, we build a kind of Shared Symbolic Cloud, if you will, that conversational participants contribute to and draw from. This cloud is composed of subjects, objects, temporal markers, spatial markers, referenceable symbolic systems, and more. We can engage in conversation to begin with because there is enough pre-existing overlap in the language spoken, perhaps similar experiences, learnings, and sociocultural underpinnings. While conversational participants engage in the mutual building of this symbolic cloud, they nonetheless have a unique set of interpretations and understandings of the conversation at hand, because they also have their own Individual Cloud through which they filter and process information. We have different understandings and meanings attached to words, born from slight to dramatic differences in our sociolinguistic systems and experiences. What is your mental image of a "chair"? Is it a La-Z-Boy or an armchair? Mine is a wooden chair. You probably imagine a different "wooden chair" than I do. I see the wooden chairs my parents had in the house when I was growing up. There is no way that this is also your mental image of a chair, let alone a wooden chair. Regardless, I can successfully communicate with you about chairs or the need to buy wooden chairs for the dining room table. What this specifically means for you, your mental image of a wooden chair for the dining room, will not 100% map to my mental image, but it will still overlap enough that we can communicate. Communication, then, is like a game of telephone.
The message is communicated and received by others with a degree of fidelity to the original and intended meaning. Each conversational participant takes away something different from a conversation because our Individual Clouds differ. This is why it is so valuable to have supporting, nonverbal language infrastructures in place afforded by face-to-face communication. Things such as body language, vocal intonation, other vocal cues such as sighs but also visual aids like specific dining room chairs to point out and compare. All of this comes together to form that Shared Symbolic Cloud of communication. Participating members of a conversation have access to, understanding of, and can contribute to this symbolic cloud of linguistic interaction. In sum, the Shared Symbolic Cloud of communication is a composite of a participant’s Individual Clouds, and Individual Clouds too are composites of societal clouds such as macro levels of language, culture, and social norms and the more micro levels of personal experience, sociolinguistics, and niche culture, etc. All this comes together to enable communication at all and then about specific things. Knowledge cultures — mind the gap Building a Shared Symbolic Cloud becomes more difficult when we throw things like medicine, physics, philosophy, heating and ducting installation, ballet, university admission process, etc., into the picture. These are known as “knowledge cultures.” The more stand-out knowledge cultures include specialization that require advanced training or education. There is usually identifiable jargon (common examples being legalese or technobabble). Included in the term knowledge culture are also less obvious things like a business. Think about any business or company you have worked in. There is an internal work culture, business goals, best practices, brand image, processes, systems, departments or divisions, ranks, etc. You have likely experienced the confusion of engaging in an unfamiliar knowledge culture at multiple points in your life. Maybe you started a new job, and people around you were using acronyms or software you were unfamiliar with. Much of a company’s New Hire Orientation is around helping new employees learn and implement knowledge culture tenets like corporate goals or principles. According to Pokémon.com, there are 809 official Pokémon types. Many hobbies are also deeply entrenched in knowledge cultures. I can recall not too long ago trying to get into the then newly released Pokémon Go. I quickly found myself overwhelmed by the sheer variety of Pokémon, Pokémon classification, abilities, stats, and care/evolution. Meanwhile, my partner, who had grown up playing Pokémon, was making strategic decisions about which Pokémon to catch, develop, and evolve. Another and more frequent exposure to foreign knowledge cultures occurs when you call customer service because of a question or issue. You frame the problem or question using your Individual Cloud of experience and knowledge — your point of view. You have very little understanding of how the company or business talks and thinks about said problem or question within their knowledge culture. It can be frustrating to engage in a conversation with said company or business because there is little correspondence between how you are thinking and talking and how the business is. It is a failure to build that Shared Symbolic Cloud wherein communication takes place. 
This is because there is a larger gap between your Individual Cloud and the cloud the customer service agent is mobilizing, heavily influenced by the knowledge culture they work in. It makes it more difficult to talk about the same thing and build shared understanding if and when you have different meanings underpinning similar words and concepts. Diaper debacle — black-boxed business process and practices I ordered diapers on Amazon that were supposed to arrive in two days. Five days later, I had yet to receive them and noticed that the delivery date had been pushed out another two weeks! I got in touch with Amazon’s customer service through the in-app chat. They informed me that the diapers were sold through a third-party seller, and Amazon could not do anything to help me. I needed to send the vendor a message to cancel or refund the order. This was confusing as, when I made the initial purchase, I could find no indication that this was a third-party vendor. I sent a message to the third-party vendor. There was no response or refund. The next delivery window elapsed, and the delivery date was pushed out even further. I got back in touch with Amazon to complain. They again informed me there was nothing they could do at this point except message the vendor and wait. If the product did not arrive by the specific date for delivery, then they could compensate me. I never heard from the seller. I never got the diapers. Amazon issued a refund. I presume the seller is still selling diapers. I still can’t tell if I am ordering through them or not when I go to purchase diapers. In the above example, there is a gap between how I understand the Amazon Marketplace operates and how the customer service representatives understand it. From my point of view, everything on the Amazon Marketplace is Amazon’s. I do not have visibility into what is being sold by a third party and the rules and regulations behind cancellations, returns, refunds, or complaints. I experienced the promise of two-day delivery being broken, and the diapers failed to be delivered. For me, this was the fault of Amazon and not some then invisible third-party seller. Already frustrated, it was even more frustrating to have the customer service representative tell me nothing could be done because of rules that seemingly were magicked into existence just to annoy me. Had I known at the time of my purchase that the seller was a third-party vendor (and maybe the seller’s star ratings and not the product’s) and had I known the rules surrounding cancelation and refunds, I likely would have gone about my purchase differently. This is a clear example of inside/outsider knowledge of the knowledge culture the business is operating under. The onus is on Amazon to be transparent about this for improved customer service relations and should not be on me to learn through some agent telling me, basically, “yeah sorry; not our problem!” Customers do not have access or exposure to a business’ knowledge culture: the company’s way of thinking, saying, and doing things is black-boxed. The customer service agent or sales rep, however, is in a unique position. They have an intimate understanding of the company’s knowledge culture and can empathize with the customer and their point of view. 
Because customer service representatives are in this privileged position of dual understanding, a good customer service representative will go the extra mile to meet the customer where they are and build the bulk of the Shared Symbolic Cloud to enable effective communication. Image of a customer trying to make sense of a business’ knowledge culture with access only to a small portion of the whole. The Amazon customer service representatives I spoke with did not successfully empathize with me. They did not realize that I, in my Individual Cloud, did not have the knowledge or access they have. If they had stepped into my shoes, they would have gone the extra mile to demonstrate to me how I can find out if a product is being sold by a third-party seller on the Amazon marketplace. They could have informed me all in one sitting about the rules regarding cancellations and refunds for goods sold by third parties, rather than doling out these policies slowly over the course of a month with different agents. They also could have followed up with the seller and perhaps notified me that they are going to be putting the seller on probation or removing them from the Marketplace (I had done some digging and found this particular seller had failed to fulfill orders for a number of people). They did not do this. Instead, I had to be the angry and confused customer. I would have much preferred to be informed, teaming up with Amazon’s customer service representative to resolve my situation and monitoring it over time. I just needed them to share the necessary knowledge with me so we could build that bridge of mutual understanding... Starting from the customer’s point of view Based on qualitative researchI conducted at LivePerson with people of a various age, gender, educational background, income, occupation, and locality, we know that a positive experience with customer service includes empathy and personalized care. These are the actual terms used by the majority of study participants, with all mentioning this in some form or other. Empathy was characterized by study participants as when the customer service representative acknowledges what the customer is experiencing and how it is impacting them. This amounts to feeling like one is being heard, taken seriously, and that he or she will be given individualized attention given the specifics of their situation. This latter aspect dovetails into study participants’ conception of “personalized care,” discussed as the customer service representative working toward identifying the specifics of what is going on and providing tailored solutions given such particulars. What empathy and personalized care have in common here is this feeling of having successfully connected through communication with the customer service representative. That is at the core of what makes for a positive interaction with customer service. These research insights demonstrate that customers value when agents go the extra mile to meet them where they are, starting from their Individual Cloud of experience and understanding, and work from there to build out a Shared Symbolic Cloud that they, the customer, can understand and see the applicability to their situation. This amounts to recognizing the gap between the internal knowledge culture that a business possesses and the knowledge and experience of the customer and starting from the customer’s point of view (empathy) to find a resolution that satisfies their situation and the business (personalized care). 
Building chatbots that start from the customer’s point of view As discussed, customer service needs to go further than a typical conversation partner to translate the internal world of the business (its knowledge culture) into something easily digestible for the customer and the Individual Cloud of experiences and understanding they are operating within. Customers cannot do this work because the internal logic and workings of the company are black-boxed and inaccessible to them. In customer service interactions, therefore, it needs to be the agent building this bridge, the Symbolic Shared Cloud, and providing, at necessary junctures, pertinent information to help the customer join in and engage successfully. This same responsibility applies to a business’ digital customer service agent, the chatbot. It too must work from the customer’s point of view, their Individual Cloud, to build empathy. Tell-tale signs that your chatbot is not bridging the gap are as follows (insights derived from research I conducted at LivePerson): A. Customers are struggling with how to word things to get the chatbot to understand them. This is a struggle to translate his or her problem/query as it is understood and experienced in their Individual Cloud into the knowledge culture of the company. B. When the chatbot presents selection options, the customer cannot figure out which category to choose. This is also a translation issue, but more directly related to the organizational schema that a business knowledge culture might be implementing. It is a question of how the company is classifying or categorizing this product or this topic. It is a similar experience to walking up and down every aisle of a grocery store to try to find where they categorized the dried fruit — is it next to the fresh fruit, nuts, spices, cereals, or canned goods? If you can’t find it after your first try (maybe two if you aren’t in a hurry or don’t have kids), of course, you are going to ask someone that works there rather than go through the whole store. Building conversational AI experiences for customer service that possess the winning qualities of empathy and personalized care are readily achievable with user research. Below are a few examples of things you can start investing in today to help your bot bridge the gap and translate effectively between your internal business knowledge culture and the customer’s point of view. Solution — how to build chatbots that have empathy and personalized care Reminder that empathy for customers is feeling listened to and understood and personalized care amounts to having his or her situation be identified as unique and then customer service working towards finding a resolution that works for the customer. Both of these can be achieved by a chatbot. Start from the user’s point of view — their Individual Cloud It is important to work backward from the customer. In another article, I talked about this from the perspective of mental models. It also applies to what a bot says and how it says it. You don’t want the bot to be too steeped in jargon or the company’s knowledge culture. It needs to be a proper marriage between the customer’s point of view and the business’. Conduct user research into how people are framing problems or issues — what language they are using to talk about things? What is the context in which they are experiencing it? Incorporate learnings into the language and phrasing of the bot. 
This will go a long way to help customers feel grounded in their interaction with an unfamiliar knowledge culture. You can admix business jargon or info about your company’s organizational system as teachable moments. In the example below, the bot gently rephrases the customer’s query using the business jargon “digital portal.” Customer: I have a new credit card but when I log in to view my activity online. I can’t see the new card there. What is going on? Bot: I am sorry to hear you are experiencing difficulty accessing your credit card activity on the digital portal. To help you better, would you please take a moment to log in here… Having the bot restate or rephrase the customer’s intent is additionally a way to build empathy. It demonstrates to the customer that their specific situation was understood — the first step toward receiving personalized care. 2. Plain talk doesn’t just apply to words The design basics of user experience on the web also have many parallels with conversational AI. To create satisfying customer experiences, it is absolutely imperative to design categories, information architectures, logic hierarchies, and more from the user’s point of view. Again, just like user experience on the web, working backward from the customer will make their role in the conversation and the interaction options seem intuitive (that is, resonating with their Individual Cloud). At risk of sounding like a broken record, and tooting my own horn, perform user research (such as card sorting, first click, or tree testing) to identify how to label and construct information hierarchies and categories so it resonates with the customer’s understanding. 3. Recognize a customer’s issue or need as unique and deliver “personalized” care with bots Sure, maybe the company gets hundreds, thousands, even millions of customer service hits about the same issue daily. It doesn’t matter. From this one customer’s singular point of view, the issue is unique to them. They don’t want to be told that their issue is commonplace. If they did not go to or find their answer in the “Frequently Asked Questions (FAQ)” page, being shuttled to the FAQ page reinforces that: (a). they are just a number, and (b) the company doesn’t value them and their situation enough to provide personalized care. Unless your bot is specifically an FAQ bot, don’t send a customer to the FAQs. It is OK to pull content for the bot’s response from an FAQ page, but don’t link to the FAQs or indicate that their question is commonplace. Instead, have the bot talk to the customer and frame content (derived from FAQ pages or not) as unique to this individual and their specific situation. This will set the bot up to deliver a personalized care experience to the customer. 2. Maxims of cooperative communication — how we manipulate what we say and how we say things to communicate meaning In addition to Individual and Shared Symbolic Clouds, what we say and how we say it conveys meaning as well. The British philosopher of language, Paul Grice, outlines four principles of cooperative communication that we apply unconsciously when we converse with others to drive and derive meaning. The maxims are as follows: Maxim of Quantity — Your contribution to a conversation should be informative only to the extent needed; that means there should be no additional information nor should there be too little. Maxim of Quality — Say only what you know or believe to be true and possesses sufficient evidence to support it. 
Maxim of Relation/Relevance — Contribute to the conversation at hand. Maxim of Manner — Don't be obscure, be brief, don't be vague, and organize your contribution. Gricean maxims operate at the overarching level of the conversation as a whole. To this end, they draw the individual utterances made into the larger whole of related subjects, objects, spatiotemporal references, and topics. We unconsciously apply these maxims to convey and comprehend meaning, both implicit and explicit. If and when we encounter a violation of one or more of these maxims, the violation and the type of violation serve to communicate significance beyond the surface value. See the following example: Person A: Did you talk to Michal and Jorge about getting together next Saturday? Person B: I sent a message. Here, Person A applies the maxims of cooperative communication to derive meaning from what Person B has stated. They apply the Maxim of Relevance to determine that the "message" must be related to getting together with Michal and Jorge. Surely, Person B wouldn't intentionally mislead Person A by talking about some irrelevant message they sent! Person B has additionally flouted the Maxim of Quantity. They could have responded by saying, "I did talk to them. I sent them a message, and they responded to say, 'yes; next Saturday works.'" However, Person B did not say this, and in not saying it, instead violating the Maxim of Quantity, Person B has, in turn, successfully communicated a different set of implicit meanings to Person A. Person A: Did you talk to Michal and Jorge about getting together next Saturday? Person B: (EXPLICIT) I sent a message. (IMPLIED) No, I have not talked to them. I messaged them, but they never got back (and I may be a bit irritated by this), so I don't know if we are getting together next Saturday or not (please don't ask me again). Thus, violations of these maxims serve to communicate additional value beyond the explicit content of what was uttered. All of this occurs tacitly, often without calculation, as part and parcel of communication. How do these maxims apply to communication with conversational AI? Conversational AI is not good at carrying context across utterances. Because of this, the application of Gricean maxims fails, and with it the thread of cooperative communication. For intent recognition purposes, conversational AI tends to ground conversational context on a turn-by-turn basis or per discrete interaction, rather than across interactions and intents. The longer context can be retained (subjects, objects, topics, and other identifying information) and mobilized, the easier it will be for the bot and the customer to mobilize Gricean maxims. Businesses should prioritize the development of machine learning and natural language processing that enables AI to integrate transconversational historical data and multithreaded intents in any given interaction with a customer. Below is an example of a well-known conversational AI, Mitsuku, developed by Pandorabots and acclaimed as a "record breaking five-time winner of the Loebner Prize Turing Test" and "the world's best conversational chatbot." Between individual conversational turns (I say something and someone else says something), Mitsuku appears to be in accordance with the principles of cooperative communication, but Gricean maxims apply at the level of the overarching conversation (comprised of multiple conversational turns), not a single turn.
Looking at the bigger picture of what is actually being talked about, Mitsuku violates the Maxims of Relevance, Quantity, and Manner. It switches the topic of conversation from the weather to the cost of raincoats, after which it fails to be relevant altogether. It violates the Maxims of Manner and Quantity by talking at length about random things like a "Mousebreaker" clearing its memory. As Mitsuku's conversational partner, I felt frustrated and confused because I tried to apply these maxims to understand what Mitsuku was saying, sussing out any implied intentionality related to our larger conversation. For example, I had to think over what "Mousebreaker" might mean. At first I thought maybe "Mousebreaker" was a play on words, referring to a computerized "windbreaker," but then why does it erase memory? So, this implied meaning didn't make sense and essentially I wasted cognitive power. The topic continued to leapfrog. Even Mitsuku's possible joke about the cost of my raincoat lands awkwardly when she promptly forgets the conversational thread and then becomes distractingly vague (a violation of the Maxims of Relevance and Manner). Mitsuku's repeated violations, devoid of intentionality (and, therefore, meaning), prevent us from actually communicating cooperatively and from building a Shared Symbolic Cloud. Maintaining the larger conversational context across each of our interactions is essential for a truly cooperative and collaborative conversation to occur with conversational AI. To summarize If we are going to position AI as conversational, then we need to be more aware of the anatomy of a communicative event. Shared Symbolic Cloud — how we build mutual understanding. Maxims of cooperative communication — how we say things and what we say convey implicit and explicit meanings related to the larger context of the conversation. I discussed how building mutual understanding becomes more complicated when a complex knowledge culture is involved. With customer service, it becomes increasingly important to meet the customer where they are, in their Individual Clouds, and work backward to build empathy and personalized care. The same need applies to chatbots used for customer service. Research into the customer's point of view is needed to achieve this. This includes not only what they are experiencing and how they communicate, but also how they organize information. With the maxims of cooperative communication, I emphasized the need for conversational AI to maintain conversational context beyond discrete interactions (however a complete interaction is defined). These maxims, which are unconsciously applied by people in communicative events, apply across interactions to index all past subjects, objects, topics, places, people, etc. The goal is not to make AI indistinguishable from humans but to make it more conversationally compatible with humans. This is important because people will unconsciously apply the fundamentals of communication when invited to converse with AI.
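As a rough illustration of the cross-interaction context retention argued for here, below is a minimal sketch of a dialogue memory that carries entities and topics across turns so that later, under-specified utterances can be resolved against them. This is not LivePerson's implementation or any particular vendor's API; the class, its methods, and the example intent and entity names are assumptions made purely for illustration.

```python
# A minimal sketch of cross-turn conversational memory. It assumes some
# upstream NLU step has already extracted an intent and entities per turn;
# all names here are illustrative, not a real product's API.
from dataclasses import dataclass, field

@dataclass
class ConversationMemory:
    """Carries subjects, objects, and topics across turns and interactions."""
    entities: dict = field(default_factory=dict)   # e.g. {"product": "credit card"}
    topics: list = field(default_factory=list)     # ordered topic history

    def update(self, intent: str, entities: dict) -> None:
        # Merge new entities over old ones instead of starting from scratch,
        # so a later, vaguer utterance can still be grounded.
        self.entities.update(entities)
        self.topics.append(intent)

    def resolve(self, utterance: str) -> dict:
        # Interpret an under-specified utterance against the retained context
        # rather than treating the turn in isolation.
        return {
            "utterance": utterance,
            "assumed_topic": self.topics[-1] if self.topics else None,
            "known_entities": dict(self.entities),
        }

# Example: the second turn only makes sense if the first turn is retained.
memory = ConversationMemory()
memory.update("card_not_visible", {"product": "new credit card",
                                   "channel": "digital portal"})
print(memory.resolve("It still isn't showing up."))
```

The data structure itself is not the point; the point is that whatever the bot says next is conditioned on the whole conversation so far, which is what lets a human partner apply the Maxims of Relevance and Quantity without being misled.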
https://medium.com/swlh/what-it-really-means-for-conversational-ai-to-be-conversational-c796ff278656
['Dr. Ender Ricart']
2019-12-22 11:06:38.309000+00:00
['Artificial Intelligence', 'Chatbots', 'Customer Service', 'AI', 'Conversational UI']
What I Wish People Knew About Reporting Suicidal Friends on Facebook
What I Wish People Knew About Reporting Suicidal Friends on Facebook With no one to turn to, I turned to Facebook — and ended up with a cop on my doorstep Photo: Jack Halford/EyeEm/Getty Images In the winter of 2013, I found myself spending a month on a leaky air mattress. I was staying at the home of my ex-fiancé’s Facebook friend, in Iowa. She’d generously welcomed me after my ex kicked me out of our shared Tennessee apartment. I was three months pregnant and battling suicidal ideation every day. When my fiancé told me to go back to Minnesota and began spending all of his time trolling online for dates, my prenatal depression kicked into high gear. I was pregnant, recently dumped, filled with guilt, and terrified of being a bad mother. I was afraid my depression would prevent me from bonding with my child, and I was in desperate need of help. No matter how much people told me to move on, I couldn’t understand how to actually do it. In those days, I still had a Facebook account, which constantly reminded me of the breakup. Everything online did, but Facebook was particularly good at it. Plus my Facebook posts were pretty damn depressing. I like to think I was careful about what I posted. I knew I shouldn’t tell people how much I wanted to die. I knew I shouldn’t share how often I went for walks in the middle of the night with a knife in my pocket. One day I posted a status I don’t remember posting: “Today I’m thinking a lot about taking a walk and disappearing for good.” I was alone when there was a loud knock at the door. Startled, I opened the door to see a police officer. “Are you Shannon Ashley?” he asked. “Yes,” I answered, the blood draining from my face. I didn’t understand what he wanted. “One of your friends was concerned about some things you posted on Facebook,” he said. “Can I come in and talk?” The officer sat down and asked me some questions about what was going on and how I was feeling. As I realized what was happening, I felt my face burn. Someone had reported my post to Facebook, which advised them to contact my local authorities. I knew if I answered too honestly, I would have to go to the hospital. For a lot of folks battling suicidal ideation, going to the hospital is an unknown that seems even scarier than our darkest thoughts. We will do everything we can to avoid it. So I was careful to tell the police officer just enough to get him to leave me alone. I’m not sure why we don’t talk more about this flaw in the system: So many of our protocols surrounding depression or suicide checks assume the person who needs help will tell the truth. But a lot of us won’t. The officer didn’t stay long, and my main concern was making sure he left before anyone else returned home. It was bad enough to feel so miserable; the last thing I wanted was to explain myself to somebody else. I still don’t know who reported my post and called the police. I do know Facebook didn’t deem the post “against community guidelines” because it’s still visible six years later on my now-unused account. “If someone you know is in danger, please contact local emergency services for help immediately,” Facebook advises on its help page. “After you’ve called emergency services, connect with your friend or call someone who can. Showing that you care matters. Make sure they know that you’re there for them, and that they aren’t alone.” I’m glad to see Facebook recommends that the concerned user reach out to their friend, but I have mixed feelings about the entire policy. 
In my case, the person who reported my post and called the cops never revealed themselves or reached out to offer personal support. People don’t know how to react to a pregnant woman who isn’t glowing with joy or delightfully sharing photos of baby showers and nursery makeovers. And they definitely don’t know how to deal with one in the deep throes of prenatal depression. Some friends did send baby gifts, but they were hard to look at — more reminders of what I didn’t have, and the massive responsibility that was about to come screaming into my unprepared arms.
https://humanparts.medium.com/i-made-a-facebook-post-that-had-the-police-knocking-on-my-door-b5e3d11baf20
['Shannon Ashley']
2020-01-15 17:30:33.282000+00:00
['Life', 'Facebook', 'Depression', 'Social Media', 'Mental Health']
The Ultimate Tool to Start the New Year the Right Way
The Ultimate Tool to Start the New Year the Right Way How to get started with the Wheel of Life I have never been the biggest fan of New Year resolutions, purely due to the short lifespan they seem to hold. Research by Strava recently found that in 2020, Sunday 19th January is the day when most people will give up on their resolutions. In another research, HelloFresh discovered 40% of New Year’s Resolutions have already failed by this point, despite our best intentions. There are quite a few ways you can go about planning your new year and setting meaningful intentions, and one of them is known as the Wheel of Life® (or Life Wheel). The original concept of The Wheel of Life is attributed to the late Paul J. Meyer who founded the Success Motivation® Institute in 1960 and created an array of programs and tools for leaders worldwide. Commonly used by life coaches, it highlights each area of your life in turn in order to assess which areas need more attention. It’s a great nifty exercise that also helps you outline your core values as well as understanding how you can create a life that can best lead you to meet your ultimate goals — yes, it’s quite powerful stuff. When it comes to the Life Wheel, you can follow the standard areas you should focus on, however, I’m more of the school of thought of choosing the core areas based on your personal and professional values. How do you go about finding your values? If you have not done this already, it may be time for you to outline what are the values you stand by on both a professional and a personal level. I love to get a piece of paper and draw a line to create two columns. One column should be for your personal values (family, connection, freedom) and one for your professional values (success, space, flexibility). Exercise: once you have outlined the top 5 values for each column, I want you to compare your answers. What is overlapping, and what is clashing? How can you make sure these values work together, instead of working against each other? Once you know what values matter most to you, it’s time to fill your wheel. The idea is to helps you to identify areas that need more attention, so getting clear on the values will help you map out only the areas of your life that truly resonate with you. A few approaches have been used in the past, including the “roles” you fulfill in your life (mother, manager, daughter, etc) or the areas that matter to you the most (family, work, creativity). In my case, I love to use the values I would find from the exercise I mentioned earlier in the piece since they would represent my priorities in my personal and professional life. Examples would be Personal: relationships, movement, spiritual wellbeing Professional: freedom, creativity, connections By using a wheel, you get a clear representation of the way your life is currently, as opposed to the way you’d ideally like it to be. The concept can be adapted to the way you want to highlight your focus. Some people love to use numbers (from 1–10 to rate each area) others use a traffic light system instead. Exercise: map out your wheel and start adding the areas you want to focus on. You can consider each in turn and rate them based on the amount of attention you’re devoting to that area of your life, how much you feel aligned with each value (for example freedom and creativity). It’s time to get honest now: I hate to break it to you, but it’s unlikely you’ll be scoring a 10 in every single area any time soon. 
Truly, what you are looking for is a balanced spread across the board. Some coaches recommend joining all the marks together from each area to see if they are all in balance. Yet, I believe what is truly important is to take the time to ask yourself: Am I happy with the balance I see across the different areas of my life? Which areas need more attention and focus right now? Action step: write down, for each area, the one step you can take to make improvements for yourself, and work on those steps for the next 30 days. I personally run this exercise every 90 days for the members of our collective. You may want to do this every single month, and that is also fine. Being able to look over and assess what works for you is key when it comes to creating a more balanced life overall. The real balance is the one that allows you to fill your life with more than one area. First, you have to decide what you want to fill your wheel with. Truth is, many people are looking to have a very "unbalanced" life. Don't roll your eyes just yet. Hear me out. The real goal of "balance" is to experience happiness on a daily basis. Today my slice of happiness was a long run. Tomorrow it may be work. Rethink what a balanced wheel means and tailor it to you, in order to truly achieve happiness every day.
https://medium.com/live-your-life-on-purpose/the-ultimate-tool-to-start-the-new-year-the-right-way-32661dd7810f
['Fab Giovanetti']
2020-12-28 23:03:04.884000+00:00
['Habits', 'Goal Setting', 'Productivity', 'Self', 'Creativity']
Redesigning Chrome Desktop
As my involvement in the Chrome browser is ending with this Core UI project, I'm excited to see what the Chrome team has in store for the future of Chrome on desktop and mobile. Lessons learned and initial release feedback As a closing note, I'd like to share some of the lessons I learned during this project as well as some release reactions, both internal and external, hoping that these will be helpful to you in your projects. 1. Engineers are great designers We talk a lot in our design circles about if and why a designer should code. There are a lot of diverging opinions on the matter, and it comes from the fact that there is no simple definition of the role of a designer. However, we talk less about the opposite: should engineers design? In the end, they are the makers and, to a certain extent, the "designers" of your product, the ones who make it become a real thing. Sometimes, as a designer, I feel like all that we do is try to "fake" or "mimic" the end result. Cutting to the chase as fast as possible to try things in the real environment, in real code, is essential. In this project, a lot of my assumptions were broken by engineers who not only brought better solutions to the table but executed and iterated on the design better than I ever could. I'm thinking more specifically of programmatic rendering (an effort championed by Peter Kasting) and motion design (led by Ben Ruthig). Their code knowledge was crucial in getting the design right, and more than just changing the design, it changed the nature of the project, from a visual revamp of the UI to a core rewrite. Everybody is a designer. Ideas are not limited to a role. If you are lucky enough to be working with motivated and engaged engineers, you might realize that they can sometimes be better designers than you. 2. Involve engineers early In the present case, they were involved right away and were an integral part of the design and conception process. As I mentioned earlier, their motivation to deliver better design through better engineering solutions made the product what it is today. Maintaining constant communication was key to bringing the right design to life. You don't necessarily need to understand how to code; it's more important to understand the people who do. 3. Know when to be precise, learn when to be loose Delivering extremely precise spec work is necessary; however, in some cases, leaving your preliminary work open for feedback and new ideas can bring your design to another level. As long as your end design stays true to its original goal or intent, let others enter your process and improve on your design. 4. Beware of change aversion When you are redesigning a product, especially if it has been around for a while, you will run into what a lot of us have encountered: change aversion. Now be careful, sometimes your design might actually be bad, but in a lot of cases, the simple act of changing something is sufficient to trigger moderate to extreme reactions from your users. It can be extremely hard to receive and extremely hard to fight. For this redesign, adding a few pixels triggered lengthy discussions and debates. I won't lie, I do not have the miracle solution to it, but there are things that you can do to minimize change aversion: Communicate with your audience. Always have a deck ready to explain your vision to whoever might want to hear it, especially if they are your stakeholders. Stay the course. It is very beneficial to strongly believe in the choices you made. Don't get stubborn but stay confident. Be prepared.
When somebody wants to discuss some of your design decisions, be prepared to back these up with facts, studies (when possible) or past experiences. If you find yourself in a situation where you cannot answer back, you might find that their feedback is worth considering. Understand that some things take time. This is probably the most frustrating one, and the one I have the most trouble dealing with: some things just take time to change. The bigger the product, the longer it might take. Find happiness in the small victories and understand that your product is never done. 5. Manage your expectations When the update started to roll out, the hardest feedback I received was: "That's it?". It's fair feedback, in my opinion. The project took time and it's not a visual revolution. I like to think that if you look into the details, you might start to see how much attention and care we put into it. The biggest improvements brought by this redesign project are under the hood. It is an engineering achievement first and foremost. I do hope that the benefits will be felt over time, both in our team and with our users, as Chrome has never been so flexible and consistent across our supported platforms. Finding satisfaction in your work and the result of it through your own eyes is more important than seeking validation through others'. If you are truly and honestly satisfied with what you have done, you're good. Closing notes Thanks for reading so far. If you want to reach out, feel free to connect on Twitter or anywhere else. If you want to connect with the Chrome design team, these are cool people to follow on Twitter: Alex Ainslie, Chris Lee, Max Walker, Rachel Ilan Simpson, Peter Schaffner, Hannah Lee, Glen Murphy. The awesome engineering team behind this work: Peter Kasting, Ben Ruthig, Evan Stade, Terry Anderson, Valery Arkhangorosky, Jayson Adams.
https://medium.com/google-design/redesigning-chrome-desktop-769aeb5ab987
['Sebastien Gabriel']
2016-11-14 05:36:37.960000+00:00
['User Interface', 'Design', 'UX', 'Visual Design', 'Google']
We need 2 things to be great at what we do.
We need 2 things to be great at what we do. From a writer’s perspective, we need to learn daily. Eating a slice of the humble pie goes a long way. Next, we need working tools. We are not perfect. With working tools, we can get better. Right?
https://medium.com/technology-hits/we-need-2-things-to-be-great-at-what-we-do-12cbe7f5918a
['Aldric Chen']
2020-12-15 07:36:26.987000+00:00
['Productivity', 'Business', 'Writing', 'Short Story', 'Technology']
SEM: Is There Value in the Google Guaranteed Program?
SOURCE: Google Here's a new wrinkle to Google search for local businesses. Advertisers on Google can now display a green checkmark that tags their business as either "Google Screened" or "Google Guaranteed." It's only available right now for certain categories of business, such as law, financial planning, and real estate for the Screened badge and service-oriented companies for the Google Guaranteed mark. To get the checkmark, businesses will have to go through background checks, provide proof of insurance and licenses, and complete other paperwork. To use the mark, you have to pay for the ads and be part of the search engine's Local Services Ads program. "…the real value of the badge is the access it provides to Local Services Ads (LSA). This is Google's local trust pack. It is a cost-per-call advertising inventory unit that acts unlike anything we have ever encountered as marketers." SEARCH ENGINE LAND For consumers, booking through the Google Guaranteed ad includes an offer to refund up to $2,000 if you aren't satisfied with the service you've received. SOURCE: Google Here's how Google describes how the service works: Google Screened On Local Service listings, you will see the Google Screened icon next to these businesses. How it works All firms that have the Google Screened badge must pass a business-level background check and a business-owner background check. Additionally, each professional in the business must pass a license check, and in some categories, a background check. See Requirements by category for details. These checks assure consumers that the professionals they work with have been thoroughly vetted and provide them added peace of mind as they work with you. Who it covers Only firms that provide professional services including Law, Financial Planning, and Real Estate are eligible for the Google Screened badge. MORE INFO FROM GOOGLE The Google Guarantee The Google Guarantee badge is available for businesses that pass a Google screening and qualification process through Google Local Services. If you're backed by the Google Guarantee, and your customers (that came to your business through Google) aren't satisfied with work quality, Google may refund the amount paid for the service. The following are the upper limits of lifetime coverage for claims: What it Covers United States: $2,000 Canada: CAD $2,000 The Google Guarantee covers claims up to the amount on the job invoice, up to the lifetime cap for coverage. Services must be booked through Google Local Services. The Google Guarantee doesn't cover add-on or future projects, damages to property, dissatisfaction with price or provider responsiveness, or cancellations. How it Works If one of your customers submits a claim, we'll contact you to learn more. You'll have an opportunity to make things right with your customer. After investigating the claim, Google will decide on a resolution. MORE INFO FROM GOOGLE A Deeper Dive Search Engine Land does a deeper dive into this and it is worth the read if you're interested. If you're trying to figure out whether it's worth it for you, get in touch with me and we can do an assessment for your business.
https://medium.com/digital-vault/sem-is-there-value-in-the-google-guaranteed-program-c0f350ae476a
['Paul Dughi']
2020-09-18 15:15:15.363000+00:00
['SEM', 'Search Engine Marketing', 'Google', 'Advertising', 'Marketing']
Cut the Shit: Why advertisers need to sell more and bullshit less.
In the summer of 2015, Pepto-Bismol made one of the most epic ads in the history of indigestion medicine. You might want to get comfortable before starting this: It took a strange 4 minutes to get there but eventually the point was made: if you were raised by goats, you would need Pepto-Bismol to deal with the trash you'd end up eating. "The Boy Raised by Goats" won a couple of Clios but failed to generate much viral traction. And because of its unwieldy length, Pepto-Bismol wasn't able to pump money into paid views of the ad to make it seem viral. So, not a lot of people saw it. And that's probably a big reason why Chief Brand Officer at Procter and Gamble, Marc Pritchard, used the piece of content to highlight the "content crap trap" of advertising during his keynote speech at the 2016 ANA "Masters of Marketing" conference. "In our quest to produce dynamic real-time marketing in the digital age, we were producing thousands of new ads, posts, tweets, every week, every month, every year. I guess we thought the best way to cut through the clutter was to create more ads. All we were doing was adding to the noise." Pritchard, who was appointed as the new ANA chair shortly before his speech, used his new platform and the P&G-produced goat spot to illustrate the pitfalls of content marketing. He described the goat spot as "why did we do this" content, questioning the relevance of the creative insight and the length of the video. It wasn't the only piece of content he called out, but it was a crowd favorite. Waste in advertising has been a hot topic for Pritchard in this era of his career, and he used the ANA speech to stress the need for higher-quality advertising. In Pritchard's version of the story, marketers and agencies have been too quick to pour money into content for new technologies, leading to lower-performing ads, because more ads with unchanging creative budgets equal crappier ads. Even if "The Boy Raised by Goats" is seen as the 21st century's "Citizen Kane" in a few decades, it would've been hard to justify as a business expense because it only ended up accumulating a few thousand views. It's unlikely that a professionally produced four-minute film had a positive return on investment with only a few thousand views. And that is probably what Pritchard means when he refers to it as crap. Not subjectively in an artistic sense but objectively in a crappy-for-business sense. Because Pritchard is speaking up about waste. For him, it's about value. And waste is the enemy of value. Pritchard isn't alone in this. Marketing is a function of business, after all. It shouldn't be a surprise that senior marketers want results from the investments they make. Surveys of senior marketers show over and over again that what they want most from agency partners is good value for their investments. Unfortunately, they're not seeing it. USA Today and the ad agency RPA surveyed senior marketers in the US in 2014 and found that only 56% believed their agencies understood how to generate sales and only 40% believed their agencies delivered a positive return-on-investment. In other words, about half of advertising agencies in the US couldn't do the number one thing they were hired to do. A similar British survey from 2015 found that 46% of senior marketers in the UK were not satisfied with their agency partners' work, and only 8% were 'very satisfied'.⁶ Again, roughly half of client-side marketers weren't getting the value they want.
Pritchard's goat story and ANA speech illustrate the frustration of client-side marketers across the world and across industries who continue to see their agencies pour ad budgets into costly productions that have virtually no impact on sales. Of course, four-minute ads aren't the only form of advertising waste, and sometimes they work. But clearly the goat spot was not the G.O.A.T. spot when it came to delivering business results. Unfortunately, agency-side marketers might not even know that they're making crap. USA Today asked senior agency-side marketers the same questions they had given the client-side marketers in their 2014 survey and found that agency-side marketers had a much sunnier opinion of their own abilities. 84% said they understood how to drive sales (compared to 56% of clients) and 76% believed they delivered positive return-on-investment (compared to 40% of clients). Agencies will find out soon enough that they're not cutting it because Pritchard and his fellow marketers are starting to call bullshit. CALLING BULLSHIT "We should get the best price for our consumers, and if that means rooting out inefficiencies in someone else's business, I will do it." — Keith Weed, CMO of Unilever In June 2017, Unilever's CMO, Keith Weed, announced that the company would be ending half of their 3,000 agency relationships as part of a larger effort to cut inefficiencies in marketing. They set a goal to save €6 billion by 2019 and had already saved €1 billion by the first half of 2017. Unilever's agency purge came roughly three years after Pritchard's P&G announced a similar initiative. Between 2013 and 2016, P&G cut their agency roster of 6,000 by 50%, leading in part to $620 million in savings, which they reinvested in media and sampling. Between the two advertising giants, 4,500 agency relationships were over or ending. Kraft, GlaxoSmithKline, Hershey, and other major marketers followed suit by consolidating their work and cutting redundancies around the same time. The marketing consolidation trend was driven by several factors, including technology that streamlined the planning and buying of media and the financial ripple effects of the Great Recession that increased the burden on business leaders to be more efficient with operational budgets. But the thing that really drove clients to lose trust in agencies was our expensive love affair with content. THE CONTENT CRAP TRAP Advertisers are suckers for trends. Stuff like QR codes, web series, food trucks, virtual reality… Show us a cool new, unproven communications technique or medium and we'll spend millions of dollars to be a part of it. In the late '00s and early '10s the trend was branded digital content. "Content is king" was a favorite go-to phrase, and agencies like VaynerMedia were raking in new business by relentlessly pumping out content and experimenting with new digital formats and social platforms. "The more content I can put out, the more luck I have… You have to get into the content game. You have to force yourself to make more videos, write more posts." — Gary Vaynerchuk People started calling themselves "content strategists" and building monthly content calendars for their clients' brand pages. Brands became content publishers and dumped huge chunks of their marketing budgets into supporting a constant stream of branded content, where new ads were posted to brand pages daily, sometimes several times a day.
Teams of people were hired to manage content, respond to consumer comments, and take advantage of big trending topics. But the pendulum is just starting to swing back, and marketers like Unilever and P&G are starting to realize that quality really does matter when it comes to the bottom line. A lot. And that sometimes less is more. Unilever created a proprietary tool to measure the creative wear-out of their ads and found they were grossly underusing their creative assets. On average, only 1% of the ads they produced reached a point of "wear-out" where they had no effect or negative effects on the audience, and only 40% of ads were "worn-in," meaning that consumers were familiar enough with the idea to move the sales needle. In theory, you want to use all of your ads to the point of "wear-out" to maximize the investment made in the creative idea and production of the film. In June 2017, Unilever announced that it would cut the number of ads it made by 30%, largely because of these findings. P&G also made cuts, slashing their creative production budgets by $570 million between 2014 and 2016. Unilever and P&G don't represent the entire advertising industry, of course, but they do have massive influence on the trajectory of the industry. P&G spent $4.26 billion in advertising in 2015 and Unilever spent $8.3 billion in 2014. That financial power alone can be used to force change on the media and creative industries. And that is exactly what Pritchard and Weed are doing. As of 2017, agencies were still making a profit, but combined 2016 revenue was at its lowest since 2013, and in a common-sense-defying trend, agencies' digital revenue dropped from 13.5% to 8% in the 2016 fiscal year, while client-side budgets were increasing.¹³ If we want to reverse our fortunes and ensure that we remain indispensable instruments of the marketing process, we will need to understand how we're creating waste and how we can avoid creating it in the future. At some point in 2015 it became trendy to start talking about the death of the ad agency. Mashable wrote an article with that exact title and AdAge heavily covered the "death of the agency of record." Agencies aren't dying. But we're changing in a big way. We've been chasing the newest trends at a high cost to efficiency, and marketers are wisely demanding an end to the insanity. It's time we suck it up ourselves and cut the shit. CUT THE SHIT Every three years or so, the Institute for Practitioners in Advertising (IPA), a UK-based advertising trade organization, analyzes hundreds of advertising case studies from around the world to understand what's going on in the world of advertising. It's kind of like a census for ad campaigns. The reports they produce from these studies are usually over a hundred pages long and examine the differences in brand impact from various advertising strategies, like how different campaign lengths can affect the overall growth of a brand, or how emotional and rational messages impact sales differently. They're incredibly influential reports because of how massive and comprehensive the data sets are; nearly one thousand case studies from a range of industries, countries, and cultures. Despite any selection bias inherent in reviewing award-submission papers, these IPA reports are one of the closest things we have to scientific proof of our collective impact on marketing success.
Within each report, the IPA calculates something called the economic multiplier of creativity, which is a unique measurement designed to isolate creativity's impact on business growth. The thinking behind this metric is that a creatively excellent campaign should drive more business growth than a mediocre creative campaign when media spend is equal. Otherwise, why would marketers pay creative agencies millions of dollars to come up with creative ideas? This "multiplier" is built on the assumption that brands grow when they spend above their relative market share in paid media, regardless of the creative quality of their advertising. But they grow a lot more for the same spend if they have excellent creative. Consider this example of imaginary fruit advertising: Oranges have 20% of the fruit market but they buy 30% of all fruit ad space and grow 0.5 market share points without getting any creative awards. Oranges are good but not great. Oranges are the norm. Plums though… Plums are much more creative. Plums have the same market share (20%) and spend the same amount as Oranges — 30% of fruit ad space — but they come up with an awesome creative idea, rake in a bunch of creative awards and see a 2.5-point increase in market share. Plums were 5x more efficient with their media spend than Oranges and excellent according to all the creative experts. Plums for the win! This is the idea behind the IPA's multiplier score — that creatively awarded campaigns deliver stronger business results when a brand is spending above the minimum threshold to gain market share. Fortunately for agencies, the IPA has found that creatively awarded campaigns significantly increased the financial efficiency of campaigns in every edition of the study. It reached a high of 12x in 2010. However, that number has been declining ever since, reaching a low of 6x in 2014, according to the 2015 IPA report "Selling Creativity Short." This drastic four-year slide lines up well with the rapid erosion of client trust illustrated in part one of this series. Something that was driven by the perceived increase in ad 'clutter,' which the IPA also sees signs of in their research: Between 2010 and 2014, the share of digitally integrated campaigns tracked by the IPA rose from 75% to virtually 100%. And while they saw above-average returns for campaigns with multiple assets, they also found a potential limitation to the benefit of multiple assets; campaigns with 5 or more assets performed worse than campaigns with 3–4 assets. It's been said so many times that sometimes we forget to remind ourselves how much media changed between 2005 and 2016. Lenses, Live Video, Augmented Reality, Cinemagraphs, Shoppable posts… it's been a bonanza of ad innovation. And there have been some incredible ads to come out of these new ad formats. According to the Interactive Advertising Bureau (IAB), from 2000 to 2016, internet advertising revenue grew from $8.2 billion to $72.5 billion. But we've become so eager to fill the space that we're sacrificing the impact of our creative. Not only in the number of channels and ads used but the way in which those channels are used. And how they're used together. And it's coming back to haunt our relationships with clients.
ABC — ALWAYS BE CLOSING "Your role is to sell, don't let anything distract you from the sole purpose of advertising." — David Ogilvy The erosion of client trust and scaling back of agency budgets that we've seen over the past five years comes down to one very important problem: advertisers have lost sight of their #1 job, which is to sell. This article shows how a variety of behaviors and beliefs can create waste in advertising. But all of those behaviors and beliefs boil down to a lack of business-mindedness. Whether it's adding more creative ideas to an existing idea and diluting its message, selling in more assets to produce and decreasing the efficiency of a campaign, or creating ads that can't achieve scale — we don't always have our clients' business in mind. If we want to reverse the erosion of trust between agency and client, agencies need to step up and cut the shit. And that means prioritizing your clients' business goals above all else.
https://medium.com/comms-planning/cut-the-shit-1fc48e163ad2
['Brian Brydon']
2019-05-01 22:25:30.323000+00:00
['Advertising', 'Marketing', 'Measurement', 'Creative', 'Creativity']
Integrate machine learning and big data into real-time business intelligence with Snowflake and Plotly’s Dash
Integrate machine learning and big data into real-time business intelligence with Snowflake and Plotly's Dash AI-enhanced BI powered by Dash and Snowflake — (Check out the app here) Business intelligence (BI) is an indispensable tool for many, if not most, modern organizations. BI covers an entire gamut of end-to-end activities from data mining to reporting, all carried out with the core goal of assisting critical business decision-making. How significant has BI become? One indication of its popularity can be gleaned from this Google Trends chart showing its search popularity over the last five years. Google Trends Data — BI vs Machine Learning This chart shows a steady and sizable increase in search volume for BI throughout the last five years. In fact, it has consistently remained above even the volume for machine learning, another critical business capability that often supports BI activities. Modern BI activities have evolved to almost unrecognizably complex forms even since the 1990s or 2000s, never mind the field's nascent days in the 1950s and 1960s using mainframe computers. Take a look at customer reaction or competitor activity monitoring, for instance. These days, natural language processing (NLP) tools might be deployed to parse and analyze millions, if not billions, of social media posts across multiple platforms, not to mention press releases, websites and online fora. Or, internal systems might be built to search and analyze corpora of internal text data comprising tens of millions of texts in documents, e-mails, internal chat logs, and customer feedback. Quite simply, business intelligence is here to stay, and it is well and truly intertwined with the domain of big data; a necessary consequence of which has been that machine learning / AI tools are now indispensable for analyzing the enormous volumes of data involved. One side effect of growing complexities in BI activities has been increased demands on those building the underlying infrastructures, such as data management platforms. In fact, many organizations these days have eschewed building their own solutions in favor of contracting external service providers such as Snowflake to fill their data warehousing needs. Snowflake Snowflake is one of the leading data warehousing service providers, offering 'near zero-maintenance' service, as well as uniquely providing de-coupled, 'near-instant' scalability to their clients. This means that Snowflake's compute power and storage are independently scalable, allowing the user to scale one or the other up or down for as long (or as short) as needed. For these reasons and more, Snowflake is a massively popular solution in the world of data management and warehousing. But collecting data and having fast access to it is only one part of the puzzle. To meet BI's goal of aiding business decision-making, the requisite systems must effectively analyze the latest and greatest datasets, and subsequently deliver their key findings to the relevant stakeholders. In other words, it requires a tight integration between the underlying data, the analysis layer, and the user interface.
Pairing Plotly's Dash with Snowflake Dash was designed with these goals in mind, and that's why it is a natural partner to a premium data service provider such as Snowflake for delivering not just vanilla, static BI, but integrated, responsive BI systems incorporating machine learning analysis layers. Dash is much more than a simple tool for visualizing existing data; it is an integrated user interface layer for machine learning and data science models. An example of a successful marriage between Dash and Snowflake can be seen in this demo Dash app, designed to search and analyze over half a million user reviews from Amazon. Screenshot of Dash / Snowflake driven BI app This app allows the user to perform a search of the underlying dataset, as well as to analyze the text of a review, whether it is from the search results or manually typed in by the user. When the user updates a filter or performs a search, Dash sends the query through to Snowflake, which returns search results from half a million records in less time than a blink of an eye (within tens of milliseconds). Dash then takes the returned result set to generate a dashboard report with not only macro-level statistics, but also natural language processing analysis outputs that are generated on the fly. In other words, a Dash-powered BI dashboard can incorporate not only live data, but live machine learning analysis layers under the hood. With traditional systems, the database, data analytics, and dashboard outputs must be separately updated by disparate individuals or departments. Combining these components to set up an equivalent system to Dash and Snowflake would require more time and cost, not to mention that it would be slow and prone to errors or inconsistencies as some pipelines are updated faster than others. Dash can help to integrate these components and automate intermediate tasks, pulling the displayed data from the primary database and running required analysis or ML models in real time. This ensures that the entire organization is aligned and working with common ground truths from the one dataset. By connecting Dash with Snowflake, the analytics outputs will always be up to date and in sync with the primary data; there is no need for further intermediate processing, passing data back and forth between departments, or updating the app separately. Take a look here at the app in action: Dash in action — responsive filtering & NLP outputs (app) As the animation shows, the Dash app reacts to a user's inputs by triggering a series of processes, starting from passing a query to Snowflake and processing the returned data set. The app carries out statistical analysis in the background, updates corresponding graphs, and triggers the NLP analyses for sentiment analysis and named entity recognition (NER). Dash is, of course, easily customizable. While in the above animation you see multiple outputs being updated simultaneously, as much, or as little, of the app can be made to be triggered by specific inputs. The next figure shows a user fetching a random review from the filtered results set, or clicking through to one of the named entities from a review to perform a new search. Once again, a new review is populated, automatically triggering Dash to run the NLP engine, which looks for named entities before displaying them in the results.
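For a concrete sense of what the query layer behind such an app might look like, here is a minimal sketch using the snowflake-connector-python package. The account credentials, warehouse and database names, the REVIEWS table, and its columns are illustrative assumptions, not the demo app's actual schema.

```python
# Sketch of a Snowflake query helper a Dash search callback might call.
# Connection details and the reviews table are assumptions.
import os

import snowflake.connector


def fetch_reviews(search_term, limit=500):
    """Return Amazon reviews matching search_term as a pandas DataFrame."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="DEMO_DB",
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT review_id, star_rating, review_body "
            "FROM reviews "
            "WHERE CONTAINS(review_body, %s) "
            f"LIMIT {int(limit)}",
            (search_term,),
        )
        # fetch_pandas_all() requires the connector's pandas/pyarrow extra.
        return cur.fetch_pandas_all()
    finally:
        conn.close()


# Example use inside a callback: df = fetch_reviews("battery life")
```

Because Snowflake's compute scales independently, a helper like this can stay simple: the callback just sends SQL and receives a DataFrame ready for the NLP and charting steps.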
Fetching data & triggering NLP analyses (app) Beyond simple BI-style controls like sliders, dropdowns and buttons, Dash supports much more advanced interaction possibilities, such as free-form text input for on-the-fly processing with real-time model execution. To demonstrate this, in this app the user can even type in their own review, as shown in the animation below. Once the user finishes typing and clicks away from the text box, Dash once again initiates the sentiment analysis and NER analyses, updating the results. Triggering NLP analyses on user-entered data (app) Not only is it possible to build these repeatable analysis layers with Dash to streamline analyses and reporting, it is actually incredibly easy to do so. The secret is to leverage Dash's callback functions, which bind a function to inputs and outputs, for example connecting a change in the filter parameters to an output graph. Writing a callback function to update a dashboard element takes just a few lines of code; here is an abridged example: @app.callback(Output('filt-ner-count', 'figure'), [Input('filt-params', 'children')]) def update_ner_freq_chart(filter_params): fig = ... return fig This, and a reference to the output element, is all it takes for a Dash function to detect an update to the search parameters and update the relevant chart on the dashboard. The fact that Dash allows data scientists to code the analysis modules in their preferred language (such as Python, R, or Julia), as well as the front end, is simply gravy. Not a trivial one, mind you. We've seen Dash empower many data science teams to take control of the entire data dashboard, instead of building analysis or machine learning layers and handing the outputs over to separate front end engineers. By pairing Dash with powerful services such as Snowflake, you and your organization can take advantage of all that big data offers, while minimizing the headaches, inconsistencies and labor involved in analysis and communication. If you've gotten this far, and haven't looked at the app in action — what're you waiting for? Go and take a look. We are excited to see what you build with these tools and look forward to seeing the amazing creations from our community of incredible, creative Dash users. If you would like to learn more about Dash and its capabilities, check out our weekly live demo!
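To make the abridged callback above runnable end to end, here is a hedged, self-contained toy version. The component IDs mirror the snippet, but the layout, the dropdown input (the demo app listens to a hidden div's children property instead), and the dummy entity counts are assumptions; the import style assumes a recent (2.x or later) version of Dash.

```python
# Toy expansion of the callback pattern shown above (assumptions noted).
import dash
from dash import dcc, html, Input, Output
import plotly.express as px

app = dash.Dash(__name__)

app.layout = html.Div([
    # Stand-in for the app's filter controls.
    dcc.Dropdown(
        id="filt-params",
        options=[{"label": s, "value": s} for s in ["1 star", "5 stars"]],
        value="5 stars",
    ),
    dcc.Graph(id="filt-ner-count"),
])


@app.callback(Output("filt-ner-count", "figure"),
              [Input("filt-params", "value")])
def update_ner_freq_chart(filter_params):
    # The real app would query Snowflake and run NER here; dummy counts
    # let the callback wiring be seen end to end.
    if filter_params == "5 stars":
        counts = {"Amazon": 42, "Kindle": 17}
    else:
        counts = {"Amazon": 9, "Kindle": 3}
    return px.bar(
        x=list(counts.keys()),
        y=list(counts.values()),
        labels={"x": "Named entity", "y": "Count"},
    )


if __name__ == "__main__":
    app.run(debug=True)
```

Changing the dropdown fires the callback, which rebuilds the bar chart; swapping the dummy dictionary for a Snowflake query plus an NER pass is the only structural change needed to reach the app described in the article.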
https://medium.com/plotly/integrate-machine-learning-and-big-data-into-real-time-business-intelligence-with-snowflake-and-c972b5ea274e
[]
2020-06-05 14:50:16.516000+00:00
['Snowflake', 'Python', 'Business Intelligence', 'Data Visualization']
From Fibonacci to N-bonacci
The Fibonacci sequence gives rise to the golden ratio found in many beauties of nature. The Fibonacci sequence was one of the first mathematical concepts that I analysed when I started coding. In particular, this sequence was vital in enabling me to grasp recursion. To construct a positive Fibonacci sequence, we start by initialising the first two terms: 0, 1. The next term is derived from the sum of the prior two terms: 0 + 1 = 1. And so on. In Python, we can automate the process via recursion. So if we want to determine the third Fibonacci number: in the function fibonacci, we first check if the sequence already has the value by supplying k as the key into a sequence defined as a dictionary. If k is in the sequence, then we simply return the value paired with k as the answer. However, as we have just initialised the sequence, k = 3 would not be found. So we go ahead and calculate the kth value of the sequence by recursively calling the fibonacci function twice to calculate the two terms before. The recursion stack keeps piling on with half-completed function calls until the base case is satisfied — that is, the sequence has already stored or cached the value in the fibonacci_sequence dictionary. We could have returned the answer immediately, but we decide to add one more line to add our result into the sequence so that the machine would not need to waste time recalculating a value that we have seen before. Now, what if we want to generalise the Fibonacci sequence into an N-bonacci sequence, where we initialise the first N-1 values as 0, while the N-th value is 1? If you are interested in a related coding challenge, today's post is inspired by this Edabit problem, so you could try to solve it before continuing on. To be sure, copying and pasting the code below would not work as-is, as the answer that I am seeking is slightly different. The approach I have taken is actually a generalised form of the solution, where the problem is solved via object-oriented programming rather than functional programming. OOP has the advantage of enabling us to first create an object representing some N-bonacci — e.g. Fibonacci is also a 2-bonacci — before adding a kth_term method that could be reused to find different kth values. Let us create a class called Bonacci such that a Fibonacci sequence can be represented by Bonacci(2). In the __init__ magic method, we initialise the sequence self.ans to store the initial sequence values. We also write a __repr__ magic method to easily identify the object upon instantiation, which is almost always more useful than printing some memory address of the object. The kth_term method is very similar in process to the main code in the fibonacci function, except that we use a comprehension to sum up the N prior terms when an unseen kth term is found. This method also relies on the recursion stack to calculate all required N-bonacci values in reverse upon reaching the base case. A few more examples on how we could use the object: The Fibonacci sequence and the resulting golden ratio are found throughout nature — an example would be the seashell in the image above. With the computing power to generalise such sequences, I wonder what else could be discovered in time. Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel!
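The article's own code is embedded as images, so here is a hedged reconstruction of the approach it describes: a memoised fibonacci function backed by a dictionary, followed by a generalised Bonacci class with __init__, __repr__, and kth_term. Exact variable names and indexing are my guesses based on the text, not the author's original listing.

```python
# Reconstruction of the approach described above (names are assumptions).

fibonacci_sequence = {1: 0, 2: 1}  # cache: k -> k-th Fibonacci number


def fibonacci(k):
    # Base case: the value is already cached in the dictionary.
    if k not in fibonacci_sequence:
        # Otherwise compute it from the two prior terms and cache the result.
        fibonacci_sequence[k] = fibonacci(k - 1) + fibonacci(k - 2)
    return fibonacci_sequence[k]


class Bonacci:
    """N-bonacci sequence: the first N-1 terms are 0, the N-th term is 1."""

    def __init__(self, n):
        self.n = n
        self.ans = {k: 0 for k in range(1, n)}  # first N-1 terms are 0
        self.ans[n] = 1                          # N-th term is 1

    def __repr__(self):
        return f"Bonacci({self.n})"

    def kth_term(self, k):
        if k not in self.ans:
            # Sum the N prior terms, computed recursively and cached.
            self.ans[k] = sum(self.kth_term(k - i) for i in range(1, self.n + 1))
        return self.ans[k]


print(fibonacci(10))            # 34
print(Bonacci(2).kth_term(10))  # 34 -- a 2-bonacci is the Fibonacci sequence
print(Bonacci(3).kth_term(10))  # 44 -- the Tribonacci-style 3-bonacci
```

Caching results in a dictionary is what keeps the recursion tractable: each term is computed once, so even deep calls like kth_term(100) resolve without recomputing earlier values.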
https://medium.com/python-in-plain-english/from-fibonacci-to-n-bonacci-d1bc874a54ba
['Kelvin Tan']
2020-10-06 20:01:34.184000+00:00
['Recursion', 'Python', 'Algorithms', 'Software Engineering', 'Fibonacci']
Why you need to become conscious
Perhaps we all have this habit — the switching habit. The habit of switching from conscious to unconscious. We are trained professionals in switching. Most of us learned this very well in childhood — if nothing else :). Perhaps it was an essential strategy for the little child that we once were, to survive the heaviness of unpleasantness. It could be that we — our brains — were not equipped to face it consciously. But this habit has gotten the better of us — this unconscious avoidance. And it makes us suffer the weight of suppressed emotions, the ugly ones: anxieties, shame, embarrassment, humiliation… the list goes on. And at some point in our lives, mostly through some sort of suffering, we find that this switching process no longer works — it becomes ineffective. At this point, we get to see that all is 'not' as well as it seems. This compulsive switching to the unconscious can happen in two ways, depending on one's unique makeup: one might jump into action with the intention of not facing the sensations, or one might flee or pull back more when the threatening sensations are felt. Whatever one's way of avoidance, at some point the body-mind cannot take it anymore. Panic, sheer panic, arrives! It becomes difficult to simply deny the force of suffering arising from this old habit in our body-mind. But still, we continue this habit via thinking — we think for a solution, yet thinking cannot solve it; moreover, being overwhelmed by thoughts, we remain stuck in the unconscious mode even more. To be conscious is to see what is happening within — the habit of slipping into the unconscious, the habit of suppression or avoidance of unpleasant sensations. We are forced by life to switch back to the conscious mode, which is painful, but when the old habit has gotten the better of us, there is no way out other than to become conscious — if we are to get out of the mess. Being conscious and learning to break the cycle of unconscious suffering can lead to healing and stabilization within. To be conscious of the fact that thinking is a way of avoidance by itself opens up a possibility of undoing the old habit. There are no rigid rules on how to consciously heal the body-mind through the release of suppressed energies. Sitting with thoughts and feelings without doing anything about them (as in some meditation), purposefully engaging in any sort of creative work or a conscious walk, engaging in daily life situations and taking risks consciously, the active imagination method (Carl Jung), and so on can all serve the healing process. Often we turn away from immersing ourselves in any creative work because our thoughts need assurance that it will feel better before we even start. It is perhaps a trick of thoughts to keep clinging onto the habit of remaining unconscious. It even comes with fear of missing out as an excuse for preventing a wholehearted immersion — allowing all the tensions to arise and get released along with the creative process. The habitual mind — as soon as a difficult feeling or thought arises, if left alone — gets preoccupied with how to avoid or become unconscious of the difficult thoughts or feelings; it seems the mind is quite satisfied with anything other than being conscious of the experience. It is a trick, in the sense that we do not really get to escape reality; the most the mind can do is be unconscious, or not see 'what is' — running in circles in the hope of avoidance. Though the way in is the only way out.
Being conscious of how we prolong our suffering opens up the possibility of being willing to face the threatening sensations in a kind manner, as we become able to unhook from the pull of old habits.
https://medium.com/spiritual-secrets/why-you-need-to-become-conscious-75b966cedb5b
['Pretheesh Presannan']
2020-10-14 14:44:50.790000+00:00
['Creativity', 'Spiritual Secrets', 'Consciousness', 'Mental Health', 'Healing']
Why the warming oceans will get louder
Why the warming oceans will get louder "The ecological implications of this are wide open." BY MARK KAUFMAN IMAGE: GETTY IMAGES/ISTOCKPHOTO Every 10 minutes, the relentlessly warming oceans absorb 50 megatons of energy, the amount of energy released when detonating the largest-ever atomic bomb. These warming seas — which soak up over 90 percent of the heat humanity traps on Earth — harbor a particularly loud critter found all over the world: the snapping shrimp. The shrimp make an omnipresent background noise similar to static, or frying bacon, or crinkled paper. And new evidence points to a future where snapping shrimp may get significantly louder as the oceans continue to warm — a big environmental change for the many creatures inhabiting bustling reefs. Marine scientists from the Woods Hole Oceanographic Institution recently gathered some 200 snapping shrimp from waters off of North Carolina. The researchers found the already-clamorous species becomes markedly louder when introduced to warmer temperatures, and will present their research Friday at the 2020 Ocean Sciences Meeting. "The sea is a pretty noisy place," said Aran Mooney, a marine biologist at Woods Hole who coauthored the research. "We've been worried about human-produced noise in the ocean," he said. "But we haven't thought about natural sounds." The sound of snapping shrimp (which is likely an involuntary, automatic response) in coral and oyster reefs is truly ubiquitous. "If you're listening to a healthy coastal ecosystem, you'll hear snapping shrimp," said Michelle Fournet, a postdoctoral researcher at the Center for Conservation Bioacoustics at Cornell University. "We hear them almost every hour of every day," Fournet, who had no involvement in the research, added. Each "snap" is made when shrimp quickly close their claws and pop a little bubble of air. Listen for yourself: The Woods Hole researchers exposed the shrimp to three main temperatures: 10 degrees Celsius (50 F), 20 C (68 F), and 30 C (86 F). For reference, the shrimp usually experience 10 C temperatures in the winter, and about 28 C during the summer. The difference in noise levels between 10 C and 30 C, measured in decibels, was dramatic. In the warmer conditions, the snapping was 20 decibels louder, which equates to being ten times as loud, explained Woods Hole's Mooney. "It's already pretty loud" Overall, ocean temperatures have risen by about 2 F over the last 100 years, so each season, whether summer or winter, presumably has boosted overall temperatures and loudness, explained Mooney. Critically, the U.N. Intergovernmental Panel on Climate Change (IPCC) expects surface ocean waters to warm by at least another 1 degree Fahrenheit or so by 2050. The warming will continue until human-made carbon emissions drop to zero — and there's no evidence of that happening for at least decades. (Emissions are still on the rise.) The link between temperature and snapping is clear, said Mooney, and suggests that it's almost certainly going to get louder in ocean reefs, which teem with life (though, like all novel research, these compelling results must be reproduced). The looming question, however, is what does this boost in noise mean for ocean animals? IMAGE: NOAA "The ecological implications of this are wide open," said Cornell's Fournet. "We don't know what these effects are," agreed Mooney. This research opens up the door to find answers, he said, by further researching snapping shrimp and other animals, potentially in the wild.
Louder environments could easily cause problems for fish and other creatures. The boosted volume can mask sounds of the reef, as animals look for food or avoid predators. To us, reefs might appear quiet, but when recorded with underwater microphones, it’s a bustling place. Toadfish whistle, groupers make low-frequency sounds, and many fish “puff.” “Fish make a lot more sounds than people think,” said Fournet. Or, the amplified snapping could be like having excessively loud neighbors, said Mooney. In that case, the noise would be incessantly stressful. Already, civilization’s warming of the oceans — stoked by dramatically boosted levels of atmospheric carbon dioxide to amounts likely not seen in millions of years — has resulted in animals fleeing from their homes and may also blind some sea creatures. Now, we’re likely altering the way their world sounds. “We’re changing the soundscape,” said Mooney.
https://medium.com/mashable/why-the-warming-oceans-will-get-louder-2277444e4b6d
[]
2020-02-24 16:42:59.111000+00:00
['Science', 'Oceans', 'Climate Change', 'Environment']
Automating a Machine Learning Workflow using Google BigQuery and Amazon Managed Apache Airflow
Automating a Machine Learning Workflow using Google BigQuery and Amazon Managed Apache Airflow Using BigQuery, Airflow, and Amazon Personalize to build a machine learning workflow Amazon announced the availability of Amazon Managed Workflows for Apache Airflow (MWAA), a fully managed service that makes it easy to run Apache Airflow on AWS and to build data processing workflows in the cloud. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as "workflows". This article shows how we can build and manage an ML workflow using Google BigQuery, Amazon MWAA, and Amazon Personalize. We'll build a session-based recommender system to predict the most popular items for an e-commerce website based on the traffic data of the product pages tracked by Google Analytics. Amazon Personalize enables developers to build applications with the same machine learning (ML) technology used by Amazon.com for real-time personalized recommendations — no ML expertise required. BigQuery is an enterprise data warehouse that enables super-fast SQL queries using the processing power of Google's infrastructure. High-level solution We'll start by extracting the data, transforming the data, building and training a solution version (a trained Amazon Personalize recommendation model), and then deploying a campaign. These tasks will be plugged into a workflow that can be orchestrated and automated through Apache Airflow integration with Amazon Personalize and Google BigQuery. The diagram below represents the workflow we'll implement for building the recommender system: Architecture diagram by Yi Ai The workflow consists of the following tasks: Data preparation Export session and hit data from a Google Analytics 360 account to BigQuery, use SQL to query the analytics data into a Pandas data frame in the Personalize format, and then write the data frame as a CSV file directly to S3. Amazon Personalize solution Create a Personalize dataset group if it doesn't exist. Create an Interaction schema for our data if the schema doesn't exist. Create an 'Interactions' dataset type if it doesn't exist. Attach an Amazon S3 policy to your Amazon Personalize role if it doesn't exist. Create a Personalize role that has the right permissions if it doesn't exist. Create your Dataset import jobs. Create / Update Solution. Create / Update Campaign. Before implementing the solution, you have to create an Airflow environment using Amazon MWAA; "extra packages" should be included while creating the environment. Please don't include the BigQuery client in the requirements.txt below; we will install the BigQuery client in the next step: boto >= 2.49.0 httplib2 awswrangler google-api-python-client When the new Airflow environment is ready to be used, attach the Personalize policies to the IAM role of your environment by running the CLI command below: $ aws iam put-role-policy --role-name AmazonMWAA-MyAirflowEnvironment-em53Wv --policy-name AirflowPersonalizePolicy --policy-document file://airflowPersonalizePolicy.json Installing the Google BigQuery client MWAA currently doesn't support the Google Cloud BigQuery client ( google-cloud-bigquery ) or pandas-gbq with grpc > 1.20, so we are not able to install the BigQuery client through requirements.txt. If you put the above dependencies into requirements.txt, the pip installation won't install any dependencies in requirements.txt, and you will get the error "No module named 'httplib2'" when running the DAG.
To resolve this issue, we can 1) package the required Google libraries on a local computer and upload them to S3, 2) download them to the Airflow workers when the BigQuery export task starts, and 3) dynamically import the required modules given the full file path. I created a bash file and a requirements.txt for the above steps; run the following command: $ bash setup.sh setup.sh requirements.txt Then, copy the following code into the DAG task to import the Google modules dynamically. Next, we will create a Google Cloud connection in the Airflow UI. Now we can use Google BigQuery in the Amazon Managed Airflow workers; let's begin creating the workflow tasks. Data preparation First, export session and hit data from a Google Analytics 360 account to BigQuery, and use SQL to query the Analytics data into a Pandas data frame in the Personalize format. To prepare an interaction dataset for Personalize, we need to extract the following data from BigQuery and Google Analytics: USER_ID In this example, we don't have user data for the e-commerce website, and there is no user interaction data from the website database. However, we can use the client id provided by Google Analytics. The client id ( cid ) is a unique identifier for a browser–device pair that helps Google Analytics link user actions on a site. By default, Google Analytics determines unique users using this parameter. The client ID format is a randomly generated 31-bit integer followed by a dot ( "." ) followed by the current time in seconds. Hence we only need the 31-bit integer before the dot. BigQuery provides regular expression support, so we can use REGEXP_EXTRACT(USER_ID, r'(\d+)\.') AS USER_ID in BigQuery to extract this identifier as a session-based User Id. ITEM_ID Google Analytics provides the page location ( page_location ), so we can extract product pages (the product slug) with the WHERE clause page_location LIKE '%/product/%' and use it as the Item Id. TIMESTAMP Timestamp data must be in UNIX epoch time format; use TIMESTAMP_TRUNC(TIMESTAMP_MICROS(event_timestamp)) to convert the Analytics event_timestamp to the correct format. device.category AS DEVICE. geo.country AS LOCATION. event_name AS EVENT_NAME. The SQL query in BigQuery is as below: Next, write the data frame as a CSV file directly to S3 using AWS Data Wrangler. The following PythonOperator snippet in the DAG defines the BigQuery-to-S3 task (a sketch of this callable appears below). Creating a recommendation model with Amazon Personalize In this section, we will build a Personalize solution to identify the most popular items for an e-commerce website integrated with Google Analytics.
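Since the article's SQL and operator code are embedded as gists and images, here is a hedged sketch of what the data-preparation callable might look like: put the vendored Google libraries on the worker's path, query BigQuery into a Pandas data frame, and write it to S3 as CSV with AWS Data Wrangler. The project ID, dataset and table names, bucket path, and the exact GA export columns (for example, whether page_location must be unnested from event_params) are assumptions, not the author's actual code.

```python
# Sketch of the BigQuery-to-S3 data-preparation callable (assumptions noted above).
import sys

import awswrangler as wr

# Assume an earlier step downloaded the packaged Google libraries from S3
# and unpacked them here (see setup.sh above); then they become importable.
sys.path.insert(0, "/tmp/google_libs")
from google.cloud import bigquery  # noqa: E402

# Schematic query following the columns named in the article; the table name
# and the GA export schema are placeholders.
INTERACTIONS_QUERY = r"""
SELECT
  REGEXP_EXTRACT(user_pseudo_id, r'(\d+)\.') AS USER_ID,
  REGEXP_EXTRACT(page_location, r'/product/([^/?#]+)') AS ITEM_ID,
  UNIX_SECONDS(TIMESTAMP_TRUNC(TIMESTAMP_MICROS(event_timestamp), SECOND)) AS TIMESTAMP,
  device.category AS DEVICE,
  geo.country AS LOCATION,
  event_name AS EVENT_NAME
FROM `my-project.analytics_123456.events_*`
WHERE page_location LIKE '%/product/%'
"""


def bigquery_to_s3():
    """Query GA data in BigQuery and write a Personalize-shaped CSV to S3."""
    client = bigquery.Client(project="my-project")  # assumed project id
    df = client.query(INTERACTIONS_QUERY).to_dataframe()
    wr.s3.to_csv(df, "s3://my-personalize-bucket/interactions.csv", index=False)
```

Writing the CSV straight to S3 with awswrangler keeps the task stateless, which matters on MWAA workers where local disk should be treated as disposable.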
We will use the Popularity-Count recipe for training our model. Although Personalize supports importing interactions incrementally, we will retrain the model based on daily interaction data to get more relevant recommendations. What we'll cover: check_s3_for_key ( S3KeySensor ): check if the dataset CSV file exists. t_check_dataset_group ( BranchPythonOperator ): check if the Personalize dataset group exists. If yes, trigger t_init_personalize , else trigger t_skip_init_personalize . t_init_personalize ( DummyOperator ): trigger parallel tasks if the dataset group doesn't exist ( t_create_dataset_group, t_create_schema, t_put_bucket_policies, t_create_iam_role ). t_create_dataset_group ( PythonOperator ): create a Personalize dataset group if it doesn't exist. t_create_schema ( PythonOperator ): create an Interaction schema for our data if the schema doesn't exist. t_put_bucket_policies ( PythonOperator ): attach an Amazon S3 policy to your Amazon Personalize role if it doesn't exist. t_create_iam_role ( PythonOperator ): create a Personalize role that has the right permissions if it doesn't exist. t_create_dataset_type ( PythonOperator ): create an 'Interactions' dataset type if it doesn't exist. t_skip_init_personalize ( DummyOperator ): downstream task of the BranchPythonOperator task. t_create_import_dataset_job ( PythonOperator ): create your Dataset import jobs. t_update_solution ( PythonOperator ): create / update the Solution. t_update_campaign ( PythonOperator ): create / update the Campaign. DAG tasks for Personalize workflow In the next section, we'll see how all these tasks are stitched together to form a workflow in an Airflow DAG. Defining the DAG Different tasks are created in the above sections using operators like PythonOperator for generic Python code to run on demand or at a scheduled interval. Now let's set up the DAG with its parameters; a DAG is simply a Python script that contains a set of tasks and their dependencies. Next, specify the task dependencies (a sketch of the full DAG wiring appears below): After triggering the DAG on demand or on a schedule, we can monitor DAGs and task executions and directly interact with them through the Airflow UI. In the Airflow UI, we can see a graph view of the DAG to get a clear representation of how tasks are executed:
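As the DAG definition itself is shown only in embedded gists, here is a hedged sketch of how the tasks above might be wired together. Import paths assume Airflow 1.10.x (the version MWAA launched with); the callables are stubs standing in for the functions described in the article, and the bucket key, schedule, and trigger rule are assumptions.

```python
# Sketch of the DAG wiring for the Personalize workflow (assumptions noted above).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import BranchPythonOperator, PythonOperator
from airflow.sensors.s3_key_sensor import S3KeySensor


# Stub callables standing in for the functions described in the article.
def bigquery_to_s3():
    pass  # query BigQuery and write interactions.csv to S3 (see sketch above)


def check_dataset_group():
    # Return the task_id of the branch to follow.
    dataset_group_exists = False  # placeholder for a boto3 Personalize lookup
    return "skip_init_personalize" if dataset_group_exists else "init_personalize"


def create_dataset_group():
    pass  # e.g. boto3 personalize.create_dataset_group(...)


def create_import_dataset_job():
    pass  # e.g. boto3 personalize.create_dataset_import_job(...)


def update_solution():
    pass  # create a solution / solution version


def update_campaign():
    pass  # create or update the campaign


default_args = {"owner": "airflow", "retries": 1, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="ga_bigquery_personalize",
    default_args=default_args,
    start_date=datetime(2020, 12, 1),
    schedule_interval="@daily",  # retrain on daily interaction data
    catchup=False,
) as dag:
    t_bigquery_to_s3 = PythonOperator(task_id="bigquery_to_s3", python_callable=bigquery_to_s3)

    check_s3_for_key = S3KeySensor(
        task_id="check_s3_for_key",
        bucket_key="s3://my-personalize-bucket/interactions.csv",  # assumed path
        poke_interval=60,
        timeout=600,
    )

    t_check_dataset_group = BranchPythonOperator(
        task_id="check_dataset_group", python_callable=check_dataset_group
    )
    t_init_personalize = DummyOperator(task_id="init_personalize")
    t_skip_init_personalize = DummyOperator(task_id="skip_init_personalize")

    t_create_dataset_group = PythonOperator(
        task_id="create_dataset_group", python_callable=create_dataset_group
    )
    t_create_import_dataset_job = PythonOperator(
        task_id="create_import_dataset_job",
        python_callable=create_import_dataset_job,
        trigger_rule="none_failed",  # join point after the branch
    )
    t_update_solution = PythonOperator(task_id="update_solution", python_callable=update_solution)
    t_update_campaign = PythonOperator(task_id="update_campaign", python_callable=update_campaign)

    # Task dependencies
    t_bigquery_to_s3 >> check_s3_for_key >> t_check_dataset_group
    t_check_dataset_group >> t_init_personalize >> t_create_dataset_group
    t_check_dataset_group >> t_skip_init_personalize
    [t_create_dataset_group, t_skip_init_personalize] >> t_create_import_dataset_job
    t_create_import_dataset_job >> t_update_solution >> t_update_campaign
```

The BranchPythonOperator skips whichever initialization path is not needed, and the "none_failed" trigger rule lets the import job run once either branch completes, mirroring the branching behaviour described in the task list above.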
https://medium.com/ai-in-plain-english/creating-a-machine-learning-workflow-using-google-bigquery-and-amazon-managed-apache-airflow-1596b1a0375e
['Yi Ai']
2020-12-22 23:04:48.530000+00:00
['Big Data', 'Machine Learning', 'Airflow', 'AWS', 'Google Big Query']
I Hope That You Learn How To Gently Start Over
I hope that you learn how to gently start over. I hope that you learn how to look at yourself and know that you aren’t quite the person you want to be, without condemning the person you currently are. I hope you learn how to see your evolution not as a linear ascent into perfection, but an unpacking of why you might want to perfect yourself in the first place. What feels so broken? And who taught you it was that way? I hope that you learn success is less about vision than it is consistency, because ideas are easy, and everyone has them. It is what you act on consistently that you truly see the viability of. It is what you do all of the time that you learn to grow around and through. You are not supposed to get it all right the first time, you’re just supposed to keep trying until you do. I hope you learn that loving is much like life — it takes everything and gives everything back. And merging your life with someone else’s is the greatest honor you’ll ever get, so I hope that you learn how to bend, not break, how to compromise, not take, and how to appreciate, not assume. I hope you learn that you are also your own project, your own muse, your own love affair. I hope you learn you belong to yourself. I hope you learn that you are not meant to grow once and never again, but to fall in love with the process of building, and pulling apart, and rebuilding again. Life calls us to shed ourselves at different points in time. There is nothing we can do to avoid this — no dogma, no religion, no belief, no accumulation of belonging that could possibly remove this requirement from us. We are not here to be just one person, nor a series of ourselves piled up upon one another, fighting for relevance and dominance and space. We die and are reborn often. Instead of gripping tightly to that which gives you place, I hope you learn that growth is really just learning to love what you have while you have it, be where you are while you’re there, and not get too flustered at the fact that you’re still a work in progress. There is no point at which you are supposed to be completed. The only finish point is death. Your life is about gently starting over, every day, every hour, in ways both subtle and disruptive, beautiful and melancholy, startling and expected. I hope you learn how to gently dust yourself off and begin again because life is too short to stay stagnant, life is too full to only drink a quarter of the glass. If you enjoyed this piece, check out my new book on self-sabotage, or book a 1:1 mentoring session with me.
https://medium.com/age-of-awareness/i-hope-that-you-learn-how-to-gently-start-over-22f55c749257
['Brianna Wiest']
2020-12-01 19:01:11.424000+00:00
['Spirituality', 'Life Lessons', 'Inspiration', 'Motivation', 'Creativity']
Ever Wondered Why O’Reilly Books Have Animals on Their Covers?
History of the Design of O’Reilly Book Covers In the article “A short history of the O’Reilly animals,” it’s mentioned that in the mid-1980s, O’Reilly used to sell short books on Unix topics via mail order. These were held together by staples and had plain brown covers. As time progressed, Tim O’Reilly (born 6 June 1954, and the founder of O’Reilly Media, formerly O’Reilly & Associates) decided to sell books through brick-and-mortar stores. He hired a graphics designer for the first two titles that were sold in the book stores, but he wasn’t satisfied with them. Then enters a woman named Edie Freedman into the picture. She was neighbors with a woman who was involved with the company’s marketing and technical writing team. Over a friendly chat, the women discussed the two book covers that had been published by then by O’Reilly, and Freedman was asked if she had some better ideas. Quoting Freedman directly here from the article: “ I had heard of Unix, but I had a very hazy idea of what it was. I’d never met a Unix programmer or tried to edit a document using vi. Even the terms associated with Unix — vi, sed and awk, uucp, lex, yacc, curses, to name just a few — were weird. They sounded to me like words that might come out of Dungeons and Dragons, a game that was popular with a geeky (mostly male) subculture. Sometimes when designing, things come together effortlessly — everything falls into place as if it were inevitable. It just flows. As I looked for images for the book covers, I came across some odd-looking animal engravings from the 19th century. They seemed to be a good match for all those strange-sounding UNIX terms and were esoteric enough that I figured they’d probably appeal to programmers. And, as I investigated the attributes of the real animals, I quickly discovered that there were intriguing correspondences between specific technologies and specific animals. That resonance grew and expanded as I learned more about both the technologies and the animals. I was so energized and inspired that I spent an entire weekend working on the covers without much sleep. At the end of the weekend, I gave several sketches to my neighbor to take into the office. “Some of the people at O’Reilly were taken aback: they thought the animals were weird, ugly, and a bit scary. But Tim got it immediately — he liked the quirkiness of the animals, thought it would help to make the books stand out from other publishers’ offerings — and it just felt right. And so it began. We’ve published hundreds of Animal books since then, and the brand is well known worldwide.” And this has surely worked. O’Reilly books can be recognized distinctively, due to the animals on their cover pages, even in the most crowded of the bookstands. This distinction has contributed immensely to the intangible brand recognition and value systems. This was the history of the design of the cover pages of the O’Reilly books.
https://medium.com/better-programming/ever-wondered-why-all-the-oreilly-books-have-animals-on-their-covers-b9440d41570
['Juhi Ramzai']
2020-09-08 14:54:15.079000+00:00
['Software Development', 'Technology', 'Design', 'Books', 'Programming']
When I Win, It’s Skill. When Someone Else Wins, It’s Luck
When I Win, It’s Skill. When Someone Else Wins, It’s Luck Roll the dice more when the worst that can happen isn’t a big deal From PIRO4D on Pixabay The title is a joke in a fantasy football league of my friends from college. Whenever we win on a given week, we talk trash about our skill at starting just the right players. Whenever we lose, we blame it on bad luck and external factors like weather and injuries. Of course, with COVID-19 making the season especially precarious, there are even more factors our fantasy football league can attribute to bad luck. The facetious attitude is that when we win, it’s skill. But when we lose, our opponents got lucky. Of course, I don’t believe this attitude outside the realm of fantasy football. It’s simply the dynamic with my friends that make it interesting and entertaining to trash talk. On a more serious note, as a writer and teacher, it’s easy to fall into the trap of getting jealous and attributing your successes to skill and others to luck. You can think that other teachers teach easier classes and less challenging students. But the lesson here is clear —most of life is luck. It’s a reality I first started to realize this year, and then start to buy into as a way to cope with realities I faced, my students faced, and more faced that seemed increasingly unfair. It’s a human tendency to attribute our own successes to our own skills and merit, and the accomplishments of others to luck. Luck is more important than skill. And while most people in western society find that mindset defeating, I find it liberating — I don’t take too much stock in my own successes and think I’m better than people, because I just got luckier. I don’t take my losses and failures as personally as I used — I just got unlucky. And I’m more than willing to take risks and venture out of my comfort zone because luck dictates I have no idea where the outcome is going to go. Don’t get me wrong — I have a lot of willpower. The problem is that willpower, determination, productivity hacks, and anything else I can credit to my personal attributes only goes so far. It only lasts so long. As a writer, luck means writing about topics I’ve never written about before because I don’t know how well I’m going to do writing about them, or how much I’m going to enjoy the writing process. Luck means pitching big and dreaming big, and not taking it personally when I get no response from a prominent editor or publication. According to Scott Barry Kaufman at Scientific American, the secret to success just might be luck. Kaufman emphasizes that magazines like Success, Forbes, Inc., and Entrepreneur as telling the secrets to success, and that success is all due to personal characteristics like hard work, toughness, optimism, a growth mindset, and emotional intelligence. And that’s not only a part of our values but also how we allocate resources. In the words of Kaufman, “We tend to give out resources to those who have a past history of success, and tend to ignore those who have been unsuccessful, assuming that the most successful are also the most competent.” Kaufman is a psychologist, and he spent his whole life studying psychological characteristics that predict creativity and achievement. While he has found many traits that explain differences in success, much more is left unexplained. Kaufman concludes that luck and opportunity play a far greater role in anything than we previously realized. 
He doesn’t think luck is everything, but he does think personal characteristics matter much less than we assume, and that outcomes are shaped more by other determinants of success. Some of those determinants: half of the differences in income across the world are explained by the country people are raised in and the income distribution within that country. Kaufman also notes that scientific impact is randomly distributed, not based on productivity. And the chances of becoming a CEO are often most influenced by your mother’s name or month of birth. That raises the bigger question: is success all just luck? And if successful people are mostly just lucky, what would the implications be? How would we allocate resources? How would we assess the “potential” of others? The idea of a meritocracy has been discredited enough, and in a 2018 paper, Pluchino et al., a team of Italian researchers, built a “toy mathematical model” that defined talent as the personal characteristics that allow someone to exploit lucky opportunities. Their simulation reproduced the Pareto principle, in which a small number of people end up holding most of the population’s success. The simulation found that talent was equally distributed, but success was not. The 20 most successful individuals held 44% of the total success in that study, which is consistent with, and even an underestimate of, how wealth is distributed in the real world. The authors found that talent isn’t irrelevant to success, since people with talent had a greater probability of increasing their success; people with more talent could exploit the possibilities offered to them. But talent is not the most important factor, since the most talented people in the study were often not the most successful. The most successful people had above-average talent, but the most luck. “In general, mediocre-but-lucky people were much more successful than more-talented-but-unlucky individuals,” Kaufman says. Luck ended up being the most important factor in the study. As a Christian, I believe everything is up to God. At the same time, I believe God controls luck. And God isn’t Santa — He doesn’t just reward you because you believe in him. The “shit just happens” attitude of Robert Frost is one I completely buy into, because the randomness of life means a lot of good people suffer immensely, while people who commit mass genocide gain power. I personally will have to read much more into the theology of luck. Luck is an unpopular idea in western society, and for good reason. We want to focus on what’s within our control. But maybe we need to keep believing in luck, according to the School of Life, despite luck being “a substantial offence against modern ideals of control, strategy, and foresight.” We want to be the authors of our own destinies, and there’s certainly nothing wrong with that notion. Advances in science, medicine, insurance, and education tilt the tide against luck. But luck will never be tamed as an entity. Every success requires a significant amount of luck, and we avoid failure using luck. Think about driving a car — the most dangerous activity most of us do every day. Lord knows how many near accidents each of us has gotten into, how many close calls we’ve had either making risky turns or having other people make risky turns. Yet people see themselves less as passive actors in the world by the day.
Personal responsibility is the concept that gives us the greatest feeling of control, while luck often assigns our failures to things outside our control. But a belief in luck should ground us in humility when we’re successful, and keep us from blaming other people for their circumstances. A belief in luck can help reinforce our own sense of compassion and empathy, because no one is to blame for their circumstances. At the same time, a belief in luck means no one else is to blame for our circumstances — there’s a tendency in some circles I’ve been in to blame all our problems on our parents. And while I didn’t have a perfect and picturesque childhood, my problems are my own. Luck and personal responsibility can co-exist. But doesn’t luck give us the capacity for personal responsibility? We’re lucky that we didn’t get into that accident that nearly ended our lives. We’re lucky for so many things. And just because we’re so lucky doesn’t mean we should be complacent, but rather aim higher and take more risks. Of course, that doesn’t go for all high-stakes situations, but for situations where there’s really not much to lose. Roll the dice more when the worst that can happen isn’t a big deal. Apply for that dream job. Pitch that dream publication. Strike up that conversation you’re nervous about. Go on that trip. Have some fun. What’s the worst that can happen? We just might get lucky.
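The Pluchino et al. simulation described above is simple enough to sketch. The numbers below (population size, number of events, the doubling/halving rule, the event probabilities) are illustrative assumptions, not the paper's exact setup, but they reproduce the same qualitative result: success concentrates in a small group whose talent is only somewhat above average.

```python
# Toy talent-vs-luck simulation in the spirit of Pluchino et al. (2018).
# All parameters are illustrative assumptions, not the paper's exact model.
import random

random.seed(42)
N_PEOPLE, N_EVENTS = 1000, 80

people = [{"talent": min(max(random.gauss(0.6, 0.1), 0.0), 1.0), "capital": 10.0}
          for _ in range(N_PEOPLE)]

for _ in range(N_EVENTS):
    for p in people:
        event = random.random()
        if event < 0.03:                       # lucky event
            if random.random() < p["talent"]:  # talent lets you exploit the luck
                p["capital"] *= 2
        elif event < 0.06:                     # unlucky event
            p["capital"] /= 2

people.sort(key=lambda p: p["capital"], reverse=True)
top20 = people[:20]
share = sum(p["capital"] for p in top20) / sum(p["capital"] for p in people)
print(f"Top 20 hold {share:.0%} of total capital")
print(f"Mean talent of top 20: {sum(p['talent'] for p in top20) / 20:.2f}")
```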
https://medium.com/the-partnered-pen/when-i-win-its-skill-when-someone-else-wins-it-s-luck-477e3b551473
['Ryan Fan']
2020-12-20 17:38:47.421000+00:00
['Creativity', 'Ideas', 'Philosophy', 'Nonfiction', 'Self']
How To Write For In Fitness And In Health On Medium
Welcome to In Fitness And In Health! Ours is a community dedicated to living happy, healthy lives and inspiring others to do the same. We share stories, anecdotes, tips, tricks, and suggestions on how to do so. We believe in the power of storytelling. The content that performs best for us comes from firsthand experience in the form of personal lessons, thoughts, ideas, opinions and profound takeaways. We want to provide as much value as possible for the reader. We’re so excited you’re interested in joining us as an active contributor. As long as your content has a focus on health, fitness or general well-being, we’d be happy to accept it. Read on for our official submission guidelines.
https://medium.com/in-fitness-and-in-health/how-to-write-for-in-fitness-and-in-health-on-medium-f4c67f028073
['Scott Mayer']
2020-10-06 23:10:20.233000+00:00
['Health', 'Fitness', 'Nutrition', 'Mental Health', 'Running']
Scientists Prove that Yoga Can Help With Fighting Migraines
We’ve all had to deal with having a headache. Some of us get one regularly in stressful times; others might experience headaches when the weather changes, when they don’t drink enough (or too much, right?), or when they have other issues to deal with, like back pain or musculoskeletal problems. Around 12% of people worldwide suffer from migraines. Even though the term migraine is often used loosely to describe any very bad headache, the condition is actually far worse than that. Patients suffering from recurring migraine attacks can have various symptoms, ranging from extreme sensitivity to light and noise (photophobia and phonophobia), to vomiting, blurry vision, the inability to stand up straight, and — of course — a severe, pulsating headache. An attack can last from 30 minutes up to a whole day. Migraines are hard to treat Once an attack occurs, it knocks out the patient completely. He or she has to stay home from work, hide in a dark room, and will not be able to tolerate any light, loud noises, or movement. In the long run, this can lead to complications like depression, sleep deprivation, and decreased quality of life. Photo by Artem Podrez on Pexels Of course, modern medicine offers medications to treat the disease. This includes pain medication, like ibuprofen or paracetamol, and also specific treatments like triptans that work centrally in the brain itself. Nevertheless, treating migraines can be tough. Some patients don’t respond to the offered treatment and have to try multiple medications until they experience an improvement in symptoms. Triptans come with unwanted side effects and can lead to an increase in blood pressure, dizziness, ischemia of the heart muscle, and more. Plus, if taken for a long period of time, the migraine might stop responding to triptans and the pain might come back even worse. Multiple other treatments and supportive measures have been shown to help with migraines. To decrease the frequency of attacks, patients are advised to avoid stress and to exercise at least three times a week. Yoga might be a new potential therapeutic In May 2020, a research group from New Delhi in India published the results of the randomized clinical trial CONTAIN in the renowned journal Neurology, introducing a new and promising treatment option to help with migraines. In the trial, 160 patients suffering from the disease were observed for three months. Before the trial started, they were randomly divided into two groups: the first group received only the standard medications recommended for prophylactic and acute medical therapy. The other group received a supervised regimen of yoga modules, consisting of 60-minute classes 3–5 times a week in addition to the standard medical treatment. Both groups were assessed at baseline for their migraine severity and handed a headache logbook to write down symptoms and attacks throughout the study. Patients were also asked to document any additional medication. Multiple objective scales, like the Migraine Disability Assessment questionnaire (MIDAS) and the Visual Analogue Scale, were used to quantify symptoms. The two groups were very comparable due to similar baseline demographic and clinical characteristics, except that the yoga group showed a slightly higher baseline headache frequency than the group treated with medication only.
Promising results are highly significant After three months of observation and follow-up examinations, patients in the yoga group showed a highly significant reduction in headache frequency, headache intensity, and pill count. Supervised yoga sessions helped with reducing stress and increasing relaxation. While other studies have suggested yoga sessions or meditation in the prodromal phase of an upcoming migraine attack, this study stresses the need to practice yoga on a regular basis. In conclusion, this well-planned trial, published in a well-known and critically reviewed medical journal, shows the beneficial effect of add-on yoga therapy compared to medical therapy alone. Photo by Samuel Silitonga on Pexels Yoga can be practiced by almost everyone and does not cause unwanted side effects. Moreover, it could effectively reduce the number of pills needed to treat the migraine after all. It is also cost-effective, making it very attractive to low-income countries. Even though the study only observed patients for three months, the effect is assumed to last for longer periods as well. More studies will be needed to understand the biochemical and psychological changes yoga induces in the human body. Nevertheless, this study shows the need to integrate different therapeutic approaches to match a patient’s needs.
https://medium.com/beingwell/scientists-prove-that-yoga-can-help-with-fighting-migraines-e00017d69283
[]
2020-06-29 07:20:09.123000+00:00
['Migraines', 'Yoga', 'Science', 'Health', 'Medicine']
11 Principles of Placemaking: How to Design People-Centered Places
If you want to design places catered for humans, placemaking is the concept you want to utilize. It’s UX on an urban scale. We plan, design, and develop according to the user’s needs and preferences, to ensure that the output is actually useful and usable for them. In this case, the user is the community — the people. In their book How to Turn a Place Around, Project for Public Spaces (PPS) proposes 11 principles to help urban planners, designers, and developers make places people actually want to live in. Let’s dissect them one by one. 1. The Community Is the Expert To begin a placemaking process, it’s best to first identify the talents and assets of the community. In every community, there are “experts” who can present valuable perspectives and insights about the area’s history, culture, functionalities, or any other aspect that’s considered meaningful for the people. By placing the community as the expert, it will foster a sense of “community ownership” which elevates the place’s importance in the eyes of its residents. 2. Create a Place, Not a Design When we hear “design,” our minds are likely to conjure images of shapes, colors, and other physical elements. However, it actually extends beyond that. As we understand from UX, design is about experience. In placemaking, every element (physical or otherwise), must serve a purpose. Mainly, the place should make people feel welcome and comfortable. This often translates to the development of effective relationships between the place, the people, and the activities. The goal here is to make the whole greater than the sum of its parts. 3. Look for Partners In a placemaking project, it’s best to look for partners as early as possible. These partners may come from the residents, local governments, educational institutions, museums, libraries, or other related organizations. Since placemaking is community-driven, and the place itself is supposed to foster community ownership, more partners will generally mean a greater sense of community. 4. You Can See a Lot Just by Observing Instead of guessing what might work or not work, it’s much more preferable to learn through observation. Simply by looking at how people are conducting their activities in a place, which elements they use or don’t use, what features they like, or don’t like; we can accurately identify what elements are missing and thus can be incorporated through placemaking. It’s also important to remember that the people’s preferences will evolve over time, and needs to be managed continuously. 5. Have a Vision A good placemaking approach shouldn’t just flow by “whatever seems easiest” or “whatever seems good at the time” — it should have a vision. This vision might include the physical shape of the place, its brand/image, the kinds of activities that might happen there, and so on. Ideally, the vision should come from the community itself. 6. Start with the Petunias: Lighter, Quicker, Cheaper Places are complex entities, and the placemaking practitioners shouldn’t expect to do everything correctly from the get-go. The best places are made through incremental improvements that are tested and refined over many years. For example, physical elements like seating, footpaths, or public artworks can be added and subtracted as needed, in time. Your first step can be as simple as planting a row of petunias. From there, you can gradually experiment with more intricate elements. 7. 
Triangulate Holly Whyte defined triangulation as follows: “Triangulation is the process by which some external stimulus provides a linkage between people and prompts strangers to talk to other strangers as if they knew each other.” In the context of placemaking, this generally means the relationship between separate elements within the place, and how we select and arrange them to achieve a desirable effect. For example, if we put a coffee stand and a bench near a pond, they will create a behavior pattern: People will buy a cup of coffee, then sit on the bench to drink it, while enjoying the pond’s view. 8. They Always Say “It Can’t Be Done” Yogi Berra once said: “If they say it can’t be done, it doesn’t always work out that way.” Creating good places always has its obstacles. Often, no single stakeholder in the placemaking project has a specific responsibility to “create places.” Each person will have a narrow job description such as “landscape design” or “traffic management.” As mentioned before, placemaking itself is a community effort that’s nurtured through small scale improvements over time. 9. Form Supports Function While “form” is important, it must be designed to support a certain “function.” This function should come from the place’s vision and the community’s needs and preferences, which can be understood through the input of the partners, and the community itself. 10. Money Is Not the Issue Usually, the major problems of placemaking are not financial in nature. After basic infrastructures are put in the public spaces, any additional elements often don’t have that big of a cost. Additionally, if the community and other partners are thoroughly involved in the process, this cost can be reduced further. In the broader context, the financial costs are not as significant as the benefits. The bigger issue is, therefore, to ensure that all stakeholders involved are on the same page, going all-in on the same vision. 11. You Are Never Finished By nature, a good place is one that responds to the needs and the preferences of the community, and these are ever-changing variables. Amenities wear out, new technologies are invented, social customs are continuously shifting — placemaking doesn’t simply “end,” it’s an ongoing process. Thus, to create good places, the ability to adapt and overcome change is of the utmost importance. Final Thoughts Placemaking is still growing as an urban development concept. It will adjust itself to the characteristics of the particular time and location. Perhaps, these 11 principles will also change with it. That being said, whenever and wherever placemaking is conducted, one thing stays the same: Just like UX design, that focuses on the users, placemaking focuses on the community. First and foremost, placemaking is done with the community — the people — in mind. That’s the most important mindset you need to make people-centered places.
https://medium.com/age-of-awareness/11-principles-of-placemaking-how-to-design-people-centered-places-b84e7e705a1f
['Aushaf Widisto']
2020-10-03 04:13:58.643000+00:00
['Creativity', 'Design', 'Urban Planning', 'Placemaking', 'Cities']
Ford is known as one of the greatest entrepreneurs of all time. But was he such a great leader at all? How do you spot a weak leader?Was Henry Ford a Good Leader?
Ford is known as one of the greatest entrepreneurs of all time. But was he such a great leader at all? How do you spot a weak leader?Was Henry Ford a Good Leader? Maximilian Perkmann Follow Dec 14 · 4 min read There is an often-told story of what happened when some intellectuals denounced Henry Ford, claiming he didn’t know much. Ford challenged them to come and ask him anything they liked. He listened to their questions and, when they were through, he simply reached for several phones on his desk and called in some of his bright assistants and asked them to give the answers the intellectuals sought. Model T — Pixabay.com He ended by telling the panel that he’d rather hire smart people to come up with answers, so he could leave his mind clear to do more important tasks, tasks like thinking. One of the quotes credited to Ford goes: “Thinking is the hardest work there is. That is why so few people engage in it.” Was he such a great leader? The Other Perspective John Maxwell poses the story of Henry Ford as a negative example in his chapter about the leadership principle of “empowerment”. According to him, the company lived for a long time on the revolution that has been the Model T and its innovative form of production. Henry Ford has invented assembly line production with the Model T. However, the company faced a hard time with Henry Ford in the following years. By blocking the ideas of capable employees and with his eccentric behavior Ford almost destroyed the company. Only his son Edsel Ford was able to rescue the company. Because of Edsel, several employees remained in the company. Henry Ford’s grandson “Henry Ford II” took over the management in the next generation. In the beginning, he made some very good decisions and, above all, hired excellent managers. Unfortunately, he fell into negative patterns too, causing some great damage through intrigue. You can read the exciting details about this in the 21 Leadership Principles. I do not want to go into more details, but I just want everyone to have a serious look in the mirror. Do you see some parallels to the Ford family? Are You a Weak Leader? A weak leader is driven by fear. The fear of no longer being the center of attention, the fear of being replaced, and the fear of losing control. Photo by Alfred Aloushy on Unsplash In fact, a great leader does not need to fear replacement. When people lead excellently, an exciting paradox comes into play: those who work to make themselves superfluous are given more and more responsibility. Provide a safe framework The most valuable thing a leader can give is a safe framework where they don’t have to worry about things that aren’t their job. Everyone has a limited amount of resources in their head. So if someone needs their brain capacity to manage uncertainty, they won’t have that capacity available for creative performance. If a leader provides a safe framework, then employees can concentrate fully on their creative work. Theodore Roosevelt simply says that “the best boss is the one who has enough brains to select good leaders who can achieve his goals, and then does not interfere with their work.” Insecurity in Leaders How do you spot insecure leaders? Resistance to change Change means uncertainty. That’s why “weak” people often struggle with change. However, self-confident people are aware that preventing change is much more dangerous in the long run than the short-term uncertainty of a change process. 
Even if a change does not bring the desired result, we can always continue to change until it fits again. Lack of Self-confidence If you don’t know who you are, you can’t stand firm when there is a problem. Being self-aware and being able to confidently represent decisions is a must for every leader. Micro-Management Most of the time, the good development of an organization depends on the leader doing a few big things well. Insecure leaders tend to get lost in a thousand small details and “micromanaging” their employees. Competition Insecure leaders become nervous when others increase in authority in the organization. As a result, they try to reduce their influence or to Take Away “Leadership, is the ability to bring out the best in people. The technical skills of business are easy. The hard part is working with people.” — Rich Dad, Cashflow Quadrant: Rich Dad Poor Dad Working with people, taking responsibility and trust are some serious skills when being in a leadership position. To conclude, I want to bring up another quote from Harry Truman, 33rd president of the United States: It is amazing what you can accomplish if you do not care who gets the credit — Harry S. Truman Interested in leadership? Have a look at my article Agile Leadership: A Mindset, Not a Method
https://medium.com/illumination/was-henry-ford-a-good-leader-311fc36e7381
['Maximilian Perkmann']
2020-12-20 21:37:07.998000+00:00
['Business', 'Startup', 'Entrepreneurship', 'Leadership', 'Management']
Exploring Wellness Newsletter
Exploring Wellness Newsletter Check out the latest stories, from making your own lattes to finding your running motivation. Photo by Corinne Kutz on Unsplash Welcome to the first-ever newsletter for Exploring Wellness! I had intended to write a regular newsletter when I began this publication, but life got in the way, and here we are. I plan to provide periodic updates highlighting our feature articles, although don’t expect them to come on any sort of regular basis. Until my kid is back in school full time and e-learning is a thing of the past, I’ve been careful not to set my expectations too high. We’re all facing stress these days, and we need to remember self-care. Sometimes, self-care means going easy on yourself and your expectations.
https://medium.com/exploring-wellness/exploring-wellness-newsletter-5c6fd3c0d329
['Jennifer Geer']
2020-10-04 19:43:31.395000+00:00
['Medium Publications', 'Health', 'Newsletter', 'Wellness', 'Lifestyle']
Make LinkedIn Part of Your Content Promotion Strategy to Increase Your Reach
3. Distribute Your Content in LinkedIn Groups LinkedIn has a plethora of interest groups that you can join and take part in. Since these groups are tied to specific interests, industries, and demographics, they contain an already segmented audience to target. This can help increase your influence within a specific area, as well as establish thought leadership. Share Your Content in Groups as a Link Post This one can be a bit tricky, but it can also be rewarding when done correctly. The first part is choosing your groups wisely. You need to pick groups that aren’t so unmoderated that they are nothing but spam, but not so moderated that they don’t allow you to share your content at all. Ideally, you want a group that fits your content so well that when you do share your content, it isn’t considered spam. It’s considered helpful. One way to ensure your content is seen as helpful and not spam is to interact with group members before dropping in content links. Another is to interact with the group members that do like and comment on your content when you do share. LinkedIn groups will give you what you put into them. Engage with members who have high-quality content like yours, and they will likely do the same when you start to promote your content. Share Your Content in Groups as a Link in a Comment For the groups that are so heavily moderated that you can’t actually start a link as a group discussion, try sharing your content as a comment in the group instead. Once you’re in the LinkedIn group, use the search box to see if there are any questions that relate to your content. If you find any, see if you can create an answer that leads to a link to your content. At a bare minimum, the person who posted the question will get notified of your answer. If you provided a valuable answer, they might reward you with a thank you and a share of your content outside of the group.
https://medium.com/the-innovation/make-linkedin-part-of-your-content-promotion-strategy-to-increase-your-reach-2632f16af48d
['Esat Artug']
2020-12-08 21:52:01.861000+00:00
['Marketing', 'Writing', 'Business', 'Social Media', 'Freelancing']
Build Quality Microservices and Apps
Build Quality Microservices and Apps Ensure governance by including these five elements in your requirements document Photo by Gabrielle Henderson on Unsplash Fragile apps and microservices are often the result of vague requirements, with developers left to their own interpretation and devices. A well-written technical requirements document provides sufficient information to ensure IT governance and implementation consistency across teams. Proper governance leads to better product quality, increased efficiency through clarity, and improved collaboration with stakeholders, e.g. testers. Software quality should be ensured before the coding starts, rather than waiting for quality assurance (QA) to raise issues to fix — which would slow down the entire delivery process. In the following, I’ll share five attributes of a good technical API/microservice specification that have served me well during my stint as a developer doing the implementation, and as a tech lead managing the governance and requirements. #1: Short description and intended callers Microservices are often designed and built based on bounded contexts. Each microservice at the data layer is in charge of its own set of tables or documents within a database, e.g. products, users, etc., according to microservice design principles. In the short description, specify the following: Database/table names to query — prevents overlapping queries and joins to other tables managed by other teams/services. Downstream microservices invoked — applies to logic-layer or stateless microservices, to communicate and manage dependencies. With the above information specified, the developer can then communicate with the relevant database and microservice owners to align requirements and bridge implementation gaps. #2: Microservice details and non-functional requirements Seemingly minor details such as transport encryption protocols and response headers can lead to deployment issues, e.g. certificate incompatibility, failing security tests, etc., especially in a high-security environment. Therefore, it’s crucial to indicate such details in the requirements document: Transport authentication/encryption — specify whether it’s mutual TLS, HTTP/HTTPS, etc., so that the developer can manage the application configuration and certificates in the codebase. Microservice supported actions — indicate whether the application supports GET/POST/PUT/DELETE requests, to limit redundant exposure of endpoints. Microservice response headers — ensure that appropriate response headers are returned from the microservice to avoid exposing details that would threaten security, e.g. server type. Microservice endpoints and query params — provide examples of the specific endpoint pattern, e.g. https://[NAMESPACE_URL]/[SERVICE]/v1/data?{param}={value}, to standardise API endpoint designs. By indicating such details, you allow the developer to build the application with basic security considerations and governance in place.
#3: Request and response details The request and response details potentially make up the bulk of the document. The information will also likely be disseminated to other collaborators to assist with service design, testing, etc. The key elements to include in this section are: Request headers — some optional yet important examples include UUID headers to facilitate request tracing and accept-language for localisation. Request query parameters — with the attribute name(s), required/optional, data type, and a short description of each attribute. Response body — include attribute names (e.g. status, status.code, data, data.uuid), data types, and a short description for each attribute. I would recommend presenting the above information in a table format to facilitate readability for non-technical stakeholders. The next section on sample requests/responses caters to the technical stakeholders. #4: Sample requests and responses For certain stakeholders, merely providing the request and response descriptions and attributes will not suffice. Having a sample output response in JSON format enables the reader to quickly contextualise and visualise the desired output of the service with “actual” data. To facilitate clarity, include the following in the sample request/response section of the document: Microservice action, URL and headers — a suggestion would be to imagine you’re querying the actual service via cURL and include the essential details. Response with HTTP code — for each HTTP code, include the respective response payloads to ensure that errors are properly handled; this is especially essential for services deployed in a mesh architecture. You can refer to a sample snippet below. Image by Author #5: HTTP and business response codes Considering the high interdependency of services in a microservice architecture, errors have to be properly handled with appropriate response codes to facilitate debugging and tracing. The list of response codes also helps with testing of the microservice application. Things to include: HTTP response codes — details such as code number (403), name (Forbidden), type (Client Error), and description (invalid authorisation credentials). Business response codes — include code number (8989), scenario (access token expired), and description (the token is more than one day old). In most cases, the solution architect/lead will have a list of error codes established during the solution design phase to reference against and standardise.
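To make sections #3 to #5 concrete, here is what such a documented sample call might look like when exercised from a small test script. The endpoint pattern, header names, response shape, and business codes are illustrative assumptions of the kind of detail the spec should pin down, not a real service.

```python
# Illustrative client call against a hypothetical spec; the endpoint, headers,
# response shape, and business codes are made-up examples for this sketch.
import uuid
import requests

BASE_URL = "https://namespace.example.com/products/v1/data"  # assumed endpoint pattern

headers = {
    "X-Request-ID": str(uuid.uuid4()),   # UUID header for request tracing
    "Accept-Language": "en-SG",          # localisation header
}

resp = requests.get(BASE_URL, params={"uuid": "1234-abcd"}, headers=headers, timeout=5)

if resp.status_code == 200:
    body = resp.json()
    # Expected shape per the (hypothetical) spec: status.code is the business code.
    assert body["status"]["code"] == "0000"
    print(body["data"])
elif resp.status_code == 403:
    # HTTP 403 Forbidden: invalid authorisation credentials;
    # the business code (e.g. 8989 for an expired token) pinpoints the cause.
    print("check credentials:", resp.json()["status"]["code"])
else:
    resp.raise_for_status()
```

Writing the spec so that a snippet like this can be produced directly from it is a good test of whether the request, response, and error-code sections are complete.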
https://medium.com/the-internal-startup/build-quality-microservices-and-apps-79544948579a
['Jimmy Soh']
2020-06-23 16:06:04.929000+00:00
['Software Development', 'Startup', 'Technology', 'Software Engineering', 'Programming']
Traits That Highly Intelligent People Have in Common
There are certain character traits that set highly intelligent people apart from others. Interestingly, these traits are not tied to a person’s IQ score but to their inclinations, emotions, and social interactions. Most people don’t recognize their natural gifts and rarely use their full creative potential; highly intelligent people tend to understand their creative capacities and use them to the fullest. What follows is an attempt to outline the broad picture so everyone can recognize these abilities and understand the qualities of highly intelligent people. They accept that knowing everything is impossible. It is unrealistic to expect to be familiar with every subject. Highly intelligent people understand this completely and never hesitate to tell others when they are not familiar with a topic; when they don’t know something, they learn it from others and gather as much information as they can. They are flexible. Highly intelligent people adapt to their current situation by changing their behavior. They adjust to their surroundings and make themselves comfortable with different ideas. Curiosity. Intellectual curiosity is something that makes a person exceptionally sharp. Such people open themselves to new experiences and learn from every part of life. They are open-minded. Intelligent people are open in their thinking and willing to acknowledge other people’s qualities and views. They respect others’ aspirations, hold back their own opinions until they have enough evidence, and accept things as they appear to be. Exceptional self-control. Before taking on a task, intelligent people clarify their goals and plan for them. They explore alternative approaches and keep the consequences in mind. Their self-control helps them tackle their problems and move toward their objectives. They are individualistic. Highly intelligent people value their own company, talking through their thoughts and making sense of them on their own. They are less drawn to socializing than others and find it more useful to think about problems that matter in everyday life. A keen sense of what is genuinely funny. Intelligent people are not boring; they have a strong sense of what is actually amusing. According to studies, professional comedians were among the highest scorers on measures of verbal intelligence. They can relate ideas easily. Highly intelligent people draw connections between differing ideas. They take old ideas and adapt them to new possibilities. They are more empathetic. They understand other people’s feelings and can almost feel what others have experienced. Highly intelligent people are always eager to meet new people and learn from their experiences. They are restless and inquisitive. Intelligent people wonder a lot about different things. They are always eager to know the reason behind the existence of everything and consider situations from different angles.
They procrastinate. Intelligent people avoid pointless busywork and put off larger tasks. They take real time with important work and keep mulling over different possibilities for the outcome. They allow more original ideas to enter their minds rather than running through routine procedures. If you know how to be adaptable and accept things as they are, there is a good chance you are highly intelligent. Instead of being rigid about what should happen, intelligent people stay mentally flexible and open-minded, and can easily adjust to life no matter what gets thrown their way. It simply shows that you can handle problems and find solutions quickly. As you can see, it is not about being book-smart: being thoughtful, empathetic, and reflective are all signs of intelligence. So if you notice any of these qualities in yourself, you could be a particularly bright person. Highly intelligent people don’t try to act as if they know it all. In fact, a sign of intelligence is recognizing the fact that you don’t know everything. If they can’t do something, they don’t pretend they can; instead, they know their limits and are willing to say so. This leaves them open to learning from others or from new situations. A strong working memory and general intelligence are closely related. When you have a good working memory, it means you have solid executive functioning, good short-term memory, and the ability to focus. It also means you have cognitive flexibility and can easily switch from one thing to another. Having self-restraint suggests maturity: you know how to control your feelings and impulses so they won’t cause harm. Self-control is also a marker of intelligence because it implies you are likely to think before you speak or act. When these people run into trouble in their lives, they work to settle the issue and reduce the disruption quickly. In a world where people talk to show off who they are, highly intelligent people are the opposite. Rather than bragging about their accomplishments or telling people how right their opinions are, they are usually quiet and attentive. When you can take everything in, you see things that others miss, like subtle patterns. It is easy to assume that exceptionally smart people simply like to read. But being intelligent isn’t about being able to get through several books a day; it’s about having a curiosity about everything. Intelligent people follow their interests and ask questions like who, what, when, where, how, why, and what if. They like learning about other people, cultures, animals, history, and the world at large. While a love of reading isn’t necessarily a sign of intelligence, it shows that you like learning and that you’re curious. Intelligence has been defined in many ways: the capacity for reasoning, understanding, awareness, learning, emotional knowledge, thinking, planning, creativity, critical thinking, and problem-solving.
More generally, it can be described as the ability to perceive or infer information and to retain it as knowledge to be applied toward adaptive behavior within an environment or context. There are conflicting ideas about how intelligence should be measured, ranging from the notion that intelligence is fixed at birth to the view that it is malleable and can change depending on a person’s mindset and effort. A few subcategories of intelligence, such as emotional intelligence or social intelligence, are hotly debated as to whether they are genuine forms of intelligence. They are generally thought to be distinct processes, though there is speculation that they tie into general intelligence more than previously assumed. And despite the use of the word intelligence, some of these terms may have little or nothing to do with the mental processes described here.
https://medium.com/illumination-curated/here-whats-trait-have-common-in-highly-intelligent-people-d5e6fc703b6b
[]
2020-12-23 11:07:05.313000+00:00
['Traits', 'Intelligence', 'People', 'Self Improvement', 'Science']
Python HOW: Farewell Anaconda! Take Full Control of Your Development Environment
Setting up python with pyenv, venv, pipx, and vsCode on Windows and macOS Photo by Kevin Ku from Pexels Anaconda and Miniconda are amazing python distributions that get you up and running out-of-the-box. Once you start deploying your projects into production, however, you will definitely need more control. In this article, I go through some of the best tools to make that transition happen! 🤘 All the CLI commands used in this article are in PowerShell for Windows, and zsh for macOS. Before starting up 🚿 If you have anaconda or miniconda installed, uninstall it as detailed here. For Windows, also: Delete $HOME\Documents\WindowsPowerShell\profile.ps1 if you have previously run > conda init powershell Disable the built-in python launcher: search for Manage app execution aliases ▶️ disable App Installer aliases for python Install python 🐍 Download the latest 64-bit python release for Windows or macOS ( 3.9.0 at the time of writing), and install it (💀without adding python to PATH 💀). The installation directory is: To use it, specify the full path, e.g.: $HOME\AppData\Local\Programs\Python\Python39\python.exe ⚠️ Because we haven’t added python to the system’s PATH , each time we want to use it, we have to give the full path. We could indeed do that. However, different projects require different versions of python, which makes this impractical. Solution? 👇 pyenv: python version management tool 🐍🐍 “pyenv lets you easily install and switch between multiple versions of python globally and locally” by using shim executables (more on shims here). To understand how pyenv works, let’s first install it: Windows: install pyenv-win with pip in the home directory, add an environment variable PYENV_HOME , add the bin & shims directories to the system’s PATH , and rehash the shims: Full installation instructions here. The installation directory is $HOME\.pyenv MacOS: install with brew , add pyenv init to the shell ( init adds bin and shims to PATH , installs autocompletion, and rehashes the shims), and install some recommended python build dependencies: Full instructions here. The installation directory is /usr/local/Cellar/pyenv Now that we have pyenv installed, close and reopen your CLI, then let’s use it to install python versions 3.7.9 and 3.8.6 : The same applies for zsh. Full usage commands are here. pyenv installs python versions in the following directory: To make version 3.8.6 global, for example (i.e. register it with PATH through shims ), we use the global command: For zsh use which instead of where.exe in line 9 Great! Now we can make any version of python global 🎉 (we didn’t talk about making it local, but you can check the local command). ⚠️ However, not only does each project require a different version of python, it also requires that version to be isolated in a virtual environment with all the packages the project needs to run. This allows us to take a snapshot of all the project’s requirements for reproducibility and deployment. Solution? 👇 venv 📩 “The venv module provides support for creating lightweight virtual environments with their own site directories”. The module was introduced in python 3.3, and as of then, it’s included with any installation of python. We can easily create an isolated virtual environment using the venv command.
The key point is: the created environment will have its own python binary 🐍 which matches the version of the binary that was used to create it. Here, for example, the virtual environment we create will also have python 3.8.6 : The same applies for zsh. Before installing any dependencies for the project, however, we need to activate the virtual environment. Luckily, activation scripts for powershell , bash , and batch are also created for us in the sub-folder Scripts 💃 For zsh, use .venv/Scripts/activate in line 2. To take a snapshot of your project dependencies, use the freeze command in pip : For zsh use cat instead of Get-Content in line 5. If you already have the project’s requirements.txt , go through the same steps but instead of installing the dependencies as before, do pip install -r requirements.txt to install them from the requirements file. Note: to delete the virtual environment, simply delete the .venv folder. ⚠️ There are a few tools that you should be using for every project, for example code formatters and linters, that shouldn’t be included as a project dependency. It wouldn’t make sense to install them for each project and then delete them from requirements.txt (along with their own dependencies!). Solution? 👇 pipx “pipx focuses on installing and managing Python packages that can be run from the CLI directly as applications”. pipx does that by creating an isolated environment for each package (and its dependencies) that we can then access from the CLI globally, for any project 🙌 To understand how pipx works, let’s first install it (full instructions here): Windows: install with pip from the main python installation (i.e. 3.9.0 ), and run ensurepath to add the pipx directories to the system’s PATH : The installation directory is $Home\AppData\Roaming\Python\Python39\site-packages MacOS: install with brew , and run ensurepath to add the pipx directories to the system’s PATH : The installation directory is /usr/local/Cellar/pipx The directories that ensurepath actually adds to PATH are its binary directory (so we can run pipx from the CLI) and the directory where packages’ binaries go (so we can run them from the CLI); these are: Directories in lines 2&6 have the pipx binary. Directories in lines 3&7 will have the installed packages’ binaries. Now that we have pipx installed, close and reopen your CLI, then let’s use it to install the following helper packages: Black: code formatter with “a strict subset of PEP 8 coding style”. isort: for sorting imports alphabetically, automatically separated into sections and by type. flake8: code linter for style guide enforcement. The same applies for zsh. Full usage commands are here. Each package will be installed in an isolated virtual environment inside $Home\.local\pipx\venvs , and have its binary in $Home\.local\bin (which is already in PATH thanks to ensurepath ). To use any of these applications, we can simply run it in the CLI: The same applies for zsh. ⚠️ If you’re using vsCode, you might be wondering how to tell vsCode where to find the global packages installed with pipx so you can use them with any project. Solution?
👇 Setting up vsCode Settings for pipx packages You can tell vsCode to use all the helper packages we installed using pipx , and more importantly, where to find the binaries 📦 in the settings vsCode saves settings in a settings.json file, Ctrl+Shift+P to bring the Palette, then search for Preferences: Open Settings (JSON) : Define the formatting provider as black and provide its binary path (more on formatting here). As black defaults to 88 characters per line, we use the same length for the editor’s ruler (more on line length here) Replace $HOME with your HOME path (vsCode doesn’t define $HOME). For macOS, use / instead of \\ Define the imports sorting path for isort (vsCode uses isort as an import sorting provider by default as you can see here). “ Black also formats imports, but in a different way from isort defaults which leads to conflicting changes” 😵. To fix this, we can pass few arguments to isort to make it consistent with black (more details here) Replace $HOME with your HOME path. For macOS, use / instead of \\ Enable flake8 for linting and provide its binary path (more on linting here). “There are a few deviations that cause incompatibilities with black ”. To fix this, we can pass few arguments to make flake8 consistent with black (more details here) Replace $HOME with your HOME path. For macOS, use / instead of \\ Settings Sync You can sync your settings, keybindings, and installed extensions across your machines by following the instructions here (you need a Microsoft or a Git account). However, all the binaries paths for the packages we installed using pipx shouldn’t be synced between Windows and macOS. Luckily, we can ignore these by adding them to settings.json : Note: you can add any settings in settings.json to the list of ignored settings above What is next? Start using Docker 🐋 for Python Become a Scikit-learn plumber 🚿 Happy coding!
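The CLI steps for venv described above can also be scripted. Here is a small sketch using the standard-library venv module; the use of "requests" as the project dependency is just an example, and the interpreter paths mirror the Windows Scripts\ layout mentioned in the article with a POSIX fallback.

```python
# Sketch: create a project virtual environment and freeze its dependencies
# programmatically, mirroring the `python -m venv` / `pip freeze` CLI steps above.
import subprocess
import sys
from pathlib import Path
from venv import EnvBuilder

project = Path(".")
env_dir = project / ".venv"

# Equivalent to `python -m venv .venv`, with pip available inside the environment.
EnvBuilder(with_pip=True).create(env_dir)

# Windows puts the interpreter under Scripts\, POSIX under bin/.
if sys.platform == "win32":
    python_bin = env_dir / "Scripts" / "python.exe"
else:
    python_bin = env_dir / "bin" / "python"

# Install an example dependency, then snapshot the environment to requirements.txt.
subprocess.run([str(python_bin), "-m", "pip", "install", "requests"], check=True)
frozen = subprocess.run(
    [str(python_bin), "-m", "pip", "freeze"], check=True, capture_output=True, text=True
)
(project / "requirements.txt").write_text(frozen.stdout)
```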
https://medium.com/swlh/python-how-farewell-anaconda-take-full-control-of-your-development-environment-6c4f8103980f
['Gabriel Harris Ph.D.']
2020-12-14 16:13:43.216000+00:00
['DevOps', 'Data Science', 'Python', 'Data Engineering', 'Mlops']
A Founder’s Story: Chris Li, CEO of BioBox Analytics (1/3)
I’ve always loved computers and software. I remember booting one up in my aunt’s office back in 95' and loading an 18Mb video game from a floppy disk. But I never thought I’d be building software one day as a career. My goal was medicine or life-science research. Fast forward to my 20’s and I’ve just received an opportunity to work as a research student in a brand new research institute. Dream do come true. Photo by Cookie the Pom on Unsplash Biology, meet Bioinformatics First week on the job, my boss says to me, “We’re a small lab, and bioinformatics is too expensive right now for us. Your project this summer is to do bioinformatics.” I had no idea what that meant, but I stood there, nodded and accepted the challenge enthusiastically. After googling, “What is bioinformatics”, I sat back into my uncomfortable lab chair, and thought to myself, “What did I just get myself into?”. I thought life-science research was about tissue-culture, mouse-models, complex biochemical manipulations, yet here I am trying to figure how to use a Makefile to compile this program? “do Bioinformatics” Looking back now, I still chuckle at the notion of “do bioinformatics”. Bioinformatics is the happy love-child of computing, software engineering, advanced probability and statistics, and biology. It’s a discipline borne out of necessity through a tectonic change in life-sciences research — next generation sequencing (NGS). Find any major life-sciences high impact research paper in the last decade. You’d be hard-pressed to see one that didn’t use some form of bioinformatics. It’s now a foundational aspect of life-sciences research. Through NGS tech and advancements in computation biology, we’re now able to decipher information about biology that lead to the discovery of new genes, disease-causing mutations, and fundamental biological processes at an unprecedented rate. But there is a cost to this knowledge, as an unintended side-effect emerged. These advancements made software literacy and programming chops one of the most sought after skills in biological research. Fortunately, in my story, the summer of doing bioinformatics ignited in me a deep curiosity, passion, and borderline obsession with the cross-section of software, stats, ML, and biology. A few years later, by the time I entered grad-school, I was fully self-sufficient in bioinformatics and was able to leverage those skills to blast through the first two years and generate enough data for a reclassification. But throughout my grad school experience I saw first-hand what happens if you were less fortunate and didn’t have the time to train in bioinformatics. Photo by Brett Jordan on Unsplash My email inbox and backlog of work was consistently full from collaborator and colleagues requests for bioinformatic support. I can’t count how many late nights I’ve had with colleagues crunching through numbers with them, whipping up figures, and translating their biological research question into a computational pipeline. Don’t get me wrong, I did this gladly and would do it whenever asked, because this is what science is about. Scientists help each other in the pursuit of knowledge, not trading tit-for-tat favours. But I could see their frustration from losing the autonomy of conducting/performing this work by themselves. The frustration from a request to collaborators or bioinformatic services being unanswered for weeks, only to have the results/figures come back different from their expectations, and trigger another round of this toxic cycle. 
Photo by Q.U.I on Unsplash

It started with a napkin

One consequential Friday evening, my now co-founders and I met up after work and went across the street to the bar. A few too many pitchers later, we began discussing these issues, and on a bar napkin we drew out a hypothetical system to solve the problem. At the time, we didn’t think it would grow into what BioBox is today. We just wanted the emails to stop so we could get back to our own projects. It was a small, simple web app that loaded their data and gave them the ability to run basic stats tests, plotting, and simple analyses. After a month of tinkering on weekends, I sent it out to a few colleagues and the results were amazing. Inbox: 0 new emails. Then the feature requests started coming in. After a little more research into how pervasive this problem was, the three of us decided to take the leap and left our careers behind to found BioBox. It’s been almost 2 years (at the time of this writing) since we committed to this path. The purpose of our company is to build a platform that gives autonomy back to biologists by providing all the tools they need to execute their bioinformatics analyses. In doing so, it frees bioinformaticians from requests like “Please make my plot more red” and gives them back their time to focus on the things they love doing, like developing new algorithms, tools, and pipelines.

We live in a time where sequencing your entire genome is cheaper than your iPhone

We are blessed to live in an era of scientific progress where we’ve generated and collected more biological data than ever before. But having data is not the same as having knowledge. Transforming data into knowledge relies on the creative and innovative thinking of our biologists and the efficiency and ingenuity of our bioinformaticians. Pushing the boundaries of science is like rowing a boat upstream. It takes work, commitment, sweat, and energy from our scientists. At BioBox, we’re not rowing the boat for you; we can’t. Only you can. But what we can do is give you the best oars, boat, and gear to support you along your journey. This is the singular mission for us here at BioBox.
https://medium.com/bioboxanalytics/a-founders-story-chris-li-ceo-of-biobox-analytics-1-3-a2e5fd110521
['Christopher Li']
2020-12-10 13:49:44.249000+00:00
['Biotechnology', 'Genomics', 'Software Development', 'Founder Stories', 'Startup']
Globally Autoscaling Web Services with Health Checks
Globally Autoscaling Web Services with Health Checks

Season of Scale

“Season of Scale” is a blog and video series to help enterprises and developers build scale and resilience into your design patterns. In this series we plan on walking you through some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. In Season 1, we’re covering Infrastructure Automation and High Availability. In this article I’ll walk you through how to globally scale your web services on Google Cloud. Check out the video.

Review

So far we’ve looked at how Critter Junction was able to launch a new app on Google Cloud. We covered the various compute options Google Cloud has to offer, some of which include powerful autoscaling capabilities. It really just depends on your language requirements, level of control, access to the OS, and other application characteristics like containerization. Today let’s take a look at how they can enable their apps to gracefully handle peaks and dips in traffic.

Preparation is everything

Critter Junction is becoming very popular, with more users than ever. The game is all about playing daily, collecting items and furniture to decorate your house, and interacting with other players. As we saw in the previous article, they chose to run their Layout App on Cloud Run. But they still chose to migrate some game servers to Compute Engine. As their traffic grew, they were struggling to provision additional instances globally at any given time of the day. This led to overutilized compute and created constant pressure on their operations team. So now they’re looking for an automated way to handle their growing user base and maintain performance to keep their users coming back daily. In other words, how can they set up autoscaling instances that check for unhealthy instances and replace them when needed?

Global Load Balancer

The answer is Google Cloud’s global load balancer and managed instance groups, which scale and distribute the traffic automatically. This keeps the operations team happy and the users satisfied with the performance advantages. Managed instance groups provide features such as autoscaling, autohealing, auto-updating, and regional (multiple zone) deployments. To understand this better, let’s step back and understand how a Compute Engine instance is created.

Instance creation

You create a custom image for your application, which is then used to create an instance. To make this reusable, you create an instance template. With an instance template, not only can you set up the configuration of the VM, but you can also run startup scripts to pull down the latest version of your code when the machine starts up. You can also attach disk templates with all the software dependencies your app requires, or you can leave it as an empty shell that gets populated by a CI/CD pipeline. These templates then automate the creation of Compute Engine instances at scale through managed instance groups.

MIG + Health Check Walkthrough

Let’s see how this works with a simple web app example!

Create firewall rules

In the Google Cloud console, create a firewall rule under VPC networks with the following attributes: Allow HTTP traffic to the app you’re about to deploy. Provide a name: default-allow-http. Select the default network. For Targets, select the specified target tags and set the target tag to http-server. Set the source filter to IP ranges and provide 0.0.0.0/0 to allow access from all IP addresses. For ports and protocols, select TCP and enter 80. Now click Create.

Create an instance template

Head over to Compute Engine and create an instance template with the following attributes: Give it a name: instance-template. Select a machine type. Set the boot disk image to Debian 9. Check Allow HTTP traffic. Under the Management tab, find Automation and add the startup script:

sudo apt update && sudo apt -y install git gunicorn3 python3-pip
git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
cd python-docs-samples/compute/managed-instances/demo
sudo pip3 install -r requirements.txt
sudo gunicorn3 --bind 0.0.0.0:80 app:app --daemon

This script causes each instance to run a simple web application during startup. Finally, click Create.

Create an instance group using your new template

Now that you have an instance template, you can create an instance group using it. Create an instance group on the Compute Engine instance groups page with the following attributes: Give it a name. Under location, select multiple zones; this protects you from zonal failures. Select a region, and under instance template select the template you just created. Now set the autoscaling mode to Autoscale. Set the autoscaling policy to CPU utilization (you can also set the policy to HTTP load balancing or monitoring metrics). Set the target CPU usage to 60%. Set the minimum number of instances to 3; it’s recommended that you provision enough instances so that if an entire zone were to go down, the remaining instances still meet the minimum number required. Set the maximum number of instances to 6 to make sure you don’t incur additional cost. Set the cool-down period to 120 seconds; make sure this number is higher than the time it takes for the CPU utilization of the VM to initially stabilize. Skip setting a health check for now; we’ll cover that in the next article. Click Create, then wait a few minutes until all the instances are running. Then go to VM instances and click on the external IP of an instance to see the demo web app page.

Traffic load-test

Now that we have it all set up, let’s generate traffic so we can see the autoscaling in action. Open Cloud Shell, create a local bash variable using the export PROJECT_ID command, and run the bash script below.

export MACHINES=$(gcloud --project=$PROJECT_ID compute instances list --format="csv(name,networkInterfaces[0].accessConfigs[0].natIP)" | grep "autoscaling-web-app-group")
for i in $MACHINES; do
NAME=$(echo "$i" | cut -f1 -d,)
IP=$(echo "$i" | cut -f2 -d,)
echo "Simulating high load for instance $NAME"
curl -q -s "http://$IP/startLoad" >/dev/null --retry 2
done

This script increases the load, which increases CPU utilization for our demo app. When it reaches the target value of 60%, the autoscaler starts increasing the size of our instance group. Now navigate to the Monitoring tab of your instance group and you can see the number of instances growing as CPU usage increases. You should be able to see the scale-down effect by running a similar bash script that decreases the load, leading to a decrease in CPU utilization. After a few minutes of stabilization, the autoscaler decreases the instance group size, which is visible in the Monitoring tab.
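If you would rather script this setup than click through the console, the same three resources can also be created from Python. The sketch below is only an outline using the discovery-based google-api-python-client; the project ID, region, and resource names are placeholders, and the request bodies are written to mirror the Compute Engine REST API fields for instance templates, regional managed instance groups, and autoscalers, so verify the field names against the current API reference before running it.

# Outline: create the instance template, regional MIG, and autoscaler from Python.
# Assumes google-api-python-client and application-default credentials are set up;
# the project ID, region, and resource names are placeholders.
import googleapiclient.discovery

PROJECT = "my-project-id"   # placeholder
REGION = "us-central1"      # placeholder

compute = googleapiclient.discovery.build("compute", "v1")

# 1. Instance template with the same startup script as the console walkthrough.
startup_script = """#!/bin/bash
sudo apt update && sudo apt -y install git gunicorn3 python3-pip
git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
cd python-docs-samples/compute/managed-instances/demo
sudo pip3 install -r requirements.txt
sudo gunicorn3 --bind 0.0.0.0:80 app:app --daemon
"""
template_body = {
    "name": "instance-template",
    "properties": {
        "machineType": "e2-medium",
        "tags": {"items": ["http-server"]},
        "disks": [{
            "boot": True,
            "autoDelete": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
            },
        }],
        "networkInterfaces": [{
            "network": "global/networks/default",
            "accessConfigs": [{"type": "ONE_TO_ONE_NAT"}],
        }],
        "metadata": {"items": [{"key": "startup-script", "value": startup_script}]},
    },
}
compute.instanceTemplates().insert(project=PROJECT, body=template_body).execute()

# 2. Regional (multi-zone) managed instance group built from that template.
mig_body = {
    "name": "autoscaling-web-app-group",
    "baseInstanceName": "autoscaling-web-app",
    "instanceTemplate": "global/instanceTemplates/instance-template",
    "targetSize": 3,
}
compute.regionInstanceGroupManagers().insert(
    project=PROJECT, region=REGION, body=mig_body).execute()

# 3. Autoscaler: 60% CPU target, 3-6 instances, 120-second cool-down.
autoscaler_body = {
    "name": "autoscaling-web-app-autoscaler",
    "target": f"regions/{REGION}/instanceGroupManagers/autoscaling-web-app-group",
    "autoscalingPolicy": {
        "minNumReplicas": 3,
        "maxNumReplicas": 6,
        "coolDownPeriodSec": 120,
        "cpuUtilization": {"utilizationTarget": 0.6},
    },
}
compute.regionAutoscalers().insert(
    project=PROJECT, region=REGION, body=autoscaler_body).execute()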
Use load balancers at each layer

Critter Junction has global users. They want users from Singapore to end up on the Asia East web servers, while the ones in the US end up in the US Central region. For this, they would use global load balancing, which routes traffic to the nearest web server instances and helps reduce latency and improve performance. From there, the internal load balancer distributes the traffic to manage and maintain load across the backend. These instance groups in different regions autoscale using an HTTP load balancing policy to scale seamlessly regardless of where the traffic is coming from.

Launch day success

Not only was Critter Junction able to automate the scaling of their Compute Engine instances using autoscaling and managed instance groups, they were also able to improve performance by serving traffic from instances closest to their users using the global load balancer. But there’s one more step to scaling: identifying the instances that are unhealthy and replacing them automatically! So stay tuned for the next episode, where we will cover how Critter Junction can set up autohealing and keep their users happy. And remember, always be architecting.

Next steps and references:
https://medium.com/google-cloud/globally-autoscaling-web-services-4b650cc6fc49
['Stephanie Wong']
2020-09-01 23:25:08.253000+00:00
['Software Development', 'Scalability', 'Software Engineering', 'Google Cloud Platform', 'Cloud Computing']
Bread Can Be Nutritious, But There Are Rules
Bread Can Be Nutritious, But There Are Rules

And modern bread breaks all of them.

Modern-day bread is the consequence of shortcuts made at every stage of its production. As a result, the human staple for millennia has now become junk food. It’s low in nutrients, high in energy, easy to eat, cheap to buy, and ubiquitous. Modern bread is also hard to digest and problematic for a burgeoning group of people. The old-fashioned, ancient food is none of those things, for a whole host of reasons. When the soil is rich and the grains are processed appropriately, the end product is a world apart from the highly processed, hyper-palatable pap you see on the shelves today. For many, bread can be nutritious if we use the keys handed down to us by our ancestors to unlock the nutrients inside the grains and banish their chemical defenses. Here’s why you may want to make some changes to your daily bread. Read on.

Grains in the human diet

Grains, the main ingredient in bread, have been in the human diet for a minimum of 105,000 years, beginning in parts of the ‘Fertile Crescent’, an area spanning the Middle East in which ancient humans flourished. We’ve been milling grains for at least 23,000 years, and storing them before domestication for about 11,300 years. But the first evidence of bread making was collected by a meticulous archaeologist, sweeping charred bits from around a fireplace that were later dated to 14,000 years old: the oldest bread crumbs in the world. The archaic flatbread was made from wild einkorn, a type of unadulterated wheat that’s making a comeback as artisan bread in flashy restaurants. Other grains that have traditionally been used to make bread include rye, spelt (Dinkel), corn, teff, Kamut, emmer, and many others. Each has its idiosyncrasies, from growing, processing, and cooking to flavour and nutrient content. However, one thing they all had in common was how carefully they were processed. You may think ‘processing’ denotes a modern action, but humans have processed grains in low-tech ways, for important reasons, over millennia. Grains of all kinds, to varying degrees, contain defensive chemicals that human digestive tracts are not well equipped to deal with. Collectively, they are known as anti-nutrients.

We don’t need to evolve to eat grains

Herbivores have a digestive system that effectively grinds, soaks, and then ferments grains (pre-agriculture these would have been wild) and other plants so they can extract the nutrients effectively. Humans are not herbivores, and so you shouldn’t skip the steps, which amount to predigesting hard-to-metabolize components using simple tools. Tool usage has a direct influence on the stresses provoking adaptations and evolution. Because we predigest grains in low-tech ways and make them easier to assimilate, our digestive systems have not evolved to perform the same task. The basic steps of soaking, sprouting, and fermenting can be performed with just a bowl, some water, and little else. To this day, those with the fewest possessions ensure these traditions are honoured; when shortcuts are applied, problems arise. When the poorest populations rely on food aid, rather than preparing the foods themselves in traditional ways, a serious nutrient deficiency disease can manifest.

Nutrients & Antinutrients

Grains, when the soil permits, contain diverse B vitamins and minerals. So, if you eat them, you’ll obtain those nutrients, right? Sadly, it’s never that simple.
Grains and other plants, including nuts, seeds, and legumes, contain chemicals that help ensure the seed survives to sprout into a plant and prolong their species’ existence. They include phytates, lectins, oxalates, gluten, and others. These troublesome components prevent micronutrients from being absorbed and cause inflammation in some people. A study by the USDA, in collaboration with the Institute of Central America, demonstrated something quite remarkable about the effects of antinutrients in normal foods. Using blood to evaluate the bioavailability of zinc from oysters, the scientists tested three groups: oysters only, oysters & corn tortillas, and oysters & black beans. Have a look at the line chart below. Image from ‘Studies on the bioavailability of zinc in man.’ After the test meal, the ‘oysters only’ group measured high for zinc, the level rising straightaway as expected, because the shellfish are excellent sources of the critical mineral. The ‘oysters & black beans’ group had reduced levels of zinc absorption, but the ‘oysters & corn tortilla’ gang absorbed none of the zinc in the meal. The researchers also demonstrated a similar, but not as dramatic, effect with white bread in place of corn tortillas. When corn (maize) was first brought to Europe by Christopher Columbus, after his successful 1492 voyage to the Americas, it took off quickly. Corn has a higher yield than the European grains of the time: rye, wheat, and barley. But it carried with it a hidden curse. The Aztec and Mayan civilizations had learnt, probably the hard way, that corn needed processing before eating.

Low-tech Processing

The ancient Central Americans had a trick up their sleeves. They used limewater, an alkaline solution, to soak corn overnight before cooking. The limewater breaks down the antinutrients in corn, giving our digestive tracts the ability to extract the nutrients that are, without the preparatory step, locked away and untouchable. Although the ancient peoples couldn’t tell you what the mechanisms were, they knew it was a step not to be missed. However, those growing the hardy crop in Dark Ages Europe had no idea. Starting centuries before science advanced enough for answers, people were afflicted by a nightmarish disease called pellagra; see the image below. Sadly, the European settlers in the Appalachians, USA, were not as savvy as the ancient Central American cultures who knew how to make corn digestible. The vitamin B3 (niacin) deficiency disease is characterized by four ‘Ds’: dermatitis, diarrhoea, dementia, and finally death, and it reached epidemic proportions at times. By 1735, the learned of the time suspected a connection with corn but couldn’t puzzle it out. After all, the Central Americans ate more corn, but pellagra seemed not to touch them. The next logical step was to blame the victims by claiming they were Vampires, right up until the 20th Century when scientists discovered the truth and the simple remedy: appropriate food preparation. The traditional techniques of sprouting, soaking, fermenting, and cooking are low-tech ways of processing foods that have been used for millennia and are still taken seriously by traditional cultures and foodies.
Modern science, as ancient wisdom had done with trial and error, has determined how effective these simple techniques are for improving the bioavailability of nutrients within foods, and deactivating the chemicals that do cause real harm to some.
https://medium.com/beingwell/bread-can-be-nutritious-but-there-are-rules-c4933e3a5dc5
['Tim Rees']
2020-12-28 20:18:16.745000+00:00
['Health', 'Diet', 'Lifestyle', 'Science', 'Food']
Creating a Near Real-Time Financial News Dataset With AWS Lambda
AWS Lambda’s logo

Stock prices fluctuate over time depending on market sentiment. Firms can gauge the current and historic market sentiment for individual stocks or entire markets by using financial news articles. With these articles, firms can use natural language processing techniques such as named entity recognition and sentiment analysis to measure the outlook for specific stocks or the market as a whole. These methods help to tag articles about publicly traded companies like Microsoft or Netflix and to calculate a sentiment tag or rating indicating whether the financial article is positive, neutral, or negative. This article will go over how you can compile a dataset of financial news from CNBC Finance in an S3 bucket that updates daily.

To find the top financial news articles of the day from CNBC Finance, we can use the requests library in Python to send a GET request to https://www.cnbc.com/finance/. We then parse through the response using Beautiful Soup. We identify “cards” on the website which contain links to the individual articles. We want to retrieve the URLs and record them in a list. For each URL we retrieve, we create an Article object which contains the date and text of the article. To run this automatically with AWS, I will use AWS Lambda and store the results in an S3 bucket. Create an S3 bucket. Navigate to the IAM console and create a role that grants access to your S3 bucket. Log into the AWS Lambda Console and click “Create function”. Select “Author from scratch”, select a Python 3.8 runtime, name your function, and click the “Create function” button again.

Since we are using two Python libraries that aren’t included in the AWS Lambda Python 3.8 environment, we have to download and zip our code to include in our deployment package. The libraries we have to add are requests and Beautiful Soup. From the command prompt, create a new directory on your computer named cnbc_dependencies and navigate to it. Download the requests package to that directory with pip install requests -t . Make sure to include the period, which downloads the package to the current directory. Download Beautiful Soup: pip install beautifulsoup4 -t .

We need to make a few modifications to our code first to insert the article text files into S3. We need to specify the handler function in our code which Lambda invokes. Name your Python script lambda_function.py. With these changes in place, the code should look like this: Copy your Python file to your directory and then zip the contents of the directory. You can make modifications to your code later from within the text editor in the AWS console if needed. Which files should be included in the zipped deployment package? Your Python file plus the dependency folders you just downloaded. From the Function code editor in the Lambda console, click on Actions > Upload a .zip file. Scroll down to the Basic Settings section of your Lambda function and look at the Handler setting. “lambda_function” should be the name of your Python file. “lambda_handler” should be the name of a function in your Python file. Lambda will look for this Python file and function when the Lambda function is invoked.

Now let’s add an automated trigger to run this function daily. Navigate to the top of the Lambda function configuration page and click the “Add trigger” button. Select “EventBridge (CloudWatch Events)”. Create a new rule with a Cron schedule expression of “cron(0 13 1/1 * ? *)” to run every day at 1300 UTC (0900 EST). Click “Add” to add the trigger.
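To make the flow above concrete, here is a minimal, hypothetical sketch of what a lambda_function.py along these lines could look like. The bucket name and the way article links are picked out of the landing page are placeholders and assumptions, not taken from the original article, so adjust them to CNBC’s current markup and your own bucket. It collects the articles first and then writes them to S3 under a year/month/day prefix, mirroring the steps described above.

# Hypothetical sketch of lambda_function.py. The bucket name and the link
# filter are placeholders; the real "card" selectors must come from the page.
import datetime
import boto3
import requests
from bs4 import BeautifulSoup

BUCKET = "my-financial-news-bucket"  # placeholder bucket name
BASE_URL = "https://www.cnbc.com/finance/"

s3 = boto3.client("s3")


def get_article_urls():
    """Return article URLs linked from the CNBC Finance landing page."""
    resp = requests.get(BASE_URL, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    urls = []
    for link in soup.find_all("a", href=True):           # placeholder selector
        href = link["href"]
        if href.startswith("https://www.cnbc.com/2"):    # crude filter for dated articles
            urls.append(href)
    return list(dict.fromkeys(urls))                      # de-duplicate, keep order


def get_article_text(url):
    """Concatenate the paragraph text of a single article page."""
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    return "\n".join(p.get_text() for p in soup.find_all("p"))


def lambda_handler(event, context):
    today = datetime.date.today()
    prefix = f"{today.year}/{today.month:02d}/{today.day:02d}"
    # Collect all articles first (simple, but holds everything in memory).
    articles = [(url, get_article_text(url)) for url in get_article_urls()]
    for i, (url, text) in enumerate(articles):
        s3.put_object(Bucket=BUCKET,
                      Key=f"{prefix}/article_{i}.txt",
                      Body=text.encode("utf-8"))
    return {"statusCode": 200, "articles": len(articles)}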
To verify your Lambda function has access to both CloudWatch and S3, click on the Permissions tab and view the Resource summary. Everything should be good to go now! The articles will be saved as text files and organized by year, month, and day. One possible improvement is reducing the amount of memory required to run this Lambda function. The minimum amount of memory you can use is 128 MB, but this function has been using memory in the 250–350MB range because I’m storing all the responses from each article web page before sending to S3. If I send the article to S3, delete the article from memory, then send a request for the next article, I could scale the memory I use down to 128 MB.
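To illustrate the memory optimization described above, the handler loop can be reshaped so that each article is uploaded and released before the next request is made, keeping only one response in memory at a time. This is a sketch that reuses the hypothetical helpers from the previous snippet.

# Sketch of the lower-memory variant: fetch one article, upload it to S3,
# and let it go out of scope before requesting the next one.
def lambda_handler(event, context):
    today = datetime.date.today()
    prefix = f"{today.year}/{today.month:02d}/{today.day:02d}"
    count = 0
    for i, url in enumerate(get_article_urls()):
        text = get_article_text(url)                 # only one article in memory
        s3.put_object(Bucket=BUCKET,
                      Key=f"{prefix}/article_{i}.txt",
                      Body=text.encode("utf-8"))     # upload immediately
        count += 1                                   # text is released on the next iteration
    return {"statusCode": 200, "articles": count}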
https://medium.com/swlh/creating-a-near-real-time-financial-news-dataset-with-aws-lambda-509e2fe53261
["Daniel O'Keefe"]
2020-10-17 20:17:52.119000+00:00
['Python', 'AWS Lambda', 'AWS', 'NLP', 'Finance']
How to Start a Business in an Afternoon Using Python and Dash
Creating index.py Now it’s time for the fun part… The code! Start by loading dependencies for Dash and the component libraries: import dash import dash_core_components as dcc import dash_html_components as html import dash_bootstrap_components as dbc from dash.dependencies import Input, Output, State from dash.exceptions import PreventUpdate import flask from flask import Flask Notice I import flask too. That will make hosting the Dash app simpler, so I recommend including it. Instantiate the Flask server and Dash. To use Dash bootstrap components, set the external_stylesheets value to dbc.themes.BOOTSTRAP server = Flask(__name__) app = dash.Dash(__name__,server = server ,meta_tags=[{ "content": "width=device-width"}], external_stylesheets=[dbc.themes.BOOTSTRAP]) app.config.suppress_callback_exceptions = True Notice I set the config option suppress_callback_exceptions to True. This Dash project does not actually require any callbacks. Regardless, I include this bit of code in case I decide to build a multi-page website with advanced features in the future. The Dash App Layout The template layout is a simple pattern using Dash HTML and Dash Bootstrap components. No callbacks are necessary since the page doesn’t use any interactive Dash components: Website Template Notice the Headers and footers are created using dbc.Jumbotron. A Jumbotron is a lightweight component that helps content stand out. It takes up a lot of space on the screen which makes it great for showcasing buttons and messages. You can simply replace the bold values with your own, but I recommend playing around and adding more components to customize the header and footer. You can even use an image or insert a custom logo! header = dbc.Jumbotron([dbc.Container([ html.H1("Your_Website_Here.Com", className="display-3"), html.P("Find super deals at Your_Website_Here.com"), html.Hr(className="my-2"), html.P(" " " "), html.P(" ") ], fluid=True) #end container ], fluid=True)# end jumbo footer = dbc.Jumbotron([dbc.Container([ html.H1("Your_Website_Here.com"), html.P(''), html.P("123 Fake ST NE", style= {'text-align': 'center'}), html.P("MN 55445", style= {'text-align': 'center'}), html.Hr(className="my-2"), html.P(''), html.P('Copyright © 2020 Your_Website_Here - All Rights Reserved.') ],fluid=True)#end container ] ,fluid=True)#end jumbo Notice I use html.P as placeholders. That way it is easy to add a tagline or more text if needed. To programmatically create the dbc.Card components that store the image link, I create a helper function named make_card(): def make_card(alert_message, color, cardbody, style_dict = None): card = html.Div([html.P(" ") , dbc.Card([dbc.Alert(alert_message, color=color) ,dbc.CardBody(cardbody)])#end card ,html.P(" ") ,html.P(" ") ])#end div return card Notice the function uses dbc.Alert to automatically create a colored header for the card. Next I’ll create a function to help build the series of cards inside the dbc.Container between the jumbotrons. def create_body(items): b = [] for item in items: b.append(dbc.Col(make_card(item[0], "primary", item[1]))) return b Notice I wrap the make_card function with dbc.Col so the cards are spaced apart in the UI. Notice the make_card function takes in a list of lists and uses the first value, item[0], for the card’s dbc.Alert message and the second list value, item[1], for the card’s dbc.Cardbody. Now it is time to populate the cards using the items I want to promote! 
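Before wiring in real affiliate links, a quick way to confirm that make_card and create_body behave as expected is to render a couple of placeholder cards. The snippet below is a small sketch that assumes the imports, app, and helper functions defined above are already in scope; the item names, links, and image URLs are made up for illustration only.

# Quick sanity check for the helpers above. Assumes `app`, `make_card`, and
# `create_body` from this article are in scope; all links and images are placeholders.
placeholder_items = [
    ["sample item one",
     html.A(id="item1", href="https://example.com/item-1",
            children=[html.Img(src="https://example.com/item-1.jpg")])],
    ["sample item two",
     html.A(id="item2", href="https://example.com/item-2",
            children=[html.Img(src="https://example.com/item-2.jpg")])],
]

app.layout = html.Div([
    dbc.Container(dbc.Row(create_body(placeholder_items)),
                  style={"background-color": "white"}),
])

if __name__ == "__main__":
    app.run_server(debug=True)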
Add Products by Creating a List of Links In a text editor, paste the html that copied from clicking the Highlight HTML button after searching for items on the Affiliate page. It will look something like this, but it will appear as a long string: Notice the link contains 3 components: 1. An HTML “a” tag with a href value. 2. An HTML “img” tag with src value inside of the “a” tag from part 1. 3. An HTML “img” tag outside the “a” tag. I will only use the first 2 parts. Using the Dash HTML component library, it is easy to recreate the link’s component structure. In addition to the link, I give the link a name. For example, if I have 1 item in the item list: item_lists = [ ["item_name", html.A(id = 'item1', href= value ,children = [html.Img(src = value)])]#end item 1 ] #end item list Notice the list begins with item_name, followed by the affiliate link. Notice, I give the html.A component a unique ID. That will make it easier to use in callbacks. Construct the Layout Now that the layout components can be constructed using the functions, and the item_lists is populated with affiliate links, I’ll put it all together and set the app.layout. def create_layout(): layout = html.Div(style={ 'background-image': 'url("/assets/people-2557483.jpg")', 'background-position': 'center', }, children = [ header , dbc.Container(id = 'card-cont', children = [dbc.Row(create_body(item_list))], style = {'background-color':'white', })# end container , footer ] #end children ) #end div return layout app.layout = create_layout() if __name__ == '__main__': app.run_server(debug = True) Notice the function create_layout pulls the background image from the assets folder. Notice I pass the item_lists into create_body function inside the children argument for dbc.Container. Congratulations! You just created your first affiliate website! Landing Page Template Complete Code Here is the complete website template code! You will need to populate the item_lists with your own links. 
import dash import dash_core_components as dcc import dash_html_components as html import dash_bootstrap_components as dbc from dash.dependencies import Input, Output, State import dash_table from dash.exceptions import PreventUpdate import flask from flask import Flask server = Flask(__name__) app = dash.Dash(__name__,server = server ,meta_tags=[{ "content": "width=device-width"}] , external_stylesheets=[dbc.themes.BOOTSTRAP]) app.config.suppress_callback_exceptions = True header = dbc.Jumbotron([dbc.Container([ html.H1("Your_Website_Here.Com", className="display-3"), html.P("Find super deals at Your_Website_Here.com"), html.Hr(className="my-2"), html.P(" " " "), html.P(" ") ], fluid=True) #end container ], fluid=True)# end jumbo footer = dbc.Jumbotron([dbc.Container([ html.H1("Your_Website_Here.com"), html.P(''), html.P("123 Fake ST NE", style= {'text-align': 'center'}), html.P("MN 55445", style= {'text-align': 'center'}), html.Hr(className="my-2"), html.P(''), html.P('Copyright © 2020 Your_Website_Here - All Rights Reserved.') ],fluid=True)#end container ] ,fluid=True)#end jumbo def make_card(alert_message, color, cardbody, style_dict = None): return html.Div([html.P(" ") , dbc.Card([dbc.Alert(alert_message, color=color) ,dbc.CardBody(cardbody)])#end card ,html.P(" ") ,html.P(" ") ])#end div def create_body(items): b = [] for item in items: b.append(dbc.Col(make_card(item[0], "primary", item[1]))) return b def create_layout(): layout = html.Div(style={ 'background-image': 'url("/assets/YOUR-BACKGROUND.jpg")', 'background-position': 'center', }, children = [ header , dbc.Container(id = 'card-cont', children = [dbc.Row(create_body(item_lists))], style = {'background-color':'white', })# end container , footer ] #end children ) #end div return layout item_lists = [["coca cola t-shirt" , html.A(id = 'item1', href="item link", children = [html.Img(src="image link" )])] , ["coca cola t-shirt" , html.A(id = 'item2', href="item 2 link", children = [html.Img(src="image 2 link" )])] ]# end item list app.layout = create_layout() if __name__ == '__main__': app.run_server(debug = True) Now that the landing page is completed, it is time to explore hosting the Dash App. Hosting the Dash App Before getting into it, to host the dash app you will want two things: A domain name A web server If you’re interested in learning how to set up users and authentication, check out my tutorial, How to Setup User Authentication for Dash Apps using Python and Flask. Buying Domain Names To get a domain name for the web site, use a domain name provider like GoDaddy. Domain names are typically pretty cheap! My UJR domain name was UltimateJerkyReview.com A domain name is an identification string that defines a realm of administrative autonomy, authority or control within the Internet. Simply search google for cheap domain names, shop around and find the best deal you can! Try to make the domain name relevant to the products you’re trying to sell. Finding a Web Server It is fairly easy to find virtual hardware on which you can host a web page. I’ve previously reviewed a few hosting options for DIY web projects. I think the company hetzner.com offers some of the best prices for the amount of compute power available. I’ve used DigitalOcean a few times too. Both are stable and affordable options! It is best to do your own research, but I’d recommend either of those options to start. To serve the website to external users across the web requires some kind of web server back-end. 
Nginx is a lightning-fast, open-source web server that is easy to use and perfect for this use case! Along with Nginx, I use uWSGI to run the website as a background service that starts when the Linux operating system boots up. Although it isn’t difficult to host the website, the process contains many steps. Review the complete step-by-step guide to hosting Dash applications on Linux in my hosting guide: Once the site is hosted, look into protecting your domain using a service like Cloudflare. Cloudflare offers web-infrastructure and website-security services including DDoS mitigation, Internet security, and distributed domain-name-server services. It is completely free to set up too!
https://medium.com/swlh/how-to-start-a-business-in-an-afternoon-using-python-and-dash-48a8cb08f290
['Eric Kleppen']
2020-12-15 15:19:48.701000+00:00
['Programming', 'Entrepreneurship', 'Business', 'Web Development', 'Marketing']
The Majority of Genre Fiction Follows This Format. Does Yours?
Framing your structure Imagine a rectangle divided into four equal blocks. Image provided by the author Now label each section. The first block is the Preparation Phase, the second the Reactive Phase, the third the Proactive Phase, and the fourth the Conclusion Phase. Image provided by the author Easy, right? Next, we’ll deal with the lines between the blocks. The line between the Preparation and Reactive Phase we’ll call Game Changing Moment #1. Game Changing Moment #2 is the line dividing the Reactive Phase from the Proactive Phase. And, you guessed it, Game Changing Moment #3 divides the Proactive Phase from the Conclusion Phase. Image provided by the author This is the basic framework for all good stories, especially genre fiction. Readers subconsciously look for this structure and identify with each of these key points. When this framework is present, the story seems complete, and our readers feel that sense of emotional satisfaction. Let’s look at each segment in more detail. The Preparation Phase Image provided by the author This section encompasses the first quarter of your novel. This block sets up the story by introducing your characters and setting. Readers get to see what your character’s daily routine is like prior to the cascade of events that sets their life on a different course. One character should emerge as the hero at this point. By showing their normal existence, readers can then identify with and emotionally respond to them at that moment when events spiral out of control. The Preparation Phase is also the place to identify the stakes the hero will be playing for and introduce any inner demons that may hinder them later on. Game Changing Moment #1 Image provided by the author This is the moment where the story really gets going and is arguably the most important event in your entire book. Game Changing Moment # 1 introduces the primary conflict. The main character’s new quest, goal, or need is firmly established. This inciting incident should also inform the reader what is at stake. The Reactive Phase Image provided by the author The second phase of the story shows the hero responding to this big conflict. This first reaction is often unfocused and unorganized. They do not yet have the tools needed to attack the problem and are instead reacting. This blind reaction may take many forms — running, hiding, observing, or avoiding. Basically, our hero needs to buy some time to figure out what’s going on before they can plan their response. Game Changing Moment #2 Image provided by the author This event is what the hero needs to move on from the passive Reactive Phase. It usually enters the story in the form of critical information. The hero’s initial viewpoint of the conflict should change upon receiving this new information. This is also the moment when the stakes begin to raise. However, our hero, armed with this revelation, now goes on the offensive. They can proactively deal with the conflict instead of being at the mercy of events. The path forward is clear. The Proactive Phase Image provided by the author While the Reactive Phase saw our character instinctively reacting, in the Proactive Phase, our hero is actively working to resolve the conflict. They may try and fail multiple times. But each time this happens, the stakes become higher, and the tension grows stronger. If you’ve properly set up your Preparation Phase, the reader should root for the hero to keep trying after each failed attempt. Remember those inner demons? Now is the time to put them into action. 
They can be another reason our hero doesn’t initially succeed. Naturally, the opposing force is also responding to our hero’s attempts. Since their goals are in opposition, the success of one means the failure of the other. After a series of small victories and defeats, just when it seems that the hero may finally gain the upper hand, the all is lost moment enters the story. Without any new information, they have gone as far as they can. Failure is imminent. Game Changing Moment #3 Image provided by the author Now is when the reader expects that final plot point that drives the story to its exciting conclusion. New information is revealed that provides the hero with everything they need to succeed. Game Changing Moment #3 should enter your novel about three-fourths of the way in. From this moment on, no new information should be revealed. All the pieces of the puzzle are in play. The Conclusion Phase Image provided by the author The final segment of any story is the dramatic climax leading to resolution. The ultimate conflict between hero and opposition takes place and the hero’s goal is achieved. At this point, the main character’s inner demons are also laid to rest.
https://medium.com/the-brave-writer/the-majority-of-genre-fiction-follows-this-format-does-yours-722d58009428
['Jennifer Mittler-Lee']
2020-12-21 17:03:16.164000+00:00
['Creativity', 'Writing Tips', 'Fiction Writing', 'Self Improvement', 'Writing']
Importance and Value of Cross-Pollination in the Workplace
Importance and Value of Cross-Pollination in the Workplace To build a high-performing team, we need to find creative ways to cross-pollinate in our business organizations. Cross-pollination can be an essential consideration for aspiring entrepreneurs in new ventures and business leaders for transformation. In this article, my aim is to highlight the value proposition and the importance of cross-pollination for high-performance teams in new ventures and transforming business environments. I’d like to share a simple example which helped transform a competitive, rigid, and difficult culture to a pleasant learning environment. Cross-pollination is a metaphor taken from botany. The literal meaning of it is the transfer of pollen from plant flowers with different DNA which enable the creation of new types of plants carrying different attributes. Cross-pollination is a powerful metaphor to understand the importance of diversity to create fusion in the workplace. Fusion is an enhanced form of collaboration. Fusion refers to joining different things with different attributes or functions together to co-create or re-create a single new entity or form. Fusion relates to concepts such as integrating, blending, merging, amalgamating, and bonding. These terms are compelling norms for enabling inclusiveness and creating a diverse work culture in business organizations. The metaphor of cross-pollination refers to sharing and interchanging of ideas, thoughts, information, and tacit knowledge for the enrichment of team capabilities. One of the critical benefits of cross-pollination in the workplace is to maintain continuous learning. I shared my experience of continuous learning in a story overviewing prominent adult learning theories. I Simplified Prominent Adult Learning Theories For You. Here is my cross-pollination story for your enjoyment. You can find more of these stories on my News Break profile.
https://medium.com/illumination-curated/importance-and-value-of-cross-pollination-in-the-workplace-c5b046001c00
['Dr Mehmet Yildiz']
2020-12-28 16:13:02.842000+00:00
['Writing', 'Business', 'Entrepreneurship', 'Culture', 'Lifestyle']
Cloud Native Architecture Fundamentals
Overview of Cloud Native

Cloud-Native became a buzzword in a very short time and one of the biggest trends in the software industry, but before we even talk about Cloud-Native applications, let’s first understand more about cloud computing fundamentals. In simple terms, cloud computing is the delivery of computing services (including servers, storage, databases, networking, software, analytics, and intelligence) over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for the cloud services you use, helping you lower your operating costs, run your infrastructure more efficiently, and scale as your business needs change. Typically, you use cloud computing in three ways: IaaS, PaaS, and SaaS. For the scope of this blog, the above information should be sufficient to get started.

Cloud-Native is an approach to building and deploying applications that takes full advantage of cloud computing services across these models. Cloud-Native principles and architecture help you build apps faster and better and shorten the path to production. It is about achieving overall speed, scale, agility, and competitive advantage. Unlike much of the continuous hype that drives our industry, Cloud-Native is for real. Consider the Cloud Native Computing Foundation (CNCF), a group of more than 200 major corporations with a charter to make Cloud-Native computing ubiquitous across technology and cloud stacks. As one of the most influential open-source groups, it hosts many of the fastest-growing open-source projects on GitHub, including Kubernetes, Prometheus, Helm, Envoy, and gRPC.

Four Key Pillars of Cloud Native Architecture

1. Microservices

Cloud-Native systems embrace microservices, a popular architectural style for constructing modern applications as a distributed set of small, independent services that interact through a shared fabric. Microservices share the following characteristics: A domain-oriented approach, each implementing a specific business capability within a larger domain context. Each microservice is developed and deployed autonomously and independently. Each microservice is self-contained, encapsulating its own data storage technology (SQL, NoSQL) and programming language/tools. Each microservice runs in its own process and communicates with others using standard communication protocols such as HTTP/HTTPS, gRPC, WebSockets, or AMQP.

2. Container

Nowadays, containerization has become a very popular technology for deploying apps. Containers provide portability and guarantee consistency across environments. By encapsulating everything into a single package, we isolate the microservice and its dependencies from the underlying infrastructure. Kubernetes (K8s) is a container orchestration and management system originally built by Google. When operating at scale, container orchestration is essential; scheduling, networking, monitoring, and failover are common tasks of an orchestration engine.

3. DevOps

DevOps creates a culture and an environment where building, testing, and releasing software happens rapidly. It helps you automate platform provisioning and application deployment, which makes infrastructure and deployments consistent and repeatable. Cloud-Native apps should be managed with DevOps tools like Jenkins, Docker, and Kubernetes.

4. CI/CD
Continuous Integration and Continuous Delivery help you make frequent releases and speed up the go-to-market process. At the same time, this enables you to move incremental software changes into production constantly through automation. Continuous delivery makes the act of releasing software robust and reliable so that organizations can deliver software more frequently with less risk and get customer feedback faster.

Best practices and guidelines for implementing Cloud-Native applications

Modern Design: the 12-Factor App

1. Codebase: One codebase tracked in revision control, many deploys
2. Dependencies: Explicitly declare and isolate dependencies
3. Configuration: Store configuration in the environment
4. Backing services: Treat backing services as attached resources
5. Build, release, run: Strictly separate build and run stages
6. Processes: Execute the app as one or more stateless processes
7. Port binding: Export services via port binding
8. Concurrency: Scale out via the process model
9. Disposability: Maximize robustness with fast startup and graceful shutdown
10. Dev/prod parity: Keep development, staging, and production as similar as possible
11. Logs: Treat logs as event streams
12. Admin processes: Run admin/management tasks as one-off processes

API is the only medium for apps to communicate

Apps communicate using APIs. When you’re building an application, you should think about how it will be consumed by applications running in the same ecosystem, and start by designing an API strategy. A good API design makes the API easy for app developers and external stakeholders to consume. It’s a good practice to start by documenting the API using the OpenAPI specification before you implement any code. Good open-source tools such as Apicurio and Swagger can be leveraged for design and development.

Authentication and Authorization

Security is a wide area to cover as it includes operating systems, networks and firewalls, data and database security, application security, and identity and access management. But here let’s focus on security from an application’s point of view. APIs provide access to the applications in your enterprise ecosystem. You should therefore ensure that these building blocks address security considerations during the app design and build process.

Data in transit: Use TLS 1.2 to help protect data in transit. You might want to use mutual TLS for your business applications. This is made easier if you use a service mesh like Istio or Linkerd on Kubernetes Engine.

Application and end-user security: Transport security helps protect data in transit and establishes trust. But it is a best practice to add application-level security to control access to your app based on who the consumer is. The consumers can be other apps, UIs, vendors, partners, etc. You can enforce security using API keys (for consuming apps), certificate-based authentication and authorization, JSON Web Token (JWT) exchange, or Security Assertion Markup Language (SAML). OAuth 2.0 is the industry-standard protocol for authorization.

Some Critical Design Considerations

Beyond the guidance provided by the twelve-factor methodology, there are several critical design decisions one must make when designing a distributed system.

Resiliency: In a distributed architecture, what happens when Service B isn’t responding to a network call from Service A?
Or, what happens when Service C becomes temporarily unavailable and other consuming services are blocked?

Service-to-service communication: How will the front-end client and any consumer applications communicate with back-end core services? Will you allow direct communication, or will you abstract the back-end services with a gateway façade that provides flexibility, control, and security? What is the right way for back-end core services to communicate with each other? Will you allow direct HTTP calls that lead to coupling and impact performance and agility? Or will you consider implementing an event-driven architecture and decoupling the services using streaming or messaging platforms like Kafka or RabbitMQ?

Data distribution: In microservices, by design, each microservice encapsulates its own data, which means each service has its own database and allows operations only via its publicly exposed API. If so, how do you query data or implement a transaction across multiple services?

Cloud-Native deployment is the way forward to achieve business growth and flexibility in today’s multi-cloud environment. Thanks and happy learning!!
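To make the resiliency question raised above a bit more concrete, here is a minimal, illustrative sketch of one common answer: wrap the call from Service A to Service B in a timeout, a bounded number of retries, and a fallback, so a slow or unavailable dependency does not block the caller indefinitely. The endpoint URL, payload shape, and retry numbers are placeholders, not part of the original article.

# Illustrative only: Service A calling Service B with a timeout, bounded
# retries with simple backoff, and a graceful fallback. The URL is a placeholder.
import time
import requests

SERVICE_B_URL = "http://service-b.internal/api/orders"  # placeholder endpoint


def call_service_b(payload, retries=3, timeout_seconds=2.0, backoff_seconds=0.5):
    for attempt in range(1, retries + 1):
        try:
            resp = requests.post(SERVICE_B_URL, json=payload, timeout=timeout_seconds)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == retries:
                break
            time.sleep(backoff_seconds * attempt)  # back off a little more each time
    # Fallback: degrade gracefully instead of failing the whole request chain.
    return {"status": "degraded", "detail": "service B unavailable"}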
https://medium.com/walmartglobaltech/cloud-native-architecture-fundamentals-ac13f979916d
['Rupesh Patel']
2020-12-01 05:35:02.891000+00:00
['Engineering', 'Cloud Native Application', 'Microservices', 'Architecture Design', 'The Twelve Factor App']
A Ship Wreck Of A Crime
A Ship Wreck Of A Crime A ship filled with gold sank. 130 years later, it was found. SS Central America Image from the public domain The SS. Central America set sail for New York City on September 3, 1857. It carried 10 short tons of gold, as well as 101 crew members. Captained by William Lewis Herndon, the ship was not expected to encounter trouble from mother nature. Three days after it departed, a tropical storm developed off the east coast of the Bahamas. It picked up strength over time and became a category 2 hurricane on September 9. The same day the SS Central America entered the Atlantic Ocean. As the storm intensified, the crew worked to keep the ship afloat. By the 11th of September, the sails were ripped and torn from the winds. The boat was pushed back and forth, thrown off course. Making matters worse, the boiler threatened to quit working, and water poured in from a broken seal. Morale amongst the crew was low. Many believed they were going to die on the ship. When the storm seemed to subside, they cheered. Bucket brigades started trying to reduce the amount of water in the ship. Their efforts failed, as the water continued to rise thanks to the second part of the storm. Hurricane 2 of 1857 succeeded in sinking the SS Central America. While there were survivors, the majority of those on board drowned. The gold they were carrying was lost with the ship. After The Disaster The world went into mourning when people learned the SS Central America sank. Families and loved ones mourned the loss of life. William Hendron was hailed a hero for trying to save as many of his crew as possible and go down with his ship. Authorities in the United States rewarded his courage by naming two navy ships after him. Hendron, Virginia, was also named for the captain. Also of note, his daughter married Chester Arthur, a future president. As the mourning subsided, practicalities replaced them. Word spread that $8 million in gold went down with the ship. Investors were skittish about the news, helping usher in the Panic of 1857. The financial crisis nearly destroyed the stock market. Insurance companies paid claims to those who owned the gold bars. The matter was thought to have been settled. Until it wasn’t. Searching For The Gold On the 133rd anniversary of the SS Central America’s sinking, a crew set out to find the wreckage. The effort was led by Tommy Thompson, a treasure hunter from Ohio. He used the Bayesian Theory to locate the ship. Using a remotely operated vehicle, the crew discovered artifacts and the gold. An appraiser valued the found gold between $100 and $150 million. The recovery crew celebrated their victory! Life would be good for them; they were certain. The celebration was short-lived. After the discovery of the gold, the insurance companies that paid damages in 1857 sued the crew. They argued the gold belonged to them since their money had covered the cost of losing it. Crew members, including Tommy Thompson, argued that the gold was abandoned. As the judge looked at the case, tensions between the sides rose. Finally, the verdict came in. 92% of the gold belonged to the crew, 8% would go to the various insurance companies. With the decision, everything seemed to return to normal. Tommy Thompson released a book about his adventure discovering the SS Central America. Everyone was living the good life. Money was plentiful, and nobody had any complaints. That changed in 2005. On The Run Those who invested in the 1988 expedition sued Tommy Thompson in 2005. 
They alleged he had not paid them the money they were owed from the find. In fact, he had absconded with the money and not even spoken to several of them after taking their initial investment. The next year, several crew members filed a lawsuit. They alleged that Tommy owed them money from the discovery. He had shut them out as well. As evidence was presented in the lawsuits, it was discovered that the treasure hunter had an off-shore bank account. With that discovery, some of the investors began to argue they were scammed. They believed Tommy misled them intentionally and never planned on sharing the loot with them. Proving their point, Tommy Thompson disappeared in 2013. After a judge issued an arrest warrant for him, police went on the hunt. They looked everywhere; they believed he could be hiding. Clues were maddeningly short on supply. It seemed incomprehensible that a person could vanish into thin air, even with a lot of money. Do Not Pass Go Authorities found him in Boca Raton. Tommy Thompson was brought into court and faced the judge. He was asked why he didn’t appear before the court; no answer was given. The judge sentenced him to 2 years in jail and a $250,000 fine. As part of the deal that was struck, Tommy was to tell the court where the gold was hidden. He refused to do so. The court reminded him it was part of the deal to tell them where the gold was and assist in bringing it back to the U.S. They were met with more refusal from the treasure hunter. By mid-December 2015, the judge was tired of Tommy’s game. The treasure hunter was found to be in contempt of court. This charge usually only requires a defendant to be in jail for 18 months. As with everything else about the case, nothing was simple or traditional. Over the past five years, Tommy has been intermittently asked about the gold. There is hope that he will come out and tell them where it can be found every time. Each time, he disappoints them and says he has no idea. Even suggesting that he forgot. Which earned a rebuke from the judge. In an official court record, his honor asked how someone could design and patent a submarine but not remember where they put gold. In October 2020, Tommy appeared in court via video. He and the judge went through their usual routine, frustrating the court even more. It was noted that the former treasure hunter has spent 1,700 days in jail and owes more than $1.8 million in fines. It appears as though the hunt for the SS Central America gold may be on once more. This time, there are people alive who can help find it.
https://medium.com/crimebeat/a-ship-wreck-of-a-crime-887f4d5809b0
['Edward Anderson']
2020-12-17 10:32:40.711000+00:00
['True Crime', 'Weather', 'Culture', 'Science', 'History']
7 Habits That Helped Me Lose 20 Pounds During The Pandemic
1. I Used Jordan Syatt’s x12 Rule to Decide My Daily Calorie Limit Even though I just called this a rule, I don’t mean to say you need to stick to this every single day. I try to live by the 80/20 rule. I went into this phase admitting I’m human. The only way to make lasting change is to understand this is not a diet, this is your life. Make small, incremental changes you can sustain for life instead of dramatic changes you’ll give up on in a few days. How did I decide on a calorie limit? I follow Jordan Syatt online, he has an insightful video on YouTube to help you calculate how many calories you should eat for fat loss. I encourage you to watch the full video but here’s the short answer so you can start now. Take your goal body weight in pounds and multiple it by 12. Example: 150 lbs x 12 = 1800 calories a day. For your protein intake, he recommends you take your goal body weight and multiply it by 1. Example: 150 lbs x 1 = 150g of protein per day. I rarely ever met this amount yet still achieved progress. Track everything you eat and stick to your calorie limit 80–90% of the time for 30 days and you will see a difference on the scale and in your clothes. It’s important to include foods you enjoy within your calorie limit. For example, I find a way to include french fries (specifically the Alexia brand) or a protein cookie nearly every day as a treat. The main reason most diets fail is people go to the extreme and eliminate every “bad food.” This might work for a while but eventually, your self-discipline will run out. This is a lifestyle, so find ways to incorporate foods you enjoy and you will find success. 2. I Prepared and Ate Every Meal at Home All restaurants were closed. The only option for food was the grocery store. Not only did this save money, but it naturally encouraged good nutritional habits. While everyone was panicking and stocking up like crazy on canned foods, I shopped the abundant produce aisle. 3. I Tracked Everything I Ate And I mean everything. I used the free app Lose It! to scan and input every bite of food I consumed. Tip for tracking: Buy a food scale. The only way to be exact with what you’re putting in your body is to measure. Measuring cups can be misleading, use weight in grams instead. Most people have no clue how much they’re overeating. Simply by tracking what you eat for a week will help you lose weight. Once you realize how much a serving size actually is, you’ll start choosing healthier options. 4. I Ignored Every Single Fitness Metric Except One Before the shelter-in-place order, I went to the gym 5–6 times a week, focusing primarily on strength training and reached 10,000 steps most days. During shelter-in-place, my activity dwindled to almost nothing. Rather than beating myself up about this, I chose to focus on one thing at a time. Physical activity was never a problem for me. Instead, I narrowed in on the most difficult part of weight loss: Food habits and the mindset behind them. Spending all my time home alone, except for my cat, meant I had a lot of time to think. The motivation behind choosing unhealthy food vs a healthy one became obvious to me. I used a sweet treat or salty snack to ignore a stressful situation at work or feel better after a bad day. I allowed food to soothe my anxiety and depression. By working with a therapist, I learned better ways to cope with my stress and anxiety that didn’t involve food. Despite the fact there is a world pandemic going on, I actually found more peace at home. 
I focused on my mental health, investing in myself, and making better decisions, all actions I know will lead to lifelong change, including weight loss. 5. I Didn’t Only Rely on Myself I believe in working with experts, especially if you want to reach a new level. I knew I needed guidance, so I searched for a few nutritionists in my area, called, and made an appointment. Recipe by Ronaldo Linares; plate photography by Renée Comet The first nutritionist I met offered a few nuggets of wisdom, such as adding more protein into breakfast by drinking a protein shake. She also educated me about the plate method. The plate method is based on a 9-inch diameter plate to help you keep portion sizes in check. Fill 1/2 of your plate with non-starchy vegetables, 1/4 with grains and starchy vegetables, and the last 1/4 with your lean protein. I’ll admit I’m still working on getting to this ideal plate, but as I said before, I aim for the 80/20 rule, not perfection. Especially while we’re stuck in a rut at home, getting an informed outside opinion can give you exactly the perspective shift you need to change your eating habits. 6. I Didn’t Expect Instant Results I weigh myself every morning. Not because I expect to lose weight every day, but because I like to look at the data. In quarantine, you’re the only accountability partner you have. It also helped me realize how and why weight fluctuates. For example, if I consumed more carbs one day, the scale might go up a pound the following morning, but it would go back down in a day or two. I also highly recommend you take before and after progress measurements. The scale isn’t the full story. You may notice the scale doesn’t budge, but you lose inches in your thighs and waist. If you focus on food and psychology first, you will notice a difference in your clothes. Remember, this isn’t temporary; it’s a lifestyle. Focus on small, attainable changes and give yourself time to see results. 7. When My Weight Loss Slowed, I Didn’t Panic You’ll lose the most in the first 2 months and less in the months that follow. I lost 15 pounds in the first 6 weeks, 2 pounds in May, and 3 pounds in June. You might feel disappointed with slow progress, but it’s still progress. If you take into account that life will keep throwing you off track — especially in a pandemic — a 2-pound weight loss per month is actually about average. Setting attainable expectations will keep you going. It doesn’t matter if you lose 30 pounds in the first month if you gain it back the next when you lose motivation — by setting attainable goals, you can continue to make small changes you can live with.
https://medium.com/in-fitness-and-in-health/7-habits-that-helped-me-lose-20-pounds-during-the-pandemic-6a516c4892cc
['Monica Galvan']
2020-11-26 21:48:01.784000+00:00
['Health', 'Wellness', 'Lifestyle', 'Nutrition', 'Fitness']
Turing Test vs Chinese Room Argument
Albert Einstein once famously remarked that “The measure of intelligence is the ability to change.” For the past two centuries, there has been a constant effort to define intelligence in both the medical and AI communities. Two of the most famous attempts to tackle this are the Turing Test and the Chinese Room argument. The Turing Test is a method of measuring whether an AI is capable of thinking like a human. It is a deceptively simple method of determining whether a machine can demonstrate human intelligence: if a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence. In simple words, even if you are not a human but act like one, you are treated as human. On the other hand, the American philosopher John Searle argued that the Turing Test was inadequate. His argument and thought experiment are now generally known as the Chinese Room argument. Imagine yourself in a closed room. Now suppose a girl passes a letter written in Chinese to you. You don’t know any Chinese, but you have a Chinese-to-English (and vice versa) computer program. With its help, you decode the letter and converse with the girl. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but could not produce real understanding. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. Computers, at best, can simulate what we can understand. Which one is more appealing to you? I think most people will go with Searle, because it makes us feel that we are much more interesting and complex to decode.😯 Photo by kazuend on Unsplash While both theories might intrigue you and get you thinking about which is better, I would also like to point out a few caveats of each. Turing did not explicitly state that the Turing Test could be used as a measure of intelligence or any other human quality. He wanted to provide a clear and understandable alternative to the word “think”, which he could then use to reply to criticisms of the possibility of “thinking machines” and to suggest ways that research might move forward. One of the most commonly raised objections to the Chinese Room is that even though the person in the room does not understand Chinese, the system as a whole does — the room with all its constituents, including the person. So here I conclude my thoughts on this. What’s your take on both? Do you think either one can be used on its own as a metric for measuring intelligence? Here are a few of my other blogs.
https://medium.com/ai-in-plain-english/turing-test-vs-chinese-room-argument-4e7592c3277
['Parth Chokhra']
2020-10-22 16:27:57.540000+00:00
['Machine Learning', 'Philosophy', 'Artificial Intelligence', 'AI', 'Data Science']
Best Ph.D. Programs in Machine Learning (ML) for 2021
Source: Derivative from an original on Pixabay, created with Photoshop. Best Ph.D. Programs in Machine Learning (ML) for 2021 These are the best universities to pursue a Ph.D. in machine learning, with research rankings in AI and machine learning. Last updated December 19, 2020 Considering various factors such as the research areas, research focus, courses offered, duration of the program, location of the university, honors, awards, and job prospects, we came up with the best universities to help prospective students choose. This article is most suited for individuals who would like to pursue a Ph.D. with a focus on machine learning and need some guidance in their decision making. 📚 Check out our simple linear regression for machine learning tutorial. 📚 For the list of universities, please jump to the end of the article. Note: The universities mentioned below are in no particular order.
https://medium.com/towards-artificial-intelligence/best-universities-to-pursue-a-phd-in-machine-learning-ml-academic-program-8fa31eee3b6d
['Towards Ai Team']
2020-12-19 22:13:28.087000+00:00
['News', 'Artificial Intelligence', 'Technology', 'Future', 'Education']
Why I Became a Writer
Traveling the world was always a top priority for my wife and me. We wanted to eat sushi in Japan, baguettes in France, pizza in Italy. Neither of us had really traveled anywhere; we had been dreaming about traveling the world since we first started dating. Here’s what happened that changed everything: At work one day, I mentioned my dream to travel the world with some coworkers. And I’ll never forget how my coworkers reacted: They scoffed. They scoffed. The Epiphany When I told my coworkers my dream, they straight up laughed at me. They told me to keep dreaming — I’d never be able to get the time off work. Sure, maybe a long weekend trip somewhere local. But to get a week off in a corporate job? Let alone several weeks? Ha! Maybe after 5–10 years, when I’d accrued enough vacation time. Welcome to the real world, kid, they said. This is how it is. And that’s when I was like…fuck this. I decided right then and there. Come hell or high water, I’m going to be a writer and never, ever, ever work in a terrible place like this ever again. But how? Right around the same time, I stumbled upon an article that gave me the answers I needed. I still remember it vividly; it was titled Why Most People Will Never Be Successful. The article said that most people were simply not willing to do the work to get what they wanted. Most people won’t be successful because they never fully commit to doing the work to achieve their dreams. That article punched me in the face. It was like a light went on in my head. I saw things more clearly than ever before. It felt like waking up from a 25-year-old dream. Over the next few months, I went through an incredible mental transformation. I needed to be a writer — OK. At the time, all I had was some silly blog with, frankly, nothing good on it at all. I realized I hadn’t been taking myself seriously. I had just journaled silly thoughts here and there, and even written a whole eBook of junk that wasn’t helping anybody. I knew it was time to get serious. I also knew it was time to get the hell out of my 9–5 job. Every day when I walked into work, it felt like I was carrying a big, gross, slimy, acidy black ball of dread lodged firmly in my stomach. The Plan: So my wife and I got serious. We started working on a secret plan to escape. We told no one — we didn’t want anyone at our jobs finding out. We secretly got our Teaching English as a Foreign Language (TEFL) certificates so we could finally travel the world (and get paid!). It took months. Remember, I was working full-time and still finishing my master’s degree. What little free time I had left vanished. But it was worth it. We finally had a plan. I finished my last online class on October 18th, 2017. The next month, I told my boss I was leaving. My last day at work felt like a dream — I knew I was never coming back. Ever. Then we sold everything we had and moved to South Korea to teach English. We didn’t speak the language, didn’t know anyone there, and had no experience traveling. But we couldn’t have been more excited. Still, I knew I hadn’t “made it” yet. I was keenly aware: if I didn’t become a serious writer while in South Korea, I’d be forced to come back to America and find another horrible, soul-sucking desk job. Not an option. So I started posting on this little site called Medium. Like, all the time. Before, I would post a few times a month on my little blog that no one read, then not publish again for months out of depression and frustration. But in Korea, I started publishing content almost every day. 
It was a really busy schedule — teaching and grading homework and navigating a new country/food/culture/language/job — but desperation made me incredibly motivated. Come hell or high water, I would become a successful writer. I started consistently waking up at 5:00 am (!) before long-ass school days just to write another article. Every break I had at school, I’d scarf down some instant ramen, walk over to Starbucks in the freezing South Korean winter, and work on another article. Write, write, write. Things started…happening. I started landing on the “top 20 most popular Medium articles” list, like, all the time. I remember one day, my wife and I were sitting in a Korean coffee shop on our day off. When I opened my computer, I saw that I had over 600 views in a single day. That was the most I’d ever, ever, ever had. I knew something was happening, something special. So I kept writing, kept publishing. But then, heartbreaking disaster. The disaster: I had signed up for a writing course hosted by my favorite Medium writer. He was one of the most successful writers on the platform, and it’s no exaggeration to say I was a freakin’ disciple of his. My notebooks and journals were filled with quotes by him. I devoured his content and wanted to be just like him. I practically wanted to be him, and my writing style was heavily influenced by his. So when his course came out, I jumped at the chance to spend hundreds of dollars to buy it. In my eagerness and excitement, I had begun stupidly, unintentionally plagiarizing this guy’s work. Not a lot — a line here, a quote there. It wasn’t out of malice or deceit; I basically worshipped the guy! I wanted to be just like him! I was quoting and citing him all the time in my work already. But a few times, I just didn’t. In my attempts to imitate his style, I began mimicking him a little too much. Someone in the course called me out. I just wanted to say, I’ve been seeing some plagiarism and copying in here. I’m not going to name names, but unless this person comes forward, I’m going to say something soon, the ominous comment read. Wow, sucks for that person, I remember thinking. I didn’t even realize they were talking about me! The next day, I woke up and used the restroom, as I usually do. I was sitting on the toilet when I saw a notification on my phone: my favorite Medium writer (the guy who made the course) had sent me a personal message!! I was ecstatic. Even though I was undeniably, indisputably his #1 fan in the whole world, there were over a hundred students in the course, and I knew he had lots of customers just like me. Still, I couldn’t believe he had actually sent me a message! But when I read the message…oh man. I still remember how stunned and horrified I felt. I know you’ve been plagiarizing my work, it read. Something like that. That’s unacceptable. I don’t appreciate people stealing my work. We’ve decided to kick you out of the course. That was it. Short and sweet. Final. I remember enormous beads of sweat breaking out all along my forehead. My temperature spiked. I wanted to puke. I wanted to cry. Me, plagiarizing? Me? I’m your biggest fan! I would never do that! Not intentionally, at least! It’s all a huge misunderstanding! But it was about to get worse. I logged back into the course, intent on making a public apology for my oversight and telling everyone how sorry I was for making such a stupid mistake. But people had already beaten me to it. Hey, the copycat is named Anthony Moore! one comment read. My favorite Medium writer commented, too. 
Don’t worry, I spoke with him. We’re kicking him out. My blood ran cold. I kept reading — it got worse. Wow, that guy likes you so much, he practically wanted to BE you! the student mocked. I know X) my hero wrote. They were laughing at me, making fun of me. Using my name. Telling the whole world I was a plagiarizer. That I was no good, that I didn’t deserve to be there. I spent the rest of the day in stunned, somber silence. I couldn’t eat. I could barely concentrate. To have my hero, my hero, shit-talk me in front of the whole class, making fun of me and rejecting me…it was one of the worst days of my life.
https://medium.com/publishous/why-i-became-a-writer-bc2effe0bff1
['Anthony Moore']
2020-05-11 22:39:48.140000+00:00
['Medium', 'Anthony Moore', 'Entrepreneurship', 'Business', 'Writing']
Converting a React App to Typescript
Converting Our App to TypeScript We can now begin using TypeScript by renaming our existing files to use the TypeScript file extension. Let’s start by renaming our entry point from index.js to index.ts. Also, make sure to update the entry point in webpack.config.js. After you make your update, remember to kill your server and restart it. After you make the change, you should see the following: Rule: You can only use JSX in files ending with the .tsx extension, not the .ts extension. Since our index.ts file uses JSX syntax, the compiler fails to recognize it. It will only recognize JSX in files ending in .tsx. If we rename our entry point to index.tsx, the app works. By contrast, if you try renaming your cart.js to a .ts file, you will encounter no problems with your build since JSX is not used. Understanding Types vs. Interfaces in TypeScript TypeScript does have a degree of type inference, but for the purpose of learning, we will manually type our application. There are two ways to define types: interfaces and type aliases. They have a number of differences. You can use what you prefer, but Microsoft generally recommends the use of interfaces where possible. In general, interfaces are useful for objects. They allow you to denote the shape of the data you are using. Types (or type aliases, as they are known) are, in my opinion, better for functions and individual properties. Below are two videos with some more information on the differences and possible approaches: a good overview video, and an in-depth comparison of interfaces vs. types in TypeScript (optional but great). Adding Types to Plain JS Files In this section, we will begin adding types to our project. We will focus on adding types to our files that do not contain JSX, like our utils file. I like to centralize our type definitions so they can be easily imported and reused. We will first add a types folder in our src directory. Once this is done, we will take a look at our product.js file in our constants folder. As you can see, our product information is an array of objects with a number of properties like id, quantity, etc. Below is an example. Let’s now create an interface that defines the expected properties of a product. Let’s create a file called product.ts in our types directory. Once we have created the file, let’s write our first interface. Below is the declaration. Our first interface! — Defined in types/product.ts Now that we have defined an interface, let’s use it in our application! Let’s type our list of products using our interface. This is located in constants/product.js. Our first step is to rename the file to product.ts to unlock the power of TypeScript. Next, we have to import our Product interface. We will then annotate our list to denote that its type signature is an array of Product. Here is a look at what our updated file should look like. We have added in some comments as well. Annotating our list of products using our Product interface After making this change and starting the app, you should see … an error! Reason: the “description” property is not part of all our products. We can fix this by making the “description” property optional. Additionally, we can also define a ProductList type. We will head to our types/product.ts file and update it with the following code. Our updated product type declaration file Upon restarting the app, it should now correctly compile. If you want, you can also replace Product[] with our new ProductList type. (A consolidated sketch of these declarations is shown at the end of this section.) In the next section, we will take a look at typing functions. 
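The gists embedded in the original post are not reproduced in this excerpt, so here is a rough sketch of the final state of the declarations described above. The field names (name, price, quantity) and import paths are illustrative assumptions, not the author’s exact code:

```typescript
// types/product.ts (sketch: field names are illustrative assumptions)
export interface Product {
  id: number;
  name: string;
  price: number;
  quantity: number;
  description?: string; // optional, since not every product has a description
}

// A reusable alias for a list of products
export type ProductList = Product[];
```

And the annotated product list:

```typescript
// constants/product.ts (sketch): annotating the product data with the Product interface
import { Product } from "../types/product";

export const products: Product[] = [
  { id: 1, name: "Sample product", price: 9.99, quantity: 2, description: "Has a description" },
  { id: 2, name: "Another product", price: 4.5, quantity: 1 }, // no description needed once it is optional
];
```

With the optional description and the ProductList alias in place, the compiler accepts products that omit a description, which is exactly the fix the error described above calls for.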
Adding Type Signatures to Functions In this section, we will be typing our cart utils file. This file is responsible for calculating the total of our cart. As always, our first step is to rename our cart.js file to cart.ts. We should note that this file uses lodash. If we want to leverage the type safety that TypeScript affords us, we need to import type definitions for lodash. These definitions will let the compiler know what types our lodash functions expect as arguments and what type they return. Instead of having to type these functions manually, we can use a project on GitHub named DefinitelyTyped, which hosts type definitions for a huge number of common packages. Let us first install the types for lodash with the below script. Installing type definitions is as easy as an npm install for most popular packages. Let us now import the product types that we defined. We can then type our first function, which calculates the total cost of a product. We will see two approaches to do this: typing inline or predefining a type alias. Type annotating our function in-line. Type annotating a function using a pre-defined type alias. Whichever approach you use is up to you. I generally like to use inline typing if the function is not reused. This allows the typing to sit next to the logic, functioning as documentation. I usually use the type alias approach for functions that are passed as props or reused. We can now continue to type the rest of our file. Below is our fully typed file. Our fully typed cart util file (A consolidated sketch of the typed cart util appears at the end of this excerpt.) Adding Types to React Components We will now begin to look at adding types to our React components. First, let’s install the type definitions for the React libraries that we are using. We can do so with the below script. We now want to rename our component files to the .tsx extension. After we rename our files, we want to restart our server. You should then see the following error.
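As with the earlier gists, the cart util code itself is not included in this excerpt. The sketch below illustrates the two typing approaches described above; the function names, the lodash helper (sumBy), and the install commands are typical choices and assumptions, not the author’s original file:

```typescript
// Typical DefinitelyTyped installs (assumed, not the author's exact scripts):
//   npm install --save-dev @types/lodash
//   npm install --save-dev @types/react @types/react-dom

// utils/cart.ts (sketch)
import { sumBy } from "lodash";
import { Product, ProductList } from "../types/product";

// Approach 1: typing the function in-line, so the annotation sits next to the logic
export const getProductTotal = (product: Product): number =>
  product.price * product.quantity;

// Approach 2: pre-defining a type alias and annotating the function with it
type GetCartTotal = (products: ProductList) => number;

export const getCartTotal: GetCartTotal = (products) =>
  sumBy(products, getProductTotal);
```

The alias form is handy when the same function shape is reused or passed around as a prop, while the inline form keeps simple one-off utilities self-documenting.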
https://medium.com/swlh/converting-a-react-app-to-typescript-f72b65d798bc
['Jayson Alzate']
2020-12-17 20:36:51.665000+00:00
['Programming', 'JavaScript', 'Web Development', 'Typescript', 'React']
Will My Daughters Ever Drive a Car?
Me and Sofia — photo by the author Will My Daughters Ever Drive a Car? My daughters will grow up as Autonomous Passengers in a new kind of mobility, but will they need a driver’s license? I am the father of three amazing daughters: Stella, a beautiful and smart 13-year-old teenager; Sofia, an incredibly smart 3-year-old; and last but definitely not least… Emily, just 1 year old but already able to show her strong and curious character. These girls do their best to keep me busy with all the kinds of stuff fathers must deal with every day, and maybe it is too early to get worried about it… but recently, after a pleasant conversation with a friend who, like me, works in the transportation industry, an almost philosophical question started echoing in my mind: My daughters are digital natives, but they still use very analog, traditional pieces of paper to draw their amazing and fantastic animals and flowers at home and a lot of paper notebooks while learning at school… so the question is: will they need a driver’s license in 2038? Well, considering how things are developing today, probably not… Let me tell you why… Photo by Ishan @seefromthesky on Unsplash The dream of a new mobility Imagine waking up early for work, getting ready, reading some news online on your tablet while having breakfast, without worrying about traffic or the chance of being late because you have been stuck in a traffic jam for forty minutes. You access an application on your cell phone and request a car to take you to work, and in a few minutes, it arrives. A mechanical voice says a polite “good morning, welcome” and confirms that the destination is the one entered in the application. The vehicle starts the journey, driving automatically. The AI algorithm will calculate the shortest route considering variables such as traffic, traffic lights, the number of vehicles, accidents, and roadworks along the way, and whatever else may affect the trip. Comfort will be a priority: air temperature, water if you feel thirsty, and various playlists to choose from will be available. This scenario seems to be the future of mobility. My daughters will grow up in this reality, and just as they are deemed digital natives today… they will grow up as Autonomous Native Passengers in a new kind of mobility, ruled by Autonomous Vehicles. Photo by Adrian Williams on Unsplash Safety as a horizon for new mobility. I love to imagine this autonomous scenario for the future, which may not happen anytime soon, considering that full automation faces “very complex” problems that could end up reducing these vehicles to some very restricted applications. But I am a dreamer, and I believe that Autonomous Vehicle technology is an indispensable tool for reaching zero accidents on roads worldwide. This will be the leading driver (pun intended) for massive adoption. I was born in Brazil at the end of the ’70s and had the undesirable fortune of growing up in a country where traffic incidents used to cause more deaths than some military conflicts. Over the years, I have lost an unacceptable number of friends to traffic incidents. Recent studies show some improvements, but the situation is still dramatic, with the number of fatal incidents even increasing in some places. According to the World Health Organization, in 2018, car accidents were the leading cause of death for young people up to 29 years old. To give you an example, about 94% of accidents in the USA are related to human error, while worldwide traffic accidents cause around 1.35 million deaths a year. 
It is argued that, by 2025, autonomous cars will represent 4% of the total vehicles sold in the world, and 75% by 2035, within 15 years. This could drastically reduce the number of incidents across the globe. I will tell you why… Photo by Dan Gold on Unsplash Autonomous Vehicles will drive better than you do. Autonomous vehicles will not get tired, distracted, or intoxicated while driving. They will be controlled by AI systems connected to redundant sensors and other top-level electronic equipment. They will go from one place to another, as the user instructs, and while on the way, they will collect all the necessary environmental information, such as signals, pedestrians, and other vehicles, while being guided by satellite systems to make a safe and optimized trip. This technology is being developed globally, in universities and research centers, and in the automotive industry itself. Many brilliant minds are putting a tremendous amount of money and effort into making it happen safely and efficiently. Photo by Lerone Pieters on Unsplash Rethinking our cities If this technology succeeds, and I really believe it will, not only will my daughters’ driving experiences change, but the whole metropolitan model of wherever they decide to live in the next 20 years will change, too. According to a study carried out in the United Kingdom, shared AVs will increase available urban space by 15 to 20%, mainly by eliminating parking areas. Research published internationally offers optimistic projections for the application of autonomous transport in different contexts. In the next 20 years, I believe it will be much more comfortable and pleasant to live in cities once they start adapting to Autonomous Vehicles. This transformation is already taking place, and it could bring us to zero accidents on the road in the next 20 years, but it will depend on a process of profound change based on three pillars: shared and connected, autonomous, and electric mobility, relying on technology both inside the vehicles and outside of them. My daughters will grow up in a complex metropolitan ecosystem formed by drones that carry people or support agriculture, robots, vehicles of various sizes that can carry many people or just one (so-called micro-mobility), wireless electric charging, and other technologies we are already starting to see today. But even with all this automation available, will they be able to, or required to, drive? The question is still open… and maybe, to help us answer it, we should look at the next point: Photo by shun idota on Unsplash What are the advantages of AVs? As we can see today, the main advantages of Autonomous Vehicles are safety enhancements, but also time savings. For example, those who today drive for an hour to get to work could dedicate themselves to work and do video conferences in their vehicles… Hopefully, my daughters in 2038 will use their commuting time to study, watch some programming tutorials on YouTube, or simply chat with their friends and colleagues, thanks to the increased comfort and security delivered by the AVs. But they will probably not have a long commuting time, since AVs are expected to reduce traffic and accidents thanks to the remarkable optimization that AI algorithms will apply to our mobility. For sure, their health will benefit greatly from less pollution in our cities, because 100% of AVs in the future will be electric. Photo by takahiro taguchi on Unsplash And what about the driver’s license? 
But what happens if, despite all these features and improvements provided by full automation, my daughters decide to put themselves behind the steering wheel, assuming a steering wheel is still available in AVs? Well… I really don’t see this as an alternative in 2038, when Emily, my youngest daughter, will be 18. According to research carried out in the UK in 2019, drivers of self-driving automobiles will need certification to adapt to the new vehicles. According to the survey, the driver’s license will continue to be indispensable in the future, since vehicles will require human intervention in certain circumstances. This kind of study is useful for shedding some light on the limitations of the technology today. However, I still believe that in the next 20 years, many of the open questions that still represent a roadblock to the full adoption of AVs will find reliable answers. We will reach a level of standardization and safety requirements on our roads at which it will be unacceptable to allow humans to drive. Probably we will move the ownership of the driver’s license from humans directly to the algorithms. But this is another story that I will cover in a future article. Photo by Gabe Pierce on Unsplash Conclusion Self-driving vehicles will become part of our lives. When controlled by algorithms instead of humans, vehicles will lose their current value as a fetish or status symbol and finally become a tool. The benefits of AVs are numerous and significant, and their adoption will represent a great challenge to many business models as we know them today. Maybe it is too early to imagine AVs on a large scale on the streets, but it is time to start asking questions and preparing the ground for them in all spheres. Autonomous Vehicles will be a reality very soon. In 1908, Henry Ford revolutionized the way we see cars, making them a symbol of utilitarianism, comfort, and status. Now, after just over a hundred years, AI will reinvent mobility and start a new era. Read more about it If you want, you can read more about Autonomous Vehicles in these articles:
https://medium.com/swlh/will-my-daughters-never-drive-a-car-6e579158717a
['Jair Ribeiro']
2020-12-06 18:00:22.287000+00:00
['Mobility', 'Artificial Intelligence', 'Future', 'Autonomous Vehicles', 'Autonomous Cars']
Serverless Alternative: Executing Python Functions using AWS, Terraform, and Github Actions
Serverless Alternative: Executing Python Functions using AWS, Terraform, and Github Actions Automate the deployment and execution of a Python function without worrying about package size, execution time, or portability Photo by Alex Knight on Unsplash What’s better than Serverless? Serverless is all the buzz these days, and for good reason. Serverless is a simple yet powerful cloud resource for executing function calls without worrying about the underlying infrastructure. But every superhero has their kryptonite, and recently I’ve run into a few issues with AWS Lambda Serverless Functions: Package Size Limitation: My Python dependencies are larger than the 50 MB compressed (and 250 MB uncompressed) size limits. Execution Time Limitation: My Python function takes longer than the 15-minute limit. Lack of Portability: AWS Lambda functions aren’t easily portable to other cloud vendors. The obvious alternative is provisioning an EC2 instance to install the dependencies and execute the function, but I don’t want the server to be on all the time. Time is money, and EC2 instances running 24/7 cost money. I don’t want to manage the deployment, manually turning the instance on and off and executing the function. I also want to have function portability in case I want to deploy this function in a different cloud. Ultimately, I want to automate the process of provisioning an EC2 instance, executing the Python function, then destroying the EC2 instance and all underlying infrastructure. (If you simply turn off the EC2 instance, you will continue to pay for the volume.) Enter Terraform and Github Workflow. Terraform and Github Workflow are tools any modern DevOps or Cloud engineer needs to build and deploy applications. Terraform quickly provisions cloud infrastructure to execute the function. Terraform scripts are also easily portable to other cloud vendors with changes to the services used. Github Workflow manages the deployment. We are also using a Github repository to hold all the Terraform and Python code used by Github Workflow. Here is a video of me running the Github Actions workflow, showing how the function is executed and how Terraform makes changes in the AWS console: Github Workflow deploying AWS infrastructure using Terraform and executing a Python Function Outline: AWS Setup, Terraform Script, Github Secrets, Github Workflow YAML Setup, Executing Python Function, Conclusion. AWS Setup The first step is to set up AWS so we have the right user permissions and key pairs to use for the Terraform scripting later. I won’t delve too deeply into user permissions here. For this tutorial, I simply created a new user in IAM and gave my user administrative access (I don’t recommend this; you should always give a user the least amount of access required to accomplish their tasks). Copy the access and secret key somewhere to be used later in this tutorial. Next, you want to create a PEM key to use in the Terraform scripting and for Github Workflow to access AWS. While on the AWS services homepage, select “EC2”. On the left side of the console, select “Key Pairs”. On the top right of the screen, there is a button which states “Create Key Pair”. Enter the name of the key, and select “PEM” as the file format. Finally, hit the “Create Key Pair” button to create the PEM key. Your browser should automatically download the private key. Place this key somewhere accessible since it is integral to the entire process. You will also need the public key that corresponds to your private key. 
To get this, open a terminal, change directory (cd) to the location of the private key, and run the following command: ssh-keygen -e -f aws_private_key.pem > aws_public_key.pem This command should output the corresponding public key. You can copy it into your favorite code text editor. This public key will be important later. Note: I recommend testing the keys before running Terraform scripts by creating an EC2 instance and trying to SSH into the instance with the PEM key that we just created in AWS. Terraform Script Now that we have AWS properly configured, we can create Terraform scripts to provision the resources needed to execute the Python function: Notice that we included an S3 bucket, which isn’t really needed, but I wanted to provide some additional scripts just in case this resource is applicable to your project. Also notice that the public key we created in the previous step can be entered into “<the rest of your public key>”. The egress and ingress rules are not secure; they allow anyone with valid credentials to connect to the instance. But since the purpose of this tutorial is to provide an example, I haven’t configured security properly. I selected a random AMI, but make sure to find the right image for your workload. Note: I recommend test-running the Terraform scripts on your local machine before creating the Github Workflow. I created a folder on my Mac desktop and added the path to the Terraform executable to my Bash profile before successfully initializing Terraform. You can run the Terraform-related Github Workflow actions defined later in this tutorial in your terminal. Please use this link to install Terraform. export PATH=/path/to/terraform/executable:$PATH Note: if you are completely new to Terraform, I recommend this LinkedIn Learning course on Terraform. Github Secrets Before using the Github Workflow to run the Terraform script, we need to set up Github secrets with a few keys related to AWS and Terraform. Here is a screenshot of my secrets: My Github Repo Secrets The “SSH_KEY” secret contains the private AWS key automatically downloaded when creating a key pair in the EC2 console. You can output the private key value by entering this command: cat aws_private_key.pem The “TF_API_TOKEN” secret is for the Terraform API that Github Workflow will use to execute the scripts. Use this link to gain access to the HashiCorp Terraform API token (you may need to create an account). Github Workflow YAML Setup Now that our Github secrets are properly configured, we can create the YAML file in Github Workflow: At a high level, when this YAML executes upon a new push to the Github repository, a new “runner” is created, which is a fresh virtual environment on a Github host that “runs-on” the operating system you define. It then seeks to complete all the “jobs” defined, in parallel. In this case, I only have one job, and thus all the “steps” (consisting of “actions”) are completed sequentially. Each “step” builds upon the previous ones, which means that any changes made in earlier steps are accessible to later steps. Some of the “actions” completed in each “step” “use” pre-defined actions; these are actions created by others that can be imported. In this tutorial, I am using 4 pre-defined actions. The Github Workflow syntax is confusing, so I recommend spending some time understanding the key terms I put in quotes. Note: This is a pretty good introduction to Github Actions. I also recommend this Github Actions course on LinkedIn Learning. 
The YAML file commands are dense, so I will focus on some of the nuances and peculiarities of the code, starting from the top and working down: On line 42, we must change the permissions of the key in order to use it for SCP and SSH later. On line 53, we must import the private key into Terraform before being able to provision infrastructure on AWS. On line 59, I am using “auto-approve” to automatically create the infrastructure. If you try to run this command without “auto-approve”, the terminal requires a “yes” to approve the infrastructure creation. On lines 62 and 65, we are setting environment variables that are needed in future steps. The command on line 62 stores the infrastructure created by Terraform in a JSON format. Then the Python script on line 65 iterates through the JSON text and creates a variable for the EC2 public IP address that we SSH into later. Each time we run this workflow, a new EC2 instance with a different public IP address is created, so we need a script to get the public IP address that we SSH and SCP to later. Here is the Python script, which I call “tf_parse.py”, in the YAML:
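(The original gist is not reproduced in this excerpt; what follows is only a minimal sketch. The state-file name, the JSON structure it walks, which follows the shape produced by terraform show -json, and the use of the GITHUB_ENV file to pass values between steps are assumptions about a typical setup, not the author’s exact code.)

```python
# tf_parse.py (sketch): pull the EC2 public IP out of Terraform's JSON output.
# Assumes a previous workflow step ran something like: terraform show -json > tf_state.json
import json
import os

with open("tf_state.json") as f:
    state = json.load(f)

# Walk the root module's resources and grab the public IP of the aws_instance
public_ip = None
for resource in state.get("values", {}).get("root_module", {}).get("resources", []):
    if resource.get("type") == "aws_instance":
        public_ip = resource["values"].get("public_ip")
        break

# Expose the value to later workflow steps as an environment variable
with open(os.environ["GITHUB_ENV"], "a") as env_file:
    env_file.write(f"EC2_PUBLIC_IP={public_ip}\n")

print(f"EC2 public IP: {public_ip}")
```

Later SSH and SCP steps can then reference the exported variable instead of a hard-coded address.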
There is probably a lot of head-scratching on line 71. Why would anyone add time to the workflow? This took me the longest to debug. My assumption was that once Terraform completes the infrastructure, I can SSH and SCP to the instance. I was wrong. You need some time for the EC2 instance to initialize before running the subsequent commands. I’ve added 20 seconds, but it may take more or less time depending on the type of instance you’ve provisioned. On lines 78 and 79, I’ve added some additional parameters to prevent the terminal from requesting authorization to add the host name. Here are the functions you can use if you prefer greater readability: Note: Use the functions above by entering the following command in the YAML file: Finally, the command on line 83 prevents Terraform from destroying the aws_key_pair in the next step. Here is a useful resource to output all the Terraform states in case you want to prevent the destruction of other resources. Executing Python Function The Python function is executed on line 80 on the AWS EC2 instance. For this tutorial, I am executing a basic Python function, but the sky is the limit. If you want to install some dependencies before running the script, check out line 50 and beyond in the YAML file from my previous article on creating CI/CD pipelines on AWS. Note that the dependencies need to be installed on the EC2 instance and not on the Github Workflow “runner”. Conclusion This tutorial showcases how to automate the deployment and execution of a Python function using AWS, Terraform, and Github Workflow. We highlighted some of the problems with Serverless functions and how this workflow can be a reasonable substitute or replacement. However, it’s important to remember that we pay for the time that the Terraform-initiated EC2 instance is running. It also takes much longer to use Terraform to provision the instances and run the function when compared to simply executing a Serverless function. Remember, we have to provision the underlying infrastructure every time we want to execute the function. Another reason I prefer Terraform and Github Workflow is that AWS Lambda functions lack portability. Once Lambda functions are used, it’s difficult to transport them elsewhere. This is due, in part, to the syntax restrictions for Lambda function returns, Lambda handlers, Layers, and other configurations. Also, leveraging AWS API Gateway to invoke the function further reduces portability to another cloud vendor. Terraform makes it easier to find the corresponding services in another cloud vendor and deploy the workflow there. Serverless functions remain powerful tools for creating scalable services in the cloud, but there are significant flaws and disadvantages. What other possibilities are enabled by this structure of infrastructure creation and deployment? How about managing infrequent, time-insensitive services with this workflow? With some changes to this tutorial, we could create and deploy the underlying infrastructure for application servers, load balancers, and an S3 bucket, and then destroy those resources when the services are completed. This might be crucial for any startup with large, data-intensive applications seeking an effective way to mitigate costs for their DEV and TEST environments, or even PROD.
https://towardsdatascience.com/better-than-serverless-executing-python-functions-with-aws-terraform-and-github-actions-9967509b030f
['Ary Sharifian']
2020-09-23 17:50:10.070000+00:00
['Terraform', 'Python', 'Github Actions', 'Serverless', 'AWS']