Dataset columns: title (string, 1–200 characters), text (string, 10–100k characters), url (string, 32–885 characters), authors (string, 2–392 characters), timestamp (string, 19–32 characters), tags (string, 6–263 characters).
How to Get in the Perfect Mood for Coding
How to Get in the Perfect Mood for Coding Improve your productivity with these emotional tips Photo by Fabian Møller on Unsplash Sitting in front of a computer and “just coding” is not always that easy. We are human too, with worried days and tired mornings. Programming is an emotional game as much as a technical one: you have to manage yourself to unlock your ability to concentrate and get the job done. Think about everything you’re asking of yourself while you code: study a problem and come up with a solution for it; apply the DRY principle while keeping maintainability, scalability, and simplicity in mind; resist distraction when Google is ready to answer any idle question that crosses your mind; work under the pressure of a looming deadline. Coding is a mental game too, and you should care about being in an appropriate mood for it so that your days can be productive. Here’s my advice for reaching that mood and getting the most out of your days.
https://medium.com/better-programming/how-to-get-in-the-perfect-mood-for-coding-21173dd084d
['Piero Borrelli']
2020-10-28 10:47:44.715000+00:00
['Mental Health', 'Work', 'Programming', 'Startup', 'Technology']
9 Writing Lessons I Learned From Drafting My First Fiction Novel
9 Writing Lessons I Learned From Drafting My First Fiction Novel Tips to consider if you’re interested in crafting a new story Photo by Ben Karpinski on Unsplash The idea first started over the summer. I had just finished reading a book by a musician named Andrew Peterson where he talked about the creative process and why it was important for adults to retain the imaginations of their youth. He quoted two of my favorite authors, C.S. Lewis and Tolkien. That day in early June, in my daily journal, I wrote down a question that was starting to creep into the background of my brain: What would it look like to try to write a fiction novel? I’m an avid reader and have been writing for years now. I’ve written drafts of non-fiction books and worked on many stages of publishing other people’s books, but I had never attempted to write my own fiction story. Like most of my ideas, I sat on it and didn’t do anything about it. A few months later, a friend invited me to participate in a creative writing group focused on completing a specific writing project of our choice throughout the fall. I threw out that I’d write a series of short stories, still partially afraid to commit to the idea of an entire novel, and started working in my spare time. As most writers can attest, once I got going, I kept going. I had a story I liked and characters that were starting to take shape. When I found myself thinking about the idea and the events about to unfold within the novel even when I wasn’t writing, that’s when I knew I was hooked. By the end of October, I was sitting on nearly 60,000 words and a partial story. I had never really participated in National Novel Writing Month but thought this would be a good moment, considering I was already at work on a novel. Over the first 21 days of November, I wrote the remaining 75,000 words and finished the draft of my first fiction novel. While writing, I learned so much about the ideas that go into getting a fiction novel down on paper. After combing back through the process and noting the speed bumps, stalls, and sticky spots, I’ve come up with nine core ideas to consider if you want to write a fiction novel as well.
https://medium.com/better-marketing/9-writing-lessons-i-learned-from-drafting-my-first-fiction-novel-bef878d564f0
['Jake Daghe']
2020-12-18 14:01:00.503000+00:00
['Fiction Writing', 'Storytelling', 'Fiction', 'Advice', 'Writing']
Understanding gender detransition
Gender detransition refers to reversing a gender transition, and reidentifying as the gender assigned at birth. I recently came across a detransitioner, Peter, to talk about something that I struggled to understand. Detransition has been used interchangeably with ‘sex change regret’, which is not the same thing. As a transwoman, I wished to better understand detransition, and have a respectful dialogue about it with Peter. It was not an easy thing to do. Peter detransitioned after questioning his political views, and started attending church once he started his detransition. On the other hand, I transitioned nearly a decade ago, and during that time my political views went from progressive to conservative, and I recently became a regular church-goer. Only that I’m not detransitioning, nor interested in doing such. I caved in to detransition once due to social pressure, but it felt very wrong and distressing very quickly. Never taking such morally repugnant action again. So I asked Peter, who went from male to female then back to male, what do changes in political and religious views have to do with gender identity and dysphoria, or lack thereof: “I discovered that I was indoctrinated into leftist thinking, and when I started questioning the trans issue I came to the opinion that it also is an agenda, and I no longer believe in the whole concept that people can change their ‘gender’, that biological sex is it, and doesn’t change. I experienced gender confusion from age 5, which became a regular feature of my life. I found out that I was a DES baby. My mum had taken the potent estrogen when I was in the womb, so I put my confusion down to the effects of the drug. When I found that out I wanted to detransition even more.” DES refers to diethylstilbestrol, a drug that was prescribed to prevent miscarriages during the 1940–70s, albeit ineffectively. Peter referred to this video to elaborate: https://www.youtube.com/watch?v=3fjmnyq0n2s. I contested that gender confusion is not the same as gender dysphoria, but he clarified that he now calls it confusion, after having called it dysphoria post-transition. The conversation continued: Dana: “Why do you think you called it dysphoria at the time, instead of confusion? Was it because you were indoctrinated into leftist thinking? If so, please elaborate.” Peter: “Because it seemed to be an apt description at the time, now I call it confusion because I believe it’s more accurate. I would have benefited more from some kind of biological sex affirming treatment instead. When I found out about the category ‘transgender’ I thought that was me because of the confusion I felt. Now I think giving transgender affirming treatment is wrong, people should be helped to come to terms with their biological sex.” Dana: “Do you think that because you believe that your experience is applicable to the experiences of others, including mine? Because my experience was dysphoria, not confusion. I’m sorry that the healthcare profession failed by not vetting you enough for transition candidature.” Peter: “To me that is just semantics, I saw my experience as dysphoria before, now I judge it as confusion. The health profession doesn’t really vet people, it just relies on people’s self-assessment. At least that was my experience. I don’t think people should be given affirming treatment, instead they should be helped by biological sex affirming treatment, because I think the entire concept is wrong, that you can’t really change gender/sex. 
When I talk about detransitioning I often get accused of not being a real trans, but that wasn’t the case. I was trans for 20 years before I stopped believing the validity of the concept.” Dana: “It isn’t semantics. Confusion refers to uncertainty about what is happening, intended, or required. Dysphoria refers to a state of unease or generalised dissatisfaction with life. When you said, “now I judge it as confusion”, it implies that you misjudged your experience. In other words, your experience wasn’t dysphoria. It seems to me that said healthcare professionals didn’t do their jobs properly, and for that I am sorry that they failed you.” Peter: “Yes, but the way I’m using the words to describe my experience is entirely subjective to me, so at the time dysphoria was an accurate description, now looking back I prefer the word confusion. It seems to me that I was treated no differently from any candidate for treatment, in fact now it is even easier than ever, and affirming treatment is being pushed as the only option.” Dana: “If it’s subjective, then what’s the objective description? What evidence or research do you know of that supports alternative options?” Peter: “It’s difficult to be objective when we’re talking about people’s feelings. There has been research by different people, for example Zucker’s clinic in Canada had a high success rate with biological sex affirming treatment. Unfortunately that kind of approach has been shut down and research suppressed.” I referred Peter to my gender transition memoir: https://link.medium.com/FWlVa3MyYX. I wanted to understand from his perspective, how biological sex affirming treatment would’ve helped me as an alternative. I also pointed out to him that at present, the literature points in the direction that being trans is likely to be innate, that gender identity is usually known by ages 3–5. Even Kenneth Zucker, an American-Canadian psychologist and sexologist who’s (in)famously fallen out of favour with the trans community, has agreed that at age 3, children begin to self-label and form their gender identity. Zucker further elaborated in a 2015 CAMH Gender Identity Clinic for Children Review, that “at age 15, the gender dysphoric child’s dysphoria will most likely to persist, 70%-80% to be specific”. As an authority in North America on this subject matter, he was known to prescribe puberty blockers and later HRT for trans adolescents. I put to Peter that I think he’s misunderstood Zucker’s research. Zucker’s sacking from the clinic is not proof alone that “that kind of approach has been shut down and research suppressed”. Peter disagreed with my assessment: “Biological sex affirming treatment means helping people feel comfortable with their biological sex. I’ve heard differently about Zucker. His research found that 80–95% of children with gender issues naturally came to accept their biological sex without any treatment, so that implies gender identity isn’t innate and can change over time. It’s not just about Zucker’s sacking and his clinic being shut down. Any biological sex affirming approach or research into it is routinely squashed. I read your memoir. Interesting. My mum was more supportive, but I wish she had retained a traditional Catholic belief like your parents. 
Looking back I would’ve preferred the approach of your parents, though I would’ve hated it at the time.” I referred to Peter research, supporting his case, that hasn’t been quashed, which includes https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0202330 and https://www.tandfonline.com/doi/full/10.1080/00332925.2017.1350804. But I argued that trans children should be allowed to socially transition genders, and trans adolescents should be allowed to hormonally transition genders in a careful and medically appropriate manner. I then elaborated: “Despite attempts to prove that trans children and adolescents can grow out of being transgender, the proof of that, which has been thrown around in public discourse, is flawed. As I’ve pointed out to you Peter, some studies seem to show that lots of young trans children change their mind. What these studies do is that they randomly take a group of children from gender clinics and follow them, only to seemingly find that most aren’t trans when they grow up. But what does that mean? It means that a lot of these studies are just studying children, at random, that attend these gender clinics, without differentiating between those who have a gender dysphoria diagnosis, those who identify as trans, with or without diagnosis, and those who don’t identify as trans at all. All these children attended these gender clinics for a wide range of reasons, not just for gender dysphoria diagnosis. So the next time you hear the argument that “60–90% of children will naturally grow out of it”, it’s because that 60–90% weren’t trans to begin with. In fact, many of these 60–90% are LGB(-T)QIA in some way, shape or form, just not T. The 10–40% don’t deserve to be forgotten — they deserve gender identity presentation alignment as appropriate, not denial of transition treatment. It’s worth noting that during the child’s formative years, the most rapid cognitive and emotional growth occurs. We now know that children’s physical and emotional environments dramatically impact the development of their nervous system. This is especially true of the brain and has profound implications for their psychological health as adults. Let’s get it right for the 10–40%.” Despite that, Peter couldn’t agree, claiming that from his reading and reasoning, transgenderism is an agenda that’s being pushed, with the science behind it questionable. Specifically, most studies he’s looked at are allegedly biased towards affirming treatment, and that dissenting views are not given adequate consideration. Admittedly, I don’t believe detransitioners have been served well, given that WPATH’s (World Professional Association of Transgender Health) Standards of Care don’t offer detransition guidelines, even though a majority of WPATH surgeons want to see such guidelines included. I did find it interesting that Peter proceeded to accuse me of entrenching my views and showing no interest in learning about his views. I then clarified: “To be fair, it appears that both of us have entrenched views to varying degrees, whether right or wrong. Hence this conversation. I’ve been hearing your views, but that doesn’t mean you’re free from any challenges from me. Otherwise why would I ask to begin with? A good learner doesn’t rest on their laurels, that’s for sure. If you haven’t already, be prepared to be challenged, because I’ve been prepared for your challenges. Transgenderism could be an agenda that’s pushed, but not necessarily. Please elaborate on why the science is questionable. 
Is the bias towards affirming treatment because there’s an agenda, or is it because the evidence for non-affirming treatment consistently unconvincing? I’ve provided you with references to recent dissenting views which have been given adequate consideration, I think. Personally, I find them unconvincing, but I’m happy to hear you argue otherwise.” And so he did. Peter argued that the entire medical field has bought the affirming paradigm, and that doctors who dissent are consistently silenced or fired: “The dissenting science is often shut down before it can do any studies. Meanwhile, the affirming camp refuse to look into detransition or any dissenting views, saying they are too dangerous to be given air time, as they could cause trans people to be suicidal. But this is just one small aspect of the sexual revolution that has been raging for decades. This video is a doozy, looking at the subject from a social and political philosophical point of view: https://www.youtube.com/watch?v=QPVNxYkawao.” I watched the YouTube video, which was a presentation by Rebecca Reilly-Cooper in critically examining the doctrine of gender identity. Rebecca appeared ignorant of the neuroscience behind gender identity, and growing genetic research on the matter. I put to Peter that preliminary neuroscientific research have come about over the years and more recently, which indicate that the brains of trans adults and children resemble their gender identity, not their apparent ‘biological sex’. If the brain acts as a sex organ, which it does, perhaps trans people are indeed intersex. If this sounds incomprehensible, it’s because we’re currently in the middle of an explosion of brain research, which has greatly enhanced our understanding of the human mind. Stay tuned for more to come. There is also a piece of preliminary genetic research recently which indicate that “certain ‘versions’ of 12 different genes were significantly overrepresented in transgender women”. One study published a few years ago looked at identical twins and found that when one twin is transgender 40% of the time, the other twin is too, which is genetically significant. There are even case reports of twins raised apart and both coming out as trans. Of note, the Royal Children’s Hospital in Melbourne, Australia, has seen more than 700 children diagnosed with gender dysphoria, and only 4% of those children ‘grow out of it’. 96% of those diagnosed as trans as children remained so at late adolescence. On that alone, it appears that the medical field hasn’t bought the affirming paradigm completely, as they do acknowledge the 4%. So the real question is, what do we do about the 4%? I put to Peter that I think he has the answer. I also put to him that: “I do agree that dissenting doctors should not be silenced or fired. But the silencing or firing that has happened is not necessarily an indicator of a cover-up. The dissenting science is given consideration in ongoing longitudinal studies. If the longitudinal studies will support dissenting views, then it will show.” Pete reiterated that it’s clear that dissenting studies are strongly discouraged, and that amongst gender affirmers, such as myself, there is a bias against dissenting views and a bias towards the affirming paradigm, ensuring their studies have questionable scientific value. I disagreed, and he continued: “I have noticed the presence of bias in the scientific world frequently. 
In the field of transgender studies there’s a glut of light and fluffy affirming studies and no or very little scientific dissent apparent, which is necessary for good science. There are accusations of self-peer reviewing and bias in peer reviews. I wouldn’t trust their conclusions when they seem desperate to prove ‘born that way’. The subjectivity of gender as discussed in the Rebecca Reilly-Cooper video exposes the trans movement as ideologically based, and one that is currently elevating the subjective feeling of gender identity over the reality of biological sex, changing our society. This is typical of the leftist thinking that also produces ‘science’ that denies the existence of race, and shuts down studies in the area by calling it ‘racist’. Dissenting science in the trans field is simply called ‘transphobic’. You may claim I must not have been a real trans (and that is the reason I detransitioned), but I can assure you I had the dysphoria we spoke of earlier, transitioned, and lived happily for 20 years before questioning the entire concept. My experience is evidence that it is possible to find a way to end the dysphoria and accept our biological sex, and I can assure you that I wasn’t ‘born that way’, and I bring the message that others can also affirm their biology over their gender identity, that it can be overcome. Overcoming dysphoria is being able to feel comfortable with the biology you were born with, in my case taking testosterone has helped with this, which leads me to think that low levels of testosterone contributed to my hatred of being male. Which also suggests people who are given opposite sex hormones would contribute towards strengthening their trans identity. Overcoming the need to go against your innate biology would be different for each person, because there are many causes, including psychological/trauma that contribute towards gender dysphoria. Reparative therapy is a good model to use, as well as the hormonal side. Such therapy should be encouraged, in fact there is no reason why biological sex affirming treatment shouldn’t be tried first and studied for effectiveness, and studies of this sort are very few: there is a huge ideological bias against this treatment.” I have no idea what my testosterone levels were by early adulthood. Regardless, wouldn’t you think then that my male puberty would’ve reduced my gender dysphoria, not increase it? It is my position that increasing testosterone levels only made my dysphoria worse. It is the psych’s role to address comorbidity issues, which they do. Of course, the not-so-good psychs won’t address comorbidity issues, but there are bad apples around wherever you go. I did not experience any psychological trauma growing up that contributed towards my dysphoria. Rather, it was the other way around: the neglectful decision to not treat my dysphoria in childhood by means of transition exacerbated my dysphoria further than needed. I wanted to know from Peter what effective reparative therapy looks like, especially for effectiveness: “Everyone’s different. I’m not here to judge or dispute the experience that is real for you, but to use my experience and logical conclusions to say there is another way to look at it, if they so choose. 
I know of many cases of successful reparative therapy and heaps more ex-homosexuals who have reclaimed their lives through Jesus, and I know many detransitioners who have decided to leave transgenderism, and feel much better for it, including me.” I tried detransition once due to social pressure, and I am never trying it again. I can’t see this matter any way other than transition for myself. My final word: detransition stories can be far more complicated than tabloid headlines would have you believe, and they are sometimes distorted and abused. Peter is a pseudonym, and we agreed to disagree.
https://danapham-au.medium.com/understanding-gender-detransition-98768223a800
['Dana Pham']
2019-07-30 21:05:02.715000+00:00
['LGBTQ', 'Mental Health', 'Psychology', 'Transgender', 'Detransition']
What Nobody Tells You About Your Worthiness
Let’s talk about inherent worthiness for a sec. “You are inherently worthy.” What does that statement make you feel? Like, really feel? For years, this statement made me feel…kind of good? (I guess). But there was always a question quick on its heels: Then why don’t I feel like it? The answer: Because I didn’t actually believe it. How could I? I’ve spent my life believing my worthiness is something to be proved through good performance. And even deeper, I had a hunch there was something inherently wrong with me — literally the opposite of inherent worthiness. All the more reason to constantly prove my worthiness so folks won’t be tempted to pull back the curtain and see the wounded child behind the wizard I made them think I was. Someone telling me I am inherently worthy wasn’t enough to rewrite a lifetime of the opposite belief. No matter how often I hear it and no matter how hard I try to believe their words, I never do. Who can relate? We are legion, those of us with hearts convinced of our unworthiness. But rest assured, this is just a chapter of the story of us coming back to our light. There’s a way to feel worthy again. It both requires nothing and everything from you. What blocks you from experiencing your worthiness. Whenever I sink into the feeling of my inherent unworthiness, I go to the darkest place inside of me. A dragon lives in this place and it’s terrifying. Recently, though, I realized this dragon was the manifestation of my power turned inward on me because I didn’t believe it was safe to experience my power in the outer world. Once I realized this, I got curious about it and started listening to what it had to say. My mistake, because it began asking me some very hard questions. As you read them, know the dragon inside of you is asking you the same ones: How many more moments of your one life do you want to spend feeling unworthy? How many more moments do you want to spend comparing yourself to others? How many more moments do you want to spend proving your worth instead of living your life? None. None more moments. “So are you going to count yourself worthy, or not?” asked the dragon, its voice echoing deep into the dark caverns of my heart. After much consideration, I replied, “I’ll have to get back to you on that.” Wah-wah. But at least I was honest. There were too many reasons not to count myself worthy. For instance… What about the terrible things I’ve done? If I count myself worthy despite those, isn’t that letting myself off too easy? Doesn’t that give me more license to do terrible things if I’m worthy no matter what? What about that broken part of me I can never seem to fix? Don’t I need to do more work to fix that broken part? How can I be worthy and broken? Counting myself worthy seems like spiritual bypassing or living in a fantasy. What if people don’t like who I am after I count myself worthy? What if I end up even more lonely after counting myself unconditionally worthy? If I count myself worthy, it means I don’t have to play society’s worthiness games — the games that make up the foundation of our culture. Who am I if I am not trying to prove my worth or make myself more likable and palatable to others? What if people don’t like the new me — the me after I’ve stepped out of the matrix? What if I find myself more lonely than before? The dragon answered these rebuttals with the following: What about the terrible things I’ve done? A big part of your transition from surviving to thriving is forgiving yourself for what you’ve had to do in survival mode.
Forgive yourself for what you’ve done when you believed you were unworthy. People who believe they are unworthy do things unworthy of their true character. Count yourself worthy, so you can break the cycle and do things that are worthy of your true character. What about that broken part of me I can never seem to fix? The only way to experience your wholeness is to count yourself worthy even in your imperfection. And at any rate — realizing you are already whole is what triggers your parts to begin healing themselves. Counting yourself worthy even though you feel broken is the point: You do not have to be fixed in order to be worthy. You just have to be you. What if people don’t like who I am after I count myself worthy? Ah, here we are. The heart of it. You’ve been taught it isn’t okay to be you, and the more you you are, the more abandoned you’ll be. The real tragedy is that you had to believe them in order to survive, escape from crippling shame, get your needs met, and experience what sliver of love you could. It’s natural that you are afraid of counting yourself worthy in a world that has told you it isn’t safe to do so. But in order to thrive, you must be brave enough to count yourself worthy; otherwise, you’ll stay stuck surviving for the rest of your life. It’s a risk to count yourself worthy no matter what, but the biggest thing that blocks you from thriving is your commitment to the perceived safety of survival mode. Count yourself worthy, and thrive. Experiencing your worthiness requires everything and nothing. Experiencing my worthiness requires nothing from me in the sense that there is nothing I need to prove. There is nothing I need to “keep up.” I can just be my natural self in each moment. Yet experiencing my worthiness requires everything from me in the sense that I must step out of the home I’ve made for myself in survival mode. I must take the risk to drop the armor of my defenses and learn what it means to embody my essence. I must be brave enough to count myself worthy, just because I am me, and say goodbye to everything that does not serve this truth. I must be brave enough to thrive. How much more comforting it would be to stay in survival mode, playing the same old worthiness games, safe behind the armor of my defense mechanisms. But we have one life to live. How many more moments will you live in survival mode when you could count yourself worthy and thrive right now?
https://medium.com/real-1-0/what-nobody-tells-you-about-your-worthiness-4a090e923d43
['Jordin James']
2020-11-26 17:22:56.972000+00:00
['Self', 'Spirituality', 'Psychology', 'Mental Health', 'Advice']
3 Levels of Prototyping and When to Use Them
Prototypes are created to save time, communicate better, and focus on how the product will actually work. Prototypes are often created early on and used for user testing, or built in code to assess the feasibility of a technology. Both are extremely important parts of the product development process. They help with understanding user flows, feeling out interactions, communicating the desired experience to the broader team, raising money, and more. “If a picture is worth a thousand words, a prototype is worth a thousand meetings” — IDEO Level 1: Click-Through Prototypes The prototype shown above is composed of around 25 images that are linked together with invisible buttons that you can tap on. You can see that some screens slide in from the side or bottom and you can scroll with fixed navigation. These are basic functionalities that help mimic the feel of a truly mobile experience. Many design programs such as Sketch, Figma, and XD allow you to build click-through prototypes right in their apps. InVision is a popular online tool that allows you to create and share these prototypes with the world, and there is even a tool called POP that allows you to make prototypes from drawings on paper. Pros — Very quick — Easy to create — Easily shared — Free tools available Cons — Limited interactions — Static images only — Can’t access device inputs like camera and keyboard — No logic — No gestures — Can become hard to maintain Why create a click-through prototype? Even though click-through prototypes have limitations, they still serve a vital role within the design process. I like to use click-through prototypes early on in the design process to find answers to the early questions. I will often prototype out several takes of an experience to see how the content fits and feels on a mobile device, or how I can break it up into steps or screens. These prototypes are great for exploring early concepts, user testing, getting buy-in from team members, and communicating an overall strategy. Even at this low-fidelity state, a hands-on experience is far easier to understand than written notes or several slides in a pitch deck. With click-through prototypes, it is common to work at various levels of fidelity. Anything from pen and paper all the way through high-fidelity screens is fine to use. The main purpose is to understand the flow and get a feel for how the mobile application connects all the parts.
https://medium.com/swiftkickmobile/3-levels-of-prototyping-and-when-to-use-them-735f17bf84e2
['Andrew Acree']
2020-08-10 16:22:25.703000+00:00
['Mobile App Development', 'Mobile Apps', 'Prototyping', 'Design', 'UX']
Why Pandas itertuples() Is Faster Than iterrows() and How To Make It Even Faster
Introduction In this article, I will explain why pandas’ itertuples() function is faster than iterrows(). More importantly, I will share the tools and techniques I used to uncover the source of the bottleneck in iterrows(). By the end of this article, you will be equipped with the basic tools to profile and optimize your Python code. The code to reproduce the results described in this article is available here. I assume the reader has a decent amount of experience writing Python code for production use. Motivation Imagine you are in this scenario: You are a data scientist tasked with building a web API to classify whether a picture contains a cat, given a batch of images. You decide to use Django to build the API component and, to keep things simple, embed the image classifier code in the same codebase too. You spend a couple of weeks working on this project only to find that your web app is too slow for production use. You consult your colleague, a software engineer, for advice. That colleague tells you that Python is slow and that for anything API-related, Go is the tool of choice. Do you rewrite everything in Go (including learning a new web framework), or do you try to systematically identify what is causing your Python code to run slowly? I’ve seen many data scientists favour the former option despite a very steep learning curve because they do not know how to troubleshoot their code’s running time. I hope this article will change that and will stop people from needlessly abandoning Python. Problem Statement To make things more concrete, we will use this scenario as a running example for the rest of this article: You’d like to populate the content of a container based on the content of a dataframe. For simplicity, let the container be a dictionary keeping track of the count of observations in the dataframe. For example, if this is the dataframe you are given: Figure 1: Sample dataframe then the content of the dictionary will look like this: Figure 2: Sample output The dictionary’s key can be a tuple of (col1, col2), or it can be another dictionary where the first key is col1 and the second key is col2. The exact implementation details don’t matter. The point here is that you want a dictionary that tracks the count of all possible pairs in col1 and col2. Solution Iterrows() Solution Here’s what an iterrows() solution would look like given the problem statement described in the preceding section: Figure 3: Solution using iterrows() big_df is a dataframe whose content is similar to Figure 1 except that it has 3 million rows instead of 5. On my machine, this solution took almost 12 minutes to execute. Itertuples() Solution Here’s what an itertuples() solution would look like: Figure 4: Solution using itertuples() This solution only took 8.68 seconds to execute, which is about 83x faster than the iterrows() solution. Analysis So why is itertuples() so much faster than iterrows()? The starting point for understanding the difference in speed is to run these solutions through a profiler. A profiler is a tool that executes a given piece of code while keeping track of the number of times each function is called and its execution time. That way, you can start your optimization process by focussing your attention on the function(s) that consume the most time. Python comes with a built-in profiler that can be conveniently called from a Jupyter notebook using the %%prun cell magic.
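The code in Figures 3 and 4 appears as images in the original post and is not reproduced in the text above. Here is a minimal sketch of what the two solutions plausibly look like; the column names col1 and col2 come from the problem statement, while the sample data and the function names are mine, not the author's.

```python
from collections import defaultdict

import pandas as pd

# Hypothetical stand-in for big_df: 3 million rows with two categorical columns.
big_df = pd.DataFrame({
    "col1": ["a", "b", "a"] * 1_000_000,
    "col2": ["x", "x", "y"] * 1_000_000,
})

def count_pairs_iterrows(df: pd.DataFrame) -> dict:
    """Roughly Figure 3: count (col1, col2) pairs by iterating with iterrows()."""
    counts = defaultdict(int)
    for _, row in df.iterrows():  # each row is materialised as a Series
        counts[(row["col1"], row["col2"])] += 1
    return dict(counts)

def count_pairs_itertuples(df: pd.DataFrame) -> dict:
    """Roughly Figure 4: the same count, iterating with itertuples()."""
    counts = defaultdict(int)
    for row in df.itertuples(index=False):  # each row is a namedtuple
        counts[(row.col1, row.col2)] += 1
    return dict(counts)
```

To reproduce the profiles discussed next, the %%prun cell magic can sit at the top of a Jupyter cell that calls one of these functions on a 1,000-row slice, for example a cell containing `%%prun -l 10` followed by `count_pairs_iterrows(big_df.head(1000))`.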
Let’s reduce big_df to just 1,000 rows and look at the top 10 functions that took the most total time to execute under each solution. Here are the results: Figure 5: Top 10 functions in the itertuples() solution with the longest total execution time Figure 6: Top 10 functions in the iterrows() solution with the longest execution time There’s a lot of information to unpack here, so for brevity I will focus on the parts that are relevant to our problem, starting with Figure 5. I encourage the reader to read the profile module’s documentation to understand what the rest of the output means. According to Figure 5, the itertuples() solution made 3,935 function calls in 0.003 seconds to process 1,000 rows. The function that took up the most execution time was _make, which was called 1,000 times, consuming 0.001 seconds of the execution time. This function belongs to the collections module and is defined here. _make just creates a tuple out of an iterable, and since we have 1,000 rows, it makes sense that this function gets called 1,000 times (the iterable in each call being a row in our dataframe). Noting that the total time this solution took is 0.003 seconds and the rest of the functions took 0 seconds, let’s proceed to analyzing the output in Figure 6. Figure 6 shows that the iterrows() solution made 295,280 function calls in 0.254 seconds. Compared to the itertuples() solution, all top 10 functions in the iterrows() solution have non-zero tottime values. Moreover, the actual call to iterrows() is not even in the list of top 10 functions that took the longest to execute. In contrast, the call to itertuples() in the itertuples() solution is ranked at position 7 in Figure 5. This suggests that there is a lot of overhead associated with the call to iterrows(). Looking at the list of functions being called, we see that this overhead pertains to type-checking code, e.g. is_instance and _check in the first and second rows respectively. You can verify that this is the case by manually stepping through an execution of iterrows() using a debugger. So there you have it. The reason iterrows() is slower than itertuples() is that iterrows() does a lot of type checks over the lifetime of its call. Now let’s see what we can do with this insight. Application: Building A Faster Solution Suppose we didn’t know the function itertuples() exists. What can we do to improve the row iteration performance? Well, in the preceding section we identified that the bottleneck is due to excessive type checking, so a good first attempt at a solution is to create a data structure that does not do type checks. Here’s an example: Figure 7: An attempt to iterate faster than iterrows() Line 3 of Figure 7 shows that we create our rows to iterate over by simply zipping the relevant columns (a runnable sketch of this approach appears below). This solution only took 5 seconds to execute over 3 million rows, which is almost twice as fast as the itertuples() solution. Let’s call our solution the custom solution and profile it to see if we can identify the source of the speedup. Here are the top 10 functions that took the most time to execute in our custom solution on a dataframe of 1,000 rows: Figure 8: Top 10 functions in the custom solution with the longest execution time What is striking about Figure 8 is that it shows the custom solution only made 233 function calls in 0.002 seconds. This is surprising to me, since I expected at least 1,000 calls given that we are still iterating over 1,000 rows.
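Since Figure 7 is also an image, here is a sketch of the zip-based custom solution described above, under the same assumptions about column names and helper names as the earlier sketch.

```python
from collections import defaultdict

import pandas as pd

def count_pairs_zip(df: pd.DataFrame) -> dict:
    """Roughly Figure 7: iterate over plain tuples built by zipping the columns."""
    counts = defaultdict(int)
    rows = zip(df["col1"], df["col2"])  # no per-row Series or namedtuple construction
    for col1, col2 in rows:
        counts[(col1, col2)] += 1
    return dict(counts)
```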
Let’s see which function is called the most by sorting the ncalls column in descending order: Figure 9: Same thing as Figure 8 except sorted by ncalls Figure 9 shows that the most called function is isinstance, which was called only 39 times. This still does not provide any useful information for figuring out how the iteration was done with a total of fewer than 1,000 function calls. Another useful profiling technique is to profile the lines of our code, i.e. to see how many times each line is executed and how long it took. Jupyter has a line magic called %lprun which comes with the line_profiler package. Here’s what the line profile looks like for our custom solution: Figure 10: Line profile of the custom solution (Time column is in microseconds) As expected, we see that the iteration does happen 1,000 times (line 12). This suggests that iterating over n rows does not necessarily mean having to call a function n times. So the next logical question to ask is: who is calling _make in Figure 5 1,000 times, and is there any way we can avoid or reduce those calls? Fortunately for us, Python comes with a pstats module that allows us to dig deeper into the output of a function profile run. I refer the reader to the code accompanying this article for details on how to get this information. Anyway, here are all the functions that called _make: Figure 11: The functions that called _make In this case, the output is not useful at all (<string>:1(<module>) refers to the top-level code for the “script” passed to the profiler, which is the content of the entire cell implementing the itertuples() solution). Another approach to figuring out who is calling _make is to insert a breakpoint inside _make and then execute the solution inside a debugger. When the breakpoint is hit, we can trace the frames to see the chain of calls that led to _make. Doing so reveals that the 1,000 calls to _make originate from the call to itertuples() itself, as shown here. The following figure shows the most interesting part of itertuples(): Figure 12: The origin of the 1,000 calls to _make Figure 12 shows that there are 1,000 calls to _make because line 927 returns a map that basically calls _make for each row in the dataframe. The interesting part of this snippet is that the call to map is nested under an if statement where one of the conditions is that the name parameter of itertuples() must not be None. If it is None, then itertuples() returns an iterator over the zipped columns of the dataframe … which is the same thing our custom solution does! The documentation of itertuples() says that if the name parameter is a string, then it will return named tuples with the given name. If name is None, then it will return regular tuples instead. Our code will work just as well regardless of itertuples()’s return type. So let’s prefer regular tuples over named tuples so that we can skip the 1,000 calls to _make. This is what happens when we set the name parameter in itertuples() to None: Figure 13: Iterating over 3 million rows with itertuples(name=None) Figure 14: Function profile of iterating over 1,000 rows with itertuples(name=None) Figure 15: Line profile of iterating over 1,000 rows with itertuples(name=None) The itertuples(name=None) solution is competitive with our custom solution. It took 5.18 seconds to iterate over 3 million rows, whereas our custom solution only took 4.92 seconds.
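For completeness, here is what the final variant looks like as code: passing name=None makes itertuples() yield plain tuples, so the per-row _make calls disappear. The helper name is mine, and it again uses the col1/col2 columns from the sketches above.

```python
from collections import defaultdict

import pandas as pd

def count_pairs_itertuples_plain(df: pd.DataFrame) -> dict:
    """Roughly Figure 13: plain tuples via itertuples(name=None) skip the _make calls."""
    counts = defaultdict(int)
    for col1, col2 in df.itertuples(index=False, name=None):
        counts[(col1, col2)] += 1
    return dict(counts)
```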
Conclusion This article has shown the reader how to use a Jupyter Notebook to: figure out which function calls are taking the most time to execute, and figure out which lines in a code snippet are taking the most time to execute. It has also illustrated the need to be adept at using a debugger to step through code and at reading documentation to identify optimization opportunities. I hope you will consider applying the techniques described in this article the next time you face “slow” Python code. Let me know in the comments if you have any questions. References Documentation On Python’s profile Module
https://medium.com/swlh/why-pandas-itertuples-is-faster-than-iterrows-and-how-to-make-it-even-faster-bc50c0edd30d
[]
2019-10-20 17:34:32.113000+00:00
['Jupyter Notebook', 'Pandas', 'Programming', 'Data Science', 'Python']
How to Create Your First REST API With Deno
Create an API With Deno What better way to start playing with Deno than by creating our first REST API? In this little tutorial, I am going to create a very simple array of movies and five methods to list, search, create, update, and delete its elements. The first step is to create an index file, in this case app.ts, and load Oak, a middleware framework for Deno’s HTTP server. Oak is inspired by Koa, a middleware framework for Node.js. (It seems the pun continues.) In the end, it helps us make writing APIs easier. It is a fairly simple example that is practically self-explanatory. The server will listen on port 4000 and load the routes defined in the router.ts file that we will see right after. In the file ./api/controller.ts, I will put the definition of the functions for the different endpoints. It’s time to define the routes in the router.ts file. Here we will also import the Oak router and the definitions that we will create in controller.ts. We instantiate a router and define the five routes mentioned above: getMovies — returns all the movies; getMovie — returns a movie given its ID; createMovie — creates a new movie; updateMovie — updates an existing movie; deleteMovie — deletes a movie. Now it’s time to create the controller.ts file to define the API methods and the test database. interface Movie { id: string; title: string; rating: number; } Then, the movies array. And now, the different methods, starting with the one that lists all the movies. It is really that simple: /** * Returns all the movies in the database */ const getMovies = ({ response }: { response: any }) => { response.body = movies; }; Let’s go to the next one, the one in charge of returning a movie from an ID that we can pass as a parameter. If we try to launch the request with Postman, we will see that it works. Next up is the createMovie method to create a movie. The code is the following: If we launch the test request, the server will reply with a message containing the recently created movie data. If we then launch the request to return all the movies, we will see how the new one appears correctly. Next is the updateMovie method to update a movie. The code is: We launch the corresponding PUT request with Postman, and we will get the correct response. And finally, we only have the deleteMovie method, which deletes a movie given its id. What I do is use filter() to update the array, keeping all the movies whose id is different from the one sent. We try with Postman… and sure enough, the movie with id = 1 has disappeared. You can download all the source code for this example in this repository on my GitHub.
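The article's embedded code snippets are not included in the text above, so here is a minimal sketch of what the three files could look like. The route paths (/movies, /movies/:id), the sample data, and the omission of createMovie/updateMovie are my simplifications, not the author's exact code; the Oak request-body API has changed across versions, so check the version you pin.

```typescript
// app.ts: minimal sketch of the entry point (not the author's exact code)
import { Application } from "https://deno.land/x/oak/mod.ts";
import router from "./router.ts";

const app = new Application();
app.use(router.routes());
app.use(router.allowedMethods());

console.log("Listening on http://localhost:4000");
await app.listen({ port: 4000 });
```

```typescript
// router.ts: wires routes to the controller functions
import { Router } from "https://deno.land/x/oak/mod.ts";
import { getMovies, getMovie, deleteMovie } from "./api/controller.ts";

const router = new Router();
router
  .get("/movies", getMovies)
  .get("/movies/:id", getMovie)
  .delete("/movies/:id", deleteMovie);
// createMovie (POST /movies) and updateMovie (PUT /movies/:id) would be wired
// here in the same way once defined in the controller.

export default router;
```

```typescript
// api/controller.ts: in-memory "database" and a few handlers
interface Movie {
  id: string;
  title: string;
  rating: number;
}

let movies: Movie[] = [
  { id: "1", title: "The Matrix", rating: 5 },
  { id: "2", title: "Inception", rating: 4 },
];

// Returns all the movies in the database
const getMovies = ({ response }: { response: any }) => {
  response.body = movies;
};

// Returns a movie given its ID, or a 404 if it does not exist
const getMovie = ({ params, response }: { params: { id: string }; response: any }) => {
  const movie = movies.find((m) => m.id === params.id);
  if (movie) {
    response.status = 200;
    response.body = movie;
  } else {
    response.status = 404;
    response.body = { message: "Movie not found" };
  }
};

// Deletes a movie by keeping every movie whose id differs from the one sent
const deleteMovie = ({ params, response }: { params: { id: string }; response: any }) => {
  movies = movies.filter((m) => m.id !== params.id);
  response.body = { message: "Movie deleted" };
};

export { getMovies, getMovie, deleteMovie };
```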
https://medium.com/better-programming/how-to-create-your-first-rest-api-with-deno-296330832090
['Marc Pàmpols']
2020-05-26 14:07:35.353000+00:00
['API', 'Typescript', 'Deno']
Breaking down Google Cloud IAP
But not literally Recently I’ve had the problem of securing a custom ETL tool for a client project built using a combination of AppEngine and Kubernetes applications on Google Cloud Platform (GCP). I need to control access to these apps from external networks, and ideally integrate with an existing IAM policy. Enter IAP! Identity Aware Proxy (IAP) is GCP’s offering to lock down applications that would otherwise be publicly exposed on the cloud. The sell is pretty sweet, just turn it on and within minutes you get a free wall of Google’s good stuff surrounding your shamefully exposed app. The security guy stops sweating and everyone is happy, right? Well… kinda: It does what the box says, however there are some design decisions that will leave you a little bit stumped. In this post I’ll cover how it works behind the scenes, and how you can integrate with a micro-services architecture built around service accounts. I’ll provide a summary of the good and the bad at the end if you’re after a TLDR. Otherwise, let’s see how it works, and more importantly, how you can use it. Build the Wall! Turning on IAP is pretty simple. In fact, let’s do it right now. First things first, I’ve deployed a basic flask app that will print out the user’s identity. For testing purposes I’ve done this in App Engine Standard, as it has the users API baked in to validate identity. IAP is available in Security > Identity-Aware Proxy. Once you get here you’re greeted with a screen like this: It tells us that we need to configure our OAuth consent screen before I can protect my app. This screen is presented to unauthenticated users when they hit a secured endpoint, and provides information like your company name, homepage, and privacy policy. Unfortunately there is no way to automate this configuration yet which means that an IAP-centric app cannot use a fully-automated deployment. Anyway, once we’ve configured that then we can turn IAP on for all App Engine apps in the project. This will instantly lock down all AppEngine apps completely. Including the service URL below will allow authorised users to access that URL. After this when I hit my App Engine endpoint I’m prompted to log in and: Oops — my account doesn’t have access to the App Engine resource yet. On the IAP page under access you’ll find the add button, which will let you give IAP permissions to users at the project level. Behind the scenes this is just applying the “IAP-Secured Web App User” IAM role to the account you provide. This means that we can apply these permissions to already existing groups or roles. Once that change has propagated we get the result: It worked! But how does it work? IAP is the central pillar of BeyondCorp, enabling remote authentication to online-services without a VPN. It can be enabled for App Engine or a Cloud Load Balancer (supporting GCE or GKE instances), and configured via IAM to allow fine grained access-control. Google says it can do authorisation as well, however IAP permissions are applied at the project level. If you turn on IAP for two applications, and grant a user IAP access, they will be able to access both. If you want to get more low level, for example at the application or even endpoint level, you’ll have to build some scaffolding around what IAP gives you. At a high level, IAP has two main layers: Resource Authorisation — an Oauth2 flow to generate a signed access token. IAP will use this token to validate identity. App Validation — verifying a user’s identity using signed headers generated by IAP. 
This provides an additional layer of security if someone manages to bypass IAP (or if you forget to turn it on ;) ) Depending on what you’re protecting and the language you’ve written it in, you’ll have various levels of support to implement these two layers. The simplest use case is authorising as a user, as demonstrated in the example above. This process can be done programmatically (see here); however, if you’re thinking about automating sign-in then you should probably be using service accounts. Unfortunately, automating this process is more complicated; if your app isn’t written in Java, C#, Python, or PHP* then things are going to get cURLy. * At the time of writing these are the only available languages in the docs. Bizarrely, this list doesn’t include Node.js. Although Cloud Functions now supports Python, adding IAP tokens to a Node 6 app is a nightmare (see this handy stack overflow thread). With Node 8 you should in theory be able to leverage the google-auth-library, although the example requires a Service Account key file instead of leveraging the application credentials. Resource Authorisation In order to pass IAP border control and make a request against a protected app, we need to generate an OpenID token signed by Google and add it to our request as a header. To do that, we need to generate a JWT signed by the service account that our app is using. Fortunately, the examples in the docs implement this logic for you, so you don’t need to worry about it — unless you’re writing Node 6. Unfortunately, I had to implement this in Node 6. We used the following approach, based on the above stack overflow link. Note that I had mixed results with this approach — if you get the ambiguous “401: Error Code 13” then the only advice I have for you is to start from scratch on a new project. I wish I was joking. Anyway, here’s what that snippet is actually doing: Get a service account access token from the instance metadata store Create a JWT header and claim set for our OpenID request Sign that JWT via the Sign Blob API, using the service account access token to authorise the call Use the signed JWT to get an OpenID token Attach that OpenID token to our request as a header Frankly, if I were to do it again I’d just do it in Python (or one of the other supported languages) and use Google’s sample code (a sketch of that approach appears at the end of this post). Once you’ve generated a token, you can make an authenticated request by adding it to the “Bearer” header. A request made to an IAP-protected endpoint will be redirected to the IAP gateway, where the token is decoded and validated. If valid, IAP will then create or replace the x-goog headers, which can be used by the app to validate identity. App Validation In order to prevent nefarious parties (or l337 h4xors as they’re known in the industry) from accessing your app, IAP uses two layers of security: The first layer requires a token as generated above, which is used to validate the user’s identity. The second stage involves IAP signing this identity into a second token (the signed JWT assertion header) using keys managed by the IAP service. This token can then be verified against Google’s public keys, which is what makes IAP so secure: spoofing the token would require knowing the private keys used to sign the IAP assertion. As long as your app performs this validation, it is locked down even if IAP isn’t turned on. A Quick Note on GKE Before I wrap up, a quick note on securing a GKE app with IAP. You’ll need a DNS name and an Ingress load balancer, as covered in the tutorial here.
However, there’s a chicken-and-egg problem with this deployment that’s a pain to deal with: in order to decode a JWT inside a GKE app you need to know the backend service ID of that app. This means that if you want to decode the IAP header inside your app and get the requester’s account, you need the load balancer to already exist. Since that isn’t possible at deploy time, you are left with three options: Redeploy the app with the correct configuration once the load balancer is created Use dynamic config, for example by using a config map linked to a file Look up the backend service ID using the Instance Metadata server at runtime, also known as the “O’Reilly” option. Wrapping Up IAP provides a fast and secure way to lock down your apps; however, there are a few caveats: Not being able to automatically create an OAuth2 consent screen will prevent any deployments that use IAP from being fully automated. Access is provisioned per account for all IAP-protected services at the project level. Access control at the application level will require additional scaffolding on top of what IAP provides. The support for token generation in JavaScript needs a lot of work. Using this with Kubernetes will complicate deployment requirements. That being said, it still passes with flying colours. Authentication is hard, and having it as a fully managed service more than makes up for the overhead in deployment. Some of these problems are also minor fixes, and will likely be addressed as the product matures. If you’re searching for a quick and easy way to lock down your services, look no further. A Note on Cloud Endpoints If you do need something with more fine-grained access control and better language support, have a look at Cloud Endpoints. It doesn’t use the same two-layer model as IAP, and you will have to manage Swagger specs at the application level, however it does offer a more extensible service. https://www.servian.com/gcp/
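As promised in the Resource Authorisation section, here is a sketch of the Python route for calling an IAP-protected endpoint as a service account. It assumes the google-auth and requests packages, application default credentials (for example, running as a service account on GCE/GKE or with a key file exported via GOOGLE_APPLICATION_CREDENTIALS), and that CLIENT_ID is the OAuth client ID of the IAP-protected resource. Note that fetch_id_token comes from newer google-auth releases; the samples current at the time of the original post used a slightly longer JWT-signing flow, so treat this as one possible approach rather than Google's exact sample.

```python
import requests
import google.auth.transport.requests
import google.oauth2.id_token

# Placeholder values: the OAuth client ID shown on the IAP console page for the
# protected backend, and the URL of the protected app.
CLIENT_ID = "1234567890-example.apps.googleusercontent.com"
URL = "https://my-protected-app.example.com/"

def call_iap_protected_endpoint(url: str, audience: str) -> requests.Response:
    """Fetch an OpenID Connect token for the IAP audience and call the endpoint."""
    auth_request = google.auth.transport.requests.Request()
    # Mints an ID token for the given audience using application default credentials.
    open_id_token = google.oauth2.id_token.fetch_id_token(auth_request, audience)
    # Attach the token as a Bearer header so IAP can validate the caller's identity.
    return requests.get(url, headers={"Authorization": f"Bearer {open_id_token}"})

if __name__ == "__main__":
    resp = call_iap_protected_endpoint(URL, CLIENT_ID)
    print(resp.status_code, resp.text[:200])
```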
https://medium.com/weareservian/breaking-down-google-cloud-iap-e3b23a8bddc7
['Mayan Salama']
2019-07-08 04:23:24.057000+00:00
['Identity Aware Proxy', 'Kubernetes', 'Microservices', 'Google Cloud Platform', 'Authentication']
Introduction to Technology Hits
Background I have been a technologist for almost four decades. Over the years, I have developed a large group of readers across various academic, industry, and public platforms. My readers want to follow my technology-related content from a single, curated source. These loyal followers asked me to curate compelling, insightful, and engaging content and disseminate it in a logical and digestible way. The best way to achieve this goal is to turn the content into stories and make them available via a publication. Medium is an ideal platform for this purpose. I thought the best way would be to establish a publication focusing on all aspects of technology and covering various reader requirements. I am also connected with thousands of writers. For example, one of my publications on Medium supports around 6,000 writers. I can guess what you are thinking. Yes, there are thousands of publications about technology. You may ask, how would my publication be different and add value to readers? I designed this publication by considering the requirements of my readers. By analyzing their needs and wants using a Design Thinking method, I classified my audience under seven categories. Information management, knowledge transfer, and content harvesting are my special interests. I am an inventor and innovator in these areas with industry credentials. I am announcing this new publication today, coinciding with the fifth milestone of my major publication on Medium. Scope Seven personas in the technology domain define the scope of this publication. They are: technicians, philosophers, entrepreneurs, entertainers, artists, storytellers, and futuristic leaders. Since Medium allows a maximum of seven tabs in the publication interface, I added a tab for each persona so that stories can be conveniently discovered and consumed using tags. I will provide a comprehensive guide to the effective use of the publication. The publication banner depicts the implementation of the functions based on the seven major personas. Image: screen capture of the publication banner by the author My aim is to harvest and publish stories within the scope of these seven logical functions. These seven functions can cover almost anyone who has an interest in any aspect of technology. Let me briefly explain the coverage of each domain and give you an idea of who can contribute to this unique publication and with what type of stories. Technical Stories in this domain include the technical, engineering, and scientific aspects of technology. For example, the definition and description of a technology, architecture, design constructs, security, tools, operations, processes, and procedures can be part of this section. How-to stories fit well into this domain. The use of technology in all scientific disciplines such as medicine, biotechnology, neuroscience, engineering, environment, climate, and so on can be part of the technical function. Technology professionals such as data scientists, enterprise architects, solution designers, technical specialists, software developers, and system administrators can share their stories in this section. Philosophical This function is dedicated to philosophers and deep thinkers. Ideas reflecting the pros and cons of technology constructs are well suited to this domain. There are many readers interested in the philosophical aspect of technology. For example, ethics for artificial intelligence and robotics is a popular topic. There are undergraduate and postgraduate degrees offering courses on the philosophical aspects of technology.
Students of these degrees are welcome to submit their academic yet engaging stories. Entertaining Entertainers use technology widely. The contributors and consumers of this function can be game enthusiasts. Computer games are widespread globally and constitute an extensive industry. Service providers use technology to entertain their customers. Social media stories, especially podcasts and YouTube, can be part of this section. Yes, you can introduce your YouTube channel and podcasts in a story. Entrepreneurial This function serves entrepreneurs in startup companies. This section can be used by technology leaders who plan digital ventures. All business, economic, and financial aspects of technology can be part of this function. Artistic Technology and art are interrelated. Many artists use technology to express their artistic thoughts and feelings. You may have heard about famous digital poems. Digital painting and digital music are widespread. You can submit stories about the use of technology for all art forms, including design work, in this section. I want to cover stories on how technology impacts poets, musicians, and painters. Personal This publication is home to storytellers writing about various aspects of technology. You can share your personal experience with technology tools, processes, and services. You can share personal stories reflecting your thoughts and feelings about technological devices such as smartphones, smartwatches, security devices, and the various IoT devices used for cooking, gardening, and other purposes. Futuristic This section is for thought leaders, inventors, innovators, and strategists. You can share your wild ideas on what the future should look like from a technological perspective. Another great topic is the transformational effect of technology on human life. Ideas for the next generations can be discussed in stories submitted to this section. In short, anyone writing about any aspect of technology can make this publication home for their stories. Benefits As an Editor in Chief with a strong technology background and publishing experience, I will orchestrate editing and publishing activities with the help of several experienced editors. As a contributor to this publication, you can expect us to value your content and support you in achieving your writing goals in the technology domain. We will leverage the 50K followers of ILLUMINATION to amplify your messages and showcase your outstanding stories to discerning readers in our special collection called ILLUMINATION-Curated. As a unique value, we will let you transfer your stories among three publications with ease. This flexibility can give your stories maximum visibility to the interested readers of the other two large publications. In addition, we will allow you to publish your old curated stories that were distributed to technology-related topics in the past. This opportunity can give your old curated stories a second life by introducing them to a new audience. You can also join our Slack group to collaborate with hundreds of other Medium writers contributing to ILLUMINATION and ILLUMINATION-Curated. We will create special sections and community clubs to support your writing goals. If you are interested in becoming a contributor, please send a request via this link. Alternatively, you can leave a brief comment on this story showing your interest in participating. I am excited about this initiative and look forward to collaborating with you. 
You are welcome to join my 100K+ mailing list to collaborate, enhance your network, and receive a technology newsletter reflecting my industry experience.
https://medium.com/technology-hits/introduction-to-technology-hits-7665b8d5e950
['Dr Mehmet Yildiz']
2020-12-14 15:19:51.134000+00:00
['Artificial Intelligence', 'Technology', 'Data Science', 'Entertainment', 'Writing']
24 Most Controversial Books of All Time
24 Most Controversial Books of All Time Readers.com infographic details most challenged/banned books of all time “What is freedom of expression? Without the freedom to offend, it ceases to exist.” Salman Rushdie, among many others, finds a book of his on this list of the 24 most controversial books of all time. There are a few conspicuous absentees (Joyce’s Ulysses, for example). Which books were you most surprised not to see?
https://medium.com/electric-literature/24-most-controversial-books-of-all-time-70e484941082
['Nicholas Politan']
2016-07-25 16:23:09.939000+00:00
['Writing', 'Free Speech', 'Infographic', 'Books']
Pictal Health — Purpose, Vision and Values
For the last few months I’ve been working on a new company to help patients organize and visualize their health stories — Pictal Health. I come from a human-centered design background, and in my past work I have often used design principles to provide creative constraints and help my team make good decisions over the course of a project. So in designing this new venture, I am trying to use similar principles and statements to speak clearly about what Pictal Health is trying to do, the impact we hope to make, and how we want to work. Below is Pictal Health’s purpose, vision and values, which the book Story Driven helped me develop. While the specific products or services we create may change over time, I hope these core statements will remain fairly consistent. So far they have helped me get clear about what I’m working on, make better use of my time, and make better decisions; I hope they also help others understand what I’m up to.
https://medium.com/pictal-health/pictal-health-purpose-vision-and-values-1d1dfa1007ec
['Katie Mccurdy']
2019-05-31 14:15:26.575000+00:00
['Healthcare', 'Startup', 'Design Process', 'Design']
7 Pieces of Terrible Writing Advice You Should Never Follow
Why Should You Trust Me? So far, I’ve said nothing unique. Every writing guru claims they have the secret sauce and every other guru doesn’t. What makes me different? Here’s your first clue. I’m not going to promise any of this will work exactly the way I say it will. Good advice is nothing more than a suggestion. Nobody can promise you anything because there are too many variables in life. Most terrible writing advice centers around some guarantee that if you do what you’re told, things will work out in some precise way. Of course, luck is involved when it comes to writing. But, there are ways to increase your odds of building an audience and having blog posts go viral. I have a bag of tricks, but I don’t know exactly what will happen after I hit publish. No one does. Anyone who’s promising you their “proven secrets to virality” is a charlatan. I offer useful strategies that tend to pay off in the long term because long-time scales are more predictable. I’m living proof of that. My blog posts have been read by millions, I’ve published two books with a third coming out this fall, and tens of thousands of people read my work on a monthly basis like clockwork. But I’ve also been writing for five years. Many, many, many people who write about writing edit out the part where they were stuck and frustrated and show you the “roadmap for success”, based on a starting point that’s not real — the point where they got traction instead of the very first time they wrote. I won’t do that. You’ll get unfiltered straight-to-the-point tips, the opposite of terrible writing advice. A great starting point for success is learning what not to do. Avoid these strategies at all costs.
https://medium.com/better-marketing/7-pieces-of-terrible-writing-advice-you-should-never-follow-f2153531aed5
['Ayodeji Awosika']
2019-09-18 02:05:50.088000+00:00
['Creative Writing', 'Content Marketing', 'Writing', 'Marketing', 'Writing Tips']
4 Biggest Myths About Anxiety Everyone Believes
2. You need to understand the origin of your anxiety One of the biggest misconceptions about anxiety is that it’s necessary to understand its origins in your life in order to deal with it effectively. For example, I had a client once who came to see me because she was having panic attacks anytime she drove on the freeway. She told me that she was convinced that the origins of her panic were in her childhood and her father’s habit of driving while intoxicated. And she hoped that by exploring these childhood memories together we would be able to free her from her panic attacks. Now, it’s not hard to see how a child might develop some significant driving anxiety as a result of being driven around by an intoxicated parent. So my client’s ideas about how to resolve her anxiety were understandable. But as I tried to explain over the course of a few sessions, the way out of her driving anxiety was going to have very little to do with her past and everything to do with her present. Because here’s the thing: The original cause of anxiety is rarely the maintaining cause. In my client’s case, it’s very possible that, as a result of her father’s drunk driving, she developed a habit of worrying a lot while driving. But when you think about it, her father wasn’t causing her driving anxiety now as a 45-year-old woman. What was causing her driving anxiety and panic now was the habit of worrying about her own anxiety while driving. At my client’s request, we spent weeks and weeks exploring every nuance of her past and memories about her father and his drinking and driving. And while there were some interesting tidbits to be gleaned, my client’s driving anxiety and panic persisted. No matter how much insight she got into the origins of her anxiety, the habit of worrying while driving persisted, and along with it, her panic attacks. And the reason was straightforward: While her father’s drunk driving may have been the initial cause or trigger for her driving anxiety, it was her habit of worrying and catastrophizing in the present that was maintaining it. This meant that we could explore her past until both of us were blue in the face, but until we took care of the habits in the present that were maintaining her anxiety, she would continue to have panic attacks while driving. If you really want to free yourself from anxiety, it’s your present, not your past, that holds the key. What’s more, the original cause or trigger for anxiety is not only unhelpful, most of the time it’s completely unnecessary for addressing anxiety in the present: Understanding why your mother didn’t love you as much as you wished she had won’t change the fact that you’re in the habit of worrying about what other people think — and as a result, experience a lot of social anxiety. Understanding how your learning disability as a teenager led to feelings of inadequacy won’t change the fact that you’re in the habit of putting yourself down with constant negative self-talk — and as a result, experience a lot of performance anxiety. Understanding that worrying about the future was a normal consequence of your traumatic childhood won’t change the fact that you’re in the habit of catastrophizing and worrying about the future now — and as a result, experiencing a lot of generalized anxiety. There’s nothing wrong with exploring your past and trying to understand how it’s shaped who you are today. But if you’re serious about feeling less anxious, you need to understand the habits that are maintaining it in the present and address those head-on.
https://medium.com/personal-growth/4-biggest-myths-about-anxiety-everyone-believes-222090ac841e
['Nick Wignall']
2020-12-19 19:56:49.064000+00:00
['Self', 'Psychology', 'Anxiety', 'Life', 'Mental Health']
As a Writer, You Need to Get Into Idea Mode
You did it again, didn’t you? You let yourself run out of ideas of things to write about. Every time this happens, you promise you won’t let it happen again. You put Idea Generation on your to-do list. You write it in big red letters and draw circles and arrows around it. You make it a Really Big Deal. But when it comes time to do it, it’s like trying to go to sleep so Santa Claus will come. You just sit there, staring at the screen. Writer’s block? Hell, you have thinker’s block. But here’s a talent I have developed; a sort of superpower. Instead of idea generation being this active task you have to accomplish, it becomes more passive. Which makes more sense if you think about it. After all, you’ve tried it the other way too many times. Okay, think of ideas. Go! It doesn’t work, does it? That’s not how ideas come to us. At least, that’s not how the really good ones come to us. They show up out of nowhere. While we sleep. In the shower. Driving down the road. Pretty much anytime, we are not prepared to capitalize on them. Why is that? I’ve decided that there is this tiny receptor in the back of our brains, a sort of box, if you will. An idea box. And when we are not thinking about writing, and especially when we are not trying to think of ideas to write about, that box just pops open. Like that old Jack-in-the-box, you annoyed everyone with when you were a child. Pop Goes the Weasel just started playing in your head, didn’t it? Sorry about that. It will go away. Eventually. But here’s the thing. With practice, you can open that box at will. It will take time and some positive reinforcement, but you can do it. Open that little box in the back of your brain and let ideas flow into it. Sounds crazy? Well, maybe, but if it works, who cares? Try this. Open up your favorite social media or news feed. I like Twitter for this as it gives me the most bang for my mental buck, but you do you. Scroll through and start scanning posts. Open up any that interest you, but don’t spend much time on any one. You’re not trying to find out who did what with that thing. You’re capturing ideas. As you scroll, just keep thinking about that idea box. Don’t look at each post and think, “Can I write about this?” Just keep scrolling and visualizing that open box in the back of your mind. It’s more associative than anything else. Each post, story, Tweet, whatever, is like one of those inkblots in the Rorschach test. What does this make you think of? There it is. An idea. Write it down. Make a couple of quick notes about it, because you won’t remember what the idea is later. They are very fleeting things, those ideas. Inkblots? Weren’t they a singing group in the ‘30s? No, wait, that was the Ink Spots. Why would I want to write about them? But don’t spend more than a few seconds on each one, a minute tops. Keep scrolling. The more you do it, the easier and faster they will come. Before you know it, you’ve come up with enough ideas to keep you fresh for a month. But don’t wait a month to do it again. No matter how brilliant the thought is right now, when you get ready to write, it may fade away. You will lose about half of these, so feed the beast often, at least once a week. Don’t have your computer, tablet, or phone handy? What are you, a caveman? That’s okay, maybe you are stuck somewhere that you can’t spend time scrolling through social media. Like your real job. Get a piece of paper and just look around the room. Scroll through every item in your field of view. A pencil cup? 
Seven Office Supplies That Should Be on Every Desk. Four Obsolete Items You Should Get Rid of Today. Scan from item to item and make sure that box is open. What does that thing make you wonder about? What’s the history behind this stuff? Why don’t we start using this thing instead of that thing? If you can’t come up with ten articles without moving from your chair, you’re not trying. But again, it takes practice, this idea mode. Maybe, to begin with, you do it for ten minutes once a day. As you get better at it, expand the time. One good weekly session should be enough to fuel your writing for a long time. And here’s the best part. Here is where it becomes a superpower. After a while, you won’t have to turn it on. You won’t have to open the box. It will stay open all the time. You won’t be able to stop the ideas from coming. And that is a very good thing. Now, if you will excuse me, I just thought of a great idea for my next article.
https://medium.com/write-i-must/as-a-writer-you-need-to-get-into-idea-mode-680845f02aa4
['Darryl Brooks']
2020-11-23 17:30:42.735000+00:00
['Self Improvement', 'Life Lessons', 'Writing', 'Ideas', 'Self-awareness']
Use C# And ML.NET Machine Learning To Predict Taxi Fares In New York
I’m using the awesome Rainbow CSV plugin for Visual Studio Code which is highlighting my CSV data file with these nice colors. There are a lot of columns with interesting information in this data file, but I will only be focusing on the following: Column 0: The data provider vendor ID Column 3: Number of passengers Column 4: Trip distance Column 5: The rate code (standard, JFK, Newark, …) Column 9: Payment type (credit card, cash, …) Column 10: Fare amount I’ll build a machine learning model in C# that will use columns 0, 3, 4, 5, and 9 as input, and use them to predict the taxi fare for every trip. Then I’ll compare the predicted fares with the actual taxi fares in column 10, and evaluate the accuracy of my model. And I will use NET Core to build my app. NET Core is really cool. It’s the multi-platform version of the NET framework and it runs flawlessly on Windows, OS/X, and Linux. I’m using the 3.0 preview on my Mac right now and haven’t touched my Windows 10 virtual machine in days. Here’s how to set up a new console project in NET Core: $ dotnet new console -o PricePrediction $ cd PricePrediction Next, I need to install the ML.NET NuGet package: $ dotnet add package Microsoft.ML Now I’m ready to add some classes. I’ll need one to hold a taxi trip, and one to hold my model’s predictions. I will modify the Program.cs file like this: The TaxiTrip class holds one single taxi trip. Note how each field is adorned with a Column attribute that tell the CSV data loading code which column to import data from. I’m also declaring a TaxiTripFarePrediction class which will hold a single fare prediction. Now I’m going to load the training data in memory: This code sets up a TextLoader to load the CSV data into memory. Note that all column data types are what you’d expect, except RateCode and PaymentType. These columns hold numeric values, but I’m loading then as string fields. The reason I’m doing this is because RateCode is an enumeration with the following values: 1 = standard 2 = JFK 3 = Newark 4 = Nassau 5 = negotiated 6 = group And PaymentType is defined as follows: 1 = Credit card 2 = Cash 3 = No charge 4 = Dispute 5 = Unknown 6 = Voided trip These actual numbers don’t mean anything in this context. And I certainly don’t want the machine learning model to start believing that a trip to Newark is three times as important as a standard fare. So converting these values to strings is a perfect trick to show the model that RateCode and PaymentType are just labels, and the underlying numbers don’t mean anything. With the TextLoader all set up, a single call to Load() is sufficient to load the entire data file in memory. I only have a single data file, so I am calling TrainTestSplit() to set up a training partition with 80% of the data and a test partition with the remaining 20% of the data. You often see this 80/20 split in data science, it’s a very common approach to train and test a model. Now I’m ready to start building the machine learning model: Machine learning models in ML.NET are built with pipelines, which are sequences of data-loading, transformation, and learning components. My pipeline has the following components: CopyColumns which copies the FareAmount column to a new column called Label. This Label column holds the actual taxi fare that the model has to predict. which copies the FareAmount column to a new column called Label. This Label column holds the actual taxi fare that the model has to predict. 
A group of three OneHotEncodings to perform one-hot encoding on the three columns that contain enumerative data: VendorId, RateCode, and PaymentType. This is a required step because machine learning models cannot handle enumerative data directly. Concatenate, which combines all input data columns into a single column called Features. This is a required step because ML.NET can only train on a single input column. AppendCacheCheckpoint, which caches all data in memory to speed up the training process. A final FastTree regression learner, which will train the model to make accurate predictions. The FastTreeRegressionTrainer is a very nice training algorithm that uses gradient boosting, a machine learning technique for regression problems. A gradient boosting algorithm builds up a collection of weak regression models. It starts out with a weak model that tries to predict the taxi fare. Then it adds a second model that attempts to correct the error in the first model. And then it adds a third model, and so on. The result is a fairly strong prediction model that is actually just an ensemble of weaker prediction models stacked on top of each other. With the pipeline fully assembled, I can train the model on the training partition with a call to Fit(). I now have a fully trained model. So now I need to load some validation data, predict the taxi fare for each trip, and calculate the accuracy of my model: This code calls Transform(…) to set up predictions for every single taxi trip in the test partition. The Evaluate(…) method then compares these predictions to the actual taxi fares and automatically calculates three very handy metrics for me: Rms: this is the root mean square error or RMSE value. It's the go-to metric in the field of machine learning to evaluate models and rate their accuracy. RMSE represents the length of a vector in n-dimensional space, made up of the error in each individual prediction. L1: this is the mean absolute prediction error, expressed in dollars. L2: this is the mean square prediction error, or MSE value. Note that RMSE and MSE are related: RMSE is just the square root of MSE. To wrap up, let's use the model to make a prediction. I'm going to take a standard taxi trip for 19 minutes. I'll be the only passenger and I'll pay by credit card. Here's how to make the prediction: I use the CreatePredictionEngine<…>(…) method to set up a prediction engine. The two type arguments are the input data class and the class to hold the prediction. And once my prediction engine is set up, I can simply call Predict(…) to make a single prediction. I know that this trip is supposed to cost $15.50. How accurate will the model prediction be? Here's the code running in the Visual Studio Code debugger on my Mac:
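For readers who want to prototype the same workflow in Python before porting it to ML.NET, here is a rough scikit-learn sketch of the same steps: an 80/20 split, one-hot encoding of the categorical columns, a gradient-boosted regressor, and MAE/MSE/RMSE evaluation. This is not the ML.NET code itself, and the file name, column names, and example trip values are assumptions based on the public NYC yellow taxi schema rather than the exact dataset used above.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Assumed file and column names from the public NYC yellow taxi schema.
df = pd.read_csv("yellow_tripdata.csv")
categorical = ["VendorID", "RatecodeID", "payment_type"]
numeric = ["passenger_count", "trip_distance"]

X = df[categorical + numeric]
y = df["fare_amount"]

# The same 80/20 train/test split used in the article.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# One-hot encode the enumerative columns so their numeric codes are treated as labels.
preprocess = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough",
)

# A gradient-boosted ensemble plays the role of the FastTree regression learner here.
model = Pipeline([("prep", preprocess), ("gbm", GradientBoostingRegressor())])
model.fit(X_train, y_train)

# Evaluate with the same three metrics: MAE (L1), MSE (L2), and RMSE.
pred = model.predict(X_test)
mae = mean_absolute_error(y_test, pred)
mse = mean_squared_error(y_test, pred)
print(f"MAE: {mae:.2f}  MSE: {mse:.2f}  RMSE: {mse ** 0.5:.2f}")

# A single made-up trip, mirroring the CreatePredictionEngine/Predict call.
single_trip = pd.DataFrame([{
    "VendorID": 1, "RatecodeID": 1, "payment_type": 1,
    "passenger_count": 1, "trip_distance": 3.75,
}])
print("Predicted fare:", model.predict(single_trip)[0])
```

The shape of the solution is the same in both languages: encode the categorical columns, concatenate the features, fit a boosted-tree regressor, and score it on the held-out 20%.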
https://medium.com/machinelearningadvantage/use-c-and-ml-net-machine-learning-to-predict-taxi-fares-in-new-york-519546f52591
['Mark Farragher']
2019-11-19 15:11:07.808000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Csharp', 'Data Science']
5 Life Lessons from 5 Years at VaynerMedia
This week marked my 5th anniversary (or Vaynerversary, as we call it) at a company I love: VaynerMedia. It’s a feat only a handful (no pun intended) of others have achieved to date, and one in which I happen to be quite proud. It reminds me of so much, and all the experiences, lessons and amazing friendships that have come out of it are invaluable. A photo I took of the VaynerMedia office in Tribeca (Oct, 2010) As someone who’s about to turn 30 (sigh), 5 years shouldn’t be all that transformative, but time just doesn’t work that way here. Here, you can hold 6 different job titles in 5 years. You can watch a 20 person team grow into a 500+ one. You can move to 4 separate offices. And you can open 3 new ones (with one on the way). Time isn’t supposed to work that way, right? I came here in 2010 because I heard Gary V. wanted to build the biggest building in town, and I truly believed he (we) would. I still believe it now. When you’re lucky enough to be a fly on the wall at a fast-paced company, you undoubtedly pick up some valuable insights and knowledge. In an effort to share some of mine, I’ve written out 5 key things I learned along the way. These are in no particular order, as I think they’re all super important. You may recognize a few, since some stem from philosophical things I know GV’s spoken about publicly over the years. What can I say? The guy’s quotable…
https://medium.com/the-ascent/5-life-lessons-from-5-years-at-vaynermedia-448844af2606
['Steve Campbell']
2019-12-10 23:15:41.591000+00:00
['Entrepreneurship', 'Startup', 'Life']
What is Google Kubernetes Engine (GKE)?
You can also check out the explainer video where I walk through these concepts in detail. Explainer video on the topic — "What is Google Kubernetes Engine?" Next steps If you like this #GCPSketchnote then subscribe to my YouTube channel 👇 where I post a sketchnote on one topic every week! Follow my website for downloads and prints 👇 If you have thoughts or ideas on other topics that you might find helpful in this format, please drop them in the comments below!
https://medium.com/google-cloud/what-is-google-kubernetes-engine-gke-d2cb2d17178d
['Priyanka Vergadia']
2020-12-10 05:41:11.400000+00:00
['Kubernetes', 'Containers', 'Google Cloud Platform', 'Cloud Computing', 'Cloud']
Artificial Intelligence Is Pioneering Advances in Ecology
Artificial Intelligence Is Pioneering Advances in Ecology CloudOps · Oct 16 This blog post was originally published here on CloudOps' blog. The GitHub repo for this project can be found at: https://github.com/TristansCloud/YellowstonesVegitiation "Remote sensing is the acquisition of information about an object or phenomenon without making physical contact with the object… [and] generally refers to the use of satellite or aircraft based sensor technologies." It's the lazy person's data collection. The 'scan the entire world every day' data collection. Remote sensing has given us a continuous stream of data on the state of the world, revolutionizing agriculture, international defence, environmental monitoring, crisis management, telecommunications, weather forecasting, firefighting, and many other fields. Any application that can be framed in a spatial context has likely benefited from advances in remote sensing. As an ecologist, I have watched my field monitor global forest cover change and harmful algal blooms, estimate populations of endangered species, and designate the areas most important for ecosystem functioning to be protected, all through the use of remote sensing technology. Not wanting to be left out, I've been thinking about what remote sensing can bring to my own research interests. I study the processes that drive evolutionary change, in a field at the intersection of evolutionary biology and ecology called eco-evolutionary dynamics. I am particularly interested in the non-living factors that structure an ecosystem: how does the intersection between terrain, climate, geochemistry, and human disturbance (technically a living factor, but a special case) determine what organisms will be living there? I am also very interested in machine learning solutions and applications of big data. A question naturally arose: can I use remote sensing and machine learning to link my non-living predictors to the resulting ecosystem at a large scale and across different ecosystem types? To do this, I first had to define my predictors and response variables. To start simple, I chose my response to be open source NDVI images from the Landsat 8 satellite, which photographs the majority of the globe every 16 days. NDVI stands for normalized difference vegetation index, a measure of how much vegetation is in a given area. Plants absorb photosynthetically active light and reflect near-infrared light, so the difference between these two wavelengths is a measure of how much healthy plant material is present. I chose my predictor to be a digital elevation model (DEM) and ignored climate, geochemistry, and human activity in this first attempt. I selected a study area that should have little deviation in climate across it, since to start I only wanted to focus on terrain effects in this first version of the model. However, I wanted to design a data pipeline that could easily be expanded once I wanted to address more complex predictors and responses. Kubernetes provided an ideal solution, allowing me to create pods to complete each step in my preprocessing pipeline and delete those resources when no longer needed. The USGS hosts an API for downloading Landsat 8 images, which I accessed through the programming language R; this code was executed in the download pod. The data was then unzipped in a new pod, and finally my predictor DEM was layered on the NDVI image in a final pod to create my prepared data. 
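As an aside for anyone reproducing the NDVI piece of that pipeline: the index itself is simple to compute once the red and near-infrared bands are on disk. Here is a minimal Python sketch using rasterio and NumPy; the band file names are placeholders rather than the exact Landsat 8 scene files, and this project used ready-made NDVI images rather than computing the index itself.

```python
import numpy as np
import rasterio

# Landsat 8: band 4 is red, band 5 is near infrared.
# The file names below are placeholders for a downloaded scene.
with rasterio.open("scene_B4.TIF") as red_band, rasterio.open("scene_B5.TIF") as nir_band:
    red = red_band.read(1).astype("float32")
    nir = nir_band.read(1).astype("float32")
    profile = red_band.profile

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero.
denominator = nir + red
ndvi = np.where(denominator == 0, 0.0, (nir - red) / denominator)

# Save the result as a single-band GeoTIFF next to the inputs.
profile.update(dtype="float32", count=1)
with rasterio.open("scene_NDVI.TIF", "w", **profile) as out:
    out.write(ndvi, 1)
```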
This final DEM-layering step is easily expandable to layer on different predictors as I expand the project. Neural Networks The hypothesis I wanted to test for this analysis is how well I can predict vegetation growth from a DEM, and how transferable that model is from one area to another, assuming climate and other factors stay the same. To test this, I downloaded NDVI images from two national parks in the Northwestern USA: Salmon Challis national park and Yellowstone national park. I chose these two places as they are in a similar geographic area and so likely have similar climates. They are also both mountainous regions and cover similar amounts of land. Finally, they are both national parks and should have pristine ecosystems relatively removed from human interference (although there was some agriculture and a few human settlements in both areas). I selected two scenes, one from Salmon Challis and one from Yellowstone, from the same year and season. My plan was to build a fully convolutional neural net (CNN) and train it on tensors from Salmon Challis to predict tensors in Yellowstone. By taking 51 pixel by 51 pixel sections of the NDVI and DEM images, I created 17,500 individual tensors for Salmon Challis and 17,500 individual tensors for Yellowstone. I then built a CNN to take two inputs, the 51 x 51 DEM as well as a 51 x 51 pixel low-resolution DEM that covered a much larger area, in case the large-scale geographic features surrounding an area are important to predicting vegetation. The output of the model is the 51 x 51 pixel NDVI image. Through trial and error I found that an inception framework, inspired by GoogLeNet, improved the stability of model predictions. Inception branches tensors into separate processing pathways, possibly allowing models to understand different features in the data, all while maintaining a computationally efficient network. GoogLeNet won the ImageNet Large-Scale Visual Recognition Challenge in 2014 with this style of architecture, and I recommend reading the original research paper (which is not overly technical) for this and other models if you are interested in learning more about the effects of network architecture on neural network performance. UNet, a separate convolutional network, also inspired some of my architecture. Overall, the model performance surpasses a classic technique based on generalized additive models (GAMs) but is still unsatisfactory, getting quite a few images wrong. See the final figure for some of the model predictions. I wanted to teach myself to design neural networks, so for now I am avoiding transfer learning from a pretrained network, although this is still an option. Through GCP, CloudOps, and the huge amounts of remote sensing data generated daily, I have the resources and data to improve this model. I would love to pass in many more predictors and increase the width and depth of my neural network. Take a look at my GitHub if you would like to see the actual layers in my neural network or the Kubernetes solution I use to download data. I'm still working on this project, so my network architecture may have evolved a bit. Good luck in your own data science adventures! Kubernetes and cloud native technologies have allowed scientists to store and make sense of the data collected by remote sensing. Nonetheless, these technologies can be difficult to learn and master. CloudOps' DevOps workshops will deepen your understanding of cloud native technologies with hands-on training. 
Take our 3-day Docker and Kubernetes workshop to get started using containers in development or production, or our 2-day Machine Learning workshop to make your ML workflows simple, portable, and scalable with Kubernetes and other open source tools. Tristan Kosciuch Tristan is an evolutionary biologist interested in the effects of landscape levels on genetic and phenotypic variation. He works in Vancouver Island on threespine stickleback and in the Lake Victoria basin on Nile perch and haplochromine cichlids. His work on stickleback uses remote sensing to quantify environments to test the predictability of evolution. This blog post was originally published here on CloudOps’ blog. Sign up for CloudOps’ monthly newsletter to stay up to date with the latest DevOps and cloud native developments.
https://medium.com/datadriveninvestor/artificial-intelligence-is-pioneering-advances-in-ecology-5bd86d2ab8e1
[]
2020-11-07 04:00:34.291000+00:00
['Kubernetes', 'Data Pipeline', 'Neural Networks', 'Artificial Intelligence', 'Ecology']
Data science-Create Tailored Algorithms
Blackcoffer artificial intelligence solutions are easy to use out-of-the-box and are custom tailored to each individual client’s needs. Our end-to-end AI enabled platforms speed time to delivery, save costs, reduce risk, and deliver optimized results to give you an immediate competitive advantage and bolster your bottom line. AI innovation enabled by faster processors, Big Data and novel algorithms AI is “an area of computer science that deals with giving machines the ability to seem like they have human intelligence”. Read More
https://medium.com/data-analytics-and-ai/data-science-create-tailored-algorithms-e4f4365e4496
['Ella William']
2019-06-14 11:11:30.590000+00:00
['Artificial Intelligence', 'Analytics', 'Data Science', 'Big Data']
A Vacation to Mars: The Biggest Scam in Modern History
A Vacation to Mars: The Biggest Scam in Modern History Project Mars One A depiction of what the Mars One habitable home on Mars could look like (Source: Mars One) Colonization has been in the blood of humans for thousands of years, and with Earth's population reaching its highest peak as well as its ecosystems heading toward decline, some people are already thinking of the possibility of moving to a different planet. People such as Elon Musk have shown the possibility of such a project because the required technology is here, but it is still too expensive to "mass produce." However, in 2011 one man wanted to bring this vision or dream closer to "reality," or better said, closer to the foolishness of rich investors. Bas Lansdorp was the co-founder and CEO of the private organization Mars One. Before Mars One, Lansdorp had become a successful entrepreneur in the western world, proving his ability to raise not only companies but also capital. Mars One started with Lansdorp's dream of colonizing Mars and making it a habitable space for humans to live on. As ambitious as this sounded, he had spent quite a few years around space engineers and scientists who saw his drive for this project and therefore supported his vision. As an entrepreneur he knew that this project would require an enormous amount of investment, so he used his entrepreneurial skills to look for people who had the two things he needed: A desire to move to a different planet. Lots of money! The money was never seen on paper Lansdorp came up with an estimate of six billion dollars to start the first missions and get a habitat going that could produce food and allow those who moved there to live with no support from Earth. During the first two years from the start of the company, over 220,000 people invested large amounts of money. These people were promised a chance to end up on Mars, as only a few would be selected at the beginning, and over the years more and more would be able to migrate to the new planet. Over the years the company kept receiving investments from people all around the world, but they never signed a contract. On paper, the company (Mars One) wasn't even registered. For eight years, they claimed to be a real company hiring hundreds of personnel; however, there were only a handful of people in the venture: Lansdorp and four others who are believed to be his friends. "Since we started Mars One in March 2011, we received support from scientists, engineers, businessmen and –women and aeropace companies from all over the world. The announcement of our plan in May 2012 resulted in the engagement of the general public, and the support from sponsors and investors. To see our mission evolve this way feels like my dream is becoming a reality." (taken from the Mars One website, written by Bas Lansdorp.) Bas Lansdorp as a keynote speaker (Source: Mars One) In order to show his legitimacy, he offered to give free speeches to various organizations about the Mars One project and his vision. Such a humble man. Everything he was doing seemed legitimate, but some felt a bit skeptical about this whole project, as it simply sounded a bit too good to be true. Therefore they started to look up information about the company. As the company was private, this meant that most of the information was also private. However, this didn't stop some investors who were wondering where their money actually went. 
The man behind the company was a mastermind in marketing, as he publicized everything on a professional level whilst using his background as an academic to create credibility around the fake company. The publicity created around the company made people think that his project was real, although other prestigious space and technology institutions thought differently. Research carried out by MIT (Massachusetts Institute of Technology) showed that even if his people made it to Mars, they would die after 68 days of living on the planet, simply because of the extremely low temperatures. In response, Lansdorp promised all the investors that the project would be finished by 2027. The company was also backed by lots of international space organizations, and Lansdorp used this not only to build credibility around the project but also to give investors a reason to trust him with their money. Just look at this introduction video to the project from 2012 to see how convincing it is. In January 2019, the company was declared bankrupt, with its private bank account showing the company was $25,000 in debt. Since then, there has been no information about Lansdorp or Mars One, as every asset owned by the company was liquidated. But what happened to all the money raised? The accounts were never publicly shown, but people do speculate that Mars One raised a few billion dollars. It is believed that Lansdorp took all the money and left the public eye. Then again, maybe he just moved to Mars by himself; at the end of the day, that was his dream.
https://medium.com/history-of-yesterday/a-vacation-to-mars-the-biggest-scam-in-modern-history-d9d191ed79a8
['Andrei Tapalaga']
2020-12-18 21:02:19.482000+00:00
['Money', 'Space', 'History', 'Marketing', 'Entrepreneurship']
Happiness in Ordinary Things
Happiness is when you slip between fresh, air-dried sheets after bathing in scented oils. It's the yellow flame of a candle on a winter's night while the wind whistles outside your door and you snuggle inside. "The east wind breaks over the branch that twists, an ocean of waves among the thicket. And as the last bird sings, notes splash into the sky, washing the sunset with salty tears to drown the day." BW When monochrome the day slides under trees, deep into their roots, and evening spreads her star-blanket wide, creeping over each sleeping house and prowling cat. Dawn inches, shy into the foliage, licking every grass and berry crimson and dropping diamonds web-ward, startling spiders fast into morning's welcome. Waves that lap on the shore gently, enticing you to take off your shoes and dip in your toes. Orange and crimson sunsets that race across the sky, and gusts whistling through wheat in a field, making it dance. "Each blade of emerald that swabs the dawn meadow. Every thicket flower, the sunset and the alpine grove that plump the evening forest — even the morsel carried by the ant trailing in the dust — brings beauty." BW The organic curve and beauty of a snail's shell, so simple but perfect in every way. Red dresses, and generous carpet bags — think of Mary Poppins. The bright eyes and giggle of a toddler who finds simple things hilarious. Delicious comfort food and a blazing log fire in the winter. Christmas lights strung across streets and carol singing. Festive get-togethers with people I love. "A faint drizzle, a haze glistening, drenches down so soft like a mist of all the mornings ever uttered from the mouth of creation to the grass that sways and poppies that paint the meadow." BW Warm summer rain, and heat that thaws your bones after a chill wind. Writing, literature, and creativity. Puddles to splash in, kites, wigwams, and rainbows. The scent of cinnamon, fresh coffee, cut grass, and geranium oil. Trampolines, bouncy castles, real castles, sunsets, sunrises, dawn before anyone else is up apart from the birds. Meditation, piano music, the saxophone. Colorful vegetables, flowers of all kinds, secret gardens, swings, hedgehogs, and parrots. Beloved pets, close family, friends — the type that last forever — and people who make everyone laugh. "Jar-wrapped, the herb and tomato fruits from the lingering summer scald, ripe red with luscious wine-scent and lemon, heaving and round as life, heavy and fat. Pick as I may season's last offering of scooped out September banquet — that lingering prize and rosette-laden plot still offers succulent squash and blooms. The basil, holy and cinnamon, thrives among the fennel, edible flowers, and figs. And sweet peas, sunchokes, and okra splash the landscape with nature's board." BW An artist's palette, scattered with color. A blank canvas, a blank page, and fresh stationery. Velvet, especially purple, blue, forest green, or deep red. Woodland, oak trees, dreams, and soft pillows. Ducks that waddle, caterpillars — because of their potential — dragonflies, and moths. Stained glass and Tiffany lamps, yellow shoes, and bohemian art. Crisp, corkscrew leaves of orange and gold that swoosh as you kick them and shuffle in their colorful carpet. Morning dew on emerald grass and dripping from tightly curled fern-fronds. These ordinary things are life's treasures. Fulfilling relationships, and closeness, when you know someone really, really well. Love, wherever it appears, and laughter. 
Stability and having needs met counts because it’s a strong foundation on which to build. For me, happiness arises from understanding I am the creator of my well-being and control my emotional state. Not relying on anyone else to make me happy brings the joy of freedom and independence. “I take a mental snapshot of the day as it pours warmth on bare-skinned knees and let the beechnuts crunching underfoot, birdsong, and indigo fields rise to nestle inside the tiny locker of a brain region meant for wonders. It’s then I spy a butterfly-filled canopy flutter at the oak’s crown. So much to paste within, and hold tight, lest it slips into the abyss.” BW Everything, each occasion or being that inspires my happiness is filtered through my perception. Others reflect the thoughts I entertain most often. So, if I ever feel less than happy, I know the problem isn’t a lack of outside stimulus; I need to tweak my mindset. Knowing this contributes to my happiness. How about you?
https://medium.com/the-bolt-hole/happiness-in-ordinary-things-e6412720ee5e
['Bridget Webber']
2020-10-21 11:10:28.600000+00:00
['Self Improvement', 'Philosophy', 'Lifestyle', 'Psychology', 'Mental Health']
Building scalable and efficient ML Pipelines
Building scalable and efficient ML Pipelines Using Kubernetes to build ML pipelines that scale Kubernetes is the gold standard for managing tons of containerized applications, whether they are in the cloud or on your own hardware. Whether it is pipeline building, model building, or ML application building, Kubernetes enables containerization, which is a safe way to build and scale any of these scenarios. Kubernetes can host several packaged and pre-integrated data and data science frameworks on the same cluster. These are usually scalable or they auto-scale, and they're defined/managed with a declarative approach: specify what your requirements are and the service will continuously seek to satisfy them, which provides resiliency and minimizes manual intervention. KubeFlow is an open source project that groups leading relevant K8s frameworks. KubeFlow components include Jupyter notebooks, KubeFlow Pipelines (workflow and experiment management), scalable training services (utilized for TensorFlow, PyTorch, Horovod, MXNet, Chainer) and model serving solutions. KubeFlow also offers examples and pre-integrated/tested components. In addition to typical data science tools, Kubernetes can host data analytics tools such as Spark or Presto, various databases, and monitoring/logging solutions such as Prometheus, Grafana, and Elasticsearch as well. It also enables the use of serverless functions (i.e. auto built/deployed/scaled code like AWS Lambda) for a variety of data-related tasks or APIs/model serving. The key advantage of Kubernetes vs proprietary cloud or SaaS solutions is that its tools are regularly added and upgraded, a Google search or Stack Overflow is often the fastest path to help, and the solution can be deployed everywhere (any cloud service, or even on-prem or on your own laptop). A community project also forces associated components/services to conform to a set of standards/abstractions which simplifies interoperability, security, and monitoring, which in turn benefits everyone. Bringing efficiency to ML pipelines Unfortunately, just betting on a credible platform like Kubernetes is not enough. Life for data and engineering teams gets easier once we adopt three guiding rules: Optimize for functionality — Create reusable abstract functions/steps which can accept parameters. Build for scalability — Apply parallelism to every step (or as often as possible, within reason). Automation — Avoid manual and repetitive tasks by using declarative semantics and workflows. The current trend in data science is to build "ML factories": similar to agile software development, we build automated pipelines which take data, pre-process it, run training, and then generate, deploy, and monitor the models. The declarative and automated deployment and scaling approach offered by Kubernetes is a great baseline, but it's missing a way to manage such pipelines on top of that. A relatively new tool which is part of the KubeFlow project is Pipelines, a set of services and a UI aimed at creating and managing ML pipelines. We can write our own code or build from a large set of pre-defined components and algorithms contributed by companies like Google, Amazon, Microsoft, IBM, and NVIDIA. Kubeflow Pipelines UI Once we have a workflow, we can run it once, at scheduled intervals, or trigger it automatically. The pipelines, experiments, and runs are managed, and their results are stored and versioned. Pipelines solve the major problem of reproducing and explaining our ML models. 
It also means we can visually compare runs and store versioned input and output artifacts in various object/file repositories. A major challenge is always running experiments and data processing at scale. Pipelines orchestrate various horizontal-scaling and GPU-accelerated data and ML frameworks. A single logical pipeline step may run on a dozen parallel instances of TensorFlow, Spark, or Nuclio functions. Pipelines also have components which map to existing cloud services, so that we can submit a logical task which may run on a managed Google AI and data service, or on Amazon's SageMaker or EMR. KubeFlow and its Pipelines, like most tools in this category, are still evolving, but they have a large and vibrant multi-vendor community behind them. This guarantees a viable and open framework. It is much like the first days of Kubernetes: cloud providers and software vendors had their own proprietary solutions for managing containers, and over time they've all given way to the open source standard demanded by the community.
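To make the Pipelines idea concrete, here is a minimal sketch of what a two-step workflow definition can look like with the v1 Python SDK (kfp). The container images, commands, and data paths are placeholders for illustration, not a reference implementation, and the SDK syntax differs somewhat between kfp versions.

```python
import kfp
from kfp import dsl


def preprocess_op(raw_path: str):
    # Placeholder container step: pull raw data and write prepared features.
    return dsl.ContainerOp(
        name="preprocess",
        image="example.registry/prep:latest",   # hypothetical image
        command=["python", "preprocess.py"],
        arguments=["--input", raw_path, "--output", "/data/features.parquet"],
    )


def train_op(features_path: str):
    # Placeholder container step: train a model on the prepared features.
    return dsl.ContainerOp(
        name="train",
        image="example.registry/train:latest",  # hypothetical image
        command=["python", "train.py"],
        arguments=["--features", features_path, "--model-dir", "/models"],
    )


@dsl.pipeline(name="ml-factory", description="Prepare data, then train a model.")
def ml_factory_pipeline(raw_path: str = "gs://example-bucket/raw.csv"):
    prep = preprocess_op(raw_path)
    train = train_op("/data/features.parquet")
    train.after(prep)  # declare the dependency so the steps run in order


if __name__ == "__main__":
    # Compile to a workflow spec that can be uploaded and run from the Pipelines UI.
    kfp.compiler.Compiler().compile(ml_factory_pipeline, "ml_factory_pipeline.yaml")
```

In a real pipeline the training step would consume the artifact produced by the preprocessing step through pipeline outputs rather than a hard-coded path, but the overall shape of the definition, and the way runs then show up versioned in the UI, is the same.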
https://medium.com/acing-ai/building-scalable-and-efficient-ml-pipelines-a9f61d2ecbbd
['Vimarsh Karbhari']
2020-09-18 15:22:26.109000+00:00
['Machine Learning', 'Artificial Intelligence', 'Kubernetes', 'Interview', 'Data Science']
Why React16 is a blessing to React developers
Just like how people are excited about updating their mobile apps and OS, developers should also be excited to update their frameworks. The new version of the different frameworks come with new features and tricks out of the box. Below are some of the good features you should consider when migrating your existing app to React 16 from React 15. Time to say Goodbye React15 👋 Error Handling Error Handling be like :) React 16 introduces the new concept of an error boundary. Error boundaries are React components that catch JavaScript errors anywhere in their child component tree. They log those errors, and display a fallback UI instead of the crashed component tree. Error boundaries catch errors during rendering, in lifecycle methods, and in constructors of the whole tree below them. A class component becomes an error boundary if it defines a new lifecycle method called componentDidCatch(error, info) : Then you can use it as a regular component. <ErrorBoundary> <MyWidget /> </ErrorBoundary> The componentDidCatch() method works like a JavaScript catch {} block, but for components. Only class components can be error boundaries. In practice, most of the time you’ll want to declare an error boundary component once. Then you’ll use it throughout your application. Note that error boundaries only catch errors in the components below them in the tree. An error boundary can’t catch an error within itself. If an error boundary fails trying to render the error message, the error will propagate to the closest error boundary above it. This, too, is similar to how catch {} block works in JavaScript. Check out the live demo: ComponentDidCatch For more information on error handling, head here. New render return types: fragments and strings Get rid of wrapping the component in a div while rendering. You can now return an array of elements from a component’s render method. Like with other arrays, you’ll need to add a key to each element to avoid the key warning: render() { // No need to wrap list items in an extra element! return [ // Don't forget the keys :) <li key="A">First item</li>, <li key="B">Second item</li>, <li key="C">Third item</li>, ]; } Starting with React 16.2.0, it has support for a special fragment syntax to JSX that doesn’t require keys. Support for returning strings : render() { return 'Look ma, no spans!'; } Portals Portals provide a first-class way to render children into a DOM node that exists outside the DOM hierarchy of the parent component. ReactDOM.createPortal(child, container) The first argument ( child ) is any renderable React child, such as an element, string, or fragment. The second argument ( container ) is a DOM element. How to use it When you return an element from a component’s render method, it’s mounted into the DOM as a child of the nearest parent node: render() { // React mounts a new div and renders the children into it return ( <div> {this.props.children} </div> ); } Sometimes it’s useful to insert a child into a different location in the DOM: render() { // React does *not* create a new div. It renders the children into `domNode`. // `domNode` is any valid DOM node, regardless of its location in the DOM. return ReactDOM.createPortal( this.props.children, domNode ); } A typical use case for portals is when a parent component has an overflow: hidden or z-index style, but you need the child to visually “break out” of its container. For example, dialogs, hovercards, and tooltips. Portals Custom DOM Attribute React15 used to ignore any unknown DOM attributes. 
It would just skip them since React didn't recognize them. // Your code: <div mycustomattribute="something" /> Would render an empty div to the DOM with React 15: // React 15 output: <div /> In React16, the output will be the following (custom attributes will be shown and not ignored at all): // React 16 output: <div mycustomattribute="something" /> Avoid re-renders by setting null in state With React16 you can prevent state updates and re-renders right from setState(). You just need to have your function return null. const MAX_PIZZAS = 20; function addAnotherPizza(state, props) { // Stop updates and re-renders if I've had enough pizzas. if (state.pizza === MAX_PIZZAS) { return null; } // If not, keep the pizzas coming! :D return { pizza: state.pizza + 1, } } this.setState(addAnotherPizza); Read more here. Creating Refs Creating refs with React16 is now much easier. Why you need to use refs: Managing focus, text selection, or media playback. Triggering imperative animations. Integrating with third-party DOM libraries. Refs are created using React.createRef() and are attached to React elements via the ref attribute. Refs are commonly assigned to an instance property when a component is constructed so they can be referenced throughout the component. class MyComponent extends React.Component { constructor(props) { super(props); this.myRef = React.createRef(); } render() { return <div ref={this.myRef} />; } } Accessing Refs When a ref is passed to an element in render, a reference to the node becomes accessible at the current attribute of the ref. const node = this.myRef.current; The value of the ref differs depending on the type of the node: When the ref attribute is used on an HTML element, the ref created in the constructor with React.createRef() receives the underlying DOM element as its current property. When the ref attribute is used on a custom class component, the ref object receives the mounted instance of the component as its current property. You may not use the ref attribute on functional components because they don't have instances. Context API Context provides a way to pass data through the component tree without having to pass props down manually at every level. React.createContext const {Provider, Consumer} = React.createContext(defaultValue); Creates a { Provider, Consumer } pair. When React renders a context Consumer, it will read the current context value from the closest matching Provider above it in the tree. The defaultValue argument is only used by a Consumer when it does not have a matching Provider above it in the tree. This can be helpful for testing components in isolation without wrapping them. Note: passing undefined as a Provider value does not cause Consumers to use defaultValue. Provider <Provider value={/* some value */}> A React component that allows Consumers to subscribe to context changes. Accepts a value prop to be passed to Consumers that are descendants of this Provider. One Provider can be connected to many Consumers. Providers can be nested to override values deeper within the tree. Consumer <Consumer> {value => /* render something based on the context value */} </Consumer> A React component that subscribes to context changes. Requires a function as a child. 
The function receives the current context value and returns a React node. The value argument passed to the function will be equal to the value prop of the closest Provider for this context above in the tree. If there is no Provider for this context above, the value argument will be equal to the defaultValue that was passed to createContext(). static getDerivedStateFromProps() getDerivedStateFromProps is invoked right before calling the render method, both on the initial mount and on subsequent updates. It should return an object to update the state, or null to update nothing. This method exists for rare use cases where the state depends on changes in props over time. For example, it might be handy for implementing a <Transition> component that compares its previous and next children to decide which of them to animate in and out. Deriving state leads to verbose code and makes your components difficult to think about. Make sure you're familiar with simpler alternatives: If you need to perform a side effect (for example, data fetching or an animation) in response to a change in props, use the componentDidUpdate lifecycle instead. If you want to re-compute some data only when a prop changes, use a memoization helper instead. If you want to "reset" some state when a prop changes, consider either making a component fully controlled or fully uncontrolled with a key instead. This method doesn't have access to the component instance. If you'd like, you can reuse some code between getDerivedStateFromProps() and the other class methods by extracting pure functions of the component props and state outside the class definition. Note that this method is fired on every render, regardless of the cause. This is in contrast to UNSAFE_componentWillReceiveProps, which only fires when the parent causes a re-render and not as a result of a local setState. We compare nextProps.someValue with the previous value stored in state. If the two are different, then we perform a state update: static getDerivedStateFromProps(nextProps, prevState) { if (nextProps.someValue !== prevState.someValue) { return { someState: nextProps.someValue }; } else return null; } It receives two params: nextProps and prevState. As mentioned previously, you cannot access this inside this method. You'll have to store the props in the state to compare the nextProps with previous props. In the above code nextProps and prevState are compared. If both are different then an object will be returned to update the state. Otherwise null will be returned, indicating that a state update is not required. If state changes then componentDidUpdate is called, where we can perform the desired operations as we did in componentWillReceiveProps. Bonus: React Lifecycle events Lifecycle credits — https://twitter.com/dceddia Well, these are some of the features that you should definitely try while working with React16! Happy coding 💻 😀
https://medium.com/free-code-camp/why-react16-is-a-blessing-to-react-developers-31433bfc210a
['Harsh Makadia']
2018-10-09 16:56:50.455000+00:00
['React', 'Technology', 'Productivity', 'Tech', 'Programming']
How to Write a Fundraising Letter: The Best Donor Appeals Include 5 Key Elements
The donor was ready to sell stock to give a $200,000 gift because he knew the education institute needed the money. But it was March 2020, and markets were collapsing. Why would he be so generous at such an uncertain time? “I’ve found when I fear the Lord, nothing else in the world frightens me,’’ he answered. “But when I stop thinking about God? Then everything else frightens me.’’ Despite a pandemic that kept 4 billion people in lock-down, several nonprofits found it was the perfect time to send out fundraising appeal letters. For example, when Orchard Lake Schools had to cancel three of its biggest fundraising events (the Ambassador’s Ball, Founder’s Day, and the St. Mary’s Polish Country Fair), a fundraising letter went out spelling out all the numbers: how much was lost and what was needed. Money poured in. Similarly, our friend Al Kresta at Ave Mario Radio emailed supporters saying $300,000 was needed “to make up the deficit we incurred by canceling our Spring membership drive.’’ Within the first 24 hours, $115,000 came in, and Al wrote donors back the very next day thanking them for that great start, letting them know “we have about 185,000 dollars to go. That’s a great beginning response.’’ He added: “The question is whether Ave Maria Radio will be able to shine like a matchhead or like a floodlight. We will continue bearing witness to Jesus, the Light of the World. The difference will be the degree of effectiveness.’’ The very best donor appeal letters include these five elements 1. What is the problem or opportunity for good? You have six seconds to get their attention. So a good appeal starts with the headlines, which are typically half the story. Your headline and body type get to the point: What is the problem your donor can tackle with a gift? Or if it’s not a problem, what is the opportunity to do good? Past giving history and a clear understanding of who you are writing to are essential. 2. What is your organization doing about it? Your letter has to show how you are doing something that matters. Ideally, you are helping people in a way no one else can and can demonstrate exactly how you solve a problem or offer an opportunity for good. 3. What is this going to cost? Return on investment is key. For example, if you write “we can feed a child for just $7.32 per day,’’ the donor thinks, “that’s less than I’d pay to take a child to dinner.’’ They do the math and see the value. Many years back, our friends at Orchard Lake “sold’’ a donor on a plan to build new tennis courts, but when he saw the price, he immediately said, “that’s way too much money for tennis courts.’’ 4. How is this opportunity different from all the others? Your story shapes your identity, and story+identity should tell you about your mission. Every day, we are bombarded with appeal letters and emails, asking us for money. The essential question of marketing, religion, education, and nonprofits: Why? Why this cause over some other worthy cause they are about? Why your group over a rival? 5. Why now? If you don’t give your readers a deadline, they could throw your letter on the “not sure. This can wait — maybe I’ll think about it’’ pile. Once you're in the “maybe’’ pile, you’re likely to get buried in the clutter. Political fundraisers do well because they include a deadline: most donors know when Election Day and even campaign fundraising deadlines fall. Key takeaway: We throw away form letters — we cherish love letters We cherish love letters and throw away form letters. 
The most personal and moving appeals move people. The best fundraisers are matchmakers, connecting the donor with the group that most moves them. The more personal the connection, the quicker you are to cement the appeal.
https://medium.com/the-partnered-pen/how-to-write-a-fundraising-letter-the-best-donor-appeals-include-5-key-elements-2b5b593d538b
['Joseph Serwach']
2020-10-06 23:05:02.932000+00:00
['Marketing', 'Education', 'Work', 'Fundraising', 'Writing']
How to Make the Most out of James Clear’s Atomic Habits
3. Make It Simple It’s totally normal, at the beginning of a habit, to find yourself looking ahead to where you might end up where you to succeed at sustaining a habit. It’s the same thing when you first walk into a gym. The first time you catch yourself in the mirror at your gym, mid-repetition through your favorite exercise (deadlift of course), you can’t help yourself but really look into the reflection and see what you will look like if you stay the course of a good gym habit. But often, these high-goals can be the bane of the habit. Especially at the start where the results are likely to take their time to show themselves to you. Instead of waiting in the mirror, try instead to focus on ways of proactively pushing for the habit-building process to take stock in your life. A strong method of practicing this is by setting yourself a no excuses framework around the habit you are setting out. This means to step back and see the variables that could jeopardize the habit from taking place. Whether that’s being an inconvenience, or too time-consuming, there will always be friction against change that you want to set in your days. If we see going to the gym as the habit for this case, a friction point could be the place. How long would it take me to get there? And if taken further, how long am I going to stay there? Easily enough, you’ll be fast to come up with plenty of resistance points that will push you not to do the habit. This is where simplifying the process of the habit can help take some decision fatigue off your head. Now, simplifying a habit is not making it necessarily easier. Rather, it’s removing any unnecessary friction from stopping you from the process of performing the habit. You do not rise to the level of your goals. You fall to the level of your systems — James Clear Rather than looking to go to the same gym that your buddies go to, that so happens to lie on the complete opposite side of the town to where you live, look up a nearby gym spot to go after work (or before, if you dare). Take the inconvenience out of the equation and make the commute less of a burden on yourself and more of a point of action on your part to be more intentional with your time. At the same time that you’ll be economizing on your time, you would be simplifying the choice to go workout to make it unavoidable to yourself. In addition, by simplifying the habit of its atomic habits (you knew this was coming somewhere) the overall habit becomes far easier to hold onto and sustain. James’s Habit Tip: Make the habit simple enough to execute that no friction point has enough in it to doubt yourself attending to the habit in the first place.
https://medium.com/age-of-awareness/how-to-make-the-most-out-of-james-clears-atomic-habits-95691b421f37
[]
2020-11-18 04:43:07.684000+00:00
['Education', 'Learning', 'Habits', 'Productivity', 'Writing']
Lesser-known techniques for data exploration
Exploratory data analysis (EDA) is essentially the first step in the machine learning pipeline. There are many techniques used for EDA, such as: Checking all columns: name, type, segments Setting expectations of what the variable might mean and how it may affect the target — and testing the hypothesis Analyzing the target variable Using the describe() function in Pandas to get a summary of all variables Checking skewness and kurtosis Creating scatter plots ( pairplot() in Seaborn is probably the easiest way), distribution plots and box plots Creating a correlation matrix (heat-map); a zoomed heat-map if required Creating scatter plots between the most correlated variables; contemplating whether the correlation makes sense or not Checking for missing data (if a column has more than 15% of missing data, it is probably better to delete the column instead of replacing the missing values) Checking for outliers (uni-variate as well as bi-variate) Apart from these, here are some lesser-known tips for EDA:
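To make the standard checklist above concrete, here is a minimal EDA sketch in Python with pandas and seaborn. It is only an illustration: the file name data.csv and the column name target are hypothetical placeholders, not taken from the article above.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical dataset and target column; adjust to your own data.
df = pd.read_csv("data.csv")
target = "target"

# Column names, dtypes and a numerical summary of all variables.
print(df.dtypes)
print(df.describe())

# Skewness and kurtosis of the target variable.
print("Skewness:", df[target].skew())
print("Kurtosis:", df[target].kurtosis())

# Quick pairwise scatter/distribution plots on the numeric columns.
numeric = df.select_dtypes(include="number")
sns.pairplot(numeric.sample(min(len(numeric), 500)))
plt.show()

# Correlation matrix as a heat-map.
sns.heatmap(numeric.corr(), cmap="coolwarm")
plt.show()

# Share of missing data per column (drop candidates above ~15%).
missing_ratio = df.isnull().mean().sort_values(ascending=False)
print(missing_ratio[missing_ratio > 0])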
https://medium.com/bigbrownbag/lesser-known-techniques-for-data-exploration-23eeb6686a22
['Soham Ghosh']
2019-04-25 13:48:53.111000+00:00
['Exploratory Data Analysis', 'Machine Learning', 'Kaggle', 'Analytics', 'Data Science']
Planet OS Data Challenge at ExpeditionHack NYC
We’re thrilled to be part of the Expedition Hackathon NYC happening on November 12–13! This is your chance to map the future of sustainability with NGA, Mapbox, IBM Bluemix, Planet OS and others. The hackathon’s focus areas are Oceans, Forests, Conservation and Indigenous People. To add some motivation to the hours of intense coding and hustling, we decided to put out tons of high-quality environmental data, data integration and computational infrastructure, and reward the best teams with some cool prizes. All hackathon participants will get free, unlimited access to: The prizes: All teams that use our data tools will secure an unlimited free access to Planet OS data tools data tools The team with the best solution will get special swag and surprises from Planet OS The general Grand Prize of the hackathon is $3000 and a round trip to DC from NGA to meet NGA Executives. We have already validated a few business ideas that the teams could work on. Stay tuned for updates! All the updates will be shared on this page so it would be wise to bookmark it. Contact us at aljash@planetos.com for further questions. #PlanetOS #DataChallenge About Planet OS: Planet OS is the world’s leading provider of weather, environmental and geospatial data access and operational intelligence, based in Palo Alto, California. The company works with real-world industries — from energy, weather forecasting, agriculture, logistics to insurance — helping them become data-driven, mitigate risks and grow faster. The world’s second largest offshore wind farm runs on Planet OS Powerboard; and the company’s open data service, Datahub, provides access to thousands parameters of high-quality data collected by premier institutions around the world.
https://medium.com/planet-os/planet-os-data-challenge-at-expeditionhack-nyc-5e6a9f192956
['Planet Os']
2016-12-07 11:47:31.134000+00:00
['Hackathon', 'NYC', 'Sustainability', 'Big Data', 'Nga']
SEO For Startups with Naguib Toihiri
Raunak: For people who do not know what SEO is, could you give us a quick summary? Naguib: If you do not know what SEO is, or you have a vague understanding, don’t worry, because it lies in one of the grey areas of the Marketing World. SEO stands for Search Engine Optimisation. The main objective of SEO is to attract quality traffic to your site. So whenever users search for something in Google, SEO is basically working towards making you rank first on it in the organic search results. Raunak: And it’s basically a free of cost Marketing channel, am I right in saying that? Naguib: Absolutely. The way that I divide search engine marketing usually is into 3 parts: 1) Organic — which is SEO (Search Engine Optimisation) 2) Paid — SEA (Search Engine Advertising) 3) Social — SMO (Social Media Optimisation) SEO is the organic form of marketing, which basically means that we do not pay Google to increase your visibility, we optimise your site to get it ranked higher on Google organically (without ad money). Raunak: Just to give the readers a bird’s eye view, for a startup that is competing against big businesses on search; what should they be doing? Naguib: First of all it is relative to your objectives and what results they are looking for. If they are looking for immediate results, paid search is the way to go. Essentially you will be visible to your target audience the minute you implement your campaign. For SEO, it will definitely, take time. The big BUT however is SEO does NOT depend on your budget. This is why I love SEO, it basically is a fair competition and is not relative to budget. Even if you do have a bigger budget, your website will not automatically be ranked higher. It’s a combination of technical, content and social know-how. Aspects that are often overlooked like a mobile-friendly platform or page speed, are what make a big difference in the long-term visibility and technical rank of your site. Another key tactic for startups to utilise, is pushing out relevant content. If you push more relevant content with the right keywords and structure, and push it out regularly, Google will recognise this and rank you higher. Raunak: Alright, now you’ve told me once that there are thousands of SEO tools but most of them are not needed. What tools do you recommend? Naguib: I will give you only 3 tools to focus on and yes all of them are free. 1. Mobile Friendly Tool — Google — This tool is especially relevant in this region and in this year with an incremental rise in sites competing for the same local market who by the way search as much on smartphones as they do on desktop. 2. Page Speed by Google — It audits your website for mobile and desktop loading speed. It gives a ranking out of 100. If you get a ranking of over 70–80, you’re doing fine, otherwise there is always room for optimisation. 3. Google Search Console — How Google considers your website, is if their seeing any crawl error, how many pages are indexed or not. It is a very critical tool for how Google analyses your site and what you can learn from it. Where can someone looking to learn more get a chance to do that? I will be instructing the Search Engine Optimization workshop at the upcoming Digital Marketing Track. Otherwise you can add me on LinkedIn to connect.
https://medium.com/astrolabs/seo-for-startups-with-naguib-toihiri-9a97b8b54936
['Raunak Datt']
2017-09-18 09:39:58.826000+00:00
['Marketing', 'Startup', 'Digital', 'Digital Marketing', 'SEO']
Are You the CEO of Your Writing Career?
Are You the CEO of Your Writing Career? Take control of your writing job by acting as if you are a CEO Photo by LinkedIn Sales Navigator on Unsplash Are you the chief executive officer (CEO) of your writing career? To become better at writing, act as if you are the CEO of your writing business and look to grow it. You should work with other writers, ask for help, listen, and make informed decisions. As the CEO of your writing career, you manage your writing like a business. Here are five tips to help you become the CEO of your writing career, no matter if writing is your full-time job, a side hustle, or something you do for fun.
https://medium.com/change-your-mind/are-you-the-ceo-of-your-writing-career-2339ee20d12e
['Matthew Royse']
2020-12-18 16:37:14.386000+00:00
['Mindset', 'Entrepreneurship', 'Careers', 'Inspiration', 'Writing']
Don’t Be Fooled. Looking for Inspiration Doesn’t Work Anymore.
Don’t Be Fooled. Looking for Inspiration Doesn’t Work Anymore. 6 proven tactics to get your best work done in no time Photo by Miriam Espacio from Pexels I’m so done with it. Every time I run out of ideas or my current ideas aren’t good enough, I switch on Dora the Explorer mode and I go surf the internet. Hoping to find the shiny coast of opportunity and inspiration. Better said — I browse YouTube. I didn’t do the math, but I guess that in 2% of the cases it works. I came to the conclusion that consuming more content isn’t the answer. My desk is located next to my bookshelves and in those moments of incompetence I take out some books — interesting or not — and scroll through them. Trying to find inspiration as if the map to Mordor is hidden inside one of them. The best part? This strategy has a success ratio of 3.7% — not great either. That leaves me with the last option: bang my head on the keyboard until I find something to write about. I’ve had days where this behavior took place 4 times a day, but in better times it only happened twice a week. Besides these mental shackles and some bruises, I’m fine. I guess. Oh, the success ratio? Around 5%. I’m not an idiot. This year, I’ve published over 65 articles, so there must be something that I do that enables me to consistently push out content. That got me thinking — what are my strategies whenever I’m stuck in this crazy content-block-limboland? It turned out that there are 6 tactics I subconsciously apply every time I’m stuck.
https://medium.com/the-brave-writer/dont-be-fooled-looking-for-inspiration-doesn-t-work-anymore-9db4606d4653
['Jessie Van Breugel']
2020-12-21 17:03:04.865000+00:00
['Inspiration', 'Freelancing', 'Writing Tips', 'Entrepreneurship', 'Writing']
Open Source Dataset for NLP Beginners
Nowadays, natural language processing is a fast-growing and booming field of research. Sometimes it can be very confusing for an NLP beginner to decide which areas to explore and how to find the right dataset for implementation and hands-on experience. This blog focuses on providing an overview of different free online datasets for NLP. It is quite difficult to compile all the datasets, as NLP is a broad research area, but keeping the perspective of a beginner in mind, along with the general problem statements to be implemented at the starting point, I tried to build the following list. Datasets for Sentiment Analysis Applying machine learning to the sentiment analysis task needs a large number of specialized datasets. The following list should hint at some of the ways that you can improve your sentiment analysis algorithm. Multidomain Sentiment Analysis Dataset: This dataset contains the features of a variety of product reviews taken from Amazon. IMDB Reviews: This relatively small dataset of 25,000 movie reviews was compiled primarily for binary sentiment classification use cases. Stanford Sentiment Treebank: Also built from movie reviews, Stanford’s dataset was designed to train a model to identify sentiment in longer phrases. It contains over 10,000 snippets taken from Rotten Tomatoes. Sentiment140: This dataset consists of 160,000 tweets formatted with 6 fields: polarity, ID, tweet date, query, user, and the text. Emoticons have been pre-removed. Twitter US Airline Sentiment: This dataset contains tweets about US airlines that are classified as positive, negative, and neutral. Negative tweets have also been categorized by reason for complaint. Text Datasets Natural language processing is a massive field of research, but the following list includes a broad range of datasets for different natural language processing tasks, such as voice recognition and chatbots. 20 Newsgroups: This dataset is a collection of approximately 20,000 documents covering 20 different newsgroups, from baseball to religion. ArXiv: This repository contains all of the arXiv research paper archive as full text, with a total dataset size of 270 GB. Reuters News Dataset: The documents in this dataset appeared on Reuters in 1987. They have since been assembled and indexed for use in machine learning. The WikiQA Corpus: This corpus is a publicly-available collection of question and answer pairs. It was originally assembled for use in research on open-domain question answering. UCI’s Spambase: Created by a team at Hewlett-Packard, this large spam email dataset is useful for developing personalized spam filters. Yelp Reviews: This open dataset released by Yelp contains more than 5 million reviews. WordNet: Compiled by researchers at Princeton University, WordNet is essentially a large lexical database of English ‘synsets’, or groups of synonyms that each describe a different, distinct concept. The Blog Authorship Corpus — This dataset includes over 681,000 posts written by 19,320 different bloggers. In total, there are over 140 million words within the corpus. General — Datasets for Natural Language Processing There are a few more datasets for natural language processing tasks that are commonly used in general. Enron Dataset: This contains roughly 500,000 messages from the senior management of Enron. This dataset is generally used by people who are looking to improve or understand current email tools. Amazon Reviews: This dataset contains around 35 million reviews from Amazon spanning a period of 18 years. 
It includes product and user information, ratings, and plaintext reviews. Google Books Ngrams: A Google Books corpus of n-grams, or ‘fixed size tuples of items’, can be found at this link. The ’n’ in ‘n-grams’ specifies the number of words or characters in that specific tuple. Blogger Corpus: This collection of 681,288 blog posts contains over 140 million words. Each blog included here contains at least 200 occurrences of common English words. Wikipedia Links Data: Containing approximately 13 million documents, this dataset by Google consists of web pages that contain at least one hyperlink pointing to English Wikipedia. Each Wikipedia page is treated as an entity, while the anchor text of the link represents a mention of that entity. Gutenberg eBooks List: This annotated list of ebooks from Project Gutenberg contains basic information about each eBook, organized by year. Hansards Text Chunks of Canadian Parliament: This corpus contains 1.3 million pairs of aligned text chunks from the records of the 36th Canadian Parliament. Jeopardy: The archive linked here contains more than 200,000 questions and answers from the quiz show Jeopardy. Each data point also contains a range of other information, including the category of the question, show number, and air date. SMS Spam Collection in English: This dataset consists of 5,574 English SMS messages that have been tagged as either legitimate or spam. 425 of the texts are spam messages that were manually extracted from the Grumbletext website. Depending on the problem statement you aspire to solve, download the respective dataset and start exploring the world of NLP.
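As a hedged starting point for working with one of the corpora above, here is a minimal sketch that loads the 20 Newsgroups collection (it ships with scikit-learn) and trains a first bag-of-words classifier. The two category names are an arbitrary choice for illustration and not something prescribed by the list above.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Two arbitrary newsgroups, just to keep the example small.
categories = ["rec.sport.baseball", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

# TF-IDF bag-of-words features feeding a Naive Bayes classifier.
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(train.data, train.target)

predictions = model.predict(test.data)
print("Test accuracy:", accuracy_score(test.target, predictions))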
https://medium.com/swlh/you-c120c972f8c6
['Dr. Monica']
2020-06-28 12:47:31.914000+00:00
['Machine Learning', 'Python', 'Data Science', 'NLP', 'Artificial Intelligence']
Could Netflix’s Big Mouth Be Oversimplifying Mental Illness?
Note: This article contains Big Mouth Season 4 spoilers. Like many other people I know, I spent a good chunk of this weekend binging the entire newly released season of Big Mouth, an animated Netflix comedy about … puberty? Well, maybe that definition isn’t giving the show enough credit. Although Big Mouth does revolve around the life of tweens going through puberty, it also tackles topics surrounding relationships, sexual and gender identity, and mental health issues — which makes total sense, given that these are all things that come with growing up. The show’s creators have brought anxiety and depression to life — literally. More specifically, depression is “The Depression Kitty”, and she’s a giant purple cat that likes to pin you down and berate you with thoughts that are, well, depressing. Anxiety is “Tito the Anxiety Mosquito” and to be honest, I kind of hate how accurately they brought anxiety to life as a character. Swarming the kids with his nervous energy and whispering anxious fears into their ears, Tito is truly the worst. While these two foes have a good run on the show, one of their tormented victims, 13-year-old Jessi Glaser, learns to fight them off with “The Gratitoad”. No, that wasn’t a spelling error — gratitude is represented in the show by a talking toad (if you didn’t already sense that this show is all sorts of weird, here’s your cue to do so). Although practicing expressing gratitude doesn’t get rid of Jessi’s depression and anxiety completely, they become fairly minimized (quite literally — in the season finale, we see the formerly massive depression kitty shrink down to the size of a cute little house cat). So, combating depression and anxiety with gratitude — is this realistic? Well, yes and no. While we can use grateful thinking to improve our relationships (both with ourselves and others), it’s unlikely to be the be-all and end-all of mental health cures. A recent meta-analysis conducted on 27 individual studies dealing with gratitude and its effects on mental health, originally published in the Journal of Happiness Studies (summarized in a Healthline article here) points to clear limits in how much gratitude can really accomplish. While the studies were conducted differently, they all shared one thing: participants were asked to perform some kind of gratitude exercise. Whether it was writing a grateful letter and reading it to the recipient or listing out all the things that went well in a day, those participating in the studies (3,675 people in total) were all asked to practice gratitude. After the experiments were up, psychologists analyzed the effect of these different gratitude exercises on the participants’ mental health — specifically as it pertained to their symptoms of depression and anxiety. In short: The effects were insignificant. As an article on Science Daily summarizes the results: “Go ahead and be grateful for the good things in your life. Just don’t think that a gratitude intervention will help you feel less depressed or anxious.” This doesn’t mean that practicing gratitude can’t be impactful on your mindset and your relationships, just that it isn’t a viable treatment for depression and anxiety on its own (nor is it a viable replacement for proven treatments like cognitive behavioral therapy). So what does this say about Big Mouth and The Gratitoad? How about media representation of mental health issues in general? I’d say it remains to be seen. 
The way I see it, the creators of the show didn’t make it seem like Jessi had entirely defeated her depression and anxiety through working with The Gratitoad, only that she had minimized them. It’s how they’ll choose to move forward with her story in the next season that will really show how they view mental health issues. In my opinion, I think they gave us a hint of Jessi’s depression coming back with the return of the depression kitty in the final minutes of the season finale (albeit a smaller, less-threatening version of her). Big Mouth has already made mental health-representation history — now let’s hope they don’t fuck it up. Not a lot of shows have tried to tackle the progression of mental health issues like Big Mouth has — not with children in their early teens, and certainly not with animation to give these ethereal concepts more digestible names and faces. In a weird way, this infamously uncensored, whacky, over-the-top cartoon has done a lot more for the on-screen representation of mental illness than many of its more “serious” counterparts. But the creators of this show are walking a very fine line — the line between the benefit of making these concepts digestible and the danger of oversimplifying mental illness to the point of belittling it. I never thought I’d say this, but for the sake of accurate representation, I hope the depression kitty and anxiety mosquito have some fight left in them in season five.
https://medium.com/an-injustice/could-netflixs-big-mouth-be-oversimplifying-mental-illness-a2eea1cdfb10
['Till Kaeslin']
2020-12-08 23:31:31.096000+00:00
['Entertainment', 'Mental Health', 'Psychology', 'TV Series', 'Gratitude']
Úll 2017: Call for Proposals
Úll 2017, April 10–11, Killarney, Ireland Last year, for the first time, we had an open call for submissions to perform at Úll. The response was fantastic and the result was a line-up rich with varied life experiences, performance styles and areas of expertise. It allowed us to bring you a collection of speakers that pushed beyond the expected, celebrating a beautiful balance of fresh faces and friendly familiars. This year, we want to extend that invitation again. We are opening a call for submissions for three types of presentation: Storytelling. Classic. Special Feature. The theme this year is, simply, “The Future”. Storytelling This is a magical part of the programme. The brief is simple: We are looking for folks to tell a 10 minute story about “The Future”. This isn’t a typical presentation, rather a performance. There are no slides or elaborate visuals- just you and the audience. Could you tell a tale that will delight folks? Do you have some novel insight or pearls of wisdom to share? Could you bring us on an adventure or just plain old inspire us? If so, we’d like to hear from you. Apply to tell a story. Classic These presentations form the backbone of the conference. Guided by the theme of ‘The Future’, we want you to share your experience and insight. This year, we’ll once again be placing these under the banner of ‘The Builders’ and each presentation will last roughly 10–15 minutes. Presenters can follow a more traditional presentation format or get creative with slides, visuals and props. Are you building an exciting new app? Have you learned valuable lessons while creating your product or building your business? Have you some interesting forecasts or predictions you’d like to share? If so, we’d like to hear from you. Apply to present on The Builder’s Track. Special Feature The Úll Special Feature is an idea we have developed over the last few years. In a nutshell, rather than give you a stage and a timeslot, we give you a room and invite you to set it up with your presentation. A Special Feature can be a regular conference talk that you record that attendees can walk in and watch. Or it could be an art installation that teaches attendees about hardware hacking. Or it could be a time machine that takes attendees back to their childhood and forward into old age. This format is particularly attractive for those more introverted amongst us, or folks who prefer to create a more personal, intimate experience rather than performing on stage. Can you create an experience that folks will remember long after the conference is over? Can you build an interactive installation? Prepare a talk that you pre-record? Do something completely original? If so, we’d like to hear from you. Apply to present a Special Feature. The Package For anyone who presents at Úll, we will provide: A travel allowance for flights A full free ticket to the conference Up to 2 nights accommodation at The Europe Hotel and Spa Resort (lakeview room with balcony) Up to 2 nights accommodation in Dublin for the Fringe events Train transportation to and from Killarney if you arrive in Dublin Support All storytellers and feature presenters will have access in advance to test out the AV, walk the stage, or set up their space. We want everyone who presents at Úll to feel supported. We are happy to work with you on preparing your story or feature. 
Things we could help with: Brainstorming ideas Finding a mentor Figuring out and budgeting for any additional AV requirements Arranging suitable rehearsal time Providing a volunteer to help with your feature As much mutual support as we can muster We’re here to support you in any way we can. Presentations form the structural core of a conference, and we want ours to be as strong as it can be. Applications will be open until February 17, 2017
https://medium.com/the-%C3%BAll-blog/%C3%BAll-2017-call-for-proposals-78c38480da90
[]
2017-01-31 22:22:37.963000+00:00
['Storytelling', 'Presentations', 'iOS', 'Apple', 'Conference']
The Bad Writing Habits We Learned in School: And Advice to Forget Them
Photo by Evan Leith on Unsplash The Bad Writing Habits We Learned in School: And Advice to Forget Them ‘Good habits make time your ally. Bad habits make time your enemy.’ Intro: Why Term Papers Need to Go If you’re an undergraduate student right now, you are probably consuming and sharing more forms of communication than at any time in history: texts, blogs, Instagram, tweets, TikTok, email, news. You are a node in a fast-moving network of incoming and outgoing communication of all kinds. In a society in which most of us are immersed in massive amounts of information, sociology professor Deborah Cohan writes, the power of writing lies not merely in the ability to absorb and recycle endless amounts of information, but more so: “to appreciate essence, nuance, and depth, to distill and focus on important points without convenient guides to translate all the ideas for [us].” It’s with this ethos of what writing enables us to do that Cohan calls for the end of a modern staple of higher education: the end-of-semester, final ‘term paper.’ In her essay, The Case Against the Term Paper, Cohan writes:
https://medium.com/swlh/the-bad-writing-habits-we-learned-in-school-and-advice-to-forget-them-7662e7517e61
['Gavin Lamb']
2020-07-20 22:22:45.146000+00:00
['Writing Tips', 'Education', 'Productivity', 'Learning', 'Writing']
Machine Learning (ML) Algorithms For Beginners with Code Examples in Python
Machine Learning (ML) Algorithms For Beginners with Code Examples in Python Best machine learning algorithms for beginners with coding samples in Python. Launch the coding samples with Google Colab Author(s): Pratik Shukla, Roberto Iriondo, Sherwin Chen Last updated, June 23, 2020 Machine learning (ML) is rapidly changing the world, from diverse types of applications and research pursued in industry and academia. Machine learning is affecting every part of our daily lives. From voice assistants using NLP and machine learning to make appointments, check our calendar, and play music, to programmatic advertisements — that are so accurate that they can predict what we will need before we even think of it. More often than not, the complexity of the scientific field of machine learning can be overwhelming, making keeping up with “what is important” a very challenging task. However, to make sure that we provide a learning path to those who seek to learn machine learning, but are new to these concepts. In this article, we look at the most critical basic algorithms that hopefully make your machine learning journey less challenging. Any suggestions or feedback is crucial to continue to improve. Please let us know in the comments if you have any. 📚 Check out our tutorial diving into simple linear regression with math and Python. 📚 Index Introduction to Machine Learning. Major Machine Learning Algorithms. Supervised vs. Unsupervised Learning. Linear Regression. Multivariable Linear Regression. Polynomial Regression. Exponential Regression. Sinusoidal Regression. Logarithmic Regression. What is machine learning? A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. ~ Tom M. Mitchell [1] Machine learning behaves similarly to the growth of a child. As a child grows, her experience E in performing task T increases, which results in higher performance measure (P). For instance, we give a “shape sorting block” toy to a child. (Now we all know that in this toy, we have different shapes and shape holes). In this case, our task T is to find an appropriate shape hole for a shape. Afterward, the child observes the shape and tries to fit it in a shaped hole. Let us say that this toy has three shapes: a circle, a triangle, and a square. In her first attempt at finding a shaped hole, her performance measure(P) is 1/3, which means that the child found 1 out of 3 correct shape holes. Second, the child tries it another time and notices that she is a little experienced in this task. Considering the experience gained (E), the child tries this task another time, and when measuring the performance(P), it turns out to be 2/3. After repeating this task (T) 100 times, the baby now figured out which shape goes into which shape hole. So her experience (E) increased, her performance(P) also increased, and then we notice that as the number of attempts at this toy increases. The performance also increases, which results in higher accuracy. Such execution is similar to machine learning. What a machine does is, it takes a task (T), executes it, and measures its performance (P). Now a machine has a large number of data, so as it processes that data, its experience (E) increases over time, resulting in a higher performance measure (P). 
So after going through all the data, our machine learning model’s accuracy increases, which means that the predictions made by our model will be very accurate. Another definition of machine learning by Arthur Samuel: Machine Learning is the subfield of computer science that gives “computers the ability to learn without being explicitly programmed.” ~ Arthur Samuel [2] Let us try to understand this definition: It states “learn without being explicitly programmed” — which means that we are not going to teach the computer with a specific set of rules, but instead, what we are going to do is feed the computer with enough data and give it time to learn from it, by making its own mistakes and improve upon those. For example, We did not teach the child how to fit in the shapes, but by performing the same task several times, the child learned to fit the shapes in the toy by herself. Therefore, we can say that we did not explicitly teach the child how to fit the shapes. We do the same thing with machines. We give it enough data to work on and feed it with the information we want from it. So it processes the data and predicts the data accurately. Why do we need machine learning? For instance, we have a set of images of cats and dogs. What we want to do is classify them into a group of cats and dogs. To do that we need to find out different animal features, such as: How many eyes does each animal have? What is the eye color of each animal? What is the height of each animal? What is the weight of each animal? What does each animal generally eat? We form a vector on each of these questions’ answers. Next, we apply a set of rules such as: If height > 1 feet and weight > 15 lbs, then it could be a cat. Now, we have to make such a set of rules for every data point. Furthermore, we place a decision tree of if, else if, else statements and check whether it falls into one of the categories. Let us assume that the result of this experiment was not fruitful as it misclassified many of the animals, which gives us an excellent opportunity to use machine learning. What machine learning does is process the data with different kinds of algorithms and tells us which feature is more important to determine whether it is a cat or a dog. So instead of applying many sets of rules, we can simplify it based on two or three features, and as a result, it gives us a higher accuracy. The previous method was not generalized enough to make predictions. Machine learning models helps us in many tasks, such as: Object Recognition Summarization Prediction Classification Clustering Recommender systems And others What is a machine learning model? A machine learning model is a question/answering system that takes care of processing machine-learning related tasks. Think of it as an algorithm system that represents data when solving problems. The methods we will tackle below are beneficial for industry-related purposes to tackle business problems. For instance, let us imagine that we are working on Google Adwords’ ML system, and our task is to implementing an ML algorithm to convey a particular demographic or area using data. Such a task aims to go from using data to gather valuable insights to improve business outcomes. Major Machine Learning Algorithms: 1. Regression (Prediction) We use regression algorithms for predicting continuous values. Regression algorithms: Linear Regression Polynomial Regression Exponential Regression Logistic Regression Logarithmic Regression 2. 
Classification We use classification algorithms for predicting a set of items’ class or category. Classification algorithms: K-Nearest Neighbors Decision Trees Random Forest Support Vector Machine Naive Bayes 3. Clustering We use clustering algorithms for summarization or to structure data. Clustering algorithms: K-means DBSCAN Mean Shift Hierarchical 4. Association We use association algorithms for associating co-occurring items or events. Association algorithms: Apriori 5. Anomaly Detection We use anomaly detection for discovering abnormal activities and unusual cases like fraud detection. 6. Sequence Pattern Mining We use sequential pattern mining for predicting the next data events between data examples in a sequence. 7. Dimensionality Reduction We use dimensionality reduction for reducing the size of data to extract only useful features from a dataset. 8. Recommendation Systems We use recommenders algorithms to build recommendation engines. Examples: Netflix recommendation system. A book recommendation system. A product recommendation system on Amazon. Nowadays, we hear many buzz words like artificial intelligence, machine learning, deep learning, and others. What are the fundamental differences between Artificial Intelligence, Machine Learning, and Deep Learning? 📚 Check out our editorial recommendations on the best machine learning books. 📚 Artificial Intelligence (AI): Artificial intelligence (AI), as defined by Professor Andrew Moore, is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence [4]. These include: Computer Vision Language Processing Creativity Summarization Machine Learning (ML): As defined by Professor Tom Mitchell, machine learning refers to a scientific branch of AI, which focuses on the study of computer algorithms that allow computer programs to automatically improve through experience [3]. These include: Classification Neural Network Clustering Deep Learning: Deep learning is a subset of machine learning in which layered neural networks, combined with high computing power and large datasets, can create powerful machine learning models. [3] Neural network abstract representation | Photo by Clink Adair via Unsplash Why do we prefer Python to implement machine learning algorithms? Python is a popular and general-purpose programming language. We can write machine learning algorithms using Python, and it works well. The reason why Python is so popular among data scientists is that Python has a diverse variety of modules and libraries already implemented that make our life more comfortable. Let us have a brief look at some exciting Python libraries. Numpy: It is a math library to work with n-dimensional arrays in Python. It enables us to do computations effectively and efficiently. Scipy: It is a collection of numerical algorithms and domain-specific tool-box, including signal processing, optimization, statistics, and much more. Scipy is a functional library for scientific and high-performance computations. Matplotlib: It is a trendy plotting package that provides 2D plotting as well as 3D plotting. Scikit-learn: It is a free machine learning library for python programming language. It has most of the classification, regression, and clustering algorithms, and works with Python numerical libraries such as Numpy, Scipy. Machine learning algorithms classify into two groups : Supervised Learning algorithms Unsupervised Learning algorithms I. 
Supervised Learning Algorithms: Goal: Predict class or value label. Supervised learning is a branch of machine learning(perhaps it is the mainstream of machine/deep learning for now) related to inferring a function from labeled training data. Training data consists of a set of *(input, target)* pairs, where the input could be a vector of features, and the target instructs what we desire for the function to output. Depending on the type of the *target*, we can roughly divide supervised learning into two categories: classification and regression. Classification involves categorical targets; examples ranging from some simple cases, such as image classification, to some advanced topics, such as machine translations and image caption. Regression involves continuous targets. Its applications include stock prediction, image masking, and others- which all fall in this category. To illustrate the example of supervised learning below | Source: Photo by Shirota Yuri, Unsplash To understand what supervised learning is, we will use an example. For instance, we give a child 100 stuffed animals in which there are ten animals of each kind like ten lions, ten monkeys, ten elephants, and others. Next, we teach the kid to recognize the different types of animals based on different characteristics (features) of an animal. Such as if its color is orange, then it might be a lion. If it is a big animal with a trunk, then it may be an elephant. We teach the kid how to differentiate animals, this can be an example of supervised learning. Now when we give the kid different animals, he should be able to classify them into an appropriate animal group. For the sake of this example, we notice that 8/10 of his classifications were correct. So we can say that the kid has done a pretty good job. The same applies to computers. We provide them with thousands of data points with its actual labeled values (Labeled data is classified data into different groups along with its feature values). Then it learns from its different characteristics in its training period. After the training period is over, we can use our trained model to make predictions. Keep in mind that we already fed the machine with labeled data, so its prediction algorithm is based on supervised learning. In short, we can say that the predictions by this example are based on labeled data. Example of supervised learning algorithms : Linear Regression Logistic Regression K-Nearest Neighbors Decision Tree Random Forest Support Vector Machine II. Unsupervised Learning: Goal: Determine data patterns/groupings. In contrast to supervised learning. Unsupervised learning infers from unlabeled data, a function that describes hidden structures in data. Perhaps the most basic type of unsupervised learning is dimension reduction methods, such as PCA, t-SNE, while PCA is generally used in data preprocessing, and t-SNE usually used in data visualization. A more advanced branch is clustering, which explores the hidden patterns in data and then makes predictions on them; examples include K-mean clustering, Gaussian mixture models, hidden Markov models, and others. Along with the renaissance of deep learning, unsupervised learning gains more and more attention because it frees us from manually labeling data. In light of deep learning, we consider two kinds of unsupervised learning: representation learning and generative models. 
Representation learning aims to distill a high-level representative feature that is useful for some downstream tasks, while generative models intend to reproduce the input data from some hidden parameters. To illustrate the example of unsupervised learning below | Source: Photo by Jelleke Vanooteghem, Unsplash Unsupervised learning works as it sounds. In this type of algorithms, we do not have labeled data. So the machine has to process the input data and try to make conclusions about the output. For example, remember the kid whom we gave a shape toy? In this case, he would learn from its own mistakes to find the perfect shape hole for different shapes. But the catch is that we are not feeding the child by teaching the methods to fit the shapes (for machine learning purposes called labeled data). However, the child learns from the toy’s different characteristics and tries to make conclusions about them. In short, the predictions are based on unlabeled data. Examples of unsupervised learning algorithms:
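To make the supervised versus unsupervised distinction concrete before the list of algorithms, here is a minimal sketch using scikit-learn's bundled Iris data. It is a generic toy example, and the dataset choice and hyperparameters are arbitrary assumptions for illustration.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Supervised learning: train on labeled data, then predict class labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Supervised accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Unsupervised learning: ignore the labels and let K-means find groupings.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X)
print("Cluster sizes:", [int((clusters == k).sum()) for k in range(3)])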
https://medium.com/towards-artificial-intelligence/machine-learning-algorithms-for-beginners-with-python-code-examples-ml-19c6afd60daa
['Towards Ai Team']
2020-12-09 23:51:03.187000+00:00
['Technology', 'Artificial Intelligence', 'Education', 'Science', 'Innovation']
Space Science with Python — A Data Science Tutorial Series
Space Science with Python Python is an amazing language for data science and machine learning and has a lot of great community driven Open Source libraries and projects. How can we use Python to explore and analyse the wonders and mysteries of Space? Photo by Shot by Cerqueira on Unsplash Near-Earth Objects, Meteors, ESA’s Rosetta/Philae mission to a comet, the spacecraft Cassini exploring the ring worlds of Saturn … I worked in great projects during my academic studies and later as a doctorate student in the university. As a modern astrophysicist or space scientist, the major work is done in front of the screen: data exploration, data storage and maintenance, as well as the scientific analysis and publication of fascinating results and insights. I learned a lot during these times and I am very grateful for that. Grateful for the opportunities and the time to explore cosmic wonders at academia’s final frontier. I used data scientific methods, machine learning and neural network architectures that can be developed and used by virtually anybody thanks to great publication sites, passionate users and a strong open source community. Now, I want to create a link between Data Science and Space Science. On Medium, Twitter, Reddit or at my Public Outreach presentations: People are amazed and fascinated by our cosmos! And I want to contribute something back for the community: A tutorial series that links Space Science with Python. Overview This article is an overview and provides short summaries of all articles that I publish here on Medium. This article will be updated continuously and provides a table of contents. All code examples are uploaded on my GitHub repository. Bookmark it to get future updates. The very first article contains no coding parts. It was written and published as an initial introduction. Setup of a virtual environment for Python. Installation of the NASA toolkit SPICE, respectively the Python Wrapper spiceypy. Explanation of some so-called SPICE kernels. Computation of the Solar System Barycentre with respect to the Sun (using SPICE). The tutorial shows that the gravitational centre of our Solar System moves within and outside the Sun. Consequently, the Sun “wobbles” around this common centre. The outer gas giants (Jupiter, Saturn, Uranus and Neptune) are the major gravitational influencers in our Solar System. The computations and visualisations of miscellaneous angular parameters reveal that these planets are the main reason of the movement of the Solar System Barycentre as introduced in tutorial session 2. April / May 2020: The Venus is visible to the naked eye in the evening; right after sunset our neighbor planet appears as a star above the horizon. Close angular distances with the Moon create a nice photo shoot. Here, the tutorial explains how to compute the angular distance between the Venus, Moon and Sun to determine optimal observation parameters (using SPICE). A tutorial that explains a core analysis and visualisation part of astronomy and space science: maps. SPICE and matplotlib are used to explain, compute, draw and interpret these maps. Further, two different reference systems are explained that are used in future sessions, too. SPICE provides so-called kernels that allow one to determine the position and velocity vector of planets, asteroids or spacecraft. The vector computation procedure is shown for the dwarf planet Ceres. Based on the position and velocity vector the corresponding orbital elements are calculated. 
Further, it is shown how close the asteroid 1997BQ passed by Earth in May 2020. Comets are a remnant of the formation of our Solar System. Hundreds are known, documented and freely available as a dataset. In this session, an SQLite database is created with data from the Minor Planet Center and some parameters are derived using SPICE. Further, the Great Comet Hale-Bopp is used as an example to derive positional information. Two types of comets are known: P and C Types. The different statistical variations are shown and discussed as well as their possible source of origin. P Type comets are dynamically associated with Jupiter. This dynamical link is described with the Tisserand Parameter that is introduced and explained. A data scientific analysis of the distribution reveals the significant dynamical differences between C and P Type comets. This tutorial session is a supplementary article. It describes how one can create animations of the multi-dimensional Tisserand Parameter. These kinds of visualisations help one to understand multi-input functions more easily. Online supplementary materials are often provided in publications to support the reader with additional information. Bias effects are present in virtually any statistical or data scientific research topic. Smaller, and thus fainter, comets are more difficult to detect, and their detectability scales with their activity and their distance to the Sun. ESA’s Rosetta/Philae mission explored the comet 67P/Churyumov–Gerasimenko from 2014 to 2016. During its 2-year mission the camera instruments took several images of the comet’s core and derived a 3D shape model. With the package visvis, a Python renderer is programmed to interactively explore this icy world. There are several sources to predict the trajectory of a comet (here: 67P). We established an SQLite database with data from the Minor Planet Center (see part 7) and we learned how to derive data from the SPICE kernels. Both sources provide different and also non-static results that are described and compared here. Part 13 has shown that the orbital elements of 67P from the SPICE kernels change for different Ephemeris times. One possible reason: 67P is a P Type and Jupiter-Family Comet (part 9) that is being influenced significantly by Jupiter. With the support of SPICE we can show the gravitational influence of the gas giant by computing a simple 2-body solution. A few weeks ago (end of May / beginning of June 2020) ESA’s Solar Orbiter crossed parts of the dust and ion tail of comet ATLAS. What kind of geometric requirements must be fulfilled to be sure that the spacecraft crossed the ion tail? Using SPICE and the most recent kernels of the spacecraft helps us to answer this question. Brightness, flux density, irradiance, radiance … there are a lot of confusing words and definitions to describe light sources. In astronomy and space science one uses another definition: Magnitude. We create in this basic concept tutorial some functions that are used for future sessions (e.g., brightness computation of asteroids or meteors). It is the 30th of June 2020: Asteroid Day! Today we start with some asteroid-related articles, beginning with an asteroid that passed by at a distance of 3 Lunar Distances: 2020 JX1. Computing the position of an asteroid is not as simple as shown in the past; we need the covariance matrix to determine a possible solution space of the asteroid's location.
A distance of 3 Lunar Distances was small in cosmic scales, but large enough to miss us. The error-bars in the orbit solution space (see last session) propagate through the computation. Consequently, the sky coordinates of the asteroid are a solution space, too! A 2D Kernel Density Estimator will help us to determine an area of uncertainty in the sky, to answer the question: Where could the asteroid be? The brightness of asteroids can be computed by using the so called H-G magnitude function. An empirically determined equation that depends on the distance between the asteroid and the Earth and Sun, the phase angle, its absolute magnitude and the slope parameter. What are the special features of this equation? Let’s see … Tutorial #20 links several topics together: distance and phase angle determination, the apparent magnitude, sky coordinates and so on. The task: Visualising the path of Ceres in the sky for the year 2020 (considering its brightness trend, too). After this article we are good to go to start our first space science project about asteroids and Near-Earth Objects. Science Project #1 The first part of the project is an introduction into the Near-Earth Object (NEO) topic and does not include any coding yet. The structure of the upcoming weeks is being described. Our project shall lead to a Python library that can be later used by amateur and professional astronomers and scientists alike. To ensure a credible and sustainable software package the library shall be written in a Test Driven Development (TDD) coding framework. What is TDD exactly? We will figure it out in this session. A generic TDD example is provided in this step-by-step guide. Using a simple equation (computation of the enclosed angle between 2 vectors) we will try to find a solution based on example for all required computational steps.
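Several of the sessions above revolve around the Tisserand Parameter with respect to Jupiter. As a hedged illustration, the sketch below implements the standard textbook formula; it is not the author's tutorial code, and the orbital elements used for 67P are rough approximations.

import numpy as np

def tisserand_jupiter(a_au, ecc, incl_deg, a_jupiter_au=5.204):
    # Tisserand parameter of a small body with respect to Jupiter:
    # T = a_J / a + 2 * cos(i) * sqrt((a / a_J) * (1 - e^2))
    incl_rad = np.radians(incl_deg)
    return a_jupiter_au / a_au + 2.0 * np.cos(incl_rad) * np.sqrt((a_au / a_jupiter_au) * (1.0 - ecc ** 2))

# Rough orbital elements of 67P/Churyumov-Gerasimenko: a ~ 3.46 AU, e ~ 0.64, i ~ 7 deg.
print("Tisserand parameter of 67P: %.2f" % tisserand_jupiter(3.46, 0.64, 7.0))
# Values between 2 and 3 indicate a Jupiter-Family Comet.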
https://medium.com/space-science-in-a-nutshell/space-science-with-python-a-data-science-tutorial-series-57ad95660056
['Thomas Albin']
2020-10-05 14:42:36.278000+00:00
['Python', 'Data Science', 'Science', 'Space', 'Programming']
A Beginner’s Look at Kaggle
The above heatmaps show the strength of association between our variables. While there is no rigid standard for "Highly Associated" or "Weakly Associated", we will use a cut-off value of |0.1| between our independent variables and survival. We will likely drop features whose association is lower than |0.1|. This is an entirely arbitrary guess, and I may return to raise or lower the bar later (in fact, I decided to keep Age after noticing improved performance when I did). For now, the feature that meets the criterion for dropping is SibSp. Additionally, I am choosing to drop Name, Ticket and Cabin, mostly on a hunch that they don't add much. It should be noted that correlation between independent (predictor) variables can mean redundant information. This can cause a drop in performance in some algorithms. Some might choose to drop highly correlated predictors. I did not take the time to do that, but you might try it at home!

todrop = ['SibSp', 'Ticket', 'Cabin', 'Name']
train_df = train_df.drop(todrop, axis=1)

Let's take a look at our transformed data frame, replete with new features, categories converted to numerical data, and old features dropped:

Setup for Machine Learning: During this phase, we will begin to format our data for feeding into a machine learning algorithm. We will then use this formatted data to get a picture of what a few different models can do for us, and pick the best one. This phase is broken into the following parts: Train/Test Split, Normalize Data of each split, Impute missing values. Let's go.

Train/Test Split
We will split our data once into training and testing sets. Within the training set, we will use stratified k-fold cross validation to find the average performance of our models. The test set will not be touched until after we have fully tuned each of our candidate models using the training data and k-fold cross validation. Once training and tuning is complete, we will compare the results of each model on the held-out test set. The one that performs the best will be used for the competition.

# Split dependent and independent variables
X = train_df.drop(['Survived'], axis = 1)
Y = train_df.loc[:, 'Survived']

# Split data into training and validation sets
x_train, x_test, y_train, y_test = model_selection.train_test_split(X, Y, test_size=0.2, random_state=333)

Normalizing the Data
Some Machine Learning models require all of our predictors to be on the same scale, while others do not. Most notably, models like Logistic Regression and SVM will probably benefit from scaling, while decision trees will simply ignore scaling. Because we are going to be looking at a mixed bag of algorithms, I'm going to go ahead and scale our data.

# We normalize the training and testing data separately so as to avoid data leaks. Ask at the end!
x_train = pd.DataFrame(pre.scale(x_train), columns=x_train.columns, index=x_train.index)
x_test = pd.DataFrame(pre.scale(x_test), columns=x_test.columns, index=x_test.index)
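As a side note on the data-leak comment above: a commonly used alternative (shown here only as a hedged sketch, not as the approach taken in this walkthrough) is to fit the scaler on the training split alone and reuse its parameters for the test split, so that no statistics from the test data influence the transformation:

from sklearn import preprocessing as pre  # assumed to match the 'pre' alias used above

scaler = pre.StandardScaler()
# Fit on the training split only ...
x_train = pd.DataFrame(scaler.fit_transform(x_train), columns=x_train.columns, index=x_train.index)
# ... and apply the same mean and scale to the test split
x_test = pd.DataFrame(scaler.transform(x_test), columns=x_test.columns, index=x_test.index)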
Imputing Missing Data
You might recall that there were a significant number of missing Age values in our data. Let's fill this in with the median age:

# Again, applying changes to the now separate datasets helps us avoid data leaks.
x_train.loc[x_train.Age.isnull(), 'Age'] = x_train.loc[:, 'Age'].median()
x_test.loc[x_test.Age.isnull(), 'Age'] = x_test.loc[:, 'Age'].median()

Let's make sure our missing data is filled in:

x_train.info()

# Output:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 712 entries, 466 to 781
Data columns (total 11 columns):
 #   Column      Non-Null Count  Dtype
---  ------      --------------  -----
 0   Pclass      712 non-null    float64
 1   Sex         712 non-null    float64
 2   Age         712 non-null    float64
 3   Parch       712 non-null    float64
 4   Fare        712 non-null    float64
 5   Embarked    712 non-null    float64
 6   Title       712 non-null    float64
 7   FamilySize  712 non-null    float64
 8   Alone       712 non-null    float64
 9   LName       712 non-null    float64
 10  NameLength  712 non-null    float64
dtypes: float64(11)

Now we see that each and every variable that we chose to keep has 712 valid data entries.

Model Selection
Now that we have prepared our data, we want to look at different options available to us for solving classification problems. Some common ones are: K-Nearest Neighbors, Support Vector Machines, Decision Trees, Logistic Regression. We will train and tune each of these models on our training data by way of k-fold cross-validation. When complete, we will compare the tuned models' performance on a held-out test set.

Training and Comparing Base Models:
First, we want to get a feel for each model's performance before tuning. We will write two functions to help us describe our results. The first will evaluate the model several times over random splits in the data, and return the average performance as a dictionary. The second will simply nicely print our dictionary.

# A function that evaluates each model and gives us the results:
def kfold_evaluate(model, folds=5):
    eval_dict = {}
    accuracy = 0
    f1 = 0
    AUC = 0
    skf = model_selection.StratifiedKFold(n_splits=folds)

    # perform k splits on the training data.
    for train_idx, test_idx in skf.split(x_train, y_train):
        xk_train, xk_test = x_train.iloc[train_idx], x_train.iloc[test_idx]
        yk_train, yk_test = y_train.iloc[train_idx], y_train.iloc[test_idx]

        # Test performance on this fold:
        model.fit(xk_train, yk_train)
        y_pred = model.predict(xk_test)
        report = metrics.classification_report(yk_test, y_pred, output_dict=True)

        # Gather performance metrics for output
        prob_array = model.predict_proba(xk_test)
        fpr, tpr, huh = metrics.roc_curve(yk_test, model.predict_proba(xk_test)[:,1])
        auc = metrics.auc(fpr, tpr)
        accuracy += report['accuracy']
        f1 += report['macro avg']['f1-score']
        AUC += auc

    # Average performance metrics over the k folds
    measures = np.array([accuracy, f1, AUC])
    measures = measures/folds

    # Add metric averages to dictionary and return.
    eval_dict['Accuracy'] = measures[0]
    eval_dict['F1 Score'] = measures[1]
    eval_dict['AUC'] = measures[2]
    eval_dict['Model'] = model
    return eval_dict

# a function to pretty print our dictionary of dictionaries:
def pprint(web, level):
    for k, v in web.items():
        if isinstance(v, dict):
            print('\t'*level, f'{k}: ')
            level += 1
            pprint(v, level)
            level -= 1
        else:
            print('\t'*level, k, ": ", v)

Putting our kfold evaluation function to use:

# Perform evaluation on each model:
evals = {}
evals['KNN'] = kfold_evaluate(KNeighborsClassifier())
evals['Logistic Regression'] = kfold_evaluate(LogisticRegression(max_iter=1000))
evals['Random Forest'] = kfold_evaluate(RandomForestClassifier())
evals['SVC'] = kfold_evaluate(SVC(probability=True))

# Plot results for visual comparison:
result_df = pd.DataFrame(evals)
result_df.drop('Model', axis=0).plot(kind='bar', ylim=(0.7, 0.9)).set_title("Base Model Performance")
plt.xticks(rotation=0)
plt.show()

Base Model Summary
It appears that we have a clear winner in our Random Forest classifier.

Hyper-parameter Tuning:
Let's tune up our current champion's hyper-parameters in hopes of eking out a little bit more performance. We will use scikit-learn's RandomizedSearchCV, which has some speed advantages over using an exhaustive GridSearchCV. Our first step is to create the grid of parameters over which we will randomly search for the best settings:

# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]

# Create the random grid from above parameters
random_grid = {'n_estimators': n_estimators,
               'max_features': max_features,
               'max_depth': max_depth,
               'min_samples_split': min_samples_split,
               'min_samples_leaf': min_samples_leaf,
               'bootstrap': bootstrap}
pprint(random_grid, 0)

#Output:
n_estimators : [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
max_features : ['auto', 'sqrt']
max_depth : [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, None]
min_samples_split : [2, 5, 10]
min_samples_leaf : [1, 2, 4]
bootstrap : [True, False]

Next, we want to create our RandomizedSearchCV object, which will use the grid we just created above. It will randomly sample 10 combinations of parameters, test them over 3 folds and return the set of parameters that performed the best on our training data.

# create RandomizedSearchCV object
searcher = model_selection.RandomizedSearchCV(
    estimator = RandomForestClassifier(),
    param_distributions = random_grid,
    n_iter = 10,    # Number of parameter settings to sample
    cv = 3,         # Number of folds for k-fold validation
    n_jobs = -1,    # Use all processors to compute in parallel
    random_state=0  # For reproducible results
)

# Look for the best parameters
search = searcher.fit(x_train, y_train)
params = search.best_params_
params

#Output:
{'n_estimators': 1600,
 'min_samples_split': 10,
 'min_samples_leaf': 4,
 'max_features': 'auto',
 'max_depth': 30,
 'bootstrap': False}

After performing our parameter tuning, we can verify whether the parameters provided by the search actually improve on the base model.
Let's compare the performance of the two models before and after tuning.

tuning_eval = {}
tuned_rf = RandomForestClassifier(**params)
basic_rf = RandomForestClassifier()

tuning_eval['Tuned'] = kfold_evaluate(tuned_rf)
tuning_eval['Basic'] = kfold_evaluate(basic_rf)

result_df = pd.DataFrame(tuning_eval)
result_df.drop('Model', axis=0).plot(kind='bar', ylim=(0.7, 0.9)).set_title("Tuning Performance")
plt.xticks(rotation=0)
plt.show()
result_df

Final Steps:
Now that we have chosen and tuned a Random Forest classifier, we want to test it on data it has never before seen. This will tell us how we might expect the model to perform in the future, on new data. It's time to use that held out test set. Then, we will combine the test and training data, and re-fit our model to the combined data set, hopefully giving it the greatest chance of success on the unlabeled data from the competition. Finally, we will make our predictions on the unlabeled data for submission to the competition.

Final Test on Held Out Data

# Get tuned model predictions on held out data
y_pred = tuned_rf.predict(x_test)

# Compare predictions to actual answers and show performance
results = metrics.classification_report(y_test, y_pred,
                                        labels = [0, 1],
                                        target_names = ['Died', 'Survived'],
                                        output_dict = True)
pprint(results, 0)

And here is how our model performed:

Died:
    precision : 0.7815126050420168
    recall : 0.8532110091743119
    f1-score : 0.8157894736842106
    support : 109
Survived:
    precision : 0.7333333333333333
    recall : 0.6285714285714286
    f1-score : 0.6769230769230768
    support : 70
accuracy : 0.7653631284916201
macro avg:
    precision : 0.757422969187675
    recall : 0.7408912188728702
    f1-score : 0.7463562753036437
    support : 179
weighted avg:
    precision : 0.7626715490665541
    recall : 0.7653631284916201
    f1-score : 0.761484178861421
    support : 179

It looks like we may have experienced some over-fitting. Our model's performance on the test data is roughly 7–9% lower across the board, but we should expect that our model performs about this well on real world data that it has never before seen.

Combine Training and Testing Datasets for Final Model Fit
Now that we have ascertained that our tuned model performs with about 76% accuracy and has an f1-score of 0.74 on new data, we can proceed to train our model on the entire labeled training set. More (good) data is almost always better for an algorithm.

X = pd.concat([x_train, x_test], axis=0).sort_index()
Y = pd.concat([y_train, y_test], axis=0).sort_index()
tuned_rf.fit(X, Y)

Format and Standardize Unlabeled Data
Now that our model has been completely fitted on the training data, it's time to get ready to make the predictions that we will submit to the competition. We need to transform our unlabeled competition data in the same manner as when we were formatting our training data. This includes encoding categorical variables, dropping the same features and normalization. The idea here is consistency. What we did to the data that we trained the model on, we need to do to the data we will use to make our final predictions.
# Feature Engineering:
test_df['Title'] = test_df.Name.str.extract(r'([A-Za-z]+)\.')
test_df['LName'] = test_df.Name.str.extract(r'([A-Za-z]+),')
test_df['NameLength'] = test_df.Name.apply(len)
test_df['FamilySize'] = 1 + test_df.SibSp + test_df.Parch
test_df['Alone'] = test_df.FamilySize.apply(lambda x: 1 if x==1 else 0)
test_df.Title = test_df.Title.map(title_dict)

# Feature Selection
test_df = test_df.drop(todrop, axis=1)

# Imputation of missing age and fare data
test_df.loc[test_df.Age.isna(), 'Age'] = test_df.Age.median()
test_df.loc[test_df.Fare.isna(), 'Fare'] = test_df.Fare.median()

# encode categorical data
for i in test_df.columns:
    if test_df[i].dtype == 'object':
        test_df[i], _ = pd.factorize(test_df[i])

# center and scale data
test_df = pd.DataFrame(pre.scale(test_df), columns=test_df.columns, index=test_df.index)

# ensure columns of unlabeled data are in same order as training data.
test_df = test_df[x_test.columns]
test_df

Make Final Predictions and Common Sense Check:
Roughly 32 percent of the passengers aboard the Titanic lived. We will do a last, common sense check to see if our algorithm predicts roughly the same distribution of survivals. Since the Survived variable with value 1 implies survival, we can simply add all instances of survival and divide by the total number of passengers to get a rough idea of our predicted distribution. Keep in mind, the competition organizers could have been tricky and given us uneven distributions for training and testing. In that case, this might not work, but I'm assuming they did not.

# Make final predictions
final = tuned_rf.predict(test_df)

# Check the probability of survival according to our predictions. It should be roughly 32% (we get 36.6%, which is a bit optimistic)
final.sum()/len(final)

# Get our predictions in the competition rules format:
submission = pd.DataFrame({'PassengerId':test_df.index, 'Survived':final})

# Output our submission data to a .csv file:
submission.to_csv('submission2.csv', index=False)

Summary
My focus was not to win a competition, but to learn a way of thinking. After all, if you are like me, you aspire to become a data scientist. Therefore, machine learning experiments should be rigorous and repeatable, but just as important, the process should be uniquely defined by the questions being asked and the data on hand to answer those questions. However, if you are wondering how well this set-up performed, it achieved an accuracy of 77%. That's far from perfect! Again, if you have any feedback, I would love to hear your questions, comments and critiques of my process. Writing this article is part of my own learning process, and I hope you join in. Thanks!
https://medium.com/analytics-vidhya/a-beginners-look-at-kaggle-b868ceb2eccf
['Wesley Neill']
2020-05-09 17:43:23.183000+00:00
['Machine Learning', 'Data Science', 'Python', 'Begginer', 'Kaggle']
How to Gain Wisdom? Read Some of Aesop’s Fables
How to gain wisdom? Read some of Aesop's fables Everyone wants to gain wisdom. Wisdom is one of the greatest qualities that human beings can possess. So, seek it, hold on to it, share it and treasure it. Why? Because it will help you navigate through choppy waters, it will lift you up from the depths of despair, it will help you put everything into perspective, and ultimately it will turn you into the hero of your own story. But, how do you gain wisdom? I suggest that you start by reading some of Aesop's fables. With the possible exception of the New Testament, no works written in Greek have been more widespread and better known than Aesop's fables. For more than 2500 years, Aesop's fables have been teaching people of all ages valuable life lessons in the most entertaining and cynical way. Want to hear a rags-to-riches story? Meet Aesop the Wise-Fool Aesop's life reads just like one of his fables. Aesop is believed to have lived between 620 and 560 BC. He began his life as a slave and was said to have been remarkably ugly with some physical deformities and, as if this wasn't enough misfortune, he was born mute, unable to utter a word. On the positive side, he was intelligent, resourceful and kind. His life took a turn for the better after he rescued a priestess of the goddess Isis, who had strayed from the road and become lost. From Slavery to Greatness — Meet Aesop who is also known as the Wise-Fool His divine reward for this act of kindness was the gift of speech and a remarkable ability to conceive and elaborate wise tales in Greek. His talent for storytelling, his wisdom and his wit literally set him free. Aesop acquired freedom, fame, and fortune in the same breath. Not bad for an ugly, deformed mute. He acquired some kind of celebrity status by hanging out with the most prominent and powerful personalities of the time, offering to solve their problems, giving them sound advice, and telling fables along the way. But in the end, it was his very success that led him to his ruin. Aesop made a good living as a storyteller travelling from city to city to perform his art, acquiring fame and fortune along the way. When he arrived in Delphi, he realized that his wit and sarcasm didn't work so well on the Delphian audience, who refused to give him any reward for his performance. Disappointed and vexed by this cold treatment, he lashed out and mocked the Delphians, comparing them to driftwood (something worthwhile at a distance but revealed to be worthless when seen close up). He should have stopped there but continued his tirade, realizing too late how outraged the Delphians were by his insults. They kicked him out of town, but unbeknown to him they hid a golden cup from the Temple of Apollo in his luggage, and as he was leaving the city he was arrested, charged, sentenced to death, and executed unceremoniously by being pushed off a cliff. Moral of the story: Storytelling and wit can set you free, but it can also make you fly off a cliff. Want to survive a bad situation? Follow the cat and not the fox I don't know what Aesop's final thought was before he died, but I am going to speculate that he may have recited to himself the Fox and the Cat fable that he himself wrote a little while before. The Fox and the Cat A fox was boasting to a cat of its clever devices for escaping its enemies. I have a whole bag of tricks, he said, which contains a hundred ways of escaping my enemies. I have only one, said the cat. But I can generally manage with that.
Just at that moment they heard the cry of a pack of hounds coming towards them, and the cat immediately scampered up a tree and hid herself in the boughs. This is my plan, said the cat. What are you going to do? The fox thought first of one way, then of another, and while he was debating, the hounds came nearer and nearer, and at last the fox in his confusion was caught up by the hounds and soon killed by the huntsmen. Miss Puss, who had been looking on, said, Better one safe way than a hundred on which you cannot reckon. Aesop Want to hear a truly inspirational tale? Meet The Peddlar of Swaffham Please allow me to take you to Norfolk, England in a small village called Swaffham, where you will hear the extraordinary tale of the Peddlar of Swaffham. The Pedlar of Swaffham “Tradition says that there lived in former times in Swaffham, Norfolk, a certain pedlar, who dreamed that if he went to London Bridge, and stood there, he would hear some very joyful news, which he at first slighted, but afterwards, his dream being doubled and trebled upon, he resolved to try the issue of it, and accordingly went to London and stood on the bridge there for two or three days, looking about him, but heard nothing that might yield him any comfort. At last, it happened that a shop keeper there, having noted his fruitless standing, seeing that he neither sold any wares nor asked any alms, went to him and most earnestly begged to know what he wanted there, or what his business was; to which the pedlar honestly answered that he had dreamed that if he came to London and stood there upon the bridge he should hear good news; at which the shop keeper lighted heartily, asking him if he was such a fool as to take a journey on such a silly errand, adding: “I will tell you country fellow, last night I dreamed that I was in Swaffham, in Norfolk, a place utterly unknown to me where I thought that behind a pedlar’s house in a certain orchard, and under a great oak tree, if I dig I should find a vast treasure! Now think you, says he, that I am such a fool to take such a long journey upon me upon the instigation of a silly dream? No. No. No. I am wiser. Therefore, good fellow, learn wit from me, and get you home and mind your business.” The pedlar observing his words what he had said he dreamed and knowing they concerned him, glad of such joyful news, went speedily home, and dug and found a prodigious great treasure, with which he grew exceedingly rich; and Swaffham Church being for the most part fallen down, he set on workmen and rectified it most sumptuously, at his own charges; and to this day, there is a statute therein with his pack at his back and his dog at his heels; and his memory is also preserved by the same form of picture in most of the old glass windows, taverns and ale houses of that town unto this day.” Source: Sidney Hartland — English Diary and Other Folks Tales (London, ca. 1890) which in turn refers to the Diary of Abraham Dela Pryme — 1699. Text available under Creative Commons CC-By-SA-4.0 License. In this video, I am taking you to Norfolk, UK, in the village of Swaffham, where the fable of the Peddlar of Swaffham originates. Come along with me … This English tale resonates with me because of its candour and the moral that emanates from it. My own reflection on this tale is that the moral of the story is as follows: Listen to your inner voice, your intuition, your gut feeling, your inner compass; Don’t be afraid to be ridiculed. Be patient. Have grit. Have resilience. 
Have faith; Have the courage to act upon your dream and remember that a thousand-mile journey starts with the first step; The journey will no doubt be marred with uncertainties, danger, surprises and some intriguing encounters; Pay attention. Listen to the signs. Listen to the messages, the tips you receive on your journey. There may be joyful news awaiting you; In the end, your courage, your efforts, your convictions will pay off and success will flow towards you, abundance will flow into your life; When prosperity falls upon you do not hold tight to the wealth you seek but keep a healthy vision of its power to heal and the power it will give you to fulfil your purpose and spread goodness all around you. And this, my Dear Companion, is Your Quest! If you liked this post you can follow me on Instagram, Pinterest, or Facebook, or you may also like: The audio version of my book “This Is your Quest ” is available. Feel free to check it out and use this special Promotion code Gain Access to Expert View — Subscribe to DDI Intel
https://medium.com/datadriveninvestor/how-to-gain-wisdom-read-some-of-aesops-fables-fcd011976313
['Joanne Reed']
2020-12-02 20:31:56.139000+00:00
['Storytelling', 'Self-awareness', 'Philosophy', 'Wisdom', 'Self Improvement']
Yes, Social Media Is Making You Depressed
Yes, Social Media Is Making You Depressed The science is in and it’s not surprising Photo by Tim Mossholder on Unsplash The science is in regarding social media, and the findings are not very surprising. The University of Arkansas just released a new study which is the first large, national study to show a link between social media use and depression over time. The connection is clear between social media and depression. Essentially, the more time you spend on social media, the more likely you are to become depressed. Participants who used social media for more than 300 minutes per day were 2.8 times more likely to become depressed than those who spent less than 120 minutes per day on social media. I know what you’re thinking. 300 minutes is a long time. Who spends 5 hours on social media per day? Well, this study was conducted in 2018, before the pandemic of 2020. According to this source, in 2020 we spend on average 3 hours per day on social media. Consider this fact: 3.96 billion people use social media today, which accounts for roughly half (51%) of the global population. So if we average 3 hours right now, 5 hours is actually not as high as it sounds. Consider that we are using social media on our phones, tablets, and computers. I think we could argue that YouTube is even a form of social media with the comments section. LinkedIn wasn’t considered in this study either. Here is a summary that includes some of the highlights of the study. Some Details Of The Study The study had a sample size of 1,000 individuals between the ages of 18 to 30 during 2018. The study focused on the following social media platforms: Facebook, Twitter, Reddit, Instagram, and SnapChat. “Social media is often curated to emphasize positive portrayals,” said Jaime Sidani, assistant professor of medicine at the University of Pittsburgh and co-author of the study. “This can be especially difficult for young adults who are at critical junctures in life-related to identity development and feel that they can’t measure up to the impossible ideals they are exposed to.” Here’s another powerful insight from Sidani. “Excess time on social media may displace forming more important in-person relationships, achieving personal or professional goals, or even simply having moments of valuable reflection.” Let’s be totally honest: this study is only confirming what we’ve already known. Social media makes it easier to compare ourselves to other people. In turn, once we do that, we often feel inadequate and lonely. Over time, feelings of isolation and loneliness lead to depression. I’ve heard it said that depression is often anger that is turned inward. That’s what it is like for me: depression is a mix of frustration, anger, and loneliness. Real connection — actually talking to people in a meaningful way — often helps with that. So what’s the solution? What’s the takeaway? Engage in a real conversation, not a superficial conversation. Yes, you can do this on social media. But it’s not very common. Be willing to be weird. Dig deeper. Have real conversations and don’t settle for the surface level ones. Intentionally engage with other people. Don’t settle for casual browsing and scrolling. That’s what the app makers want you to do. Schedule a “virtual coffee” and hang out with people. Honestly, this might be a great time to start a podcast where you intentionally connect with other people. I’m doing that with mine. I’m starting an interview once per month on my Write Your Book Podcast. Not into podcasting? Sick of Zoom too? 
I get that. How about a good old fashioned phone call? That’s right, that thing in your hand is good at making phone calls. Do that. Connect with others proactively, not reactively. You’ll be glad you did.
https://medium.com/the-partnered-pen/yes-social-media-is-making-you-depressed-40a68f7ba7a4
['Jim Woods']
2020-12-28 19:01:08.343000+00:00
['Self-awareness', 'Social Media', 'Self Improvement', 'Psychology', 'Life Lessons']
Pro Tips to Help You Get Started With Your Side Project
Pro Tips to Help You Get Started With Your Side Project Begin with solid foundations to keep the excitement kicking in Photo by Blake Meyer on Unsplash Day 1 — You bought your <fancy-name>.io domain name and promised yourself you would finish this product for good, this time. Day 56— <fancy-name>.io homepage is still a 404. You refuse to talk to anyone about what went wrong. How often do you start a project and give up on it? Justified by a lack of structure, discipline, or organization, this project that was once your best idea ever gets boring, messy, and doesn’t look as exceptional as when you had your first thought about it. In short, your project is not even exciting anymore, and you gave up. Here are some tips to help you stay motivated and keep focused on what matters until you ship.
https://medium.com/better-programming/pro-tips-to-help-you-get-started-with-your-side-project-15d01b76e0d8
['Thomas Guibert']
2020-07-15 15:37:27.012000+00:00
['Side Project', 'Technology', 'Software Engineering', 'Productivity', 'Programming']
A Simple Growth Marketing Plan For SaaS Startups
Instructions
Similar to step three, you should prioritize your new markets/channels based on the following criteria:
Profit margins
Market size
Control
Input/output time ratio
Scalability
Please make sure you look into ways to identify and prioritize new markets only when your startup gets to the "growth" stage, as mentioned in the introduction above.
Bonus: Get a copy of the SaaS growth marketing framework now!
The spreadsheet above is broken down into six tabs. Here's how to use each of them:
https://medium.com/better-marketing/a-simple-growth-marketing-plan-for-saas-startups-543ae2d339b2
['Nicolás Vargas']
2019-10-30 10:29:37.291000+00:00
['Startup', 'Marketing', 'Growth']
How I Read 69 Books in 2019 without Changing My Routine
How I Read 69 Books in 2019 without Changing My Routine And how you can do it too. Photo by Artem Beliaikin on Unsplash Even though we are (finally) reaching the end of the catastrophic year that is 2020, there is something I want to tell you about 2019. Yes, 2019, or 1 b.C. (before Corona). I have never read so many books in a year as I did in 2019. I fell one book short of seventy. I haven't taken part in any special reading challenge or struggled to squeeze reading time into a tight routine, and I don't do any form of speed reading (spoiler: I think speed reading fiction is complete nonsense). It felt so natural to finish this number of books in this timeframe that the result surprised me. I love to read. It's one of my favorite activities to pass the time — be it at home or in a waiting room. But that year, I applied some "tricks" that increased the number of works I finished. These aren't top-secret techniques, nor do they require any special ability or skill. These are easy steps you can do at home if you wish, like me, to read more books from now on. If you, too, want to read more books in 2020 and 2021, here are some tips:
https://medium.com/a-life-of-words/how-i-read-69-books-in-2019-without-changing-my-routine-accad3ca8875
['Morton Newberry']
2020-08-22 23:59:11.075000+00:00
['Books', 'Readinglist', 'Productivity', 'Audiobooks', 'Reading']
A Stark Look at Covid-19 and Racial Disparities
A Stark Look at Covid-19 and Racial Disparities We knew this would happen Image courtesy of author Life expectancy in the United States will almost certainly drop in 2020 due to Covid-19 deaths, extending a decline that frustrates economic demographers like David Bishai, MD, a professor at Johns Hopkins Bloomberg School of Public Health. After rising steadily for 50 years, U.S. life expectancy fell in 2015, 2016, and 2017. The drop wasn’t due to infectious disease or war or any biological limit to how long humans can live, but rather persistent systemic inequities and racial disparities in the health system, along with increases in deaths from opioids, alcohol, and suicide — the latter are what Bishai and other experts call “deaths of despair.” The ultimate story of Covid-19, written through the lens of history with all the final death statistics, will undoubtedly mirror what we already know from hard data on U.S. life expectancy: On average, the haves outlive the have-nots in a country where the responsibility for health care is placed largely on the individual, and life expectancy varies dramatically based on disparities deeply rooted in geography, wealth, and race. Globally, the United States ranks 50th in life expectancy, trailing such countries as Cuba, Chile, Slovenia, Portugal, France, and Italy. America is a full five years behind several of the leading nations. And in America, there are notable gaps in longevity. On average, white men outlive black men by about 4.5 years, and white women outlive black women by about 2.7 years. More glaring, life expectancy varies by a whopping 20.1 years in U.S. counties with the most favorable numbers — mostly on the coasts and scattered around a handful of other states, including Colorado — compared with counties at the bottom of the charts, which are mostly in the South or have large Native American populations. And things are not getting better for those at or near the bottom: Between 1980 and 2014, the worst counties made no progress, researchers concluded in the journal JAMA Internal Medicine. That geographic disparity disproportionately affects minorities, the poor, people with underlying health conditions like heart disease and diabetes, and people who often have little choice about working from home or even staying home when they’re sick. Then along came Covid-19. On average, the haves outlive the have-nots in a country where the responsibility for health care is placed largely on the individual. Segregation of a different sort “Most epidemics are guided missiles attacking those who are poor, disenfranchised, and have underlying health problems,” says Thomas Frieden, MD, former director of the U.S. Centers for Disease Control and Prevention. Already, coronavirus deaths prove the point. While just 22% of U.S. counties are disproportionately black, they accounted for 58% of Covid-19 deaths by April 13, according to a study released May 5 by the Foundation for AIDS Research. Other research published in April found Covid-19 death rates among black people and Hispanics much higher (92.3 and 74.3 deaths per 100,000 population, respectively) than among whites (45.2) or Asians (34.5). In Chicago, nearly 70% of Covid-19 deaths have been among black people, who make up 30% of the population. Similarly lopsided statistics have come out of Michigan and Detroit. 
An analysis of deaths in Massachusetts, published May 9 by the Boston Globe and based on research by Harvard scientists, finds a surge in excess deaths in the early days of Covid-19 was 40% greater in cities and towns “with higher poverty, higher household crowding, higher percentage of populations of color, and higher racialized economic segregation” compared to those with the lowest levels of those measures. These are people who can’t afford to miss a chance to work, often don’t have paid sick leave, may not get proper protection from Covid-19 spread on the job, and tend to already have lower health status due to “persistent health inequities,” says study team member Nancy Krieger, PhD, professor of social epidemiology in the department of social and behavioral sciences at Harvard T.H. Chan School of Public Health. “It’s been hard for Americans to understand that there are racial structural disparities in this country, that racism exists,” says Camara Jones, MD, an epidemiologist at the Morehouse School of Medicine in Atlanta. “If you asked most white people in this country today, they would be in denial that racism exists and continues to have profound impacts on opportunities and exposures, resources and risks. But Covid-19 and the statistics about black excess deaths are pulling away that deniability.” Today’s segregation involves factors like severely limited access to healthy foods and green space, and higher exposure to environmental hazards, all contributing to higher rates of obesity, diabetes, high blood pressure, and heart disease, Jones says, echoing the views of many public health researchers. “Prior to this pandemic and economic calamity, African Americans already lacked health insurance at a rate nearly 40% higher than white people,” says Christopher Hayes, PhD, a labor historian at Rutgers School of Management and Labor Relations. “Many of the highest rates of being uninsured are in Southern states that have not expanded Medicaid and have large black populations.” Also, the massive unemployment caused by the 2020 global economic shutdown will only worsen the plight of U.S. minorities, putting further strain on families and their options for attending to their health. While the overall unemployment rate rose to 14.7% as of May 8, it jumped to 16.7% among black workers and 18.9% for Hispanic and Latino workers. “Given that African Americans are disproportionately concentrated in low-wage jobs, and we live in the only rich country without universal health care, too many people only seek medical care in dire situations, and when they do, it can easily be financially ruinous,” says Hayes. The impact of Covid-19 in the United States will almost surely prove detrimental to the longevity of African Americans and other marginalized ethnic and racial groups, the experts say. Counting the years Life expectancy at birth is an estimate of how long a person might be expected to live if known death rates at the time were to remain consistent throughout that person’s life. It is based on a complex calculation of age-specific mortality rates, giving more weight to the probability of death later in life than for young people. Throughout the first half of the 20th century, it spiked up and down significantly in the United States as various deadly infectious diseases swept largely unabated through the population every few years. 
The spikiness began to change for many reasons, not the least being higher living standards and improved sanitation and hygiene, says Bishai, the Johns Hopkins demographer. In 1900, tuberculosis was among America’s leading causes of death. Filthy, crowded living and workplace conditions contributed to the spread of TB bacteria. Also, contaminated food, milk, and water caused TB infections and many other foodborne illnesses, from typhoid fever to botulism. Infections began to slow with public health messages that promoted hand-washing, as well as the introduction of refrigerators and pasteurization of dairy products in the 1920s — making food safer. Along the way, several childhood vaccines were introduced, including whooping cough in 1914 and diphtheria in 1926. Smallpox was eliminated in the United States by 1949. Vaccines for polio, measles, mumps, and rubella, introduced in the 1960s, helped keep the upward longevity trend going. In 1960, the U.S. surgeon general began recommending annual flu vaccines for pregnant women and people over 65 or with chronic diseases. From the 1960s onward, there were “noticeable gains in life expectancy at the middle and the end of life,” Bishai says. This was helped in part by advances in heart surgery and cancer treatments. Improved insurance coverage, including Medicare, also helped, he says. But his research, published in 2018 in the journal BMC Public Health, finds that increases in life expectancy have slowed here and across the world since 1950. Then it all came to a screeching halt. The decline in U.S. life expectancy in 2015, 2016, and 2017 (it ticked up slightly in 2018, and 2019 figures are not out yet) reflects a stark new reality: Death rates are rising not among children or the very old, but among people age 25 to 64, especially in the economically challenged industrial Midwest and Appalachia, according to a study published last year in the journal JAMA. “In America, that’s where the battle is — it’s in the middle of life,” Bishai says in a phone interview. “It’s been hard for Americans to understand that there are racial structural disparities in this country, that racism exists.” Inequalities not addressed The federal government is well aware of the nation’s regional disparities in health and mortality. In 2011, the CDC created a Social Vulnerability Index that ranks counties by their resilience “when confronted by external stresses on human health, stresses such as natural or human-caused disasters, or disease outbreaks.” It factors things like socioeconomic status, minority status, and even access to transportation. “Reducing social vulnerability can decrease both human suffering and economic loss,” the agency states. The Social Vulnerability Index, last updated in 2016, reveals the most vulnerable counties in dark blue. Image: CDC “Health differences between racial and ethnic groups are often due to economic and social conditions” such as living in densely populated areas, lack of access to grocery stores and medical facilities, and lack of paid sick leave, among a host of other systemic factors, the CDC states. “In public health emergencies, these conditions can also isolate people from the resources they need to prepare for and respond to outbreaks.” Those well-known differences are driving disastrous outcomes in real time as the new coronavirus rips through low-income and poor neighborhoods. 
Greg Millett, MPH, director of public policy at the Foundation for AIDS Research and leader of the study out last week on the disproportionate number of deaths in predominantly black U.S. counties, ties Covid-19 directly to the known regional inequities. Underlying health problems, including diabetes, hypertension, and heart disease, which raise the risk of death from Covid-19, “tended to be more prevalent in disproportionately black counties, but greater Covid-19 cases and deaths were still observed in those counties when adjusting for these factors,” Millett and his colleagues write. “Many people have observed large and consistent disparities in Covid-19 cases and deaths among black Americans, but these observations have largely been anecdotal or have relied on incomplete data,” Millett says. “This analysis proves that county-level data can be used to gauge Covid-19 impact on black communities to inform immediate policy actions.” Force for change? Since we don’t know how many people will die in the pandemic, it’s not possible yet to predict the drop it will cause in life expectancy. But it’s a safe bet it will go down, Bishai says, adding that it would take some “miraculous” decrease in other causes of deaths to prevent a dip. It didn’t have to be so bad. In a pandemic, a rising health tide would lift all boats. Improved overall health among the most disadvantaged, along with better access to health care and the ability for people to confidently stay at home when they are sick — all things that could change with significant governmental policy shifts — would mean fewer infections for everyone, less pressure on hospitals, and a quicker restart of the economy. Bishai hopes one positive outcome of Covid-19 is that it helps America get past the notion that the federal government is not responsible for the nation’s health. “What makes you healthy is beyond what you choose to eat, and lifestyle, and what your doctor does for you,” he says. He’s not alone in finding it “frustrating” and “bothersome” that our political system has not addressed the dipping life expectancy curve or the gross health disparities across the country. “The first thing the federal government could do is take charge and actually have a strategy for dealing with the pandemic,” says Hayes, the Rutgers historian. “Telling the states to handle it is not a solution and is a profound refusal to perform basic duties. Who could imagine FDR telling Hawaii to take care of Pearl Harbor or George Bush shrugging his shoulders at New York on 9/11?” Ultimately, Hayes argues, the federal government needs to provide universal health care, greatly reduce pollution that contributes to poor heart health, and address income inequality by raising the minimum wage. “The scourge of Covid-19 will end, but health care disparities will persist,” writes Clyde Yancy, MD, an academic cardiologist at Northwestern University, in an April 15 commentary in the journal JAMA. “The U.S. has needed a trigger to fully address health care disparities,” he writes. “Covid-19 may be that bellwether event.”
https://elemental.medium.com/a-stark-look-at-covid-19-and-racial-disparities-5737e56dbe2b
['Robert Roy Britt']
2020-05-14 14:29:46.679000+00:00
['Coronavirus', 'Mental Health', 'Healthcare', 'Racism', 'Covid 19']
Increasing Accuracy by Converting to a Multivariate Dataset
Increasing Accuracy by Converting to a Multivariate Dataset Being a data science novice seeking to improve my skills, I continuously go through the competitions I have previously entered and seek to improve their accuracy. One such competition I have reviewed is Analytics Vidhya's JetRail time series analysis. There are many ways that one can predict future figures, such as Random Forest, statsmodels functions and Facebook Prophet. In both statsmodels and Prophet there are ways that one can check if a date is a weekend or a holiday, but this can be tricky. For example, I do not know what country JetRail is based in, but I am assuming it is in a western country. Even if JetRail is in a western country, however, it is important to know the holiday schedule for the country it is in before an accurate prediction can be obtained. With this in mind, I decided to assume that JetRail is based in a western country and the work week is from Monday to Friday and the weekend lasts from Saturday to Sunday.

I have previously written about the JetRail dataset as a univariate time series analysis problem, with the link to this post being found here:- How I solved the JetRail time series problem with FB Prophet | by Tracyrenee | Python In Plain English | Nov, 2020 | Medium

In this post, however, I have converted the univariate dataset to a multivariate dataset in an attempt to improve the accuracy. If you would like to know what happened then please read on. The problem statement and datasets can be found on Analytics Vidhya's JetRail competition page, the link being here:- Time Series Forecasting (analyticsvidhya.com)

The .ipynb file for this competition question was created in Google Colab, a free online Jupyter Notebook that can be used from any computer that has internet access. The problem statement for this competition question reads as follows:-

"Welcome DataHacker! Congratulations on your new job! This time you are helping out Unicorn Investors with your data hacking skills. They are considering making an investment in a new form of transportation — JetRail. JetRail uses Jet propulsion technology to run rails and move people at a high speed! While JetRail has mastered the technology and they hold the patent for their product, the investment would only make sense, if they can get more than 1 Million monthly users with in next 18 months. You need to help Unicorn ventures with the decision. They usually invest in B2C start-ups less than 4 years old looking for pre-series A funding. In order to help Unicorn Ventures in their decision, you need to forecast the traffic on JetRail for the next 7 months. You are provided with traffic data of JetRail since inception in the test file."

Because many of the libraries I need to solve this question are already installed on Google Colab, I only needed to import those libraries into the program, being pandas, numpy, seaborn, matplotlib, fbprophet and sklearn. I then loaded and read the datasets into the program, being train, test and sample:- I decided to convert the univariate time series dataset to a multivariate dataset and I accomplished this by adding a column, "dayofweek". The function, dayofweek, returns a value from 0 to 6 signifying what day of the week the sampling occurred:- I then created an additional column from the index, which is in datetime format. This column is necessary to perform a datetime analysis.
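Since the notebook screenshots are not reproduced here, a minimal sketch of the loading and feature-creation steps described above might look like the following (the file name, column names and date parsing are assumptions based on the JetRail dataset rather than code copied from the notebook):

import pandas as pd

train = pd.read_csv('Train.csv')
# Parse the timestamp column and use it as the index
train['Datetime'] = pd.to_datetime(train['Datetime'], dayfirst=True)
train = train.set_index('Datetime')

# Monday = 0 ... Sunday = 6
train['dayofweek'] = train.index.dayofweek
# Additional column holding the timestamp itself, created from the datetime index
train['Datetime'] = train.index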
I then changed the names of the columns to names that Prophet wants to see when it is training and fitting the data:- I created variables, ID_train and id_test, which stored the data train.ID and test.ID respectively. These columns were then dropped from the datasets because they are not needed to carry out the computations:- I plotted a graph of the train dataset because it is important to have a visual representation of how the number of passengers has increased with time:- I split the train dataframe in two to separate it into training and validation sets. The splitting is based on the date of this time series analysis:- I defined the model, being Facebook Prophet. Prophet normally only wants to see two variables, being "y" and "ds", but it is possible to add an additional variable, "add1", which I did in this instance. I forecasted on the validation set to obtain yhat:- I then plotted a graph of the training and validation datasets' time series analysis to visually illustrate how Prophet has predicted the number of JetRail passengers:- I then forecast on the test dataset to obtain yhat for that dataset:- I produced a graph of Prophet's predictions of the test dataset, and it can be seen visually that the number of passengers is anticipated to keep increasing:- I prepared the submission from the value, yhat, and put it on a dataframe, which I then converted to a .csv file:- When I submitted the predictions to Analytics Vidhya's solution checker I achieved an accuracy of 365.16, which was less than 1 point better than the model I had previously submitted that was univariate. I decided to make the predictions integers, and this brought the score to about half a point better than the previously submitted univariate version. I thought that if the day-of-week data had improved the accuracy of the predictions, then whether or not the day was a weekend might provide further illumination, so I added code to create an extra boolean column that stated whether the day in question was a weekend, and submitted the amended code to the solution checker. Sadly, this extra data did not increase the accuracy of the model, but actually reduced it. The code for this amendment is on my personal Google Colab account, but if anyone wants me to post that code, I will be more than happy to:- The code for this post can be found in its entirety in my personal GitHub account, the link being here:- Jet-Rail-TS/AV_JetRail_Multivariate_Prophet.ipynb at main · TracyRenee61/Jet-Rail-TS (github.com)
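For readers who prefer not to open the notebook, a rough sketch of the Prophet steps described above could look as follows (a simplified illustration with assumed dataframe names, not the exact code from the linked repository):

from fbprophet import Prophet

# Prophet expects the columns 'ds' (timestamp) and 'y' (value);
# 'add1' carries the day of the week as an additional regressor.
model = Prophet()
model.add_regressor('add1')
model.fit(df_train[['ds', 'y', 'add1']])

# The frame passed to predict() must contain the regressor column as well
forecast = model.predict(df_valid[['ds', 'add1']])
y_hat = forecast['yhat']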
https://medium.com/ai-in-plain-english/slightly-increase-accuracy-of-the-jetrail-competition-question-by-converting-it-to-a-multivariate-1ce846baa781
[]
2020-12-22 08:35:21.928000+00:00
['Time Series Analysis', 'Data Science', 'Facebook Prophet', 'Python', 'Artificial Intelligence']
11 Social-Media Marketing Tools to Bookmark Now
Want to save time? Boost productivity? Get organized? Develop new, unicorn-level social-media strategies? The workflow of a social-media marketer can be chaotic and overwhelming — but it doesn't have to be. Tools like MobileMonkey, Meet Edgar, and IFTTT, to name a few, help you get the job done and stay sane. Every social-media marketer should have these 11 tools bookmarked for easy access (I know I do). Discover them now!

Chatbot marketing is at the forefront of most digital marketers' minds. With unprecedented ROI (re: an average 80 percent open rate and 20 percent click-through rate on messages delivered through Facebook Messenger), building a chatbot is your №1 priority. MobileMonkey is a simple and straightforward chatbot builder where you can create a chatbot in minutes without writing a single line of code. That's right — its drag-and-drop interface makes it easy as click-drag-type, and you can have a chatbot up and running in no time. Did I mention it's free? Get on this now.

If you're posting on multiple social-media accounts at once, then Hootsuite is a must. It can help make that juggling act with multiple tabs and tons of copy-pasting a whole lot easier. You can organize and schedule hundreds of posts on all your social-media accounts at once. Notably, most of its features are free to use.

If you want to get a closer and more organized look at what's trending or viral, try Tagboard. It's a social-listening tool that lets you enter a term, topic, or hashtag to see what's buzzing. You can use it to monitor brand or product mentions, or find out what hashtag is making waves. That information can then give you new content ideas and ways to engage the audience.

Standing for "if this, then that," IFTTT is another one of my go-to social-media automation tools. It allows you to set up recipes. For example, you can create a recipe to automatically upload your Instagram posts to a Facebook Page's album. Or you can set up recipes that will tweet content from a specific user's Twitter account, or you can sync your Instagram posts to a Pinterest board. The possibilities are endless. IFTTT is a major time-saver and a helpful automation tool for social-media marketers everywhere.

For visual marketers who use images and video, Tailwind is the answer. It has hashtag lists and tons of shortcuts for your Instagram and Pinterest marketing. Tailwind also lets you track the performance of your posts to see what works and what doesn't. Its competitive pricing makes Tailwind accessible to consultants, small businesses, and large agencies alike.

Visuals in your social-media posts may include photographs of places, objects, events, etc. A dependable and affordable source of stock photography is an asset for online marketers. Unsplash is one such website that offers over 810,000 photographs in its library. The most amazing thing about Unsplash is that it's free, as unbelievable as that sounds.

If you think your social-media posts are a mess and need something to organize them, Meet Edgar may be for you. Use Meet Edgar to find old posts on your social-media profiles and reschedule them. It also has a browser extension to easily add new content you may want to share. Meet Edgar also lets you edit and update posts in bulk, saving you a lot of time and energy.

Keeping tabs on social-media competition can be rather tedious, but not so with Brand24. This tool notifies you of sudden changes in conversations. That can help you track down whatever interactions may affect your image.
Data in Brand24 can be filtered however you want and exported to PDF, spreadsheet, or infographics. If you’re looking to get hardcore with your metrics, Brand24 is great. If you’re worried about the grammar in your written content, then Grammarly has you covered. It’s a great all-in-one online grammar, spell-checking, and plagiarism detection tool. Using Grammarly can make sure your content is both well-written and original. Most people don’t have either money for Photoshop or the know-how to properly use it. Canva is for those who need visuals to go with their content, but need something free and easy. The drag-and-drop interface makes it very easy for anyone to create good-looking visuals. It also gives you access to over a million photographs, graphics, and fonts. Both design novices and professionals can benefit from using Canva for their social-media marketing. This can be considered the online marketer’s multi-tool with its versatility and effectiveness. BuzzSumo is one of the best tools ever because it can help you find fresh content on the web. You can enter a topic or keyword to get a breakdown of what’s getting engagements. It also analyzes domains and back-links, as well as lists of influencers who are sharing that content. BuzzSumo is a great tool for all sorts of content marketing and social-media campaigns. Be a Unicorn in a Sea of Donkeys Get my very best Unicorn marketing & entrepreneurship growth hacks: 2. Sign up for occasional Facebook Messenger Marketing news & tips via Facebook Messenger. About the Author Larry Kim is the CEO of MobileMonkey — provider of the World’s Best Facebook Messenger Marketing Platform. He’s also the founder of WordStream. You can connect with him on Facebook Messenger, Twitter, LinkedIn, Instagram. Originally Published on Inc.com
https://medium.com/marketing-and-entrepreneurship/11-social-media-marketing-tools-to-bookmark-now-bf453555639c
['Larry Kim']
2019-06-19 10:26:01.098000+00:00
['Marketing', 'Life Hacking', 'Tools', 'Entrepreneurship', 'Social Media']
Why it’s Terrifying to Start Writing Again
Hello Muddah, hello Faddah Here I am at Camp Grenada Camp is very entertaining And they say we'll have some fun if it stops raining (Written by Allen Sherman-Sung by Mel Brooks) Writing is terrifying, especially if you’re afraid of everything, which I am. I do a bang-up job of acting fierce but come behind the curtain, I’m a panic-stricken fool. But, do you know what’s scarier than writing? Not writing. Because when you stop, it feels like you’ve been thrown off the ship into a tropical cyclone without a life jacket. Not that a lifejacket would be much help in a tropical cyclone, but it would be something, and sometimes something keeps you going. I grew up singing this satirical song about camp. The first lines are shown above. The song is about a postcard written by a camper to his mother and father (Mudduh and Fadduh). It’s all about the perils of camp. The singing postcard describes poison ivy, missing children, kids eaten by bears, having to read Ulysses. The end of the postcard reveals it’s only been one day. He concludes that all is well now and his parents should disregard his letter. As an adult, I see this song as a metaphor for adulthood. Days are long. Mondays seem like they should be Fridays. I would love to occasionally write a postcard, asking someone to pick me up and take me out of this one long day, but who would I send it to? As a writer, the longest day is the day I return to the page after a self-imposed hiatus. You see, writing can feel like an unnecessary act. Like in a cult, once you start questioning those hive beliefs, you want to get the hell out of there. Writing is like that. You shouldn’t question why you’re doing it. It breaks the spell. Here’s my question. Do you keep writing even when you have nothing to give to the page? Or, do you take breaks so you can reboot, reflect and shift your gaze? When I keep writing, and there's no light in my attic, I feel like I’m writing one long sentence. Not necessarily a bad sentence, but not a particularly interesting one either. More tortoise than hare. When I stop to take a break from writing, however, I get scared that I’ll never return. Like I’ve been thrown off a boat into a cyclone and once the storm clears, I can’t see the land or the boat. I am lost. Once I return to the page, after all this water-treading, I doubt myself. I wonder how I sailed this writing ship before? What muscles did I use to lift these thoughts? What routes did my brain travel to connect the words in my brain to the words on the page? I also doubt the navigation I previously used to find my way. Aren’t all the stars dead? Why was I using them to map my way? I am somewhere, but I still lost. I used to be a swimmer. When I raced, you got one false start. After two, you were disqualified. The officials realized some people were using the first false starts to their advantage, so now you only get one false start then you’re history. Go back to the locker room, change, and go home. Writing has a lot of false starts. One false start after the other. No one comes in and says “Get off the block, ya’ done”, but it feels that way. You keep starting over, again and again. Every time you return to the page, you’re at the beginning. You have to remember how to dive off every time. This week, I took off two days from writing. I still scribbled down ideas and potential titles, but I was off the block. The block was a mile away. It’s always terrifying. This morning, I walked back towards the starting block. My chest tightened. 
I stretched out my brain by inhaling and exhaling. I took a swig from my water bottle, which for writers is coffee. I climbed onto the block. Instead of fear-shaking, I bent over and grabbed the part where my toes go. I squeezed it. I said ‘I’m not afraid of you, block.’ I was a little afraid, but what difference did that make? I’m more afraid of not writing than of writing, so I’m just going to stand here until the starter's gun goes off. I can’t see the finish line, but it’s something, and sometimes something keeps you going.
https://medium.com/illumination/why-its-terrifying-to-start-writing-again-3a32446cdfc9
['Amy Culberg']
2020-12-23 20:28:21.390000+00:00
['Fear', 'Writing Tips', 'Self-awareness', 'Self', 'Writing']
GEOINT App: Using web maps as the spatial ground truth.
The Web Map Use Case Loading a web map can sometimes take a while and should therefore be carried out in the background. ArcGIS Runtime supports the so-called loadable design pattern. Accessing web maps and operational data structured in layers requires the resources to initialize their state asynchronously. The loadable design pattern reflects the behavior that online resources use to load data asynchronously. The pattern also provides a retry mechanism if previous attempts to load have failed, so we can properly handle and resolve situations such as network outages. Online resources process simultaneous and repeated requests for loading accordingly and also allow a request to be canceled so that we can stop the loading of a resource. The ACLED events are represented as two feature layers of this web map. Both feature layers represent the events as data records, so-called features. One contains the events of the last 14 days and the other all historical events. The events are visualized using a unique value renderer and a simple renderer. An event feature has a unique primary key and a point geometry. All features of a feature layer always have the same geometry type and the same spatial reference. This restriction not only has advantages in terms of rendering performance, but also in evaluating the features. It also makes it easy to spatially relate the features to one another. When querying the features, you have to make sure that the corresponding layer is fully loaded first. You get access to the attributes and geometries of every feature. A query can not only filter the resulting feature set, but also define which attributes should be returned and whether or not the geometries should be included. After investigating a bunch of ACLED items from the Living Atlas, we took a closer look at one of the items the “Bureau of Conflict and Stabilization Operations” (CSO) published in 2019. The web map was last updated on December 4th, 2019 and can be easily accessed using this item. If we reuse the examples from Proof of concept — Ramp-up, we just have to replace the default map by using the item. We need to create a new portal item instance using the item's URL and pass this instance into the constructor of the map instance. Compile and run the sample map viewer, and after the web map is loaded we should see an ACLED layer showing on top of a dark-gray basemap. JavaFX based ACLED web map sample WPF based ACLED web map sample Qt Quick based ACLED web map sample Each SDK uses the language-specific best practices for getting the job done. In Java you create and register a listener using the map or layer instance. When developing with C# you do the same; the listener is just called an event handler and you can enjoy the async/await pattern. In Qt you use the specific signals and define slots for handling the map and layer events. We defined a simple use case for stressing the map viewer samples. When the map is fully loaded we just register a map view tap listener/handler/slot. By tapping the map view a query is scheduled against the feature layer representing the ACLED events. The returned result is analyzed by using the primary key and the returned geometry of each feature. The map sample viewer defines a generic Hashtable/Dictionary/Hash managing all returned features using the ID as key and the feature itself as value.
Whenever a new query result is obtained, a feature is only added when its ID is not already known; if the ID is known, the geometries of both features are compared using the underlying Geometry Engine implementation of the C++ based runtime core. If the geometry has changed, the old feature is replaced by the new feature. The feature layer contains 13,684 features, all having a valid, non-empty geometry representation. When analyzing the point geometries by just using their raw coordinates, we saw that 6,301 unique locations were represented by those features. If we wanted to know the real spatial distribution of these ACLED events, we would create a spatial grid and classify locations that are near each other as a match. Let us take a look at the following chart representing the overall memory consumption of all three sample map viewers during startup and on shutdown.
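To make the query-and-merge flow described above concrete, here is a hedged sketch for just one of the three SDKs, the ArcGIS Runtime API for Java (JavaFX, 100.x-style). It loads the web map from its portal item, registers the tap handler only after loading has finished, queries the ACLED feature layer around the tapped location, and merges the results by ID with a Geometry Engine comparison. The portal item ID, the objectid key field, the first-operational-layer index and the 50 km search buffer are assumptions to adapt to the actual web map, and exact class or method names may differ slightly between Runtime versions.

import com.esri.arcgisruntime.concurrent.ListenableFuture;
import com.esri.arcgisruntime.data.Feature;
import com.esri.arcgisruntime.data.FeatureQueryResult;
import com.esri.arcgisruntime.data.QueryParameters;
import com.esri.arcgisruntime.geometry.GeometryEngine;
import com.esri.arcgisruntime.geometry.Point;
import com.esri.arcgisruntime.layers.FeatureLayer;
import com.esri.arcgisruntime.loadable.LoadStatus;
import com.esri.arcgisruntime.mapping.ArcGISMap;
import com.esri.arcgisruntime.mapping.view.MapView;
import com.esri.arcgisruntime.portal.Portal;
import com.esri.arcgisruntime.portal.PortalItem;
import javafx.geometry.Point2D;
import java.util.HashMap;
import java.util.Map;

public class AcledWebMapController {

  // Placeholders: the item ID of the ACLED web map and the layer's primary key field.
  private static final String ACLED_ITEM_ID = "<acled-web-map-item-id>";
  private static final String ID_FIELD = "objectid";

  // Generic map managing all returned features using the ID as key and the feature as value.
  private final Map<Object, Feature> knownFeatures = new HashMap<>();
  private FeatureLayer acledLayer;

  public void init(MapView mapView) {
    // Loadable pattern: the web map initializes its state asynchronously.
    Portal portal = new Portal("https://www.arcgis.com");
    PortalItem item = new PortalItem(portal, ACLED_ITEM_ID);
    ArcGISMap map = new ArcGISMap(item);
    mapView.setMap(map);

    map.addDoneLoadingListener(() -> {
      if (map.getLoadStatus() != LoadStatus.LOADED) {
        return; // retryLoadAsync() could be scheduled here, e.g. after a network outage
      }
      // Assumption: the first operational layer holds the ACLED events.
      acledLayer = (FeatureLayer) map.getOperationalLayers().get(0);

      // Register the tap (mouse click) handler only after the map is fully loaded.
      mapView.setOnMouseClicked(event -> {
        Point mapPoint = mapView.screenToLocation(new Point2D(event.getX(), event.getY()));
        queryAround(mapPoint);
      });
    });
  }

  private void queryAround(Point mapPoint) {
    // Schedule a query against the ACLED feature layer around the tapped location.
    QueryParameters query = new QueryParameters();
    query.setGeometry(GeometryEngine.buffer(mapPoint, 50_000)); // assumes map units are meters
    query.setSpatialRelationship(QueryParameters.SpatialRelationship.INTERSECTS);

    ListenableFuture<FeatureQueryResult> future =
        acledLayer.getFeatureTable().queryFeaturesAsync(query);
    future.addDoneListener(() -> {
      try {
        for (Feature feature : future.get()) {
          mergeFeature(feature);
        }
      } catch (Exception e) {
        e.printStackTrace();
      }
    });
  }

  private void mergeFeature(Feature feature) {
    Object id = feature.getAttributes().get(ID_FIELD);
    Feature known = knownFeatures.get(id);
    if (known == null) {
      // Unknown ID: simply add the feature.
      knownFeatures.put(id, feature);
    } else if (!GeometryEngine.equals(known.getGeometry(), feature.getGeometry())) {
      // Known ID but the geometry has changed: replace the old feature with the new one.
      knownFeatures.put(id, feature);
    }
  }
}

The C# sample would express the same flow with an async event handler and the Qt sample with signal/slot connections, as noted above.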
https://medium.com/geospatial-intelligence/geoint-app-using-web-maps-as-the-spatial-ground-truth-c8e716e87af8
['Jan Tschada']
2020-12-06 18:14:12.132000+00:00
['Dotnet', 'Geospatial', 'Qt', 'Java', 'Software Engineering']
Keeping your story open and accessible for everyone to read
Keeping your story open and accessible for everyone to read Common questions about Medium and its Partner Program. The UX Collective is a platform that elevates unheard design voices all over the world. One of the channels we use for knowledge sharing is Medium, an online publishing platform that enables anyone to share their thoughts and build a community around ideas. Below are some common questions around your options as an author when sharing your content on Medium. What is the Medium Partner Program? Medium has recently introduced a Partner Program: a way for writers to get paid for their writing. Writers can choose to submit their stories to the Program and, if the story is approved, they will get paid based on how readers are engaging with their articles. As members read longer, writers earn more. Is my story going to sit behind a paywall? No. By default, every new story is open for everyone. Writers who are enrolled in the Medium Partner Program will have the option to place their stories behind the metered paywall. If they choose to do so, their stories are eligible to earn money. (Source) Here’s what Ev Williams, the founder of Medium, has to say about the topic: “It is free to publish on Medium, your content is always yours, and we will host it for free, indefinitely. This applies whether or not you choose to make money by putting your articles behind the Medium paywall or make them completely free. Yes, there is a paywall, which blocks reads after a few articles per month — but only if they were put behind the paywall by the writer (which, again, is optional). Many writers choose not to do so. But if they do, know that they are getting paid when members read their posts.” How does the paywall work? If you decide to close your story, it becomes “member-only”. In reality, readers who are not members will be able to see a number of closed stories for free per month, before they are asked to sign up for a Medium membership (US$5/month in most markets). Medium will also give you, the author, a “Friend Link”, which is a specific URL you can share with your network so they can read your article for free, regardless of having a Medium account or membership. If you choose to keep your stories open, your story will remain free and open for every reader, including logged out users. But why pay the writer? Writing, like any other work, takes time. Not everyone has the privilege of being able to write and publish content for free, so it’s fair to give authors the option to get paid based on how many people read and engage with their stories. If you’re a writer and you don’t depend on making money on your writing, you can keep your stories open. I believe in keeping design knowledge open and available to everyone — can I keep my articles open? Yes. When you are about to publish your story on Medium, you’ll see a checkbox that gives you the option of whether your story will be open or closed. Your story will only be put behind a paywall if you check that box.
When your story is open, anyone can read it, without restrictions. The story will remain open and accessible to everyone. As publication editors, we don’t have any influence or control over whether your story is part of the program or not. It’s your choice, and your choice only. As it should be. Does Medium own my content? According to the founder of Medium: “It is free to publish on Medium, your content is always yours, and we will host it for free, indefinitely.” For more details, check Medium’s Terms of Service. How much do writers who decide to close their stories get paid?
https://uxdesign.cc/can-i-keep-my-story-open-and-accessible-to-everyone-to-read-on-medium-ebb91751987
['Ux Collective Editors']
2020-10-11 01:14:55.466000+00:00
['UX', 'UI', 'User Experience', 'Startup', 'Design']
Your Love of Big Breasts Isn’t Biologically Hardwired
I was thirteen the first time I was catcalled for having breasts. I developed early, much earlier than getting my first period. And from that moment on, despite my youthful face and obvious lack of sexual maturity, men felt obliged to comment on, stare at, and talk about my breasts. “Straight men are just hardwired to find bigger breasts more attractive.” This was a comment I got when I complained about men staring at my breasts with no sense of shame. It was aimed to excuse the men in question, to allow their behavior. And it wasn’t the only comment I got. Over the years, there have been lots: dirty, smug, scathing comments about how men simply can’t resist looking at big honking bazonkers, and not only can they not resist, but biology is on their side for it. There’s nothing I despise more than folks — usually straight cis men — using really bad human evolutionary psychology takes to defend their misogyny. You see it when men claim that women are naturally worse at science, better at nurturing, just hardwired to want kids. There’s just something so patronizing about this line of defense that sets my teeth on edge. “I’m not sexist,” these men seem to say. “I’m not objectifying you on purpose. It’s just science.” But the science they’re citing, in this case, is wrong. Let’s get into the various mistaken assumptions about breast size. Bigger breasts have no reproductive value. Let’s go basic biology for a minute. Traditionally, people are attracted to features that indicate the future of their potential offspring is strong. We like symmetrical features that signify healthy genes, smooth faces that indicate a lack of disease. And one of the things these men seem to seize upon is the allegedly universal truth that bigger breasts mean a woman is more likely to be reproductively successful, and that’s why they’re so attracted to them. Photo by Annie Spratt on Unsplash The truth is that there is no evolutionary reason why men would prefer larger breasts. They’re not linked to higher fertility as a single trait, larger breasts don’t produce more milk for offspring, and if anything, larger breasts might signify that a woman is already pregnant which would count as a mark against her suitability as a mate. Not only that but in terms of signaling reproductive readiness, they’re flawed at best. Many women develop breasts long before they’re fertile. Just as secondary sexual characteristics in men, like beards, aren’t universally attractive and don’t signify sexual virility or otherwise, breasts don’t either. Breasts are not found universally attractive. What a lot of people don’t realize is that many of the indications we take for attractiveness now are simply cultural. It’s not “hardwired” into us to find certain traits attractive, it’s drilled into us as a cultural preference. Look at thinness, which is deemed a universally appealing trait. But the second you start to dig into any research that’s been done, you can see that women with higher BMIs tend to have more children, and children with higher birth weight, which would suggest that a higher BMI should be deemed sexually attractive. But it isn’t. Look further afield and you’ll find one culture prefers “tubular” shaped women, instead of the traditional hourglass, whereas others prefer rounder figures because those signify a well-structured community that looks after its members. 
In a 1951 study of 191 cultures, anthropologist Clellan Ford and ethologist Frank Beach reported that breasts were considered sexually important to men in 13 of those cultures. —Natalie Wolchover, via New Scientist (I’ll leave aside the very worthy criticisms of BMI as a measurement for now.) Nobody likes to talk about that, or about any of the other deviations in what people from different cultures find attractive, because that contradicts popular perception of what people find attractive in society now. But big breasts are by no means something every culture deems sexually attractive. Breast obsession is learned, not hardwired. Here’s a pretty wild example: would you consider bound feet to be sexually desirable? Probably not — nowadays it’s viewed as a pretty controlling method which caused pain for the women it was inflicted upon. Photo by Andalucía Andaluía on Unsplash And yet, until fairly recently, footbinding was sexually appealing. This wasn’t due to some strange hardwired preference. Smaller feet didn’t signify a greater reproductive potential. It was simply a cultural preference tied up with a whole lot of weird misogyny about women being helpless. Additionally, women can learn to fetishize breasts. There’s no reproductive benefit to women preferring to look at breasts, and yet in certain cultures, women do. That’s not a coincidence — it’s a sign that this kind of attraction is nurture, not nature. In the cultural view, men aren’t so much biologically drawn to breasts as trained from an early age to find them erotic. — Natalie Wolchover via Live Science The problem with the breast fetish Well, what’s all the fuss? Why does it matter that men aren’t really hardwired to be obsessed with breasts? It’s because of the cultural significance placed on breasts. Women are simultaneously told to cover up and display their breasts. We’re revered and shamed for them. They cement our position in society as mothers, caregivers, nurturers, while simultaneously casting us as harlots, provocateurs, shameless whores. Breasts are demonized when out of the control of men and put on the holiest of thresholds when in their possession. “[Ms. Yalom] found very little in the record to indicate how women have felt about their breasts: whether they took pleasure in them, the extent to which they chose to display their breasts or if they had any say in the debate over wet-nursing.” — Natalie Angier, via the New York Times In the past thirty years, there has been so much research done on the alleged importance of the female body — breasts included — and what men find attractive. And there’s been an unsurprising dearth of research that runs contrary to popular cultural expectations. For example, look at the traditional story we’re told of women being the childbearing mothers staying at home collecting berries while the menfolk were all off hunting mammoths. It begs the question: Why aren’t strong women seen as more attractive, given that the stronger, bigger, and broader women would have been more capable of protecting their children should the need arise? Why is it petite, dainty, helpless, big-breasted, small-waisted women who claim public adoration?
Photo by averie woodard on Unsplash There’s no research done on this for the same reason the breast fetish isn’t questioned, for the same reason that there is next to no scrutiny on the body shapes and sizes that women prefer: because for the history of science, academia has had a vested interest in protecting the dominant worldview that breasts and the women attached to them are there for the consumption and pleasure of men. I encourage scientists and readers alike to question their deep-held beliefs about universal attraction and the “natural” preferences and skills reported for men and women. Look at how these have changed over time, between and within cultures. Closely examine your prejudice and be brave enough to question it in the books you read, the people you speak with, and the beliefs you hold.
https://zulie.medium.com/your-love-of-big-breasts-isnt-biologically-hardwired-2f903209a13e
['Zulie Rane']
2019-08-30 15:17:12.266000+00:00
['Equality', 'Sexuality', 'Psychology', 'Culture', 'Science']
20 Simple Ways To Reduce Your Environmental Impact While Travelling
As a current international gap year student, a large part of my life of late has involved travelling. Going abroad and experiencing a different country, without a doubt, is extremely valuable when it comes to fostering understanding and respect towards different cultures. Sadly, however, the act of hopping on a plane and going across countries is largely detrimental towards the environment. Especially as someone who advocates for low-impact living, I’ve recently become hyper-aware of how damaging and hypocritical this can be, and I thought it would be necessary to address the contradictory nature of travelling as a vegan ‘environmentalist’. While an ideal world wouldn’t require any of these individual adjustments in the first place (read: Neoliberalism has Conned Us Into Fighting Climate Change as Individuals), I am still in the process of rectifying my passion for global understanding with reducing my carbon emissions. In my eyes, travelling and interacting with different cultures will always be a fundamental way to connect with the world; the gift of nature is what motivates me every day to perform little acts of advocacy for a healthier planet and better future. Unfortunately, I fail to see an immediate way to cut out high-carbon travelling right now, but what I believe we can do in the meantime, is make lifestyle changes to reduce this colossal impact. Over the months, I have been paying extra attention to areas with potential for adjustment towards sustainability. In the process, I have acquired some tokens of advice on how we may lower our impact while travelling, which I’ve listed out below: Slow Travel: Trekking in Nepal 1. Embrace slow, low-impact travel The concept of slow travel was definitely drilled into me during my three months spent in Nepal. A 15-hour drive on a bumpy road? No problem. 11-hour trek days carrying all our gear on our backs at 15,000 ft altitude? Sure thing! 17 days of camping without a shower? Come at me…Not only do these practices promote ways to fully absorb everything that’s around you, they also lend to more opportunities to connect with people on the way and enjoy the process while you’re at it. Those 15-hour bus rides without any devices definitely enabled me to bond with my peers and during the times where we were not socialising, it reminded me of the important ‘skill’ of being bored as I was able to utilise this time to reconnect with previous neglected thoughts. 2. Bring your own container, utensils, napkin, earphones/headphones, and blanket on the plane to avoid the packaged ones they provide. And for when you’re eating local street food in places like Thailand or Taipei! If, for whatever reason, you’re still missing some of these ‘zero-waste’ essentials, Net Zero Co kindly sent me some of their products and I am loving them so far! Of course, repurpose what you have before purchasing anything you ‘need’, but their website contains pretty much anything you’re looking for — even if it’s just inspiration. Reusable container and utensils from Net Zero Co 3. Bring an empty water bottle, cup or mug to fill on the plane (just ask the flight attendants!). Or, if you’re in Hong Kong, bring it to fill at the water dispensers which can be found at every few departure gates. While filling your water bottle on the plane, it’s also a great way to spark conversation with flight attendants when they’re not busy serving food, demonstrating safety procedures, etc. It might be interesting to learn about where the food goes when it’s not eaten! 
Download water fountain apps if possible (e.g. Water for Free in HK), and a Steri pen or Life Straw can also be a worthwhile investment when it comes to purifying tap water in countries where it’s unsafe to drink out of the faucet directly. 4. Pack lightly; choose carry-on if possible. This saves money AND reduces the need for transportation fuel. I wasn’t aware of this before, but the extra bags do add up and require extra fuel to transport. The Rainbow Bus to the Farmers Market in Byron Bay, Australia 5. Shop at zero-waste bulk food stores and/or support local farmers markets. Make sure you go with a reusable bag or container — bring multiple in case your friends forget! Buying local means that a) you’re supporting their economy, b) reducing food miles — the food didn’t have to travel huge distances to get to where you are (according to Levi “save the world” Hildebrand, buying local cuts down on the average 1500 miles that food travels to be on your plate), c) it’s probably cheaper, and d) the produce is likely to be more fresh and nutritious! 6. Don’t buy souvenirs. This applies to anywhere you are: buy experiences, not things. You’ll save money, form more memories, and have more stories to share! Avoid falling into the trap of consumerism, capitalism, and pretty much any ‘ism’ that begins with c…Want to show your friends you care about them? Share videos of you in different places telling them how much you love them! And if you really want to bring home a souvenir, buy something that is made locally. These can often be found at different artisan/handmade markets. 7. Avoid transfer flights when possible. Taking off and landing are what generates the most carbon run-off and can be easily avoided. If possible, take flights that are of a higher priority within the airport and/or those with a built-in carbon offsetting program. This means they are less likely to linger around in the airport — emitting more damaging and unnecessary chemicals into the environment. Cheap, ‘affordable’ flights are often the ones that cause your carbon footprint to soar. Compost Bins at Grampians Eco YHA 8. Avoid food wastage. This applies wherever you are in the world, but only buy what you need if cooking at home or eating out! As I mention in this article, not only does wasting food waste all the resources that went in to its production — from the water and energy used to produce and transport it, to the nutritional value it once contained — when food decomposes in the landfills, it also emits methane gas, which is 21 times more potent than CO2 — leaving an even greater impact on climate change. If you’re concerned about disliking the food on the plane, bring your own snacks in a reusable container (refer to tip #2) instead! 9. Walk — or run — everywhere! This is the best way to explore the area that you’re in; it’s cheap, fun, and sustainable. If you’re not a fan of either, try renting a convenient form of transportation. You can do this with rental systems such as SmartBike in Hong Kong, Lime in the U.S. and some places in Europe and Australia, Bird in LA, City Cycle in Brisbane, the list goes on…Alternatively, you can join a free walking tour, hop-on hop-off tour bus, or just jump on a metro and explore! 10. Do research before you enter the country you’re visiting! See if they have food waste apps available such as OLIO and Too Good To Go or events such as dumpster diving. These are great ways to reduce food waste while meeting people abroad. 
Before visiting Brisbane, I found out that they have a community herb garden where you can collect herbs to bring home — for free! While I never ended up grabbing any, for those planning to stay longer term, this would be the perfect way to reduce your costs and environmental impact, while potentially making some like-minded friends. A second-hand book! 11. Invest in a Kindle/download the Kindle app on your phone/tablet. I love bookstores as much as the next person, but maybe this time, you could use the store as your browser, then simply purchase the book off the Kindle app instead. Since libraries are unlikely to be accessible for one-off, short-term travellers, downloading them onto your devices can be an easy way to access your book everywhere you go. Alternatively, you can scout out second-hand book stores (I got this book photographed on the left from a pre-loved store in Byron Bay, for example). It’s likely to be cheaper, too! 12. If staying at a hotel/place with room service, ask the cleaners to NOT wash your sheets/towels, etc every day. Perhaps I’m overgeneralising, but I doubt you’re so dirty that your bedsheets require furnishing after a single night’s sleep. I’m sure you don’t change your sheets at home every day, so it should be no different while you’re away! 13. Bring your own toothbrush, toothpaste (which, again — you can get as tablets or in a jar from Live Zero or Slowood), creams, shampoo, conditioner, soap, safety razor, instead of using ones at hotels that will come in plastic packaging. A Vegan Meal 14. Eat less meat! An obvious one coming from me, but an important reminder nevertheless. Sure, it’s great to dabble in different cultural cuisines, but once you’ve had a try, it’s a good idea to cut down on the meat consumption — especially beef. If you need a refresher as to why this industry is particularly damaging towards our environment, check out this article. 15. Plogging! Pick up trash whenever you see any on the beach/streets/wherever you go. Or join a local beach clean-up to give back to the environment you’re in. 16. Travel differently: Try WWOOFing (a form of work exchange where you work on an organic farm in exchange for food and accommodation), backpacking (learning to live simplistically out of your backpack), camping (this includes living in a teepee like the one pictured below), travelling and sleeping in a campervan, and — for those looking for something more extreme, you can even try living in a hammock. I read about a girl who did this while travelling Australia, who would bring her hammock around and sleep in people’s backyards for no cost! Alternatively, if you want something a bit more conventional, look for hostels or hotels with a specific emphasis on being eco-friendly. In Australia, I stayed at the Grampians Eco YHA for a night, and I was impressed by all their sustainability initiatives. They had a herb garden (where residents could take herbs from for free), vermicompost box, chickens with free-range eggs, etc — many factors which contribute to a sustainable food system. For a slightly more cultural experience, working as an au pair or living with a homestay family works wonders. I did this in Nepal for over a month, and it was one of the most valuable travel experiences I’ve had! Not only was my host family great company, but I also learnt so much more about the area than I would have had I been staying on my own. Low-impact travel 17. Use reef-friendly sunscreen. 
It turns out that oxybenzone — an ingredient commonly found in conventional sunscreens — combined with warmer water temperatures is a leading cause of coral bleaching. This mixture disrupts the fish and wildlife, leaches coral of its nutrients and bleaches it white. Not only does this affect the habitat itself, but also harms local economies which depend on tourism that the coral reef attracts. Therefore, when purchasing sunscreen for your next vacation, seek out ‘reef-friendly’ sunscreen which doesn’t contain any toxic chemicals or substances. 18. LEARN their ways. One of the first things I did upon arriving Australia was attend an aboriginal walking tour that led me through the different ways in which indigenous communities have been preserving the land for years. This set the tone for how I’d come to interact with and appreciate different natural landscapes while navigating the country, and it left me with some useful insight on how we can become better stewards of the earth even back home. Try to attend workshops and talks where you can learn something valuable and transfer that knowledge back to your home community! Canoeing the Noosa Everglades 19. Support tours that don’t destroy habitats. Rather than go on a speed boat that disturbs the serenity of marine habitat, why not try kayaking instead? In Byron Bay, I went on a dolphin kayaking tour, where the guides made a special emphasis not to disrupt the natural movement of the sea life. This is the best way to experience nature while getting some exercise in! Other forms of sustainable exploration include canoeing, cycling, walking, etc. 20. Stay for a longer length of time. There’s a difference between being a tourist and a traveller. While the former can be achieved within a couple of days of landmark-hopping, the latter takes more time and effort but allows you to connect more deeply with a country. By remaining in one place for a longer period of time, not only will you be able to see and do more, you can get much more out of your stay both socially and culturally. With business, school, and individual trips on the rise, it doesn’t look like overseas travel is going anywhere anytime soon. However, the worst thing we can do is dwell on the fact that we’re doing something ‘wrong’. We can make changes to mitigate our impact, and the simplest thing we can do is reframe our mindset and recognize this reality. Keep in mind, however, that while small lifestyle changes are great, these are best carried out in conjunction with other, more grand acts such as demanding systemic change. I hope this post gave you a bit of inspiration as to how to travel more sustainably, and do let me know if you have any feedback or other ideas! — — — If you found this article insightful, please do give it some claps (you can clap up to 50 times)! This goes a long way in helping me reach more people with my work. Also, you can find me on Instagram for more related content! Originally published at https://www.veganhkblog.com.
https://medium.com/climate-conscious/20-simple-ways-to-reduce-your-environmental-impact-while-travelling-d787e3156966
['Eugenia Chow']
2020-07-14 13:56:17.158000+00:00
['Travel', 'Sustainability', 'Climate Action', 'Culture', 'Environment']
10 free tools to help you grow your business
“Most businesses actually get zero distribution channels to work. Poor distribution — not product — is the number one cause of failure.” This Peter Thiel quote should be heeded by every startup founder reading this. Honestly, I’ve made this mistake myself, in my previous failed entrepreneurial experience. And how many times have you seen friends, co-workers or teams pitching their ideas, where there’s a fantastic team, a brilliant product, but zero effort spent on understanding how to distribute the whole package? Without an excellent distribution model, your business will fail. If you build it, they won’t come. Marketing is at the heart of any good distribution framework, and as a fledgling startup, it’s critical that your company finds cost-effective tools to help amplify your network as quickly as possible. That’s where this list comes in. Each tool is free to use, at least on a “freemium” plan, and will make a genuinely positive impact on your company. 1. SEMrush: From competitive analysis to SEO keyword research, SEMrush is an incredibly powerful tool that will empower you and your colleagues to appraise your company’s online performance in minutes. Their free-to-use starter plan provides insightful data that other companies would make you pay to see. It’s a must-use platform, without a doubt. 2. BuzzSumo: If you’re interested in harnessing the power of content marketing, but don’t know where to begin, BuzzSumo is the tool to use. Simply enter a URL or keyword, and this social media monitoring tool will show you the 10 most-shared articles for free. While you have to pay to see other highly shared articles, a top 10 list associated with a specific keyword or website will help you jump-start your content marketing strategy. 3. Canva: Gone are the days when you needed to rely on an Adobe Illustrator expert to create a great-looking logo, blog post header, or social media background. Canva is an incredibly intuitive design tool that empowers business owners to quickly create professional graphics. 4. Google Analytics: Probably no introductions needed here. Stop wondering how many people are visiting your website, from where, via what method. Google Analytics provides users with a rich set of information, perfect for a new startup interested in analyzing user behavior. Use the “behavior flow” tool to see what pages most of your visitors view first, and to see how visitors explore your website from there. That’ll help you to better optimize your site for UX. 5. GetSocial: Virality is a startup founder’s best friend. You can’t create it without highly clickable social share buttons. That’s where GetSocial comes in, with its social media app store that helps websites improve their traffic, shares, followers and conversions. Also, they’ve optimized the whole mobile social sharing experience. 6. Buffer: Becoming an influencer on social media has never been so easy. Buffer allows you to schedule 10 Tweets, LinkedIn posts, or Facebook posts and tracks all key metrics. Plus, the platform will suggest relevant content for you to share, so that you can grow your audience by providing valuable and relevant content. Buffer is one of the best social media monitoring platforms around and is a must-use for founders. 7. Trello: This task management platform is free to use, and will help you stay organized as a business and as a marketer. You can create segmented columns with Trello, which will help you stay on top of various marketing initiatives like blog posting, social media, and email marketing.
Plus you can share Trello boards with your team; that way, everyone will be aligned on what needs to get done. On a side note, I use Trello for everything: from my shopping list, to finding an apartment to rent, to our day-to-day product management. I also love the use case from the guys at Uservoice. 8. HubSpot Marketing Grader: HubSpot is an all-in-one marketing automation system. While it costs quite a bit to actually use HubSpot, the company offers the HubSpot Marketing Grader that will analyze the overall performance of your marketing strategy online. Use insights from HubSpot to understand what is working and what needs to be fixed if your business is to scale. 9. MozBar: Learn why various websites are ranking on Google with the MozBar. This Chrome and Firefox extension shows users ranking factors like page authority and social media performance as they browse the web. It’s an ideal free tool for founders interested in better understanding their competitors and SEO in general. 10. Headline Analyzer: Whether you’re writing a blog post, titling a new page on your website, or editing your pitch deck, headlines have a huge impact on the overall performance of a written marketing initiative. That’s where CoSchedule’s Headline Analyzer Tool comes in. Simply paste your headline into the tool and it will grade your headline on an F to A scale for virality. While creating a product that customers can’t resist is a critical component to building any successful startup, building a marketing machine is another key component to creating a business that scales quickly. These 10 free-to-use marketing tools are sure to make it easier for any founder to grow his or her business quickly.
https://medium.com/getsocial-io/10-free-tools-to-help-you-grow-your-business-a9ab8d73d997
['João Romão']
2017-07-07 08:41:28.319000+00:00
['Growth Hacking', 'Digital Marketing', 'Startup', 'Online Marketing', 'Marketing']
Mental Health vs. Strong Body
Since there were only a few things I could control entirely, one of them was getting fit. For my entire 20s I was slightly chubby, as I loved, and still do (who am I kidding?), eating anything made from dough: pastries, bread, pasta, you name it. If it screamed carbohydrates, it was my go-to meal. I always had this excuse that if I ever wanted to lose weight, I would do it. The only problem with this — let’s call it by its real name, a delusion — was that I had been selling myself this line for ages, which only made me even more comfortable with the kilos I was adding on the scale with each passing year. It was January 2018 when I decided to make a change. While I was not actively thinking about motherhood, the mental seed was always there, waiting. So I decided to try to lose some weight. I didn’t have any target, but I was sure as hell I didn’t want to reach a new weight milestone on the scale which I would become comfortable with, too. I was 150 cm in height and weighed 59 kilos. Since June 2017, I had stopped eating any meat other than fish, and that only on occasion, having switched to a lacto-ovo-vegetarian diet with a predominant preference towards carbohydrates — and lots of them. Anyone who moves from a regular diet to a vegetarian one has to be careful about their caloric intake because of the trap of thinking your body will not get enough energy to support your normal activities. Since this was an underlying concern for me, I ended up choosing very high-calorie foods. Knowing what my culprit was, the decision to try the keto diet was a no-brainer. At the same time, I also started a very light exercise plan, because I hate any form of physical exercise too much to even consider going to the gym. The plan involved approximately 10 to 15 minutes a day (usually in the morning) of medium-intensity exercises that I could perform in the comfort of my home. It worked like a charm. The keto diet (which I would only dare to try again in case of emergency — aka nothing else works) was perfect. I lost around 2 kilos during the first week, which was a strong incentive for me to continue with it, even though preparing meals or eating out was very often a challenge for a vegetarian on keto. In parallel, as I was losing weight, my morning exercise routine became more bearable, to the point where I started enjoying it and even became disciplined about it. That was it. For the first time in my life, I was owning it. Week after week, I became leaner and fitter, so I decided, that spring, to go for my first jog in years. It was nothing spectacular at the beginning, but my new discipline habit caught up with me and helped me come back to it whenever I felt that I would never be a runner.
https://medium.com/in-fitness-and-in-health/becoming-a-mother-a-heartfelt-testimonial-c114f8dfec71
['Eir Thunderbird']
2020-12-20 20:53:24.186000+00:00
['Fitness', 'Mental Health', 'Motivation', 'Feminism', 'Parenting']
An interview with Awkward co-founder Kevin Kalle
You first studied at the Willem de Kooning Academy in Rotterdam, then transferred to Maryland Institute College of Art. What drove you to move to the states and study at MICA? I felt limited at the Willem de Kooning Academy. It made me look for a challenge and I knew the bar at MICA was really high. I signed up, got accepted, and packed my bags. These days education is behind on the industry, so I think the problem I faced then is something we still face today. Why do you think education is behind on the industry? The curriculum does not match the practical experience. It’s hard for schools to innovate, and at the same time, our industry is changing rapidly. Looking back, I’m glad I made the decision to transfer instead of dropping out. It’s not just design skills you learn at school but you also develop social skills, and you get a chance to work in teams and learn to understand other perspectives. These are the things that I think should be emphasized in the work we do; it’s not just about your skill set. You finished MICA in 2006 and started Awkward in 2011. What did you do during the five-year gap? Right after school, I co-founded a start-up with 3 others in San Francisco where I learned a lot of things besides design itself. I did that for a while but noticed I still wanted to learn a variety of things instead of just one thing. That’s when I started freelancing for multiple startups and agencies with a strong focus on user interface and icon design. During this time I met Pieter Omvlee from Sketch. Back then he was working on Fontcase and Drawit.
https://medium.com/madeawkward/an-interview-with-awkward-co-founder-kevin-kalle-5874c0439a01
[]
2018-09-17 08:06:30.909000+00:00
['Design', 'Agency', 'Founder Stories', 'Interview', 'Startup']
Did the Protagonist Need a Backstory in Tenet?
Christopher Nolan’s more recent films have, in one way or another, been polarizing, to say the least. Whether it was the narratively messy The Dark Knight Rises, or the heavy-handed dialogue found in Interstellar, or the lack of character development in Dunkirk, there is no shortage of criticism that can be found being levied against Nolan films. And yet, for how prevalent this is for Nolan’s work, the criticism and critiques never seem to stick, at least not in the same way that it has for the likes of M. Night Shyamalan; which effectively sunk his career and reputation in a big way. The why behind Nolan’s success is truly fascinating. For while we can criticize his storytelling style all day long, we always find ourselves coming back for more. Which brings us to Nolan’s latest polarizing project, Tenet. Tenet is a fascinating character study — not of the protagonist, the, uh, Protagonist — but of Nolan himself. Tenet, probably more so than any other of Nolan’s recent projects gives us a glimpse into how he approaches his minimalistic storytelling process. The protagonist, the Protagonist Ironically, the most fascinating criticism about Tenet isn’t the preposterously crazy take on time travel, but about how the film presents its lead character, the protagonist who is purposefully known literally as the Protagonist. Many have taken humorous jabs at Nolan for this seemingly on-the-nose creative self-indulgence. After all, on the surface, naming your protagonist the Protagonist seems like the sort of thing a film student would do in an attempt to be artistically edgy and unique, but is instead groan-inducing. And while I’m not saying that Nolan couldn’t have nor shouldn’t have come up with a more appropriate naming convention, it makes me wonder, how much focus did Nolan plan on putting into Tenet’s main character in the first place? After all, the Protagonist feels like a shell of a character. He seemingly doesn’t have a fleshed-out backstory, and his motivations are unclear at best. While defenders of Tenet have tried to explain away the Protagonist’s coldness and aloofness, you can’t deny that those elements definitely exist within the character. Which I guess is kind of the point. At the end of the day, all you really need to know about the Protagonist is that he’s cold, efficient, and incredibly competent at his job. Only in very subtle instances do we see cracks in his exterior that hint at an underlying softness in his stoic shell. So while the Protagonist doesn’t have genuine character development, he does have character. Seeing a character react to their situation is character, whereas challenging the belief systems of a character is character development. With the Protagonist, we see him react to plenty of unusual circumstances, but we never get a firm grasp of why he has chosen to face these challenges in the first place or how it makes him feel. After all, you can’t have character development if the character doesn’t grow or shift their mindset in a meaningful way. And the problem with the Protagonist is that we have no idea what he believes. But by naming the protagonist the Protagonist, Nolan effectively stripped the character down to its naked core. In a way, Nolan naming the main character the Protagonist is simply his way of saying, ‘This story isn’t about the character. It’s about the story. Oh, and by the way, he’s the good guy, and he knows he’s the good guy.’ In the tech development industry, many teams have adopted the Lean methodology. 
Simply put, Lean is meant to help product teams focus on small, doable tasks while cutting out the fat of digital products. In this way, you focus on expanding the elements of your product that are essential. In much the same way, Tenet seems like part of an extended experiment on Nolan’s part in a quest to find the most efficient way to tell overly complicated stories. No matter whether you think the use of the title protagonist is fitting or inherently silly, you have to admire Nolan for creating such a complicated story in such a lean, efficient way. Do Characters Need Backstory? But all this got me thinking. Is the Protagonist a cold and aloof character simply because he has no backstory, or is there more to the story itself? After all, the Protagonist is far from the first action hero that has no backstory. The first example that came to mind for me is one of my favorite heroes, Ethan Hunt, in the Mission Impossible franchise. For as iconic of a character that he is, what do we really know about Ethan, exactly? The first film alludes to his upbringing in a small rural town and mentions his mother and Uncle Donald, but besides that, we know nothing about Ethan’s past. Was he in the military or CIA before joining the IMF? Does he have siblings? What did he have to overcome personally and professionally to get to be an IMF agent? The fact is, we simply don’t know. Funnily enough, Tom Cruise’s spy character in the criminally under-appreciated Knight and Day has more backstory than Ethan Hunt does in the Mission Impossible films. And what about characters like Jason Bourne, which is a character who’s past is deliberately held back from the audience. (Except for Matt Damon’s last entry into the franchise, but we don’t talk about that.) How is it that a character can be successful like Jason Bourne when the only things we know about him are the same things that the character knows about himself? The answer is that in these cases, their past simply doesn’t matter. What matters is how the characters react and respond to obstacles in the moment. In The Bourne Identity, we see Bourne struggle with his amnesia, even going so far as to lash out verbally due to his frustration. We also are able to get into his mind to see how he solves problems, such as when he’s escaping from the embassy. With Ethan Hunt, we get emotionally invested with him as he deals with the turmoil of seeing his team murdered right in front of him in the original Mission Impossible film. We get to see the aftermath as he struggles with figuring out what to do next, while also dealing with emotional fatigue. These are only a couple of examples that would seem to suggest that characters don’t need backstories for us as the audience to identify and empathize with them. Which raises the question, are backstories even necessary at all? Depends On the Story It’s been posited by some online commentators that backstories are unnecessary. I’ve heard arguments be made that you can watch The Dark Knight without having seen Batman Begins and still be able to understand and become engaged in Bruce Wayne’s story. While this is true, it’s a fact that even though they’re in the same trilogy, The Dark Knight has a totally different story to tell than Batman Begins. You can’t just simply take the storytelling style of the Dark Knight and make Batman Begins. It just wouldn’t work, and vice versa. Including a backstory or not is completely predicated on the type of story you want to tell. 
Are you telling a tight, lean spy story that’s mostly focused on espionage and mind-games, or are you diving into a character study where the character’s depth is important to the story and the progression of the plot? While The Dark Knight is, at its core, a crime thriller, Batman Begins is a character study about Bruce Wayne’s childhood trauma. Both are great stories in their own right, but they’re not equal because they’re not the same. So no, backstories are not a tool to simply be thrown away. At the same time, not every story ever written needs one either. Ultimately, it just depends. Every type of story has pros and cons. With a character study like Batman Begins, you gain the ability for the audience to empathize and become emotionally invested in the hero’s journey, whereas with a crime thriller like The Dark Knight, you can place all your focus on the character’s actions and reactions. What about Tenet? I went to go see Tenet in theaters with a couple of my brothers, and afterward, while discussing the film, one of my brothers pointed out that in Tenet, it wasn’t the Protagonist’s lack of backstory that was the problem with his character, but that we didn’t get to see him respond in a human way to the obstacles he encounters. With every new obstacle or piece of information he learns, he accepts everything in stride without ever reacting in a relatable way for the audience to empathize with. For all intents and purposes, the Protagonist is effectively emotionless. David Washington does what he can with the character, and I quite liked him in the role, but his character almost felt more robotic than human, like an AI always trying to figure things out while not having any underlying emotions to connect with. Yes, we get to see him making difficult decisions, but we don’t really get to see the effect that those decisions have on him as a person. On top of that, the Protagonist only asks direct questions and doesn’t ask for elaboration. For someone who is experiencing a scientific anomaly, he seems numb for most of the runtime since nothing that happens in the course of the story seems to pique his curiosity in the slightest. In a way, the Protagonist feels more like he’s caught up in the current of the story and is just along for the ride as opposed to being an active participant in the plot. Which, once again, might be kind of the point, but I won’t go into spoilers here. Ultimately, with Christopher Nolan’s screenplay, the themes, concepts, and storytelling beats took precedence over the characterization of the characters. Which, in a way, is logical and totally warranted. Tenet has so many complicated twists and turns that it’s hard to just keep up with what’s happening in the story. If Nolan had inserted deep characterization into the plot, it potentially could have just become too bloated to be engaging. In essence, Nolan sacrificed characterization for the sake of the plot. Was that the right decision to make? Well, not only does a story depend on the type of story that the storyteller intends to tell, but it also depends on what the audience expects of certain stories as well. After Christopher Nolan’s Dunkirk, I was expecting Tenet to be more of a visual and audio spectacle more than a deep character study. In that sense, Tenet totally paid off for me, because while I didn’t become emotionally attached to the characters, I was fully engaged with the story. 
So while I sympathize with people who saw Tenet and were disappointed at the lack of characterization — and I’ll readily admit that they’re definitely not wrong for thinking so — I’m not convinced that Christopher Nolan made the wrong decision to forego characterization for the sake of the story. While I think a more nuanced director like Doug Liman could have turned Tenet’s protagonist into a more relatable character — which probably would have translated into a better movie overall — I’m also simultaneously amazed at the sheer scope and visceral energy of Tenet’s story and filmmaking. Tenet is one of those movies that keeps you thinking about it for days afterward. Conclusion While Tenet told its story in a lean and satisfactory way, it was missing a human element to ground the story on an emotional level. What this boils down to is that Tenet is one of Christopher Nolan’s lesser movies, but also one of his most fascinating. Tenet tells a story that doesn’t resonate with me emotionally, but the plot keeps the analytical side of my mind constantly engaged. Much like Ad Astra, that was so cold and emotionless as to render the audience numb, Tenet ultimately was a lesser film because it only hooked me intellectually, not emotionally. In general, the best films are able to do both, but that doesn’t mean that Tenet was a mistake. In short, Nolan knew the story he wanted to tell, and he did it in the most efficient way possible. If you enjoy movies and liked this story, give me some claps and follow me for more stories like this!
https://medium.com/oddbs/did-the-protagonist-need-a-backstory-in-tenet-bc7a80974fd0
['Brett Seegmiller']
2020-10-06 18:58:30.181000+00:00
['Storytelling', 'Cinema', 'Film', 'Writing', 'Movies']
April Fools’ 2019: Perception-driven data visualization
April Fools’ 2019: Perception-driven data visualization Exploring OKCupid data with the most powerful psychological technique for accelerating analytics This article was a prank for April Fools’ Day 2019. Now that the festivities are over, scroll to the end of the article for the Real Lessons section for a minute of genuine learning. Evolution endowed humans with a few extraordinary abilities, from walking upright to operating heavy machinery to hyperefficient online mate selection. Humans have evolved the ability to process faces quickly, and you can use perception-driven technique to accelerate your analytics. One of the most impressive is our ability to perceive tiny changes in facial structure and expression, so data scientists have started exploiting our innate superpowers for faster and more powerful data analytics. Evolution-driven data analysis Get ready to be blown away by an incredible new analytics technique! Chernoff Faces are remarkable for the elegance and clarity with which they convey information by taking advantage of what humans are best at: facial recognition. The core idea behind Chernoff faces is that every facial feature will map to an attribute of the data. Bigger ears will mean something, as will smiling, eye size, nose shape, and the rest. I hope you’re excited to see it in action! Let’s walk through a real-life mate selection example with OKCupid data. Data processing I started by downloading a dataset of nearly 60K leaked OKCupid profiles, available here for you to follow along. Real-world data are usually messy and require quite a lot of preprocessing before they’re useful to your data science objectives, and that’s certainly true of these. For example, they come with reams of earnest and 100% reliable self-intro essays, so I did a bit of quick filtering to boil my dataset down to something relevant to me. I used R and the function I found most useful was grepl(). First, since I live in NYC, I filtered out all but the 17 profiles based near me. Next, I cleaned the data to show me the characteristics I’m most fussy about. For example, I’m an Aquarius and getting along astrologically is obviously important, as is a love of cats and a willingness to have soulful conversations in C++. After the first preprocessing steps, here’s what my dataset looks like: The next step is to convert the strings into numbers so that the Chernoff face code will run properly. This is what I’ll be submitting into the faces() function from R’s aplpack package: Next step, the magic! Faces revealed Now that our dataset is ready, let’s run our Chernoff faces visualization! Taa-daa! Below is a handy guide on how to read it. Isn’t it amazingly elegant and so quick to see exactly what is going on? For example, the largest faces are the tallest and oldest people, while the smilers can sing me sweet C++ sonnets. It’s so easy to see all that in a heartbeat. The human brain is incredible! Data privacy issues Unfortunately, by cognitively machine deep learning all these faces, we are violating the privacy of OKCupid users. If you look carefully and remember the visualizations, you might be able to pick them out of a crowd. Watch out for that! Make sure you re-anonymize your results by rerunning the code on an unrelated dataset before presenting these powerful images to your boss. Dates and dating Chernoff faces?! You really should check publication dates, especially when they’re at the very beginning of April. 
I hope you started getting suspicious when this diehard statistician mentioned astrology and were sure by the time I got to the drivel about de-anonymization. Much love from me and whichever prankster forwarded this to you. ❤ Real lessons I’ve always been amused by Chernoff faces (and eager for an excuse to share some of my favorite analytics trivia with you), though I’ve never actually seen them making themselves useful in the wild. Even though the article was intended for a laugh, there are a few real lessons to take away: Expect to spend time cleaning data. While the final visualization took only a couple of keystrokes to achieve, the bulk of my effort was preparing the dataset to use, and you should expect this in your own data science adventures too. Data visualization is more than just histograms. There’s a lot of room for creativity when it comes to how you can present your data, though not everything will be implemented in a package that’s easy for beginners to use. While you can get Chernoff faces through R with just the single function faces(data), the sky is the limit if you’re feeling creative and willing to put the graphics effort in. You might need something like C++ if you’re after the deepest self-expression. What’s relevant to me might not be relevant to you. I might care about cat-love, you might care about something else. An analysis is only useful for its intended purpose, so be careful if you’re inheriting a dataset or report made by someone else. It might be useless to you, or worse, misleading. There’s no right way to present data, but one way to think about viz quality is speed-to-understanding. The faces just weren’t efficient at getting the information into your brain — you probably had to go and consult the table to figure out what you’re looking at. That’s something you want to avoid when you’re doing analytics for realsies. Chernoff faces sounded brilliant when they were invented, the same way that “cognitive” this-and-that sounds brilliant today. Not everything that tickles the poet in you is a good idea… and stay extra vigilant for leaps of logic when the argument appeals to evolution and the human brain. Don’t forget to test mathemagical things before you deploy them in your business. If you want to have a go at creating these faces yourself, here’s a tutorial.
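If you want to see what that cleaning step looks like outside of R, here is roughly the same filtering sketched in pandas. Treat it as an illustration only: the file name, column names and “compatible” signs below are made-up assumptions, not the real schema of the leaked dump.

import pandas as pd

# Load the raw profiles (path and column names are hypothetical).
profiles = pd.read_csv("okcupid_profiles.csv")

# Keep only profiles based near New York, mirroring the grepl() location filter.
near_me = profiles[profiles["location"].str.contains("new york", case=False, na=False)]

# Keep the characteristics the analysis cares about: cat lovers who mention C++
# and have an astrologically agreeable sign (which signs count is pure assumption).
keep = (
    near_me["essay"].str.contains("cat", case=False, na=False)
    & near_me["essay"].str.contains(r"c\+\+", case=False, na=False)
    & near_me["sign"].str.contains("aquarius|gemini|libra", case=False, na=False)
)
shortlist = near_me[keep]
print(shortlist.shape)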
If you prefer to read one of my straight-faced articles about data visualization instead, try this one.
https://towardsdatascience.com/perception-driven-data-visualization-e1d0f13908d5
['Cassie Kozyrkov']
2019-04-02 13:29:17.253000+00:00
['Analytics', 'Data Science', 'Technology', 'Visualization', 'Artificial Intelligence']
Training multiple machine learning models and running data tasks in parallel via YARN + Spark + multithreading
Training multiple machine learning models and running data tasks in parallel via YARN + Spark + multithreading Harness large scale computational resources to allow a single data scientist to perform dozens or hundreds of Big data tasks in parallel, stretching the limits of data science scaling and automation image: Freepik.com Summary The objective of this article is to show how a single data scientist can launch dozens or hundreds of data science-related tasks simultaneously (including machine learning model training) without using complex deployment frameworks. In fact, the tasks can be launched from a “data scientist”-friendly interface, namely, a single Python script which can be run from an interactive shell such as Jupyter, Spyder or Cloudera Workbench. The tasks can themselves be parallelised in order to handle large amounts of data, such that we effectively add a second layer of parallelism. Who is this article intended for? Data scientists who wish to do more work in less time, by making use of large scale computational resources (e.g. clusters or public clouds), possibly shared with other users via YARN. To understand this article you need a good knowledge of Python, working knowledge of Spark, and at least a basic understanding of Hadoop YARN architecture and shell scripting; Machine learning engineers who are supporting data scientists in making use of available computational capacity and operating on large scale data Introduction Data science and automation “Data science” and “automation” are two words that invariably go hand-in-hand with each other, as one of the key goals of machine learning is to allow machines to perform tasks more quickly, with lower cost, and/or better quality than humans. Naturally, it wouldn’t make sense for an organization to spend more on the tech staff that are supposed to develop and maintain systems that automate work (data scientists, data engineers, DevOps engineers, software engineers and others) than on the staff that do the work manually. It’s thus not surprising that a recurrent discussion is how much we can automate the work of data science teams themselves, for instance via automated machine learning. To achieve cost-effective data science automation, it is imperative to be able to harness computational power from public or private clouds; after all, the cost of hardware is quite low compared to the cost of highly skilled technical staff. While the technology to achieve this is certainly available, many organisations ended up facing the “big data software engineer vs data scientist conundrum”, or more precisely, the drastic discrepancy between “Big data software engineer skills”, i.e. the skills necessary to manipulate massive amounts of data in complex computational environments and run these processes in a reliable manner along with other concurrent processes, and “Data scientist skills”, i.e. the skills necessary to apply algorithms and mathematics to the data to extract insights valuable from a business standpoint. Harnessing computational power is key to automating data science work image: Freepik.com Some organisations would make “data scientists” responsible for developing the analytics models in some sort of “controlled analytics environment” where one does not need to think too much about the underlying computational resources or sharing the resources with other processes, and “big data software engineers” responsible for coding “production-ready” versions of the models developed by data scientists and deploying them into production.
This setup resulted in obvious inefficiencies, such as: Data scientists developing sub-optimal models due to not making use of large scale data and computational resources. In some organisations, data scientists even ended up working with single-node frameworks such as Pandas/Scikit-Learn and basing their models entirely on small datasets obtained via sampling or over-engineered features; Developed models performing well on analytics environment but not performing well, or being completely unable to run, in production environment; The difficulty to evaluate generation of business value, identify and fix problems, as well as making iterative improvements, as data scientists end up dramatically losing oversight of the analytics process once models are sent into production. Different organisations dealt with this situation with different ways, either by forcing big data software engineers and data scientists learn the skills of the “other role”, or by creating a “third role”, named “Machine Learning Engineer” to bridge the gap between the two roles. But the fact is that nowadays, there are far more resources in terms of allowing data scientists without exceptional software engineering skills to work in “realistic” environments, i.e. similar to production, in terms of computational complexity. Machine learning libraries such as Spark MLLib, Kubeflow, Tensorflow-GPU, and MMLSpark allow data preparation and model training to be distributed across multiple CPUs, GPUs, or a combination of both; at the same time, frameworks such as Apache Hadoop YARN and Kubernetes allow data scientists to work simultaneously using the same computational resources, by understanding only basic concepts about the underlying server infrastructure, such as number of available CPUs/GPUs and available memory. The intent of this article is to provide an example of how these libraries and frameworks, as well as massive (but shared) computational resources, can be leveraged together in order to automate the creation and testing of data science models. From individually massively parallelised tasks to massively running tasks in parallel Frameworks like Spark and Kubeflow make easy to distribute a Big Data task, such as feature processing or machine learning model training, across GPUs and/or hundreds of CPUs without a detailed understanding of the server architecture. On the other hand, executing tasks in parallel, rather than individual parallelised tasks, is not as seamless. Of course, it’s not hard for a data scientist to work with two or three PySpark sessions in Jupyter at the same time, but for the sake of automation, we might be rather interested in running dozens and hundreds of tasks simultaneously, all specified in a programmatic way with minimal human interference. Naturally, one may ask why bother with running tasks in parallel, instead of simply increasing the number of cores per task and make each task run in a shorter time. There are two reasons: The processing speed often does not scale with the number of cores. 
For example, in the case of training machine learning models, if the data is not large enough, there might be zero improvement in computation time from increasing the number of cores from, say, 10 to 100, and sometimes the computational time might even increase due to process and communication overhead, as well as the inability to leverage highly efficient single-processor implementations available in some machine learning libraries. The accuracy of machine learning models may also decrease due to parallelisation, as the algorithms often rely on suboptimal heuristics to be able to run in a distributed fashion, such as data splitting and voting. It is certainly possible, using deployment tools such as Airflow, to run arbitrarily complex, dynamically defined and highly automated data analytics pipelines involving parallelised tasks. However, these tools require low-level scripting and configuration and aren’t suited for the quick “trial and error” experiments carried out by data scientists on a daily basis, who are often accustomed to trying and re-trying ideas quickly in interactive shells such as Jupyter or Spyder. Also, taking us back to the previously mentioned “big data software engineer vs data scientist” conundrum, organisations might prefer data scientists to spend their time focusing on experimenting with the data and generating business value, not on getting immersed in low-level implementation or deployment. What will you learn in this article? In this article, I will show how we can make use of Apache Hadoop YARN to launch and monitor multiple jobs in a Hadoop cluster simultaneously (including individually parallelised Spark jobs), directly from any Python code (including code from interactive Python shells such as Jupyter), via Python multithreading. While the example will consist of training multiple machine learning models in parallel, I will provide a generic framework that can be used to launch arbitrary data tasks such as feature engineering and model metric computation. Some applications for multiple model parallel training are: Hyper-parameter tuning: For the same training data set, simultaneously train using different model types (say Logistic Regression, Gradient Boosting and Multi-layer Perceptron) and also different hyperparameter configurations, in order to find the optimal model type/hyperparameter set as quickly as possible; Multi-label classification: Train multiple binary/multi-class classification models in parallel, where each model training task will use a different column as the label column, such that the resulting combination of models will effectively be a multi-label classifier; Feature reduction: For a pool of previously ranked features, train multiple models, each using only the top N-ranked features as feature columns, with N being varied across the training tasks. Technical overview In our framework, I will call the main task, i.e.
the Python code that creates the additional tasks to run in parallel, as the controller task, and the tasks being started by the controller task as the subordinate tasks. (I intentionally avoid using the expression “worker” to avoid confusion, as in Spark, “worker” is a synonym for Spark executor.) The controller task is responsible for: Defining how many subordinate tasks should be run at the same time and what to do in case one of the tasks fails; Creating the subordinate tasks, passing the inputs to each task and getting their outputs, if any; Generating the inputs and processing the outputs of the subordinate tasks. An interesting aspect of YARN is that it allows Spark to be used both in the controller and subordinate tasks. Although neither is necessary, this allows us to handle arbitrarily large datasets without needing to worry ourselves about data engineering, as long as we have enough computational resources. Namely, the controller task can run Spark in client mode, and the subordinate tasks can run Spark in cluster mode: In client mode, the Spark driver runs in the environment where the controller’s Python code is being run (that we refer to as the client environment), allowing the use of locally installed interactive shells such as Jupyter, whereas the Spark executors run in the YARN-managed Hadoop cluster, with the interactions between the driver and executors made via a third type of process named the Application Master, also running in the Hadoop cluster; In cluster mode, both the driver and the executors run in the YARN-managed Hadoop cluster. Note that nothing prevents us from having the controller task also run in cluster mode, but interactive shells cannot be used in this way. The framework is illustrated in the figure below: Illustration of the parallelisation framework There are two things to note about the example above: Although in the example the controller task is also the driver of the Spark process (and thus associated with executors in the Hadoop cluster via the YARN Application Master), this is not necessary, although useful for example if we want to do some preprocessing on the data before deploying to the subordinate tasks; Although the subordinate tasks do not need to use Spark parallelisation, we will use the spark-submit command to launch them, such that they will always have a Spark driver, although not necessarily Spark executors. This is the case of process 3 above. Technical implementation Executing a subordinate task as a Spark job Before I delve into parallelisation, I will first explain how to execute a subordinate task from a controller task written in Python. As mentioned before, we will do so using the spark-submit shell script contained in the Apache Spark installation, such that the subordinate task will be technically a Spark job, although it does not necessarily have executors or Spark code, as I mentioned before. In principle, we can use spark-submit from Python by simply calling the os.system function, which allows us to execute a shell command from Python. In practice, we need to be able to debug and monitor the task; for that purpose, it is better to use the excellent subprocess library.
An example: import json import subprocess spark_config_cluster_path = "/home/edsonaoki/spark_config_cluster" app_name = "some_model_training" spark_config = { "spark.jars.packages" : "com.microsoft.ml.spark:mmlspark_2.11:0.18.1", "spark.dynamicAllocation.enabled": "false", "spark.executor.instances": "10", "spark.yarn.dist.files": "/home/edsonaoki/custom_packages.tar" } command = "lightgbm_training.py "\ "hdfs://user/edsonaoki/datasets/input_data.parquet "\ "hdfs://user/edsonaoki/models" spark_submit_cmd = “SPARK_CONF_DIR=%s spark-submit -name %s %s %s" % (spark_config_cluster_path, app_name, " ".join(['-conf %s="%s"' % (key, value) for key, value in spark_config.items()]), command) cmd_output = subprocess.Popen(spark_submit_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=1, universal_newlines=True) for line in cmd_output.stdout: print(line) cmd_output.communicate() At the beginning of the code I set the path containing the cluster mode base Spark configuration, which is later used to change the SPARK_CONF_DIR environmental variable. This is an actually crucial step if the controller task is configured to run in Spark in client mode since the Spark configuration for cluster mode is typically different than for client mode. If you don’t know much about how to configure Spark in cluster mode, you can start by making a copy of the existing SPARK_CONF_DIR . Inside the spark-defaults.conf file we need to have spark.submit.deployMode=cluster instead of spark.submit.deployMode=client and certain configuration options, such as spark.yarn.rmProxy.enabled and the spark.driver.options.* options need to be disabled as there is no network-specific configuration for the driver when running Spark in cluster mode. Check the Spark on YARN documentation if you are in doubt. Of course, if the controller task is also running Spark in cluster mode, there is no need to have a separate configuration. Now, looking at the subsequent steps: app_name = "some_model_training" spark_config = { "spark.jars.packages" : "com.microsoft.ml.spark:mmlspark_2.11:0.18.1", "spark.dynamicAllocation.enabled": "false", "spark.executor.instances": "10", "spark.yarn.dist.files": "/home/edsonaoki/custom_packages.tar" } command = "lightgbm_training.py "\ "hdfs://user/edsonaoki/datasets/input_data.parquet"\ "hdfs://user/edsonaoki/models" spark_submit_cmd = “SPARK_CONF_DIR=%s spark-submit -name %s %s %s" % (spark_config_cluster_path, app_name, " ".join(['-conf %s="%s"' % (key, value) for key, value in spark_config.items()]), command) Here I set up the application name, additional Spark configuration options and the command to be executed by the spark-submit script. These are straightforward to understand, but the application name is particularly important in our case — we will later understand why. We also submit a custom Python package via the spark.yarn.dist.files configuration parameter, which as I will show later, is especially handy since the subordinate task runs in the Hadoop cluster and hence has no access to the Python functions available in the local (client) environment. 
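The article takes custom_packages.tar as a given, but since the whole point is that the cluster-side job cannot see helper code installed only in the client environment, here is a minimal sketch of how such an archive might be built, using only Python's standard tarfile module. The module file names are assumptions for illustration (custom_data_preprocessing.py is the one actually imported later in the article).

import tarfile

# Local helper modules that the subordinate tasks will need on the cluster.
modules_to_ship = ["custom_data_preprocessing.py"]

with tarfile.open("/home/edsonaoki/custom_packages.tar", "w") as tar:
    for module in modules_to_ship:
        # arcname keeps the archive flat, so addPyFile() on the cluster side
        # can expose the modules for import without any directory prefix.
        tar.add(module, arcname=module)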
Note also that I specify two HDFS paths as arguments to the lightgbm_training.py Python script (the subordinate task’s code), for a similar reason to above: since the Python script will run in the Hadoop cluster, it will not have access to any files in the client environment’s file system, and hence any files to be exchanged between controller or subordinate task must be either explicitly submitted via spark.yarn.dist.files or put into a shared file system such as HDFS or AWS S3. After preparing the spark-submit shell command line, we are ready to execute it using the subprocess.Popen command: cmd_output = subprocess.Popen(spark_submit_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=1, universal_newlines=True) We set shell=True to make Python initiate a separate shell process to execute the command, rather than attempting to initiate spark-submit directly from the Python process. Although setting shell=False is generally preferable when using the subprocess library, doing so restricts the command line format and it’s not feasible in our case. The stdout , stderr , bufsize and universal_newlines arguments are used to handle the output (STDOUT) and error messages (STDERR) issued by the shell command during execution time. When we are executing multiple subordinate tasks in parallel, we will probably want to ignore all execution time messages as they will be highly cluttered and impossible to interpret anyways. This is also useful to save memory for reasons we will explain later. However, before attempting to run multiple tasks in parallel, it is certainly best to first make sure that each individual task will work properly, by running a single subordinate task with output/error messages enabled. In the example I set stdout=subprocess.PIPE , stderr=subprocess.STDOUT , bufsize=1 and universal_newlines=True , which basically, will direct all shell command output to a First In First Out (FIFO) queue named subprocess.PIPE . Note that when running a Spark job in cluster mode, subprocess.PIPE will only have access to messages from the YARN Application Master, not the driver or executors. To check the driver and executor messages, you might look at the Hadoop cluster UI via your browser, or retrieve the driver and executor logs post-execution as I will show later. Additionally, if file logging is enabled in the log4j.properties file (located in the Spark configuration), the messages from the Application Master will be logged into a file rather than directed to subprocess.PIPE , so disable file logging if needed. Finally, to display the output/error messages in the Python script’s output, I continue the code above as follows: for line in cmd_output.stdout: print(line) cmd_output.communicate() The purpose of cmd_output.communicate() is to wait for the process to finish after subprocess.PIPE is empty, i.e. no more outputs from the subordinate task are written to it. It highly advisable to read the entire queue before calling cmd_output.communicate() method as done above, to prevent the queue from increasing in size and wasting memory. Monitoring the subordinate task without using debug messages As I mentioned earlier, when we run tasks in parallel we do not want debug messages to be displayed; moreover, if a large number of tasks are sending messages to an in-memory FIFO queue at the same time, memory usage will increase messages aren’t being read from the queue as fast as they are generated. 
A version of the code from the previous section without debugging, starting with the call to spark-submit , is as follows: cmd_output = subprocess.Popen(spark_submit_cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) def getYARNApplicationID(app_name): state = 'RUNNING,ACCEPTED,FINISHED,KILLED,FAILED' out = subprocess.check_output(["yarn","application","-list", "-appStates",state], stderr=subprocess.DEVNULL, universal_newlines=True) lines = [x for x in out.split(" ")] application_id = '' for line in lines: if app_name in line: application_id = line.split('\t')[0] break return application_id max_wait_time_job_start_s = 120 start_time = time.time() while yarn_application_id == '' and time.time()-start_time\ < max_wait_time_job_start_s: yarn_application_id = getYARNApplicationID(app_name) cmd_output.wait() if yarn_application_id == '': raise RuntimeError("Couldn't get yarn application ID for application %s" % app_name) The code starts by launching the subordinate task as before, but with debugging disabled: cmd_output = subprocess.Popen(spark_submit_cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) Since there are no debug messages to be displayed when the process is running, we use cmd_output.wait instead of cmd_output.communicate() to wait for the task to finish. Note that although we won’t see the Application Master’s messages, we can still debug the Spark job’s driver and executor in runtime via the Hadoop cluster UI. However, we still need to be able to monitor the task from a programmatic point of view; more specifically, the controller task needs to know when the subordinate task has finished, whether it was successful, and take appropriate action in case of failure. For that purpose, we can use the application name that we set in the beginning: app_name = "some_model_training" The application name can be used by YARN to retrieve the YARN application ID, which allows us to retrieve the status and other information about the subordinate task. Again, we can resort to the subprocess library to define a function that can retrieve the application ID from the application name: def getYARNApplicationID(app_name): state = 'RUNNING,ACCEPTED,FINISHED,KILLED,FAILED' out = subprocess.check_output(["yarn","application","-list", "-appStates",state], stderr=subprocess.DEVNULL, universal_newlines=True) lines = [x for x in out.split(" ")] application_id = '' for line in lines: if app_name in line: application_id = line.split('\t')[0] break return application_id Observe that getYARNApplicationID parses the output of the yarn application -list shell command. Depending on your Hadoop version the output format may be slightly different and the parsing needs to be adjusted accordingly. If in doubt, you can test the format by running the following command in the terminal: $ yarn application -list -appStates RUNNING,ACCEPTED,FINISHED,KILLED,FAILED The tricky aspect is that this method can only work if the application name is unique in the Hadoop cluster. Therefore, you need to make sure you are creating a unique application name, for instance by including timestamps, random strings, your user ID, etc. Optionally, you can also add other filters when attempting to parse the output of yarn application -list , for example, the user ID, the YARN queue name or the time of the day. 
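Because everything downstream (status polling, log retrieval) hinges on the application name being unique, it may be worth a couple of lines to generate the name defensively. The helper below is not part of the original framework; it is just one possible convention that combines the ingredients mentioned above (user ID, timestamp and a random suffix):

import getpass
import time
import uuid

def makeUniqueAppName(prefix):
    # Combine the task prefix with the user ID, a millisecond timestamp and a short
    # random suffix, so that concurrent users (or two launches within the same
    # millisecond) still produce distinct names for yarn application -list to match.
    return "%s_%s_%d_%s" % (prefix, getpass.getuser(),
                            int(time.time() * 1000), uuid.uuid4().hex[:8])

app_name = makeUniqueAppName("some_model_training")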
Since the Spark job takes some time to be registered in YARN after it has been launched using spark-submit , I implemented the loop: max_wait_time_job_start_s = 120 start_time = time.time() while yarn_application_id == '' and time.time()-start_time\ < max_wait_time_job_start_s: yarn_application_id = getYARNApplicationID(app_name) where max_wait_time_job_start_s is the time to wait for the registration in seconds, which may need to be adjusted according to your environment. The meaning of if yarn_application_id == '': raise RuntimeError("Couldn't get yarn application ID for"\ " application %s" % app_name) is straightforward; if there is no application ID, it means the Spark job has not been successfully launched and we need to throw an exception. This may also indicate that we need to increase max_wait_time_job_start_s , or change how the output of yarn application -list is parsed inside getYARNApplicationID . Checking the final status of the subordinate task After the subordinate task has finished, checking its final status can be done as follows: def getSparkJobFinalStatus(application_id): out = subprocess.check_output(["yarn","application", "-status",application_id], stderr=subprocess.DEVNULL, universal_newlines=True) status_lines = out.split(" ") state = '' for line in status_lines: if len(line) > 15 and line[1:15] == "Final-State : ": state = line[15:] break return state final_status = getSparkJobFinalStatus(yarn_application_id) where again, you may need to tune the parsing of yarn application -status depending on your Hadoop version. How to handle the final status is entirely up to you, but one possibility is to store the Spark job’s driver and executor log in a file and raise an exception. For example: log_path = "/home/edsonaoki/logs/%s_%s.log" % (app_name, yarn_application_id) if final_status != "SUCCEEDED": cmd_output = subprocess.Popen(["yarn","logs", "-applicationId",yarn_application_id], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=1, universal_lines=True) with open(log_path, "w") as f: for line in cmd_output.stdout: f.write(line) print("Written log of failed task to %s" % log_path) cmd_output.communicate() raise RuntimeError("Task %s has not succeeded" % app_name) Using multithreading to execute subordinate tasks in parallel If not obvious, before attempting to execute subordinate tasks in parallel, make sure to test as many as tasks as possible without parallelisation, as debugging parallel tasks can be incredibly difficult. To perform parallelisation we will use Python’s concurrent library. The concurrent library uses multithreading and not multiprocessing; i.e. the threads do run in the same processor, such that from the side of the controller task, there is no real parallel processing. However, since the threads started in the controller task are in I/O mode (unblocked) when waiting for the subordinate tasks to finish, multiple subordinate tasks can be launched asynchronously, such that they will actually run in parallel in the side of the Hadoop cluster. While we can technically use the multiprocessing library instead of the concurrent library to achieve parallelism also from the controller task’s side, I would advise against it as it will substantially increase the memory consumption in the client environment for little benefit — the idea is that the “tough processing” is done in the Hadoop cluster. 
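To see why plain threads are enough here, consider a toy example that is unrelated to the framework itself (and assumes a Unix-like system where the sleep command exists): five threads each wait on a blocking external process, yet the total wall time is roughly that of a single process, because a thread blocked on subprocess I/O releases Python's GIL while it waits.

import concurrent.futures
import subprocess
import time

start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Each thread blocks on an external process, just as a controller thread
    # blocks while a spark-submit job runs on the cluster.
    futures = [executor.submit(subprocess.run, ["sleep", "2"]) for _ in range(5)]
    concurrent.futures.wait(futures)
print("Elapsed: %.1f s" % (time.time() - start))  # roughly 2 s, not 10 s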
When we launch a Spark job, we are typically aware of the constraints of processing and memory in the cluster environment, especially in the case of a shared environment, and use configuration parameters such as spark.executor.memory and spark.executor.instances in order to control the task’s processing and memory consumption. The same needs to be done in our case; we need to limit the number of subordinate tasks that execute simultaneously according to the availability of computational resources in the cluster, such that when we reach this limit, a subordinate task can only be started after another has finished. The concurrent package offers the futures.ThreadPoolExecutor class which allows us to start multiple threads and wait for them to finish. The class also allows us to limit the number of threads doing active processing(i.e. not blocked by I/O) via the max_workers argument. However, as I mentioned before, a thread in the controller task is treated as being blocked by I/O when the subordinate task is running, which means that max_workers won’t effectively limit the number of threads. As result, all subordinate tasks will be submitted nearly simultaneously and the Hadoop cluster can become overloaded. This can be solved rather easily by modifying the futures.ThreadPoolExecutor class as follows: import concurrent.futures from queue import Queue class ThreadPoolExecutorWithQueueSizeLimit( concurrent.futures.ThreadPoolExecutor): def __init__(self, maxsize, *args, **kwargs): super(ThreadPoolExecutorWithQueueSizeLimit, self).__init__(*args, **kwargs) self._work_queue = Queue(maxsize=maxsize) This new class ThreadPoolExecutorWithQueueSizeLimit works exactly like futures.ThreadPoolExecutor , but it won’t allow more than maxsize threads to exist at any point of time, effectively limiting the number of subordinate tasks running simultaneously in the Hadoop cluster. We now need to define a function, containing the execution code of the thread, which can be passed as an argument to the class ThreadPoolExecutorWithQueueSizeLimit . Based on the previous code for executing a subordinate task from Python without debugging messages, I present the following generic thread execution function: def executeThread(app_name, spark_submit_cmd, error_log_dir, max_wait_time_job_start_s=120): cmd_output = subprocess.Popen(spark_submit_cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) start_time = time.time() while yarn_application_id == '' and time.time()-start_time\ < max_wait_time_job_start_s: yarn_application_id = getYARNApplicationID(app_name) cmd_output.wait() if yarn_application_id == '': raise RuntimeError("Couldn't get yarn application ID for"\ "application %s" % app_name) final_status = getSparkJobFinalStatus(yarn_application_id) log_path = %s/%s_%s.log" % (error_log_dir, app_name, yarn_application_id) if final_status != "SUCCEEDED": cmd_output = subprocess.Popen(["yarn","logs", "-applicationId",yarn_application_id], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=1, universal_lines=True) with open(log_path, "w") as f: for line in cmd_output.stdout: f.write(line) print("Written log of failed task to %s" % log_path) cmd_output.communicate() raise RuntimeError("Task %s has not succeeded" % app_name) return True As you can see, the function uses the previously defined functions getYARNApplicationID and getSparkJobFinalStatus , and the application name, the spark-submit command line and the directory to store the error logs are passed as arguments to the function. 
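In line with the earlier advice to test as many tasks as possible without parallelisation, executeThread can first be exercised on a single task before handing it to a thread pool, reusing the app_name and spark_submit_cmd built in the earlier standalone example:

# Smoke-test the thread function on one task before running anything in parallel.
success = executeThread(app_name, spark_submit_cmd, "/home/edsonaoki/logs")
print("Task %s succeeded: %s" % (app_name, success))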
Note that the function raises an exception in case the yarn application ID cannot be found, or the status of the Spark job is not successful. But depending on the case, we may just want the function to return a False value, such that the controller task knows that this particular subordinate task has not been successful and needs to be executed again, without need to run again the tasks that have been already successful. In this case, we just need to replace line raise RuntimeError("Couldn't get yarn application ID for application %s" % app_name) and raise RuntimeError("Task %s has not succeeded" % app_name) with return False The next step is to create a generic code to start the threads and wait for their completion, as follows: def executeAllThreads(dict_spark_submit_cmds, error_log_dir, dict_success_app=None): if dict_success_app is None: dict_success_app = {app_name: False for app_name in dict_spark_submit_cmds.keys()} with ThreadPoolExecutorWithQueueSizeLimit(maxsize=max_parallel, max_workers=max_parallel) as executor: future_to_app_name = { executor.submit( executeThread, app_name, spark_submit_cmd, error_log_dir, ): app_name for app_name, spark_submit_cmd in dict_spark_submit_cmds.items() if dict_success_app[app_name] == False } for future in concurrent.futures\ .as_completed(future_to_app_name): app_name = future_to_app_name[future] try: dict_success_app[app_name] = future.result() except Exception as exc: print('Subordinate task %s generated exception %s' % (app_name, exc)) raise return dict_success_app The mandatory arguments to the function are: a dictionary with application names as keys and the corresponding job submission command lines as values; the directory to store the error logs. The output of the function is also a dictionary containing the return value (True or False) of each subordinate task, indexed by application name. The optional argument is dict_success_app , that can be the return value from a previous execution from the function, in case we only want to run the subordinate tasks that have not been already successful. I will show later how that can be accomplished. 
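As a preview of that retry pattern (and assuming executeThread has been modified to return False on failure instead of raising), the controller could persist the success dictionary and later resubmit only the failures; the log and JSON paths below are illustrative:

import json

max_parallel = 10  # maximum number of simultaneous subordinate tasks (read inside executeAllThreads)

# First attempt: run every subordinate task.
dict_success_app = executeAllThreads(dict_spark_submit_cmds,
                                     "/home/edsonaoki/logs")

# Persist the outcome so a later session can pick up from here.
with open("/home/edsonaoki/logs/success_status.json", "w") as f:
    json.dump(dict_success_app, f)

# Later, after investigating the failures: reload the status and re-run only
# the subordinate tasks that were not successful.
with open("/home/edsonaoki/logs/success_status.json") as f:
    dict_success_app = json.load(f)
dict_success_app = executeAllThreads(dict_spark_submit_cmds,
                                     "/home/edsonaoki/logs",
                                     dict_success_app=dict_success_app)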
For the reader’s convenience, I put together the complete code of the parallelisation framework below: import subprocess import concurrent.futures from queue import Queue class ThreadPoolExecutorWithQueueSizeLimit( concurrent.futures.ThreadPoolExecutor): def __init__(self, maxsize, *args, **kwargs): super(ThreadPoolExecutorWithQueueSizeLimit, self).__init__(*args, **kwargs) self._work_queue = Queue(maxsize=maxsize) def getYARNApplicationID(app_name): state = 'RUNNING,ACCEPTED,FINISHED,KILLED,FAILED' out = subprocess.check_output(["yarn","application","-list", "-appStates",state], stderr=subprocess.DEVNULL, universal_newlines=True) lines = [x for x in out.split(" ")] application_id = '' for line in lines: if app_name in line: application_id = line.split('\t')[0] break return application_id def getSparkJobFinalStatus(application_id): out = subprocess.check_output(["yarn","application", "-status",application_id], stderr=subprocess.DEVNULL, universal_newlines=True) status_lines = out.split(" ") state = '' for line in status_lines: if len(line) > 15 and line[1:15] == "Final-State : ": state = line[15:] break return state def executeThread(app_name, spark_submit_cmd, error_log_dir, max_wait_time_job_start_s = 120): cmd_output = subprocess.Popen(spark_submit_cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) start_time = time.time() while yarn_application_id == '' and time.time()-start_time\ < max_wait_time_job_start_s: yarn_application_id = getYARNApplicationID(app_name) cmd_output.wait() if yarn_application_id == '': raise RuntimeError("Couldn't get yarn application ID for"\ " application %s" % (app_name)) # Replace line above by the following if you do not # want a failed task to stop the entire process: # return False final_status = getSparkJobFinalStatus(yarn_application_id) log_path = %s/%s_%s.log" % (error_log_dir, app_name, yarn_application_id) if final_status != "SUCCEEDED": cmd_output = subprocess.Popen(["yarn","logs", "-applicationId",yarn_application_id], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=1, universal_lines=True) with open(log_path, "w") as f: for line in cmd_output.stdout: f.write(line) print("Written log of failed task to %s" % log_path) cmd_output.communicate() raise RuntimeError("Task %s has not succeeded" % app_name) # Replace line above by the following if you do not # want a failed task to stop the entire process: # return False return True def executeAllThreads(dict_spark_submit_cmds, error_log_dir, dict_success_app=None): if dict_success_app is None: dict_success_app = {app_name: False for app_name in dict_spark_submit_cmds.keys()} with ThreadPoolExecutorWithQueueSizeLimit(maxsize=max_parallel, max_workers=max_parallel) as executor: future_to_app_name = { executor.submit( executeThread, app_name, spark_submit_cmd, error_log_dir, ): app_name for app_name, spark_submit_cmd in dict_spark_submit_cmds.items() if dict_success_app[app_name] == False } for future in concurrent.futures\ .as_completed(future_to_app_name): app_name = future_to_app_name[future] try: dict_success_app[app_name] = future.result() except Exception as exc: print('Subordinate task %s generated exception %s' % (app_name, exc)) raise return dict_success_app Example: Multi-label model training with 2-level parallelisation using Gradient Boosting binary classifiers In this example, I will show how to use the framework above to parallelise training of a multi-label classifier with hundreds of labels. 
Basically, we will train multiple binary classifiers in parallel, where the training of each binary model is itself parallelised via Spark. The individual binary classifiers are Gradient Boosting models trained using the Spark version of the popular LightGBM package, contained in the Microsoft Machine Learning for Spark (MMLSpark) library. Setting up the controller task By using the framework above, there are only two other things that the controller task needs to do: Prior to calling the executeAllThreads function, set up the application name and spark-submit command for each subordinate task; After returning from the executeAllThreads function, check which subordinate tasks have been successful and handle their output appropriately. For the first part, we can start by looking at our previous example where we are submitting a standalone subordinate job: spark_config_cluster_path = "/home/edsonaoki/spark_config_cluster" app_name = "some_model_training" spark_config = { "spark.jars.packages" : "com.microsoft.ml.spark:mmlspark_2.11:0.18.1", "spark.dynamicAllocation.enabled": "false", "spark.executor.instances": "10", "spark.yarn.dist.files": "/home/edsonaoki/custom_packages.tar" } command = "lightgbm_training.py "\ "hdfs://user/edsonaoki/datasets/input_data.parquet"\ "hdfs://user/edsonaoki/models" spark_submit_cmd = "SPARK_CONF_DIR=%s spark-submit -name %s %s %s" % (spark_config_cluster_path, app_name, " ".join(['-conf %s="%s"' % (key, value) for key, value in spark_config.items()]), command) What do we need to adapt the code for multi-label classification? First, for the reasons already mentioned, the application name needs to be completely unique. Assuming that the label columns of the dataset input_data.parquet are contained in a variable lst_labels , one way to ensure likely unique applications IDs for each subordinate task would something like: import time curr_timestamp = int(time.time()*1000) app_names = ["model_training_%s_%d" % (label,curr_timestamp) for label in lst_labels] This ensures that application names will be unique as long as the controller task is not started more once in the same millisecond (of course, if we have a shared YARN cluster other adaptions may be needed to make the application names unique, such as adding the username to the application name). We are yet to discuss how the subordinate task code contained in lightgbm_training.py looks like, but let’s suppose it: Performs some pre-processing on the training data, based on the label column (such as dataset balancing), using a function contained in the custom_packages.tar file submitted along with the Spark job file submitted along with the Spark job Trains the model based on the features column and the label column Saves the trained model in the HDFS system In this case, the controller task needs to pass the HDFS path of the training dataset, the HDFS path to store the trained models, and the label to be used for each subordinate task, via command-line arguments to lightgbm_training.py . 
This can be done as shown below: dict_spark_submit_cmds = dict() for i in range(len(lst_labels)): command = "lightgbm_training.py "\ "hdfs://user/edsonaoki/datasets/input_data.parquet "\ "hdfs://user/edsonaoki/models "\ +lst_labels[i] spark_submit_cmd = “SPARK_CONF_DIR=%s spark-submit -name %s "\ "%s %s" % (spark_config_cluster_path, app_names[i], " ".join(['-conf %s="%s"' % (key, value) for key, value in spark_config.items()]), command) dict_spark_submit_cmds[app_names[i]] = spark_submit_cmd Of course, there are many other ways to customise the subordinate tasks. We might want to use different model training hyperparameters, different datasets, different Spark configurations, or even use different Python scripts for each subordinate task. The fact that we allow the spark-submit command line to be unique for each subtask allows complete customisation. For the reader’s convenience, I put together the controller task’s code prior to and until calling executeAllThreads : import time spark_config_cluster_path = "/home/edsonaoki/spark_config_cluster" curr_timestamp = int(time.time()*1000) app_names = ["model_training_%s_%d" % (label,curr_timestamp) for label in lst_labels] spark_config = { "spark.jars.packages" : "com.microsoft.ml.spark:mmlspark_2.11:0.18.1", "spark.dynamicAllocation.enabled": "false", "spark.executor.instances": "10", "spark.yarn.dist.files": "/home/edsonaoki/custom_packages.tar" } dict_spark_submit_cmds = dict() for i in range(len(lst_labels)): command = "lightgbm_training.py "\ "hdfs://user/edsonaoki/datasets/input_data.parquet "\ "hdfs://user/edsonaoki/models "\ +lst_labels[i] spark_submit_cmd = “SPARK_CONF_DIR=%s spark-submit -name %s "\ "%s %s" % (spark_config_cluster_path, app_names[i], " ".join(['-conf %s="%s"' % (key, value) for key, value in spark_config.items()]), command) dict_spark_submit_cmds[app_names[i]] = spark_submit_cmd executeAllThreads(dict_spark_submit_cmds, "/home/edsonaoki/logs") For the second part, i.e. what the controller task should do after returning from executeAllThreads , assuming that the successful tasks have saved the trained models in the HDFS system, we can just open these files and process them as appropriate, for instance applying the models to some appropriate validation dataset, generating plots and computing performance metrics. If we use the parallelisation framework presented earlier as it is, there won’t be “unsuccessful subordinate tasks” as any failure will result in an exception being raised. But if we modified executeThread to return False in case of task failure, we might store the returning dict_success_app dictionary in a JSON or Pickle file such that we can later investigate and fix the failed tasks. Finally, we can call again executeAllThreads with the optional argument dict_success_app set such that we re-run only the failed tasks. Setting up the subordinate task Let us now write the code of the subordinate task in the lightgbm_training.py script. The first step is to read the input arguments of the script, i.e. 
the path of the training dataset in the HDFS filesystem, the path to store the models and the name of the label column: import sys train_data_path = sys.argv[1] model_path = sys.argv[2] label = sys.argv[3] Since we are using the Spark version of LightGBM, we need to create a Spark session, which we do as follows: from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() spark.sparkContext.addPyFile("./custom_packages.tar") Note that there is no need to set up any configuration for the Spark session, as it has been already done in the command line submitted by the controller task. Also, since we explicitly submitted a custom Python package custom_packages.tar to the Spark job, we need to use the addPyFile function to make the contents of the package usable inside our code, as the package is not included in the PYTHONPATH environment variable of the Hadoop cluster. The code that does the actual processing in the subordinate task is pretty straightforward. The subordinate task will read the training data, call some pre-processing function inside custom_packages.tar (say custom_data_preprocessing.datasetBalancing ), perform the model training, and save the trained model with a unique name back in the HDFS file system: from custom_data_preprocessing import datasetBalancing from mmlspark import LightGBMClassifier df_train_data = spark.read.parquet(train_data_path) df_preproc_data = datasetBalancing(df_train_data, label) untrained_model = LightGBMClassifier(learningRate=0.3, numIterations=150, numLeaves=45)\ .setFeaturesCol("features")\ .setLabelCol(label) trained_model = untrained_model.fit(df_preproc_data) trained_model.write().overwrite()\ .save(model_path + "/trained_model_%s.mdl" % label) spark.stop() The full code of lightgbm_training.py is put together below for the reader’s convenience: import sys train_data_path = sys.argv[1] model_path = sys.argv[2] label = sys.argv[3] from pyspark.sql import SparkSession spark = SparkSession.builder.getOrCreate() spark.sparkContext.addPyFile("./custom_packages.tar") from custom_data_preprocessing import datasetBalancing from mmlspark import LightGBMClassifier df_train_data = spark.read.parquet(train_data_path) df_preproc_data = datasetBalancing(df_train_data, label) untrained_model = LightGBMClassifier(learningRate=0.3, numIterations=150, numLeaves=45)\ .setFeaturesCol("features")\ .setLabelCol(label) trained_model = untrained_model.fit(df_preproc_data) trained_model.write().overwrite()\ .save(model_path + "/trained_model_%s.mdl" % label) spark.stop() Conclusion It is easy to see that the framework presented in this article can be re-used for various tasks other than multiple machine learning model training. A question is that may arise is whether it can be used for different cluster environments, for instance with Spark on Mesos rather than Spark on YARN. I believe so, but some adaptations are needed as the presented code relies heavily on the yarn command to monitor the subordinate tasks. By using this framework, data scientists can focus more of their time on designing the data tasks, not on manually executing them for dozens or hundreds of small variations. Another advantage is that by harnessing parallelisation, the tasks can be done in much less time, or from a different perspective, without requiring multiple data scientists to work simultaneously to complete the tasks in the same amount of time. Naturally, this article presents only one of many ways to improve data science automation. 
Organisations that realise that the time of data scientists and other skilled tech professionals is highly valuable will certainly find increasingly more ways to help these professionals focus on higher-level problems.
https://towardsdatascience.com/how-to-train-multiple-machine-learning-models-and-run-other-data-tasks-in-parallel-by-combining-2fa9670dd579
['Edson Hiroshi Aoki']
2019-11-12 06:14:24.952000+00:00
['Machine Learning', 'Spark', 'Data Science', 'Big Data', 'Python']
Are You A Walking Paradox Like Me?
It can get lonely and confusing out here, for us boxless ones — those of us who don’t feel like we really fit in with a specific category. It’s even hard to describe the category of “boxless”. Here are some signs that you’re boxless like me: You’ve felt lonely because nobody else seems to think and experience life the way you do. On the surface, it might look like you fit in but you don’t feel a sense of true belonging for who you really are. Your experiences with therapy and coaching are mostly that you were a better therapist or coach to yourself than they were. This adds an additional layer of loneliness because this is the person who is supposed to see you — like, really see you — and even they don’t seem to get it. You’ve outgrown your parents emotionally and aren’t sure what to do about that. You can blend in and get along with most anyone, but rarely do you feel a true sense of belonging. Your relationship with most friends involves you listening to them talk and/or helping them through their problems. On those rare times when you try to open up about your experiences, you don’t usually get a reaction from them that actually helps. You feel deeply and think deeply and have a rich inner world. This has its perks but sometimes you study less-aware people and you envy their simplicity because it seems easier to be happy that way. You are sensitive to your surroundings, other people, and your inner world. You are affected by things more deeply than many other people seem to be and you hold on to emotions longer. This is a gift, but instead of using it to your advantage, you beat yourself up for being too sensitive because in our society, sensitivity is equated with weakness. You push yourself too hard and then burn out. You’ve had periods of your life that cycle through pushing, collapsing into burnout and hiding away, then pushing yourself again to start the cycle over. You are spiritual, but wary of blindly following any religion or guru. You have a still small voice always thinking, “There has got to be a better way.” If you can relate to most all of these bullets — I see you. Please know you are not alone. Please know it is my life’s mission to walk beside you, to enter your inner world with you and help you find all the answers that are already hidden inside of you.
https://medium.com/just-jordin/are-you-a-walking-paradox-like-me-8d3e7a61682b
['Jordin James']
2020-11-15 22:57:34.671000+00:00
['Mental Health', 'Self', 'Psychology', 'Spirituality', 'Inspiration']
Sometimes My Mind Makes Me Hate Writing
(This story was originally written on October 25th, 2018. It was a good snapshot of my life at the time, so I thought I would republish it. During these psychotic episodes, I have very little control of my mind, and I tried my best to capture the chaos that went on that day.) I’ve been having a rough time lately. I’m trying to write every day, but the situation in my mind has been appalling. Some days, I can’t even get an extra thought in with all the racket competing for attention. I feel like my mind has a mind of its own. But you see, I have goals. These goals are not nice to have but written in stone with a chisel. I have to do something about my financial state of affairs, and I need to do it now. I’m going to admit something hard for me to say. I’m on Social Security Disability (SSDI). There, I said it. I don’t know why I’m ashamed of it because I don’t have a choice in the matter. I haven’t been able to keep a job for a long time, and freelancing is problematic because I have problems with consistency. Clients don’t want to hire you if you can’t deliver on deadlines day after day. I’ve proven over time that I can’t, you know, deliver. When I have days like today, where the voices in my head are challenging each other for airtime and I can’t form a thought — much less write anything worthwhile — I become anxious and depressed. Seriously. It took me 2 hours to write the last 217 words. I’ve been trying to come up with a way to explain the situation I’m in, but all that comes to mind is FML — fuck my life. I’ll try again. Photo by Ksenia Makagonova on Unsplash I can’t hold a job — I’ve proven it time and time again. Along the road I’ve walked the past fifteen years are the shattered, smoking husks of lost opportunities. Freelancing seemed a viable option to supplement my SSDI, but like any job, they expect you to deliver on deadlines. It’s not personal — it’s business. I get it. You don’t have to explain to the crying man sitting in the corner. I thought a solution would be to get something going on Medium or write articles for blogs on my own time and schedule. I’ve been reading the advice of others about what I need to do to be successful on Medium, or with writing in general. One of the first things always mentioned is you have to write every day and put out content seven days a week. That’s just not realistic for me. I sound like a complainer — I know, I disgust myself. But I’ve tried writing and publishing every day, and I even made a schedule. I went one step further and tracked my time to see where it was all going. I sit at my desk and try to type. I try to make the words flow. Today I ended up with my head in my hands, screaming for my brain to please shut up. I finally gave up and rested in bed with my laptop open. I’m struggling to write this post, 20 words at a time. My family tries to help, but I can’t tell them that every small noise they make rings in my head like a dinner gong. I can’t tell them everything irritates me. The worst thing is — if I can’t get my mind under control, we may not eat next month, or the month after that, because my Social Security can’t last forever. Again, FML. I can feel the panic building. My stomach feels like I ate a 5-pound burrito and the contents are pushing into my throat in preparation to throw up. My hands are shaking. Music. I need music. Ed Sheeran — take me away. Dogs are barking. I still hear them. My daughter, Zoey, is chattering happily in the next room. Control. A woman’s voice is droning on in the back of my head. 
I’m ignoring her, but she’s persistent. My medication isn’t doing anything to help. PANIC. I need to take a break. Photo by Adeolu Eletu on Unsplash An hour of Netflix and my mind has quieted somewhat. Zoey is sick today and sitting at my desk watching funny YouTube videos on my phone. It’s as calm as it gets around here — no better time than now to write a few words. Sometimes, it takes a little distraction for me to be able to focus. Does that make any sense? I need to focus on something other than what’s going on in my head. Sometimes, the things that live in my head are so disturbing that it takes a lot of noise to drown it out. Photo by Tim Marshall on Unsplash More breaks. I can’t keep it reigned in. If SHE is not talking in my head, it’s an old woman. Nothing they say makes sense. I’m scared because I don’t want Flora to find out the voices are back. She thought the medication was helping, but it’s not. The only time it’s quiet is when I’m drunk, but I promised myself I wouldn’t self-medicate. There are also problems with the headaches. When I drink, the headaches get worse. Worse yet, when the headaches are screaming in my head, so are the voices. I know they’re not real. I’ve been dealing with the people in my head long enough to know the people aren’t real people. My mind creates everything. Knowing it doesn’t help. Knowing it makes me feel like more of a freak. I have to stop. Photo by Will Porada on Unsplash I shoveled the food into my mouth, more out of habit than hunger. I didn’t even taste it. Every little noise distracts my mind — even the sound of the fork touching the plate was torture. I yelled at Zoey again. She was just playing, but my mind convinced me it was bothering my wife while she was working. I can’t control the anger that builds in my chest. Now I feel horrible. I’m such an asshole. Photo by Edin Hopic on Unsplash I forget kids don’t hold grudges. When I went to check on Zoey, she smiled and hugged me. It’s scary to think that so many people count on me. I don’t want to lose what’s left of my mind.
https://jasonjamesweiland.medium.com/sometimes-my-mind-makes-me-hate-writing-9e79b98c3088
['Jason Weiland']
2020-02-13 03:39:49.541000+00:00
['Digital Life', 'Mental Health', 'Self', 'Mindfulness', 'Writing']
7 Powerful Psychology Lessons That Will Boost Your Digital Marketing Game
1. Emotional Marketing There are two types of strategies that affect consumers’ buying habits: Rational marketing that promotes the quality and usefulness of the product, emphasizes the benefits and appeals to the rational or logical consumer. Emotional marketing that approaches consumers on a personal level and focuses on the tone, lighting, and mood to increase loyalty and boost conversions. It’s been proven that consumers base their purchase decisions on feelings and emotions rather than on rational information about products’ features and attributes. So, it’s worth remembering that customers will more likely be loyal to brands that evoke a positive emotional response. Use this knowledge in your content marketing strategy and create content that: Inspires, creates excitement and interest. Reminds of special moments. Sparks conversations, reactions, and engagements. Apple is the perfect example of a company that uses emotions to connect with their consumers and increase brand loyalty. Apple’s marketing strategies tend to create a desire to become a part of a lifestyle movement, to be a part of something bigger. Recently, Apple joined Instagram, and their #ShotoniPhone campaign fully encompasses those values. Instead of focusing on shiny product shots, Apple invites regular users around the world to share their iPhone photography with others. 2. Social Proof According to social psychologist Robert Cialdini, social proof is one of the most important tactics for influencing and convincing customers. Social proof or social influence is based on the fact that people love to follow the behavior of others. We tend to adopt the beliefs or mimic the actions of people we trust and admire. Implement this knowledge in your marketing strategy by using: User-generated content, testimonials and reviews. Influencer marketing. Social plugins and sharing buttons. For instance, clothing company Old Navy cooperated with social influencer Meghan Rienks on Instagram, Twitter and YouTube. In her videos, Meghan suggested style ideas to her followers using items from Old Navy, thus providing powerful social proof. 3. Grounded Cognition Grounded Cognition theory is based on the principle that people can experience a story that they read, watch or hear as if it were happening to them. It also states that people tend to forget dry facts and figures. If you want your customers to remember your message, you have to incorporate it into a story. Taking this into account, you can boost your marketing by: Speaking to your audience in a friendly way. Telling stories they can empathize with. Sharing a personal story or experience. High Brew Coffee provides a great example of a personal story that enables the audience to connect with the brand. The founder of the company, David Smith, together with his wife, has shared their story of coming up with their business idea. They let the audience know exactly where it comes from — a long trip through the Caribbean with their whole family. 4. Paradox Of Choice Giving people the freedom of choice can positively influence your marketing efforts. However, too many choices make people nervous and can negatively impact conversion rates. According to psychologist Barry Schwartz, providing people with a limited range of choices reduces customers’ anxiety and leads to better marketing results. Use this knowledge and: Emphasize a few key points at a time. Create clear CTAs. Give your customers no more than two clear paths to follow. 
The Paradox of Choice theory can also be applied if you wish to offer your customers a wider range of choices. For example, while Amazon offers millions of products, they still manage to avoid choice overload. They do it by highlighting a few different categories of products, each with up to 7 product options. 5. Information-Gap Theory George Loewenstein proposed that people experience a strong emotional response when they notice a gap between what they know and what they want to know. This means that you have to create a feeling of curiosity within your audience and give them information that fulfills their need for knowledge. An effective way to incorporate it into your content marketing is by creating powerful headlines. There are plenty of free online tools that can help, such as: Take an example from the digital marketing expert Neil Patel, who is a master of strong headlines that create curiosity and generate clicks: 6. The Commitment and Consistency Theory This theory states that if you make a small commitment to something, you are more likely to say yes to a bigger commitment in the future. This means that if you get your customers to make a small commitment towards your brand, like signing up for a newsletter, they are more likely to make a larger commitment, e.g. in the form of a purchase or membership. To improve your marketing strategy, start with small commitments like: Ask for customers’ contact details. Invite them to subscribe to a newsletter. Ask prospects to share your content on social media. Offer them a sign-up for an e-book or webinar. Search Engine Journal, for instance, takes advantage of this theory and offers a free webinar to their website visitors. Although it’s simply asking for a name and an email address, it’s already a small commitment the user makes towards the brand. 7. Loss Aversion Theory Loss aversion theory refers to the tendency of people to avoid losses rather than acquire gains. The negative feelings associated with loss are even twice as powerful as the good feelings of gain. You can effectively use this theory to your advantage if you analyze your audience, learn their fears and create content that emphasizes the benefits of your brand that ease those fears. There are many analytical tools that can help you know your audience better, for example: ModCloth used this theory in their email reminders. A few days after not making a purchase, customers receive a reminder that inventory is running low and the item they looked at might soon not be available anymore. Wrapping It Up Using psychological theories is a great way to improve the success of your marketing messages without any additional technologies or big budgets. These theories can help you better understand your customers, consider how your customers think and create content that cuts through the information overload we’re all bombarded with.
https://medium.com/the-pushcrew-journal/7-powerful-psychology-lessons-that-will-boost-your-digital-marketing-game-38bbc7b661e9
['Alex']
2019-10-10 19:38:46.202000+00:00
['Marketing', 'Marketing Strategies', 'Growth Hacking', 'Psychology', 'Digital Marketing']
The 5 things we’ve lived by to create a truly international business from Day One
If you’re building a startup, you have a LOT of decisions to make early on. Most of them will be wrong and that’s OK. You learn from them and move on. But when it comes to attracting markets beyond your country’s borders, you have to make the right decisions from Day One. This is especially true if your home market is not the United States or any other English speaking country. I’ve learned that the hard way with my first startup affinitiz. affinitiz was one of the very first social networks. (Mark Zuckerberg was still in high school to put things in perspective.) affinitiz, my first startup, only targeted the French market. It was one of the very first social networks back in 2001. It never really took off. It was launched in 2001 and was built as a French-only business. The website was in French, the app was in French, our PR was targeting French media, and so on. As a consequence, it was constrained to a small, limited market (France) and it never became big enough to survive. After 5 years, it had 400,000 registered users in France. I have no doubt that had it been an international business from Day One, it would have had at least 10 times that and would have been a valuable business. If I had reached that 4 million registered user mark in 2006, I would have sold my company and made millions. But instead, as a French-only business, affinitiz made only $150,000 in annual revenue. I shut it down in 2009. My current business, Agorapulse, has users in 170 countries and 14,300 cities around the world. It also generates more than $3.3M in annual recurring revenue and is profitable. Only 20 percent of its revenue comes from France. If I had built Agorapulse the same way I built my first startup (French-only), it’d be dead by now. Doing the right things to make sure your startup is truly international can make or break it. Here’s what I’ve learned from doing it right with Agorapulse. 1) Use English as your working language for EVERYTHING If English is not your native language, deciding that it should be your working language is no small commitment. But think about it. Language is everywhere: your code, naming policy, website, app, your support content, emails, etc. If you want to grow a global business, you’ll have to assemble a global team right away. Note: Most of them won’t speak your native language. If you start building your business in your native language (say, French) and, two years down the road, you hire a native Spanish speaker who doesn’t speak a word of French, you’re fucked. (Pardon my French there.) If you launch your website in French (or any other language that isn’t English) and two years down the road you’re ready to go international, your website will only have built authority in your native language, and none in other languages. Good luck with that. Long story short: You HAVE to do everything work related in the only universal language there is — English. For many founders, it’s hard (and certainly not intuitive) to put aside your native language and use a “foreign” language in everything you do. But it’s the ONLY way to build an international business. If you start building your app, website, support doc and so on in a language that’s not English, you’ll create roadblocks that will very quickly become way too hard to overcome. 2) Localize everything (and watch your words) Localize, localize, localize. This is especially true for your code and your app interface. Every word, every sentence, tool tip, and button has to be a language variable. 
It’s also true with your website. Sure, it’s time consuming at first, but it will soon become a lifesaver as you start adding more languages to the mix. It’s a discipline and it’s not always easy to do, but it’s worth it. Otherwise, you get a hodgepodge of languages on your page — which isn’t impressive to anyone. Here’s an example from Hootsuite where French and English words are mixed up on the same page. The ability to uncorrelate the code from any given language will also allow you to use translation software such as Webtranslateit and let your marketing / product teams manage translations on their own. Your tech guys will save time (and headaches) and your product / marketing teams (or localization team as you grow bigger) will have the flexibility they need. Localization also plays a role in the UI of your product or website. As I mentioned earlier, English should be your working language. So when your UI/UX guy works on screens, all the wording should be in English. I’ve learned the hard way that English has much shorter words or sentences than any other language (well, at least, French, Spanish or Portuguese, the three other languages we use). For example, “Moderation rules” will become “Règles de modération” in French and, all of a sudden, that button where all the text fits in one nice line in English looks a doesn’t look as nice in French. In English: In French: Spanish and Portuguese looks very similar to French in terms of length and how it impacts UI. When you design your product UI or your website in English and expect to localize them in other languages down the road, keep that in mind. Like with this menu on our app, it looks good in English: But since “ads comments” is a much lengthier phrase in Spanish, it falls beneath the navigation bar and loses any dropdown menu functionality. Key takeaway: leave some UI breathing room whenever you can! 3) Hire native speakers Now that you’ve localized your app and website and you’ve begun creating your content in English, you’re ready to localize EVERYTHING. In the early days, you’ll likely have a small team with no native speakers. You’ll be tempted to work with translators to fill in the gaps. When you search on Fiverr or Upwork, it looks easy: there are a LOT of people who claim they can localize / translate your content. There are even companies that specialize in localization jobs. I’ve tried them all. Trust me when I say localization agencies/companies and freelancers don’t work. It’s a tall task for these outsourced workers to know your jargon, ecosystem, and product. The level of onboarding, proofreading, and micromanaging these people unfamiliar to your business is overwhelming. It’s MUCH faster and easier to have your localization capabilities in-house. That’s what we’ve done by hiring native speakers in English (in Ireland and the U.S.), Spanish (in Mexico), and Portuguese (in Brazil). The awesome thing about having native speakers embedded in your team (as opposed to external service providers) is that they learn your product, ecosystem and jargon along the way. After 4 to 6 months, they know everything they need to know to do their job. As localization needs rarely constitutes a full-time job, these team members can also help the company by providing customer support, giving demos to prospects, and helping expand your business in countries that speak their native language. A win-win to me — because localizing your app and website is not enough to grow. 
Growth will only occur if you offer the whole stack in their own language: website content app support sales Just to illustrate, we’ve offered our website and app in Spanish and Portuguese almost from Day One (2013). For 2 years, the MRR from Spanish and Portuguese speaking countries remained painfully low. But look at what happened after we hired our first full-time Spanish speaking team member: And what happened after we hired our first Portuguese speaking team member: You get the point, right? 4) Consider a remote (or semi-remote) organization Hiring native speakers on your team can be challenging in two ways: It’s a bigger financial commitment. It can be hard to find native speakers of your targeted languages if you need them locally in your office. The solution we’ve found to these two challenges is to hire our native speakers remotely. That’s actually the reason why we’ve become a “semi remote” company. Read more about our story here: This solution has worked great for us. First, it’s MUCH easier to find native Spanish speakers in Spanish speaking countries! Who would’ve thunk it? :) Same goes for Portuguese and English. If we had to hire them in Paris, our “home”, we’d have a hard time coming up with enough great applicants. Benefit #1: You have more opportunities to hire not only native speakers, but also great team members! When you go beyond the confines of your headquarters, you open up the pool of potentially great coworkers. Think of other cities around the world with great talent pools — with a remote or semi-remote approach, the talent in those areas is within your reach. Benefit #2: You’ll find more affordable resources if your remote team members live outside of San Francisco, New York, Paris, London or another major “western” city. Opening positions that can be held by people living in places where the cost of living is lower helps a lot with the cost. That difference gets even bigger if you can hire people in countries where the cost of living is lower than yours. It’s obviously not a key factor, but in the early days, when every penny counts, the ability to spend 50% less on a resource just because her cost of living is 50% less than yours is a great win/win. 5) Deal with the friction more effectively Everything in a multilingual company is much more complicated, takes more time, and brings more challenges than if only one language is used. And startups don’t want additional friction that will consume more of their already scarce resources. I get it. But it’s only true if you do everything in all languages from the start. The best way to deal with that friction is to always start everything with one language — English — and then test, iterate, and measure for as long as necessary until you get to a state or a process that works well. Then, and only then, do you localize. For example, our support knowledge base was offered only in English for a loooong time. When it got to the point where we wouldn’t have to change it too much, we localized it. When we run ads, we always test and iterate on one language and then localize. Sometimes we even just use English on our ads, even if we target worldwide. In a nutshell, we don’t always localize everything. We try to reduce the friction as much as possible. Get the right tools If you’re going to run a remote (or semi-remote) team like I’ve suggested, you’ll need tools to work efficiently. I’ve shared most of the tools we used in this blog post: Here are a few key ones that will work well for your from-Day-One international business. 
Webtranslateit Webtranslateit is our go-to tool for localizing our app. It makes the process straightforward and you’re guaranteed not to let anything slip through the cracks. Multisite Language Switcher (WordPress plugin) We’ve chosen to use a multisite WordPress install for multiple reasons that go beyond the topic of this post. Long story short: Having only one WordPress instance with a language plugin didn’t offer us the flexibility we needed. The Multisite Language Switcher plugin, however, allows you to switch from one language to the others across your multisite WordPress setup for any page or blog post. It makes localizing each page and blog post pretty straightforward too. We love Support Hero for many reasons. Most of them are detailed here: One of Support Hero’s greatest features is that it offers multilingual support and makes it easy to create different versions of your support documentation in different languages and see what has been translated and what has not (and needs to be done). If your startup is based in the U.S., you can wait longer to go international, but make sure you build the foundation for your future expansion, like localizing your code and website. Your turn! Do you have a global business? Any tip you’d like to share? Or are you struggling to go international and would like more details about how we’ve done it? Just ask!
https://medium.com/agorapulse-stories/the-5-things-weve-lived-by-to-create-a-truly-international-business-from-day-one-941add21ba9a
['Emeric Ernoult']
2017-03-22 22:41:21.213000+00:00
['Localization', 'SaaS', 'International Development', 'Entrepreneurship', 'Startup']
9 Traits You Should Slowly Remove From Your Day-to-Day Life
9 Traits You Should Slowly Remove From Your Day-to-Day Life #1 Overthinking the little things Photo by Ivana Cajina on Unsplash We’re all human. I know, so profound. This isn’t the most enlightening piece of knowledge I’ve offered, but hear me out. In the hustle and bustle of our lives, it’s easy to lose perspective. We are all, in fact, human beings. We are biological machines that take in oxygen to fuel our cells and expel carbon dioxide. In fact, I had a real human-chemistry experience the other day. I was wrapping up my emails for the day at work when one of my workmates rushed in. Their left eye was red and they struggled to keep it open. They anxiously asked if “I knew chemistry.” Now imagine what kind of thoughts were computing in my head. What about chemistry? What do you need me to do to your eye? I’m so confused and not qualified to do whatever it is you’re about to ask of me. Lucky for me, my workmate’s contact lens was simply stuck, and they needed me to work with their chemistry student for a few minutes. I felt instant relief. Oh so you’re working with a chemistry student, I thought. I can totally do that. I felt much better after absolutely overthinking the situation. Why do I stress so much? Overthinking isn’t my only mentally draining character trait. There’s a long list of other thoughts and feelings I know I need to reduce in my life. These traits wear us out and put us down, and for no good reason. Life presents us with new problems every day. There’s a positive way to go about dealing with each one.
https://medium.com/illumination/9-traits-you-should-slowly-remove-from-your-day-to-day-life-8408bd9038f7
['Ryan Porter']
2020-12-16 23:50:46.893000+00:00
['Life Lessons', 'Productivity', 'Motivation', 'Self Improvement', 'Ideas']
Deep Learning for Developers
Photo by Jason Leung on Unsplash So you have been working as a Software Engineer for many years, you know different frameworks/languages/libraries, and you know the best practices and use them. I will try to ensure you understand what Deep Learning is and the things you should know about it from a developer’s point of view. But then, in the background, you can hear some buzz going on around data science, artificial intelligence, machine learning, deep learning, and your inner evil starts tickling the impostor syndrome that makes you feel behind on this topic. In this blog post, I will try to ensure you understand what Deep Learning is and the things you should know about it from a developer’s point of view. I.e. we will try to avoid going deep into maths. Let’s go! Let’s start with a business requirement: we are going to create an API which can recognise if there is a flower in an image (see the picture of this blog post, where a robot is looking at a lego flower). What do we need to do to implement it using Deep Learning? How to represent an image as a matrix? So imagine a square image, which is 64px x 64px. Every pixel has an RGB (red/green/blue) value, where (0, 0, 0) would stand for black and (255, 255, 255) for white. So if you wanted to represent an image as a matrix — it’s simply a three dimensional matrix, where the dimensions are 64 x 64 x 3. This should be a cold shower for many devs, who, just like me, hate adding matrices and math stuff into code. But Deep Learning requires that. Matrices used: images (data) and labels We will have two kinds of sets of images, one set for training and one for testing (to see the accuracy of the trained model). So let’s say we have a data set of 100 images; we may put it into a matrix and have a four dimensional one: 100 x 64 x 64 x 3 (64px x 64px, 3 — RGB). Then, each image should have a label, which says 0 (false) or 1 (true) to indicate if you, as a human, see a flower in that image. This is the model pre-training, where you need to give some examples to the software so it knows what a flower is. What is the Logistic (a.k.a. Sigmoid) Function? So one of the cryptic terms you’ll hear when looking into Deep Learning is a Sigmoid function, or a Logistic Function. It uses Euler’s number and gives values between 0 and 1. In deep learning, the algorithms return values between 0 and 1 to give the probability of how likely it is that there is a flower. Then we round that value (e.g. 0.7 becomes 1) to a binary one. There is no need to go into the internals of Sigmoid, since it’s very easy to define in code or use as an abstraction. What is Jupyter? Jupyter is like an IDE for Data Scientists. If JupyterHub is used, then it’s also a versioning system. Jupyter is a web-based tool where you can create “notebooks” (*.ipynb extension); these consist of Python code and comments/plots/images/tables/etc. Basically, you read a Jupyter notebook like an article and run the lines of code block by block: What is NumPy and TensorFlow? NumPy is a Python library which abstracts many scientific computations. For Deep Learning, we are mostly interested in operations with matrices (multiplication, transposing, shape shifting). If we had to do it in plain Python, it would neither be efficient hardware-wise, nor would you enjoy writing that code. TensorFlow is an ecosystem of libraries for Machine Learning in different languages (Python, Java, JavaScript, etc.). Deep Learning is a subset of Machine Learning, therefore we are going to use it. 
A great thing about TensorFlow is that it has a lot of predefined data sets / trained models, so you may use them right away instead of having to train your own. Before looking into code examples: If you want to try running Python notebooks, you may use Google’s Colab to have an environment set up quickly Training data set — it’s a set of images to train your model, i.e. let’s say you have 1000 images and for each of them you assign a binary value whether there is a flower in it or not Testing data set — it’s a set of images different from the training data set, again with binary values assigned. This data set is used like a fitness function in software engineering, to tell how accurate your model is. Similarly to the human mind, if you learn maths by doing maths tests, the result will likely be better if during the exam you get a test identical to one you have already done before rather than a completely new one. X and Y in data sets: x corresponds to a matrix of images, whereas y represents a binary (1 or 0) value that corresponds to “yes” or “no” Some code: Finally, some code that you may test out on Jupyter. For the purpose of explaining it with a real world example, I will avoid naming things in x, y, z and similar notations, just so you understand what is what. Let’s start with the simplest, we will use NumPy: import numpy as np Next, let’s introduce data. For the simplicity of this example, all pixels of all training and testing images will be zero, but in the real world, you’d need to import images of the same size (e.g. 64px x 64px), where each pixel has 3 values (red/green/blue, i.e. RGB values), and convert them to matrices: # Constants training_images = np.zeros((10, 64, 64, 3)) # 10 images, 64px x 64px, 3 — RGB testing_images = np.zeros((2, 64, 64, 3)) # 2 images, 64px x 64px, 3 — RGB training_images_labels = np.zeros((1, 10)) # labels for 10 training images testing_images_labels = np.zeros((1, 2)) # labels for 2 testing images Since Logistic Regression doesn’t accept 4 dimensional matrices by default, we need to come back to a humanly understandable, two dimensional model, i.e. 
flatten the data into a two dimensional matrix (or a table), where the number of columns is the number of images and the rows are all the pixels stacked: def flatten_images(images_matrix): return images_matrix.reshape(images_matrix.shape[0], -1).T flattened_training_images = flatten_images(training_images) flattened_testing_images = flatten_images(testing_images) Let's define the sigmoid function (yes, TensorFlow has an abstraction for it, but just for the sake of understanding it): def get_sigmoid(z): return 1.0 / (1 + np.exp(-z)) The activation value (again, this is a calculus thing related to logistic regression), which takes the flattened data (images), the weights and the bias (more on that — later): def get_activation_value(flattened_data, weights, bias): return get_sigmoid(np.dot(weights.T, flattened_data) + bias) Calculating the weights (one per pixel, acting as importance scores) and the bias: def get_weights_and_bias(flattened_training_data, training_data_labels): values_per_data_entry = flattened_training_data.T[0].shape[0] # How many pixels an image has amount_of_training_data = flattened_training_data.shape[1] # How many images we have weights = np.zeros((values_per_data_entry, 1)) # a column vector, so the update below broadcasts correctly bias = 0 iterations = 1000 # You can set almost any value and optimise it learning_rate = 0.5 for index in range(iterations): activation_values = get_activation_value(flattened_training_data, weights, bias) weights_derivative = np.divide(np.dot(flattened_training_data, np.subtract(activation_values, training_data_labels).T), amount_of_training_data) weights = weights - learning_rate * weights_derivative bias_derivative = np.divide(np.sum(np.subtract(activation_values, training_data_labels)), amount_of_training_data) bias = bias - learning_rate * bias_derivative return weights, bias And here's the final place where you actually train the model and put everything into one place: def train_model(flattened_training_data, training_data_labels, flattened_testing_data, testing_data_labels): # Getting weights and bias weights, bias = get_weights_and_bias(flattened_training_data, training_data_labels) # Calculating predictions for each entry in the data set training_data_predictions = get_activation_value(flattened_training_data, weights, bias) testing_data_predictions = get_activation_value(flattened_testing_data, weights, bias) # We only care about binary predictions, i.e. "it is a flower" or "it is not", so rounding training_data_predictions = np.around(training_data_predictions, 0) testing_data_predictions = np.around(testing_data_predictions, 0) # That's it! Just for the sake of testing you may now check the accuracy of your model: accuracy_of_this_model = 100 - np.mean(np.abs(testing_data_predictions - testing_data_labels)) * 100 print('{}%'.format(accuracy_of_this_model)) To run the model, you'd need to insert the values defined earlier: train_model(flattened_training_images, training_images_labels, flattened_testing_images, testing_images_labels) Running it would take a few seconds and then print 100%, because all values in our data matrices were zeros and not actual RGB colour values. I.e. this model was doomed to succeed. What wasn't covered in this blog post Many things! In classical explanations we would see what Neural Networks are, get explanations of why they look and act similarly to the human brain, and go deeper into calculus, so that you could write Deep Learning software without using TensorFlow. You could even do it without NumPy, but it wouldn't be as efficient because of the heavy operations with matrices. 
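For contrast, here is a rough sketch of how the same flower / no-flower classifier could be expressed with TensorFlow's Keras API, reusing the flattened zero matrices defined above. Treat it as an illustrative sketch rather than the "official" way: the 'sgd' optimizer and the 100 epochs are arbitrary assumptions, and on all-zero data it will trivially report 100% accuracy just like the NumPy version. Keras hides the sigmoid, the derivatives and the update loop behind a single Dense layer with a sigmoid activation:

import tensorflow as tf

# Keras expects one sample per row, so transpose the (pixels x images) matrices from above
keras_training_images = flattened_training_images.T   # shape (10, 12288)
keras_training_labels = training_images_labels.T      # shape (10, 1)
keras_testing_images = flattened_testing_images.T     # shape (2, 12288)
keras_testing_labels = testing_images_labels.T        # shape (2, 1)

# A single neuron with a sigmoid activation is the same logistic regression we hand-coded above
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(keras_training_images.shape[1],))
])
model.compile(optimizer='sgd', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(keras_training_images, keras_training_labels, epochs=100, verbose=0)
loss, accuracy = model.evaluate(keras_testing_images, keras_testing_labels, verbose=0)
print('{}%'.format(accuracy * 100))

From there, classifying a brand new photo is just a matter of flattening it the same way and calling model.predict on it, then rounding the result, exactly like the np.around step above.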
If you did enjoy this brief and simplified intro to Deep Learning and want to know more, I do recommend digging deeper into it at https://www.deeplearning.ai/ — they have a series of different courses that cover all you need to know to start applying it at your work. Summary It might be extra difficult to truly understand how deep learning works, but you don't necessarily need to know all of that just to get started. We could see that flower recognition in images might be relatively easy. In reality, if we had such a task coming from the business, we'd likely use the Google Cloud Vision API (which we use extensively in Zedge for wallpapers) or some other service to do the job. But don't forget that Deep Learning can be applied to more things than just images.
https://medium.com/zedge/deep-learning-for-developers-366a02691459
['Tomas Petras Rupšys']
2020-12-18 07:26:53.949000+00:00
['Machine Learning', 'Software Engineering', 'Deep Learning', 'Artificial Intelligence']
The Best Things I Discovered in 2020
What a year. Some got rich. Some discovered what ignorance can do. Some learned harsh lessons and retreated to the golf course to lick their wounds and find their ego again. 2020 was a tough year. I’ve never worked in a business environment quite like it, where nobody wants to spend any money. It wasn’t all bad. 2020 was a year that taught us resilience and love. We survived together in isolation via Zoom calls. Here are the best things I discovered in 2020. 20. A book called “Your Music and People” Derek Sivers is Tony Robbins for weird people. I like weird. Derek’s books are simple to read and the wisdom is powerful. You can read all of his books in a few hours. He ruthlessly edits tangents to leave you with pure gold. 19. The Last Time I Had Sex With My Wife Greyson Ferguson wrote a story with this title. I read every word and felt all of his pain. Writing that moves me emotionally is a rare find. This story is inspiration for anyone who wants to write with emotion and make people feel something. 18. Local walks I spent most of the year not being allowed to go beyond 5 kms from my home. Melbourne had one of the harshest lockdowns anywhere in the world. I basically couldn’t do anything. So I had to get used to finding things to do with my girlfriend. We took a walk around our neighborhood every day. My neighborhood looks and feels like a new suburb. Sometimes we ignore what’s right in front of our eyes. We get sold lies by travel agents that we need to be in Hawaii to be happy. 2020 showed us travel won’t make you happy. 17. Bose noise-canceling headphones When you live next to a train line in a student apartment, things get noisy. Bose headphones pump white noise into your ear so your brain can concentrate. I used these headphones to crank up movie soundtracks and write lots of content online. 16. My Octopus The movie, My Octopus Teacher, was so powerful. You go into the film thinking it’s going to be a documentary. Then you get taken down the rabbit hole of how a single octopus lives. It’s hard to believe the relationship between a man and an octopus was captured on camera. After watching this film you will question everything you know. You will learn to notice the small things and get lost in your curiosity. 15. Whole food plant-based eating Cutting out meat, seafood, dairy, oil and sugar has lifted my energy levels. Energy is life. It allows me to perform at my best and focus on writing for hours on end. The closure of restaurants helped me stay disciplined. Now I don’t want to go back to fried food life. If you want more energy, do what my 104 year old grandma used to say: eat plants. 14. Giving up SMS Not sure why this communication channel exists. Who trades phone numbers anymore? SMS is a brain drain for me. Trying to write on a tiny phone keyboard is my definition of hell. Audio messages, video messages and messenger apps on a desktop/laptop work better. The best mode for your phone is aeroplane mode. It helps you think. 13. “Earth Deluxe” for reminders of beauty The instagram account “Earth Deluxe” is just what I needed when I couldn’t leave my home for most of the year. You can travel with your mind, rather than on a plane, with these gorgeous images. When you feel like you have nothing, you always have a sunset. 12. Creative communities I’ve always tried to do everything creative, alone. Creative loneliness is a bad idea. I learned in 2020 that creative communities are incredibly powerful. 
Many of my new virtual friends this year have come from a couple of writing communities. I made it a habit to do video calls with people from the community every week. It helped me feel connected to this crazy, shutdown world — where everything you try to do is canceled. What if the answer to “what do I do next” is found in a creative community of people just like you, trying to achieve the same goals as you? 11. The Atlantic Their long-form essays are the bomb. They taught me what real writing is, although their extremely long paragraphs do my head in and make it hard to follow the words along the page. 10. Twitter Threads Nicolas Cole got me onto these. Twitter threads are a better way to use twitter. They turn twitter into a blogging platform. Twitter threads force you to be concise and cut out all the extra words and sentences readers don’t need. On twitter, you can say whatever you want. I found that liberating in 2020. 9. Family When the world turns into an apocalypse you miss your parents. They remind you of where you came from. I was separated from my family for most of 2020 due to lockdown and covid restrictions. This made me appreciate family even more. Phone calls became more important. Thankfully they are all okay. Family acts as a reset when chaos temporarily takes over the world. 8. iPhone 12 Okay, calm down. I got a new iphone and fell in love with photography again. Most cameras on phones suck. Try taking a picture at night with your phone and you’ll see what I mean. The iPhone 12’s camera is unbelievable and makes the upgrade worth it. The lightning-fast 5G network opens new possibilities for apps, too. 7. Free email courses I’d never heard of this concept. A free email course helped me engage with readers this year. I realized how much people appreciate when you go deep on a subject and don’t force them to pay money for it. 6. Teaching 2020 was the year I launched an online course. I’ve wanted to do it for years. I’d tried before and failed lots of times. The best part was unexpected. Watching all the students flock into the private online community was a deeply emotional moment. Within a few days the community was buzzing with activity and people were taking everything I’d learned as a writer and applying it. To see what you’ve learned be reused in real-time is a ridiculously cool feeling. Teach others what you know to feel fulfilled. 5. Loom This handy tool allows you to record your screen and send links to the videos you capture. You can use this tool to help you create your own online course. 4. Proper Finance Gurus The world of money completely changed forever in 2020. This was the year I took the time to understand finance at an even deeper level. These financial gurus taught me a lot: #1 by a mile: Raoul Pal Raoul Pal Ray Dalio Alex Saunders Ivan Liljeqvist Paul Tudor Jones Anthony Pompliano Daniela Cambone, Stansberry Research Michael Saylor, Microstrategy 3. Todd Brison You can’t have him. He’s all mine (Okay, I’ll share him with you.) Todd writes the best emails I have ever seen. Those on my email list get to read them. Every time Todd drops one people go crazy and my inbox lights up. People love personality fused with helpful content. Todd is the hipster yoda of writing. Plus, we taught a writing course together. 2. Blockchain investing People said I was stupid for investing in Ethereum and Bitcoin. My original investment has gone up 17,900%. Bitcoin is the best performing asset of the last decade and was up 170% in 2020. 
It pays to ignore the critics and do your own research. You can make enough money to retire early and never work a normal job again if you get yourself a basic financial education. 1. Humanity The secret to 2020 was forcing myself to see the positive. Watching humanity endure one of the toughest times in history made me emotional. I spent a lot of time looking for how people stayed positive. The fitness instructors, musicians, and everyday people in Europe using their balconies to spread hope, love, positivity and support were incredible. I’ve never seen anything like it. While a virus stormed the world and killed a lot of people, everyday folks found it in their hearts to help complete strangers. Thinking about the beauty of humanity in 2020 is enough to bring a grown man like me to tears. 2020 showed us what we’re capable of. 2021 and beyond will show us our ability to recover and make a tremendous comeback.
https://medium.com/the-ascent/the-best-things-i-discovered-in-2020-5307cabeb22e
['Tim Denning']
2020-12-16 21:03:23.617000+00:00
['Books', 'Self Improvement', 'Life', 'Money', 'Writing']
I wear this smile like a mask
I wear this smile like a mask Poetry in free verse No, I am not lying, I am only showing you what you want to see, because it’s easier to pretend than to explain why I am not alright. I am afraid that if I try, you will argue, tell me how I have everything I need that there is something wrong with me if I am still sad. But how do I explain that the opposite of happy isn’t always sad, and there’s a difference between not knowing where I want to be and not wanting to be where I am right now? I am an expert at pretending to be happy when I am not. I think we all are. That we bury our sadness beneath those layers of fake smiles and laughter that fails to hide the shadows around our eyes. This is our blessing, this is our curse, and we would rather pretend than explain to you why we are not alright.
https://medium.com/resistance-poetry/i-wear-this-smile-like-a-mask-6d030fd5603f
['Anangsha Alammyan']
2020-07-23 19:41:40.508000+00:00
['Self-awareness', 'Mental Health', 'Depression', 'Poetry', 'Resistance Poetry']
Why You Should Trade Split Decisions for “Flip Decisions”
Why You Should Trade Split Decisions for “Flip Decisions” Use Flipism to make in-the-moment choices. Photo by Pocky Lee on Unsplash In their book The Leading Brain: Neuroscience Hacks to Work Smarter, Better, and Happier, Friederike Fabritius and Hans Hagemann describe flipping a coin as a powerful way to make decisions. But not in the way we’d normally expect. Usually, we have two options. Option A is heads and Option B is tails. We flip the coin, and whichever side the coin lands on, we go with that option. But this is not the ideal way to make decisions. There is a better, more intuitive way and it’s more reflective of what the brain actually wants. In his article at Inc.com, Jeff Haden writes: “If you’re torn between two choices of seemingly equal merit, flip a coin. If you’re satisfied or relieved by the decision the coin made for you, then go with it. On the other hand, if the result of the coin toss leaves you uneasy and even makes you wonder why you used a coin toss to decide such an important decision in the first place, then go with the other choice instead. Your ‘gut feeling’ alerted you to the ‘right’ decision.” A study from researchers in Switzerland documented a similar process. They told participants that one side of the coin would allow them to take a job at a more prestigious firm with higher pay and longer hours, and the other side would be at a less prestigious firm with lower pay and more flexible hours. The coin was then flipped into the air, but it was never revealed which side it landed on. Research participants were asked to decide which choice their subconscious brain wanted more while the coin was in the air. This choice revealed their underlying desire. While the flipping of the coin acted as the catalyst for decision making in this study, a second study was performed. In it, researchers suggested participants go for specific choices in a restaurant menu. It was clear that when certain menu items were suggested to participants, they formed stronger opinions about what they wanted. Their final decisions either leaned toward or strongly away from the recommended item. Whether or not people followed the recommendation didn’t matter. What mattered was the fact that people became much more decisive. This phenomenon is known as Flipism. Psynso describes Flipism as: “[a] pseudophilosophy under which all decisions are made by flipping a coin. It originally appeared in the Disney comic “Flip Decision” by Carl Barks, published in 1953. Barks called a practitioner of “Flipism” a “Flippist.” Flipism can be seen as a normative decision theory, although it does not fulfil the criteria of rationality.” Flipism should probably be taken with a grain of salt. However, when making split decisions or acting in the moment, it can be a really powerful tool. In a piece about in-the-moment decision making, Neil Patel highlights the positive results he’s had using his instinct, and how those results compounded as he became more and more confident in his gut. In fact, studies show that “the more you pay attention to the outcome of trusting your intuition in combination with facts, the better your future decision-making can become.” The name “split decisions” simply reveals the conflict you face in those moments when you need to take them. The name doesn’t offer any solutions. “Flip decisions,” however, offer a valuable tool in deciphering which direction to go when you have a quick decision to make. Toss the coin up in the air, forget about it, and your mind will be made up.
https://medium.com/big-self-society/why-you-should-trade-split-decisions-for-flip-decisions-3f43034da5eb
['Jordan Gross']
2020-11-20 14:10:58.770000+00:00
['Leadership', 'Mental Health', 'Self Improvement', 'Psychology', 'Inspiration']
How Not To Apply To An Accelerator (part 6)
This is part 6 of my “self-defense essay”. If you missed the prior installments, start here with The #EpicNovelFail The #LinkerFail (a.k.a. The “You find it”) “Can’t I just send you my pitch deck? It’s all in there.” I get that question from time to time and it’s a fair question. The entrepreneur has put a lot of time into crafting his deck and making it look pretty. Why fill out an application if the data are in the deck? In many cases, the data are not all there. Our application questions represent the minimum amount of info we need to feel comfortable inviting a startup to the next stage of the process. I would guestimate that well over 80% of the investor decks we see are missing the answer to at least one of our questions. These aren’t bad decks. Many are likely very effective in getting the startup a meeting with potential investors. They just don’t have all the info we want to see. The other answer is a bit more subtle. As I mentioned, every Dreamit reviewer sees hundreds of applications over the course of a few short weeks. Even if a deck is ‘complete’, each deck would still present the information in its own way and in its own order. We would have to hunt through the deck to find where the answer to a specific question is while mentally checking off the boxes to make sure all the bases were covered. That adds time and mental load to a process that already consumes massive amounts of both of these scarce resources. Tip: don’t respond to an application question with “Please see my deck/website/video (link here).” Next up: The #PoorAttentionToDetailFail
https://medium.com/dreamit-perspectives/how-not-to-apply-to-an-accelerator-part-6-66f517006f32
['Andrew Ackerman']
2016-11-17 18:48:34.653000+00:00
['Entrepreneurship', 'Startup']
From Print to Online: Is the Truth Worth Paying For?
The Truth Is… Truth is bland. It lacks the glitter that catches our attention. It does not take any sides, so no one wants it. From the streams of information that flood our minds every day, the stream of truth is the least appealing. It’s like a ruin in the middle of a bustling city. It’s there, but no one really cares about it. But in 2016 something happened that increased the worth of truth. Donald Trump became the President of the United States. So the first spike in the Times’ digital subscriptions came soon after Donald Trump was elected. And shortly after Trump labeled the press “the enemy of the people,” the Times along with Droga5 NY came up with a campaign, The Truth Is Hard. That campaign video was played at the Oscars in 2017. It was the Times' first televised campaign in a decade. The Truth Is Hard ad at the Oscars The Truth Is Hard short documentary The aim of the campaign was to show people that knowing the truth is important. And there is a lot of effort that goes into unearthing the truth. The Times wanted the curious reader to understand that by paying for the subscription, they would support the cause of truthful reporting. After the first teaser at the Oscars and subsequent print advertisements, the newspaper came out with a slew of short documentaries reinforcing the message. The documentaries ranged from the heart-wrenching stories of how kids were separated from their parents at the Mexico border, to the appalling conditions of the Rohingya refugees in Myanmar. The Truth Is Worth It: a story about immigrant children separated from their parents at the Mexican-US border The Truth Is Worth It: a story about the plight of Rohingya Muslims in Myanmar Toby Treyer and Laurie Howell, the creative directors at Droga5 who led the campaign, explained it in an interview. They said: “We thought, wouldn’t it be amazing if we could show everything that went into a headline, but do it as if the journalist was discovering it as they were writing the story?” The advertisements gave the public a sneak peek into the life of a New York Times journalist. By showing how hard it is to get to the depth of the stories, dealing with hostile governments, anxious locals, grief-stricken mothers, and rogue assassins, the ads showed us how valuable truth is.
https://medium.com/better-marketing/from-print-to-online-is-the-truth-worth-paying-for-61eb76fc3aaa
['Mehboob Khan']
2020-11-20 15:42:51.955000+00:00
['Marketing', 'News', 'New York Times', 'Journalism', 'Advertising']
Azure — Deploying React App With Java Backend on AKS
Azure — Deploying React App With Java Backend on AKS A step by step guide with an example project AKS is Microsoft Azure’s managed Kubernetes solution that lets you run and manage containerized applications in the cloud. Since this is a managed Kubernetes service, Microsoft takes care of a lot of things for us such as security, maintenance, scalability, and monitoring. This lets us quickly deploy our applications into the Kubernetes cluster without worrying about the underlying details of building it. In this post, we are going to deploy a React application with a Java environment. First, we dockerize our app, push that image to the Azure container registry, and run that app on Azure AKS. We will see how to build the Kubernetes cluster on Azure AKS, access the cluster from outside, configure kubectl to work with the AKS cluster, and more. Example Project Prerequisites Install Azure CLI and Configure Dockerize the Project Pushing Docker Image To Container Registry Creating AKS Cluster Configure Kubectl With AKS Cluster Deploy Kubernetes Objects On Azure AKS Cluster Access the WebApp from the browser Summary Conclusion Example Project This is a simple project which demonstrates developing and running a React application with Java. We have a simple app in which we can add users, count and display them at the side, and retrieve them whenever we want. Example Project If you want to practice on your own, here is a GitHub link to this project. You can clone it and run it on your machine as well.
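A quick aside before the steps: the guide configures kubectl for the cluster and then deploys Kubernetes objects onto it. Purely as an illustration of what those objects boil down to, here is a minimal sketch using the official Kubernetes Python client instead of kubectl. The image name, labels, and ports are hypothetical placeholders, not the values from the example project.

from kubernetes import client, config

# Uses the kubeconfig written by 'az aks get-credentials' (the "Configure Kubectl" step)
config.load_kube_config()

labels = {'app': 'react-java-demo'}  # hypothetical label
container = client.V1Container(
    name='react-java-demo',
    image='myregistry.azurecr.io/react-java-demo:v1',  # hypothetical image pushed to ACR
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    api_version='apps/v1',
    kind='Deployment',
    metadata=client.V1ObjectMeta(name='react-java-demo'),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
service = client.V1Service(
    api_version='v1',
    kind='Service',
    metadata=client.V1ObjectMeta(name='react-java-demo'),
    spec=client.V1ServiceSpec(
        type='LoadBalancer',  # asks AKS for a public IP so the app is reachable from the browser
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace='default', body=deployment)
client.CoreV1Api().create_namespaced_service(namespace='default', body=service)

Whether you create these objects with kubectl or with a client library, the idea is the same: a Deployment runs the dockerized image, and a LoadBalancer Service gives it a public entry point so the web app is reachable from the browser.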
https://medium.com/bb-tutorials-and-thoughts/azure-deploying-react-app-with-java-backend-on-aks-4466adda8cfc
['Bhargav Bachina']
2020-12-16 06:02:34.690000+00:00
['DevOps', 'Cloud Computing', 'Web Development', 'Kubernetes', 'Programming']
Nine Ways to Tell Your Design Story on Medium
There are so many examples of successful design writing on Medium. Here are just a few that exemplify how you can use the platform. I. Share your knowledge You have a wealth of expertise and experience that readers would find of interest. Facebook product design director Julie Zhuo regularly writes on topics like design process, management, common mistakes, and more: Pasquale D’Silva explains designing for animation: A group of designers from leading tech companies collaborate to share best practices, lessons, and stories: II. Reveal your process Give readers a peek behind the curtain with insight into how their favorite products were made. Vanessa Koch, who worked on the redesign of Asana, provided insight into the process: III. Announce a feature or product Press releases are passé; instead, many designers write Medium posts to showcase new features or products. When Foursquare underwent a redesign and launched Swarm, Sam Brown and Zack Davenport revealed their design thinking in announcing both: IV. Solve a problem Show how design can be used to solve a problem. Shortly after Caitlin Winner arrived at Facebook, she noticed that the “friends” icon didn’t adequately represent both women and men. So she redesigned it: V. Engage with your audience While you can always broadcast your ideas on Medium, the real value is its network — the ability to interact with your readers to advance thinking. Jennifer Daniel, Erika Hall, Mike Monteiro, and others launched Dear Design Student to solicit and provide advice: VI. Promote your company’s design talent Competition for designers has never been fiercer. Showcase your company’s design bench with a dedicated publication, like Facebook’s, Uber’s, and (naturally) Google’s design teams did: VII. Write ‘non-design’ design stories You can write about design without writing about the design process. Medium designer Marcin Wichary goes deep on typography and language: Basecamp founder Jason Fried does design criticism — of the Drudge Report: VIII. Relate to adjacent fields Design doesn’t exist in a vacuum, of course. Designers work closely with engineers, researchers, user support, product scientists, content strategists, and others to craft their products. Andrei Herasimchuk is just one of the many designers who have written about whether designers should learn to code: Khosla Ventures’s Irene Au explains how designers work effectively with management: IX. Adapt a speech Many designers give talks at conferences like SPAN across the country and around the world. You can easily adapt your speech and publish it on Medium, like Google designer Rachel Garb did.
https://medium.com/google-design/nine-ways-to-tell-your-design-story-on-medium-36edb2936bb5
['Kate Lee']
2015-11-06 18:06:11.832000+00:00
['Medium', 'Writing', 'Design']
Is immunity against corona virus available in the market or is it available at home?
Is immunity against corona virus available in the market or is it available at home? Find out! This is the most common question today. Photo by G.lodhia. Is immunity produced by a special kind of food, or is it available in capsule form? There is a lot of uncertainty around this topic, and people struggle to understand immunity. The one similarity between immunity and disease is that both arise inside our body. We do not inhale or ingest diseases or immunity. Disease is caused by viruses, bacteria, or other agents. Immunity is our body's defense mechanism. The stronger we are at fighting a disease, the weaker the action of that disease in the body will be. The effect of any disease, meaning the attack the disease makes on the body, can be diminished or made negligible only if we are vaccinated against it or if our body's fighting mechanism, which is called its immunity, is strong. As there is no vaccine against coronavirus, the only way we have to save ourselves is keeping proper hygiene, taking precautions and boosting our immunity. Immunity is our body's soldier: white blood cells, called WBCs or leucocytes. Leucocytes protect our body against foreign materials that enter while we breathe, that is, during respiration. We can increase our immunity by taking in some materials available at home. We find turmeric, ginger, garlic and green tea in our kitchen. These are used to increase immunity. Vitamin C and vitamin E tablets or capsules also aid in increasing immunity. Vitamin D is synthesized in our body when we get morning sunlight; capsules and even injections are also available. There are also some personal habits that weaken our immunity and should be corrected: avoid harmful snacks, exercise regularly, get adequate sleep, do not take on stress, and keep the body clean. If we improve and strengthen our immunity, we can make the virus non-functional, or at least make its effect on our body negligible. NO DISEASE. NO IMMUNITY. NO LIFE. NO DEATH. Viruses cause diseases. Coronaviruses are a group of RNA viruses; the size of the coronavirus is normally 0.125 µm, that is 125 nm, as said by the experts. The smallest size discovered is 0.06 µm. The biggest size is 0.14 µm. The overall size, as said by the experts, is between 60 and 140 nm, which is 0.06 µm to 0.14 µm. This coronavirus has spikes measuring 9 to 12 nm, which give it a shape like a solar corona. The coronavirus is bigger than some of the smallest dust particles. One meter is 1,000,000,000 nanometers. photo by G.Lodhia. Earlier there existed severe coronavirus diseases like covid-19, named SARS and MERS, which caused serious outbreaks. There is no specific vaccine or medicine for coronavirus. Investigation to find a vaccine is in progress. The incubation period, from infection to visible symptoms, is 2 to 14 days. Symptoms include: Respiratory tract failure Cold Bronchitis Pneumonia Gut diseases Fever Sore throat Loss of smell or taste Photo by G.Lodhia Besides hospitalization, there are also simple home-care measures, such as: Rest Avoid overexertion Drink plenty of water Use proper masks Keep proper hygiene Do not touch things unnecessarily Sanitize your hands often or wash them more thoroughly than usual Avoid touching your face. Photo by G.Lodhia Conclusion: Prevention is better than cure, and that applies exactly to corona virus disease. 
Hope this sheds some light on what can be done at home to get through these lockdown days.
https://medium.com/illumination/is-immunity-against-corona-virus-available-in-the-market-or-is-it-available-at-home-find-out-d8ba357fd182
['G.Lodhia M. Edu']
2020-06-28 15:27:29.660000+00:00
['Article', 'Coronavirus', 'Virus', 'Blog', 'Writing']
Loss of trust in American democracy is a crisis we have to confront
From CNN: Anthony Marx and Jamie Woodson write that American faith in democracy and the media has declined significantly in the last 40 years, but that there are ways to increase trust in these institutions. Those remedies require every American to take an active role. Full story here
https://medium.com/trust-media-and-democracy/loss-of-trust-in-american-democracy-is-a-crisis-we-have-to-confront-4027e8e8212b
['Knight Commission On Trust', 'Media']
2019-02-07 19:32:32.315000+00:00
['Trust', 'Journalism', 'Media']
Do You Think Hard Work Equals Success — Think Again
Do You Think Hard Work Equals Success — Think Again Lessons from “Outliers: The Story of Success” by Malcolm Gladwell In NPR’s How I Built This podcast, the host, Guy Raz, always asks his guests one question: “How much do you attribute your success to luck and how much do you attribute to hard work?” Episode after episode, entrepreneurs ponder his question and answer whether their success was due to hard work or luck, or both. Of all the episodes I’ve listened to, the answer from Canva founder Melanie Perkins stuck out to me the most. In the podcast, she answered Raz like so: I think it's a very interesting question because I think that if you zoom out of luck, then you’ll say, where were you born, who were your parents, what was the education that you got, you know, having good health. There are so many layers of luck. So if you look at all of those things then I couldn’t be luckier. Then on the other side of it, I think we planted enough seeds where eventually one of them grows, so that's kind of another version of luck, maybe you plant 1000 seeds, eventually one of them will grow. You can attribute one of these seeds as luck or hard work for planting 1000 seeds. So I would say little column A, little column B. Melanie Perkins was very articulate in her answer, and her elegant framing matches Malcolm Gladwell’s book, “Outliers: The Story of Success”. In summary, Outliers argues that hard work does not always equate to success. Rather, success is a combination of lucky events and hard work. The first example Gladwell points out in his book is an interesting fact about professional hockey players. Take a look at the two charts below and try to see if you can spot a pattern: Outliers: Page 20 Outliers: Page 21 Do you see a pattern in who is more likely to become a professional hockey player? Most of the players were born in January. This is because, in Canada, the eligibility cutoff date is January 1. So a kid born in January can be many months older than kids born later in the year. While growing up, a few months’ difference is a lot for children. This means that kids born in January were slightly older and slightly bigger, which gave them an advantage. Kids born in January got more training. For a kid, a couple of extra hours of training doesn’t seem like a lot. Over time, however, the kids who got a little more training here and there ultimately became better than the kids who did not. This means that you are more likely to become a professional hockey player if you were born in January. That sounds like luck to me. Although you are more likely to become a hockey player if you were born in January, not every single kid born in January ends up playing hockey professionally. This is where the hard work comes in. Success is a combination of hard work and luck.
https://medium.com/the-innovation/do-you-think-hard-work-equals-success-think-again-bf31d7a43c29
['İlknur Eren']
2020-12-27 19:25:04.194000+00:00
['Books', 'Self Improvement', 'Productivity', 'Advice', 'Reading']
Medium Article Format
Other Tools for Writing and Editing Your Medium Article Customizing Your Article’s Properties Before and After Publication Miscellaneous Quick Answers to Questions About Medium Formatting How can I center text on my Medium article? You cannot center text utilizing the Medium editor or toolbar. Can I automatically post my WordPress blog articles on Medium? No, unfortunately you cannot automatically post your blog posts on Medium. This used to be an option in WordPress through a specific integration, but Medium discontinued this option. Can I edit more than one article at a time?
https://medium.com/blogging-guide/medium-article-format-bc06439c4e7c
['Casey Botticello']
2020-02-23 01:49:05.061000+00:00
['Format', 'Typography', 'Design', 'Medium', 'Writing']
Why Companies Should Pay Attention to the Trend of Minimalist Consumers
Deconstructing digital devices These dangers of new technologies can also come from the digital devices themselves. The tools that have become part of our daily lives, such as smartphones, compact computers and touch-sensitive tablets, have features designed to do everything. As such, hardware designers, be it Apple, Sony or Samsung, have promoted devices that make us pay attention to several things at the same time. They were based on the idea that increasing the multifunctionality of devices would bring more value to the consumer. Yet, as neuroscience studies show, the brain is very good at doing only one thing at a time, as neural networks gather information simultaneously and not successively. As a result, these technologies lead to constant distraction and addictive behavior around activities that require little concentration. Many consumers have become aware of the need to have devices that only provide one service at a time (for example, by turning off social network or call notifications, or by filtering applications). Others have started to think about creating new kinds of products that address a single need. The Light Phone, for example, offers only the basic functionality of the phones that came before smartphones, i.e. calling and SMS, and nothing else. Others have conceived of computers that would perform only a few cognitive tasks. These initiatives are in line with what Mark Weiser and John Seely Brown called, in their seminal article, the revolution of “calm technologies”: less invasive technologies that are deployed at the periphery of our senses and make less noise. They started from the conviction that technology must be made to serve the human being, the consumer who needs to minimize the influence of the machine on his work and his life.
https://medium.com/curious/why-companies-should-pay-attention-to-the-trend-of-minimalist-consumers-ec52039ecd37
['Jean-Marc Buchert']
2020-12-22 15:48:12.307000+00:00
['Minimalism', 'Product Design', 'Productivity', 'Consumer Behavior', 'Marketing']
Not Every Developer Wants to Become a Manager — And That’s Okay
Not Every Developer Wants to Become a Manager — And That’s Okay Companies should create clear career paths for individual contributors Photo by Jaromír Kavan on Unsplash I have only worked in startups with flat hierarchies. Even at companies where there are no clear titles, you usually find three kinds of engineers: The junior developers — fresh out of school; the tech leads to whom everyone reaches out for help and whose technical opinions matter the most; and in the middle, between the juniors and the tech leads, a vast ocean of software engineers with various skills and experiences. One topic that repeatedly came up in our retros is the lack of career growth opportunities. This topic seemed to puzzle some tech leads who thought that there were a lot of projects and a lot of new things to learn. There were surely a lot of learning opportunities. Still, when the only feedback you get in your 1:1 meetings is “You’re doing great, keep going,” you don’t feel like you’re progressing. As software engineers, we want our opinion to matter — we want to have an impact. The obvious next step is to become a tech lead but it’s unclear how we get such a position. Or if we even want it.
https://medium.com/better-programming/not-every-developer-wants-to-become-a-manager-and-thats-okay-e7d76b3efd0e
['Meriam Kharbat']
2020-02-17 15:08:24.570000+00:00
['Careers', 'Management', 'Programming', 'Startup', 'Software Engineering']
How To Be A Successful Business Owner?
BUSINESS LESSONS How To Be A Successful Business Owner? Surviving The Early Stage Of Business Ownership Photo by Joshua Earle on Unsplash Many entrepreneurs answer the question of why they went into business with either their passion or the need for an income. The problem is, there is so much more to it. We enter into business with a particular skillset. Some of us are experts in one area. Some of us are a Jack of All Trades, Master of None. The true winner in the Entrepreneurial landscape has a mindset that combines the … → Go Getter that tackles the challenges in front of them without delay → Analyst that looks at the details, reviews results, and makes plans based on them → Communicator that shares knowledge with team members → Delegator that knows what to have done by others → Regulator that stays informed and in compliance with laws → Networker that stays connected to the outside world through local meetings and online forums → Recruiter that attracts and vets the right people → Bean Counter that makes sure there will be a tomorrow by planning strategically and minimizing expenses → Coach that trains, supports, and acknowledges the team → Evangelist that doesn’t spend a day without building awareness and promoting their business → Director that tracks everything → Organizer that strives to improve processes → Visionary that regularly thinks of new ways to attract customers, conduct business, and introduce new products and services ALL OF THIS IN ONE PERSON… seems IMPOSSIBLE. Exhausting → YES, but impossible, no. I am an example of a Jack of All Trades who took on a challenge in a completely different and heavily regulated industry, Cosmetology. Previously, I had a service-based sole proprietorship and a product-based sole proprietorship. I knew nothing about hair other than what I had observed or picked up from a friendship of 20 years. Due to the economy and an outdated concept, the business closed. I am proud of all that I did to try to make it work and gained so much from the experience that I want to share as a Mentor to other business owners. Being a business owner means wearing many hats. To be successful, new owners do not just dabble, but dig in and master. If I can do it, so can you. I am here to help!
https://medium.com/swlh/how-to-be-a-successful-business-owner-4169fcfdc08f
['Colette Becker']
2019-12-15 15:59:27.446000+00:00
['Startup Lessons', 'Business Owner', 'Startup', 'Operations Management', 'Entrepreneurship']
4 Eye-Opening Mindfulness Lessons I Learned from a Depressed Buddhist Monk
4 Eye-Opening Mindfulness Lessons I Learned from a Depressed Buddhist Monk How to make peace with your mind? Photo by THÁI NHÀN from Pexels After being a Buddhist monk for 12 years, Gelong Thubten became depressed. It was a shock for him. And for me, too, when he shared his story. I thought monks had it all figured out. When we think we have arrived, we are stuck. We can always learn something new about ourselves, wherever we are on our journey. Gelong joined a 4-year retreat on a Scottish island, cut off from the outside world. No news, no internet, no meetings with people outside the retreat location. He describes the first two years as “falling through space with nothing to hold him”. Gelong thought this retreat was going to be a piece of cake, and then he found himself depressed and anxious. When he reached rock bottom halfway through the retreat, something changed and helped him overcome his depression. What is it that can make even a monk with 12 years of meditation experience depressed? Do you recognize his situation? Have we not also been locked away, cut off from the people we love by the pandemic? Since March, I have barely left the house and met only a few people. For normal humans, the pandemic is like being on a retreat for monks. The next months are going to be tough, with corona cases increasing everywhere in the world. Let’s see how Gelong Thubten overcame his struggle. What can we learn from him to get through this winter happy and energetic, instead of depressed and anxious?
https://medium.com/change-your-mind/4-eye-opening-mindfulness-lessons-i-learned-from-a-depressed-buddhist-monk-99ffa60ed0fc
['Karolin Wanner']
2020-10-16 12:31:50.861000+00:00
['Self', 'Mindfulness', 'Spirituality', 'Psychology', 'Mental Health']
4 Thoughts I had to Kill to heal from Anxiety
4. Life will never be as good as it once was Have you ever listened to an old song and thought to yourself: “Songs were so good back in the day”? Or maybe you had the same reaction when you stumbled upon your childhood pictures: “This was the best time of my life”. It’s because we only look back at the winners, the victories, the good times, the good feelings, and all the best memories from the past. Known as Survivorship bias or survival bias, it is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility. In other terms, we completely ignore all the losses, failures, and bad memories from the past. We completely forget the fact that, back in the day, those old songs or those memories were not as good as our brain is tricking us into remembering them. Change can be scary, which is why our brain is never happy with what it has in the present. But change also is inevitable. Thus, our brain always hopes and dreams of a perfect future, while nostalgically remembering only the best times of the past. It is always trying to trick us into feeling good so we get a little kick of dopamine, its pleasure chemical. But that pleasure can only last so long. Dwelling on the past and romanticizing our youth sabotages our growth. “The past is history, tomorrow is a mystery, but today is a gift. That is why we call it the present.” — Master Oogway Stop living in the past and future, they don’t exist. The only thing that exists is the present. Enjoy the present. Savor it. Be grateful for what you have. Life is precious, unique, and beautiful.
https://medium.com/change-your-mind/4-thoughts-i-had-to-kill-to-heal-from-anxiety-1c663fc74a07
['Douaa El Khaer']
2020-05-03 11:39:27.770000+00:00
['Psychology', 'Anxiety', 'Life', 'Mental Health', 'Mindfulness']
How I Launched a Successful Gig Business with Only Gig Workers
Five key reasons for this business model Running a gig business with gig workers can be efficient and smooth when done right. Here are five key reasons why I choose this business model: 1. Pay for what you need when you need it One of the most beneficial aspects of utilizing gig workers in a gig business is that you only pay for what you need when you need it. When the projects are flowing, I’m happy to be hiring these creative experts to do amazing work. But when things are slow, I don’t have any recurring salaries or overhead expenses chipping away at my bank account. I am able to quickly adjust expenditures whether in an up or down season, keeping my hard costs manageable. 2. No additional equipment Most freelancers have their own tools. For video production, it’s very common. Camera operators will typically have their own camera gear, lights, and other accessories. Editors will have the latest workstation loaded with editing software and plugins. Sound technicians will have the latest microphones, mixers, and sound tools. Since freelancers typically come with their own gear, I don’t have to purchase expensive equipment or keep up with the latest technology. 3. No office space For my gig business, I have chosen not to have a business office. I know this is not ideal for everyone, but I want as little overhead as possible, so I run all of my productions from a home office. The bulk of my work is production management and client communication, which I can do via the Internet and through phone calls. A home office is perfect for my situation. 4. High-quality service As I mentioned earlier, I quickly learned that I could produce much better work by hiring talented freelancers. As my productions expanded into Fortune 500 companies, I had to deliver high-quality work. This was key in transitioning my business from a small, mediocre video service into a thriving production company with a healthy list of recurring clients. I’ll say it again and again — if it weren’t for my team of creative freelancers, I wouldn’t be where I am today. 5. Scalability And finally, another great aspect of using gig workers in a gig business is scalability. The ups and downs of a gig business can be both terrifying and exhilarating. In the down times, yes, you can scale back accordingly and keep your costs low. But when the projects are flowing, the ability to ramp up using freelancers is incredibly appealing. A few years ago, I was in one of those flow moments. I was juggling about 15 different productions using a variety of freelancers from around the nation. It was intense, but it was also incredibly rewarding both creatively and financially.
https://medium.com/swlh/how-i-launched-a-successful-gig-business-with-only-gig-workers-f8573d577f26
['Russ Pond']
2020-11-03 13:02:54.566000+00:00
['Entrepreneurship', 'Small Business', 'Gig Economy', 'Startup Lessons', 'Startup']
How journalists can use Instagram to engage and inform
During the process of redeveloping Me Explica in the Tow-Knight Center program, I have experimented with different tools and strategies to create engagement around my content. In my last article, I explained how my new strategy is to focus on social media because that is where citizens are mostly getting their news from. In its past iterations, Me Explica was an article-based publication, first as a blog, then as a site. As the years went by, I noticed that being on social media requires much more than simply posting links to your content. You need to truly engage with the reader, answer questions, address criticisms and sometimes even accusations. As a small publication, I am able to do it with little effort but I believe even bigger ones need to commit to talking directly with their readers. Having this in mind, I have been conducting a few experiments on Instagram. Once defined as a "photo-sharing" app, IG is very versatile and allows for the publication of text, photos, videos, cards (images with text), and videos with text (either with subtitles or only text). I will share what I have been doing, the tools used to create the content and brief observations about the results. Experiments My tests revolved around three kinds of content: (1) Cards Explainers on images that are perfect for the photo feed and can easily be shared on other platforms. Card about Petrobras' losses after a statement by the Brazilian President (2) Video explainers The presenter (me) talks directly to the audience in native videos that can be short (1 minute for the feed) or long (up to 10 minutes on IGTV). Video explainer about the militias in Brazil (3) Video Stories Even though the Stories feature only allows for smaller videos, creating the content on an external tool can be helpful in order to do something that lasts a little while longer. Considerations These experiments have shown that there is an opportunity to create engaging and informational posts on Instagram. The audience is interested in consuming journalism on a platform that is not made for long-form content but still allows establishing a quick connection to the news. Instagram may not be the best place to break the news but is a good tool to build on it. One major difficulty for journalists and outlets is monetizing their Instagram profiles. There is no option for doing that and there is no news showing that Facebook might be interested in building such features. Yet, we have plenty of success stories of news delivery on Instagram, such as former CNN White House correspondent Jessica Yellin, Poynter's Media Wise, and Uno.ar (from Argentina). Toolkit Having decided what kind of content you will post, you can use the many tools that can help journalists create posts quickly and efficiently at a low cost. I will share some of the platforms and products used to conduct the tests on Me Explica. Visuals: Canva Canva is one of my favorite tools. I use it to make presentations, design posts and covers for social media and to create the card I showed above. It is very intuitive: you need only to drag and drop. There are thousands of templates to choose from. The free version is already very good, but the pro subscription allows you to resize projects so you can post in multiple social media channels. https://www.canva.com Video: Lumen 5 I have only recently come across Lumen 5 but I'm already a fan. I used it to create the Stories video shown above. It helps you create social media videos very easily, from text or your own audio.
Its artificial intelligence creates new frames automatically, speeding up the process. Anyone with little to no design experience can use it and have great results. https://lumen5.com/ Smartphone videos: Cheap tools Showing my gear on Instagram I shared on my Instagram account some of the accessories I have been using to film my explainer videos and some people got interested in the equipment. Amazon was the source for both the selfie ring light that can be attached to any smartphone and the lavalier microphone. Considering that Instagram allows for more informal and amateur-ish videos, this setup is very helpful for filming on the go. All for less than 20 dollars. Here's the microphone: And here's the light: Conclusion Instagram is a great tool to explore new ways of delivering information to audiences that no longer want to visit homepages in search of news. It is a good testing ground to get a sense of what might and might not work to engage citizens. Even with its limitations in terms of generating revenue, it can be a good way for smaller outlets and individual journalists to get a sense of what resonates with audiences. It is worth experimenting.
https://medium.com/journalism-innovation/how-journalists-can-use-instagram-to-engage-and-inform-62b0ad80b74b
['Diogo A. Rodriguez']
2019-05-10 02:43:21.857000+00:00
['Journalism', 'Storytelling', 'Social Media', 'Instagram', 'Innovation']
Have We Reached the Phase of Smart Financial Crime Detection?
Have We Reached the Phase of Smart Financial Crime Detection? Financial Technology Why are financial crimes on the rise? Many people ask this question as crime cases in the financial industry rise. Banks, according to a McKinsey report¹, have lost millions of dollars in the last decade alone, and this could worsen as criminals upgrade their financial crime tactics. Financial crime analytics can help financial institutions and investigators detect fraud and money laundering, assess risk, and report on data to prevent financial crime. Each year, cases of banking fraud² increase, and despite stringent measures, losses continue to spike, with financial institutions lacking concrete strategies to address this growing problem. Analytics help to pinpoint transactions that need further scrutiny, identifying the needle in the haystack of financial data. Photo by Bermix Studio on Unsplash With only a 1% success rate in recovering stolen funds, the financial services industry has realized that traditional approaches to dealing with financial crime are not working. Across the ecosystem, regulatory authorities, enforcement agencies, and financial institutions³ are working together to disrupt financial crime. This requires a proactive approach to predict and manage the risks posed to people and organizations and not merely to comply with rules and regulations. The challenges faced by financial institutions regarding money-laundering activities have increased substantially in the globalization era. Additionally, there is a rising menace of financial crime and counterfeiting. As money launderers become more sophisticated, the effectiveness of anti-money laundering policies is under heightened regulatory scrutiny. The probability of banks facing rigid penalties and reputation loss in case of shortcomings in AML management has increased. A good example of a tool used for financial crime detection is AMLOCK, an enterprise-level, end-to-end financial crime management solution. It integrates the best of anti-money laundering⁴ and anti-fraud measures to effectively identify, manage, and report financial crime. It provides various features that cater to the profiling, risk categorization, transaction monitoring, and reporting requirements of financial institutions. Features that form part of this offering are in line with AML (Anti Money Laundering) regulations. In this article, I will explore current practices in financial crime detection and their use cases, and look at what the future holds for financial technology and fraud reduction. Overview Criminals are pervasive in their determination to identify and exploit vulnerabilities throughout the financial services industry. Their ability to collaborate and innovate necessitates a proactive approach towards responding to individual events, while disrupting crime networks. Combating #financialcrime is complementary to generating revenue. The big data analytical capabilities that enable a bank to personalize product offerings also underpin an effective approach to spotting and responding to criminal behavior. To outpace fraudsters, financial institutions and payment processors need a quicker and more agile approach to payment fraud detection⁵. Instead of relying on predefined models, applications need the ability to quickly adapt to emerging fraud activities and implement rules to stop those fraud types.
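To make the idea of adaptive, model-based detection a little more concrete, here is a brief illustrative sketch in Python. It is my own addition, not taken from the article or from any specific vendor product: an unsupervised anomaly detector is fit on recent transaction features and used to route the most unusual payments to analysts. The column names, synthetic data, and thresholds are all invented for the example.

```python
# Illustrative only: a simple anomaly-based scorer for payment fraud triage.
# Feature names and the synthetic data are made up for this sketch; a real
# deployment would use the institution's own transaction features.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Synthetic "recent transactions": amount, hour of day, and a merchant risk score.
transactions = pd.DataFrame({
    "amount": rng.lognormal(mean=3.5, sigma=1.0, size=5000),
    "hour": rng.integers(0, 24, size=5000),
    "merchant_risk": rng.random(size=5000),
})

# Fit an unsupervised model on recent behavior; no labeled fraud cases are needed.
model = IsolationForest(contamination=0.01, random_state=7)
model.fit(transactions)

# Score incoming payments: lower scores are more anomalous.
new_payments = transactions.sample(10, random_state=1)
scores = model.decision_function(new_payments)
new_payments = new_payments.assign(anomaly_score=scores)

# Route the most unusual payments to an analyst queue instead of blocking them outright.
review_queue = new_payments.sort_values("anomaly_score").head(3)
print(review_queue)
```

Because the model is refit on fresh data rather than on hand-written rules, it can pick up shifts in behavior without waiting weeks for a new rule to be coded, which is the gap the paragraph above describes. The retraining frequency, the features, and the contamination rate are assumptions that would need tuning against real data.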
Not only should organizations be able to adjust their detection models, but the models themselves should be interoperable with any #datascience, machine learning, open source or AI technique, using any vendor. In addition, to eliminate fraud traveling from one area or channel to another undetected, aggregating transactional and non-transactional behavior from across various channels provides greater context and spots seemingly innocuous patterns that connect complex fraud schemes. Artificial Intelligence For Financial Crime Detection Within financial institutions, it is not uncommon to have high false-positive rates, that is, notifications of potential suspicious activity that do not result in the filing of a suspicious transaction report. For AML alerts, high false positives are the norm. The reason for this is a combination of dated technology and incomplete and inaccurate data. Traditional detection systems provide inaccurate results due to outdated rules or peer groups creating static segmentations of customer types based on limited demographic details. Photo by Jp Valery on Unsplash In addition, account data within the institution can be fragmented, incomplete and housed in multiple locations. These factors are part of the reason why alerts and AML are key areas to apply #artificialintelligence, advanced analytics⁶ and RPA. The technologies can gather greater insight, understand transactional patterns at a larger scale and eliminate tedious aspects of the investigation that are time-consuming and low value. AI can augment the investigation process and provide the analyst with the most likely results, driving faster and more informed decisions with less effort. AI-based Intelligent Customer Insights Periodic reviews of customer accounts are performed as part of a financial service organization’s risk management process, to ensure the institution is not unwittingly being used for illegal activities. As a practice, accounts and individuals that represent a higher risk undergo these reviews more often than lower-risk entities. For these higher-risk accounts, additional scrutiny is performed in the form of enhanced due diligence. This process involves not only looking at government and public watch lists and sanctions lists, but also news outlets and business registers to uncover any underlying risks. As one might expect, such less-common investigations took up the majority of the due diligence process because they typically required lengthy, manual searches and validation that a name was the individual or entity under review. With modern technologies like entity link analysis to identify connections between entities based on shared criteria, as well as #naturallanguageprocessing to gain context from structured and unstructured text, much of this investigation process can be automated. By using AI to perform the initial search and review of a large number of articles and information sources, financial institutions gain greater consistency and the ability to record the research results and methodology. Much like the AML alert triage example previously mentioned, the key is not to automate analysts out of the process. Instead, AI automates the data gathering and initial review to focus the analysts on reviewing the most pertinent information, providing their feedback on the accuracy of those sources and making the ultimate decision on the customer’s risk level. Analytics for Financial Fraud Detection Innovation in the payments space is at a level not seen in decades.
From mobile payments to peer-to-peer payments⁷ to real-time payments, there are a growing number of payment services, channels and rails for consumers and businesses alike. But these myriad options also give fraudsters plenty of openings for exploitation. Easy-to-exploit issues with these new payment services include their speed and lack of transactional and customer behavioral history. These issues put financial institutions and payment processors in a difficult position. If they block a transaction, they could negatively impact a legitimate user, leading the user to either abandon the platform or use a competitor instead. If the transaction is approved and it is fraudulent, it erodes trust in the payment provider and leads to a loss. Traditional fraud detection systems were designed for a relatively slow-moving fraud environment. Once a new fraud pattern was discovered, a detection rule or model would be created over a matter of weeks or months, tested and then put into production to uncover fraud that fit those known fraud typologies. Obviously, the weakness of this approach is that it takes too long and relies on identifying the fraud pattern first. In the time it takes to identify the fraud pattern, develop the model and put it into use, consumers and the institution could experience considerable fraud losses. In addition, fraudsters, aware of this deficiency, can quickly and continuously change the fraud scheme to evade detection. Case Studies of Financial Crime Technology Let us now explore some use cases of financial technology and how companies benefited in terms of fraud reduction. 1. MasterCard To help acquirers better evaluate merchants, MasterCard created an anti-fraud solution using proprietary MasterCard data on a platform called MATCH that maintains data on hundreds of millions of fraudulent businesses and handles nearly one million inquiries each month. As the volume of data in its platform grew over the years, MasterCard staff found that its homegrown relational database management system lookup solution was no longer the best option to satisfy the growing and increasingly complex needs of MATCH users. Photo by CardMapr on Unsplash Realizing that there was an opportunity to deliver substantially better value to its customers, MasterCard turned to the Cloudera Enterprise Data Hub. After successfully building, integrating, and incorporating security into its EDH, MasterCard added Cloudera Search and other tools and workloads to access, search, and secure more data. 2. United Overseas Bank (Asia) The challenge UOB faced was the data limitations of their legacy systems. With legacy databases, banks are restricted by the amount of data as well as the variety. As a result, they miss key data attributes that are necessary for anti-money laundering, transaction monitoring, and customer analytics engines to work effectively. UOB established the Enterprise Data Hub⁸ as the principal data platform that, every day, ingests two petabytes of transaction, customer, trade, deposit, and loan data and a range of unstructured data, including voice and text. 3. Bank Danamon (Indonesia) Bank Danamon is one of Indonesia’s largest financial institutions, offering corporate and small business banking, consumer banking, treasury and capital markets. Bank Danamon uses a machine-learning platform for real-time customer marketing, fraud detection, and anti-money laundering activities. The platform integrates data from about 50 different systems and drives machine-learning applications.
Using #machinelearning on aggregated behavior and transaction data in real time has helped Bank Danamon reduce marketing costs, identify new patterns of fraud, and deepen customer relationships. This is the Best Time to Implement AI for Financial Crime Detection Financial crime and corruption are at epidemic levels, and many countries are unable to significantly reduce corruption. Regulators and financial institutions are looking to innovative AI technology to fix problems that have grown beyond their ability to solve with intuition and existing tools alone. To justify cognitive initiatives, financial services organizations need to show a real return on such investments. IBM is able to demonstrate the value in a variety of use cases, as shown in the client success stories outlined in this white paper. A misunderstanding about artificial intelligence is the belief that it will replace employees. However, the financial crime analyst is and should always be an essential part of this process. AI, process automation and #advancedanalytics are tools that can perform analyses and tasks in a fraction of the time it would take an employee. Yet, the ultimate decision-making power still lies with those analysts, investigators and compliance officers for whom this technology provides greater insight and eliminates tedious task work. This augmented intelligence is the next phase of the fight against financial crime, and one that financial institutions, regulators and technology partners can only win together. What do you think? Is the current technology capable of addressing rising fraud cases and financial crime? Share your comments below and contribute to the discussion on Have We Reached The Phase Of Smart Financial Crime Detection? Works Cited ¹McKinsey Report, ²Banking Fraud, ³Financial Institutions, ⁴Anti-Money Laundering, ⁵Payment Fraud Detection, ⁶Advanced Analytics, ⁷Peer-to-Peer Payments, ⁸Enterprise Data Hub More from David Yakobovitch: Listen to the HumAIn Podcast | Subscribe to my newsletter
https://medium.com/towards-artificial-intelligence/have-we-reached-the-phase-of-smart-financial-crime-detection-9f3d98fb488
['David Yakobovitch']
2020-12-17 20:01:08.149000+00:00
['Opinion', 'Analysis', 'News', 'Artificial Intelligence', 'Technology']
A Layman’s Guide to Data Science: How to Become a (Good) Data Scientist
How simple is Data Science? Sometimes, when you hear data scientists rattle off a dozen algorithms while discussing their experiments or go into the details of Tensorflow usage, you might think that there is no way a layman can master Data Science. Big Data looks like another mystery of the Universe that will be shut up in an ivory tower with a handful of present-day alchemists and magicians. At the same time, you hear from everywhere about the urgent necessity to become data-driven. The trick is, we used to have only limited and well-structured data. Now, with the global Internet, we are swimming in never-ending flows of structured, unstructured and semi-structured data. It gives us more power to understand industrial, commercial or social processes, but at the same time, it requires new tools and technologies. Data Science is merely a 21st century extension of the mathematics that people have been doing for centuries. In its essence, it is the same skill of using the information available to gain insight and improve processes. Whether it’s a small Excel spreadsheet or 100 million records in a database, the goal is always the same: to find value. What makes Data Science different from traditional statistics is that it tries not only to explain values, but to predict future trends. In other words, we use Data Science for descriptive, predictive and prescriptive analytics: describing what has happened, predicting what will happen, and suggesting what to do about it. Data Science is a newly developed blend of machine learning algorithms, statistics, business intelligence, and programming. This blend helps us reveal hidden patterns in raw data, which in turn provides insights into business and manufacturing processes. What should a data scientist know? To go into Data Science, you need the skills of a business analyst, a statistician, a programmer, and a Machine Learning developer. Luckily, for the first dive into the world of data, you do not need to be an expert in any of these fields. Let’s see what you need and how you can teach yourself the necessary minimum. Business Intelligence When we first look at Data Science and Business Intelligence, we see the similarity: they both focus on “data” to provide favorable outcomes and they both offer reliable decision-support systems. The difference is that while BI works with static and structured data, Data Science can handle high-speed, complex, multi-structured data from a wide variety of data sources. From the practical perspective, BI helps interpret past data for reporting, or Descriptive Analytics, while Data Science analyzes past data to make future predictions, as in Predictive or Prescriptive Analytics. Theories aside, to start a simple Data Science project, you do not need to be an expert Business Analyst. What you need is to have clear ideas of the following points: have a question or something you’re curious about; find and collect relevant data that exists for your area of interest and might answer your question; analyze your data with selected tools; look at your analysis and try to interpret the findings. As you can see, at the very beginning of your journey, your curiosity and common sense might be sufficient from the BI point of view. In a more complex production environment, there will probably be separate Business Analysts to do insightful interpreting. However, it is important to have at least a general vision of BI tasks and strategies. 
Resources We recommend you have a look at the following introductory books to feel more confident in analytics: Introduction To The Basic Business Intelligence Concepts — an insightful article giving an overview of the basic concepts in BI; Business Intelligence for Dummies — step-by-step guidance through the BI technologies; Big Data & Business Intelligence — an online course for beginners; Business Analytics Fundamentals — another introductory course teaching the basic concepts of BI. Statistics and probability Probability and statistics are the basis of Data Science. Statistics is, in simple terms, the use of mathematics to perform technical analysis of data. With the help of statistical methods, we make estimates for further analysis. Statistical methods themselves are dependent on the theory of probability, which allows us to make predictions. Both statistics and probability are separate and complicated fields of mathematics; however, as a beginner data scientist, you can start with 5 basic statistics concepts: Statistical features. Things like bias, variance, mean, median, percentiles, and many others are the first stats techniques you would apply when exploring a dataset. They are all fairly easy to understand and implement in code even at the novice level. Probability Distributions represent the probabilities of all possible values in the experiment. The most common in Data Science are a Uniform Distribution, which is concerned with events that are equally likely to occur; a Gaussian, or Normal, Distribution, where most observations cluster around the central peak (mean) and the probabilities for values further away taper off equally in both directions in a bell curve; and a Poisson Distribution, similar to the Gaussian but with an added factor of skewness. Over and Under Sampling help to balance datasets. If the majority class is overrepresented, undersampling selects only some of the data from it to balance it with the minority class. When data is insufficient, oversampling duplicates the minority class values to have the same number of examples as the majority class has. Dimensionality Reduction. The most common technique used for dimensionality reduction is PCA, which essentially creates vector representations of features showing how important they are to the output, i.e. their correlation. 
Bayesian Statistics. Finally, Bayesian statistics is an approach applying probability to statistical problems. It provides us with mathematical tools to update our beliefs about random events in light of seeing new data or evidence about those events. Image credit: unsplash.com Resources We have selected just a few books and courses that are practice-oriented and can help you feel the taste of statistical concepts from the beginning: Practical Statistics for Data Scientists: 50 Essential Concepts — a solid practical book that introduces essential tools specifically for data science; Naked Statistics: Stripping the Dread from the Data — an introduction to statistics in simple words; Statistics and probability — an introductory online course; Statistics for data science — a special course on statistics developed for data scientists. Programming Data Science is an exciting field to work in, as it combines advanced statistical and quantitative skills with real-world programming ability. Depending on your background, you are free to choose a programming language to your liking. The most popular in the Data Science community are, however, R, Python and SQL. R is a powerful language specifically designed for Data Science needs. It excels at a huge variety of statistical and data visualization applications, and being open source, it has an active community of contributors. In fact, 43 percent of data scientists are using R to solve statistical problems. However, it is difficult to learn, especially if you have already mastered another programming language. Python is another common language in Data Science. 40 percent of respondents surveyed by O’Reilly use Python as their major programming language. Because of its versatility, you can use Python for almost all steps of data analysis. It allows you to create datasets, and you can literally find any type of dataset you need on Google. Ideal for entry level and easy to learn, Python remains exciting for Data Science and Machine Learning experts thanks to more sophisticated libraries such as Google’s Tensorflow. SQL (structured query language) is more useful as a data processing language than as an advanced analytical tool. It can help you carry out operations like adding, deleting and extracting data from a database, carrying out analytical functions and transforming database structures. 
Even though NoSQL and Hadoop have become a large component of Data Science, it is still expected that a data scientist can write and execute complex queries in SQL. Resources There are plenty of resources for any programming language and every level of proficiency. We’d suggest visiting DataCamp to explore the basic programming skills needed for Data Science. If you feel more comfortable with books, the vast collection of O’Reilly’s free programming ebooks will help you choose the language to master. Image credit: unsplash.com Machine Learning and AI Although AI and Data Science usually go hand-in-hand, a large number of data scientists are not proficient in Machine Learning areas and techniques. However, Data Science involves working with large data sets that require mastering Machine Learning techniques, such as supervised machine learning, decision trees, logistic regression, etc. These skills will help you to solve different data science problems that are based on predictions of major organizational outcomes. At the entry level, Machine Learning does not require much knowledge of math or programming, just interest and motivation. The basic thing that you should know about ML is that at its core lie three main categories of algorithms: supervised learning, unsupervised learning and reinforcement learning. Supervised Learning is a branch of ML that works on labeled data; in other words, the information you are feeding to the model has a ready answer. Your software learns by making predictions about the output and then comparing them with the actual answers. In unsupervised learning, data is not labeled and the objective of the model is to create some structure from it. Unsupervised learning can be further divided into clustering and association. It is used to find patterns in data, which is especially useful in business intelligence to analyze customer behavior. Reinforcement learning is the closest to the way that humans learn, i.e. by trial and error. Here, a performance function is created to tell the model whether what it did was getting it closer to its goal or making it go the other way. Based on this feedback, the model learns and then makes another guess; this continues to happen, and every new guess is better. With these broad approaches in mind, you have a backbone for analyzing your data and can explore the specific algorithms and techniques that suit you best.
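Since the guide describes statistical features and the categories of machine learning only in prose, here is a minimal, self-contained Python sketch of those ideas. It is my own illustration, not from the original article: it uses scikit-learn’s bundled Iris dataset, a logistic regression as the supervised model, and k-means as the unsupervised one, all of which are arbitrary choices made just for the demonstration.

```python
# A minimal sketch of the concepts above: statistical features,
# supervised learning, and unsupervised learning on a toy dataset.
# Requires numpy and scikit-learn; the dataset and models are only
# illustrative choices, not recommendations from the original guide.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Load a small labeled dataset (150 flowers, 4 numeric features).
X, y = load_iris(return_X_y=True)

# --- Statistical features: mean, median, variance, percentiles ---
print("mean per feature:     ", np.round(X.mean(axis=0), 2))
print("median per feature:   ", np.round(np.median(X, axis=0), 2))
print("variance per feature: ", np.round(X.var(axis=0), 2))
print("90th percentile:      ", np.round(np.percentile(X, 90, axis=0), 2))

# --- Supervised learning: the data is labeled, so the model can
# compare its predictions against the known answers. ---
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("supervised accuracy:", round(clf.score(X_test, y_test), 3))

# --- Unsupervised learning: the labels are ignored and the model
# only tries to find structure (here, 3 clusters) in the data. ---
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
clusters = kmeans.fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```

Swapping the classifier for a decision tree, or the clustering step for another method, is a one-line change, which is part of why Python and scikit-learn are so often recommended to beginners.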
https://medium.com/sciforce/a-laymans-guide-to-data-science-how-to-become-a-good-data-scientist-97927ad51ed8
[]
2020-01-06 15:43:36.312000+00:00
['Programming', 'Machine Learning', 'Data Science', 'Artificial Intelligence', 'Business Intelligence']
Better Marketing Newsletter: How to Make Your Content Stand Out in 2021
Hey y’all, In this issue, we’ve got articles about mindfulness and marketing, a bunch of 2021 predictions and trends, and an explanation of why you’re so obsessed with getting 10,000 steps in every day. We launched the Better Marketing Slack Community last week, and it’s been fun to engage in conversations about newsletter platforms, Medium design, gender in marketing campaigns, and more. If you’re interested in connecting with other Better Marketing readers, come join us! Featured Articles
https://medium.com/better-marketing/better-marketing-newsletter-how-to-make-your-content-stand-out-in-2021-2a13a5943804
['Brittany Jezouit']
2020-12-18 15:23:28.773000+00:00
['Meditation', 'Writing', 'Media', 'Newsletter', 'Marketing']
7 Common Dreams and What They Say About You
7 Common Dreams and What They Say About You How to use your dreams to understand yourself better Photo by Joshua Abner from Pexels Not many people pay enough attention to their dreams, and here’s why that should change. Dreams are manifestations of unconscious desires and wishes. They are signals from the brain and body. Carl Jung, a highly reputed Swiss psychiatrist, saw dreams as “the psyche’s attempt to communicate important things to the individual”, and he valued them above all else, as a way of knowing what was really going on. Dreams help you make connections about your feelings that your conscious self wouldn’t make. Think of them as free therapy sessions in your mind, nudging you to confront your suppressed emotions. Before we get into the dream interpretations, let me clarify how you can tell when a dream actually means something. PET scans and MRIs have shown that some dreams are mere “data dumps”, where you dispose of excess information that you collected during the day. Your brain discards “useless” memories, and saves the valuable ones. So, a random acquaintance or something you thought of during the day popping up in your dreams is very normal and may not signify something deep. However, many recurring dreams reveal unusual and sometimes bizarre symbolism that cannot be written off as a coincidence. These symbols are strongly connected to the psyche and can help dreamers understand themselves much better.
https://medium.com/indian-thoughts/7-common-dreams-and-what-they-say-about-you-c341222c2849
['Bertilla Niveda']
2020-11-09 07:52:46.573000+00:00
['Psychology', 'Dreams', 'Mental Health', 'Philosophy', 'Self']
When Trade Went Global
When Trade Went Global A review of Valerie Hansen’s “The Year 1000: When Explorers Connected the World — and Globalization Began” If I asked you when explorers connected the world for the first time, what would you say? A month ago, I would have said in 1492. Columbus sailed the ocean blue, a new exchange of food, ideas, animals, people, and microbes changed the world forever. That’s the first time the world was connected in any meaningful way. But with her new book, Valerie Hansen has convinced me of something that sounded illogical: the world had already been connected before Columbus. Columbus and the other 15th century explorers took it a step further, yes, but they were only continuing what had been started by the Vikings around the year 1000. Valerie Hansen’s most provocative thesis (one which I don’t want to overstate because she doesn’t) is that the process of globalization began and the world was connected for the first time around the year 1000. That is not to say these explorers created a sustained connection like the one seen in the era of the Columbian Exchange, because that is important in itself. Even more provocative is not a thesis of Hansen’s but a subpoint to support it: the Vikings made contact and traded with the Mayans. If this blows your mind, it did mine too. And if I can summarize the two pieces of evidence to support it: 1) the Mayans drew blonde-haired people in their art (okay evidence but explainable if you’re skeptical), and 2) the Mayans drew Viking slatted boats that were visibly different than any that the Mayans ever built. If they had never seen a Viking boat, how would they draw one when no one around them built boats like that? This evidence convinced me that the Vikings did make contact with the Mayans and had more of an effect on the pre-Columbian Americas than I previously thought. But to zoom in on this point of Hansen’s does not do justice to the entirety of the book. The Vikings’ travels are an important early point, but the larger argument is that the world was much more connected in and around the year 1000 than is often assumed by non-historians. To show this, Hansen takes the reader on a tour of the world focused around the year 1000. Usually, she goes back to about 800 or 900 to give context for each chapter, and she always continues the narrative in abbreviated form until ~1450 to show the effects of trade in the given region or civilization. However, she laser-focuses the narrative around the year 1000 as much as possible, giving credence to the book’s title. This includes an analysis of the Silk Roads, Trans-Saharan trade, and Indian Ocean trade that interwove the economies, cultures, and societies of the majority of the world by c. 1000. Most of this analysis is also focused on people groups that are not given much focus in most histories of globalization, as those histories most often begin the narrative with Columbus. Hansen is successful in convincing me that the process of globalization began around the year 1000 and that refusing to acknowledge the accomplishments of earlier societies in globalization leads to a history that is too eurocentric. The Year 1000, however, achieves a balance that highlights the achievements of almost all regions of the world. Most importantly, Hansen reveals and analyzes the economic, social, and cultural connections between these distinct regions. A world map by Sicilian cartographer al-Idrisi (1100–1165). It shows most of Afro-Eurasia. 
For fellow teachers of AP World History: Modern, this book is a tremendous primer to enter the world scene in the year 1200. I almost want to call it “The Global Tapestry: The Book” (a reference to the much-maligned name of Unit 1), but it goes back much further in time and also includes many concepts from Unit 2: Networks of Exchange. Some of the people groups explored in The Year 1000 which also overlap with my curriculum include the Kitan/Liao, the Song dynasty of China, the Seljuk empire, Srivijaya, the Angkor empire, the Maya, Great Zimbabwe, Ghana, and Mali. I look forward to using the book to supplement my teaching, and I think it will be a fantastic resource for many others. The Year 1000 taught me more about this specific period in history than all other books I’ve read combined. This is because of its relentless focus, yet the heavy emphasis on context and causation will help connect readers’ preexisting knowledge to subject matter they may have no background in. For that reason, I would recommend The Year 1000 to anyone even interested in world history. Anyone can pick it up and be successful, and it will serve as foundational knowledge for future learning as well. I received an eARC of The Year 1000 courtesy of Scribner and NetGalley, but my opinions are my own.
https://medium.com/park-recommendations/when-trade-went-global-e2b97d96c42d
['Jason Park']
2020-05-10 12:08:12.124000+00:00
['World', 'Nonfiction', 'Books', 'History', 'Book Review']
How to Become a Successful Writer in 5 Not-So-Easy Steps
Bleary-eyed I stumble From the bed to the floor Feel the carpet squish Beneath my toes. Silent, I tiptoe Down the dimly-lit hall To the table Where my journal waits. Open it, step inside My mind Where will it take me today? What makes a writer successful? Fame? Money? These days both are elusive. If these are your goals, you may not have the stamina for the expedition. The road may be arduous. There’ll be twists and turns along the way. And maybe even a dark forest or two. If you’re ready to find out if you have the fortitude for the journey, read on. Being a successful writer is finding joy in the journey, finding the sunlight through the trees, and making new discoveries along the way. It’s showing up, engaging in something meaningful, and celebrating your progress. If you’re ready to begin, here are the 5 not-so-easy steps: 1. Write. Daily. The inimitable Jane Yolen, author of 386 books, has this magic word she shares with writers — BIC. It stands for Butt in Chair. This is the first step in becoming a successful writer. You have to write. Daily. How many times have you fantasized about seeing your name on a book in a bookstore? Or imagined yourself reading to a room full of kids? Or speaking at a writer’s conference? None of these experiences will happen if you don’t do the work. Jane Yolen starts every day with a poem. She calls her morning poems “finger exercises” because they wake her up and get her ready to write. She likens the practice to “priming the pump so the water flows.” Yolen says she gets grumpy if she just rushes into one of her projects without writing her morning poem. I tried it this morning and the result is the poem above. It works. I was able to get into the flow of this article much quicker than I normally do. How do you prime yourself for writing? Try writing a poem each morning. Or, sit down and let your thoughts spill onto the page in a stream of consciousness. Then, tackle your big project. ‘Finger exercises’ remove the dams blocking your flow of words. If you want to be a successful writer, get your butt in the chair, warm up, and write. 2. Play the long game Kwame Alexander jokes that he is a “26-year overnight success.” Don’t expect instant success as a writer. Most successful authors take years to finally break out. Judy Blume received rejections for two years and attended writing courses at night before ever successfully publishing a book. Kwame Alexander self-published fourteen books before he finally found an agent. What is one trait all successful writers share? Determination. Play the long game. Stick with writing because you love it, because it completes you and gives your life meaning, not because you’re expecting instant success. Neal Porter, vice-president and publisher of Neal Porter Books, says picture books take 2–3 years from submission to publication. That’s a long time. If you want to be a traditionally-published writer, you’ll have to be patient. Even if you want to self-publish, you’ll still need patience. Make sure your book is the very best version it can be before you share it with the world. In her SCBWI interview, Jane Yolen said she’s never understood all this stuff about writers bleeding onto the page. She’s joyous when she writes. Perhaps we all need to be a bit more joyous and a little less serious when we write. Our joy will shine through in our writing. If you’re not prepared to play the long game and you can’t find the joy in your writing, it will show. 3. Read Reading the work of others will help you tune your ear. 
First, read for the joy of it. I can’t tell you how many times I’ve started a book and planned to read it with a writer’s eyes, only to get swept away by the story. If you want to study the craft of writing, first read for the sheer joy of it. Get this out of your system. Only then will you be able to go back and read like a writer. Find standout passages. What makes them special? Which scenes provoke emotional reactions? Why? Notice what techniques the writer uses. Look for the pauses in the story — those quiet, powerful scenes followed by loud, thumping action. How does the author blend the two? How does an author add layers of meaning and depth? What lies under the surface of the story? How do the scenes fit together? Look for the rhythm of the writing. Are the sentences short? Are they long? How does the author merge them seamlessly? What are the conventions of your genre? Read other books in the same genre. Look for the common elements and also for the ways the author distinguishes himself within the genre. Explore character. What are the protagonist’s flaws? What are his/her quirks? Everyone has them. Give your character some dimension. How does your character charm the reader? Does he or she have any endearing qualities? Are they a loyal friend or a well-meaning fool? When you tune your reader’s ear, you’ll notice a world of exciting possibilities open to you as a writer. Be bold. Go forth. Explore. 4. Stay in your own lane Stop looking ahead or over your shoulder at other writers. It may feel like a punch to the gut when you see someone who seems to come out of nowhere and zoom ahead of you, but remember, we’re all on our own journey and your success may not look like someone else’s. As your writing develops, try to discover your own voice. Sometimes, it helps to write in the style of other authors when you’re starting out. But if you stick with writing long enough, your goal should be to find your own voice and style. Find it by experimenting, taking chances, and being brave. “Every great or even every very good writer makes the world over according to his own specification. It is his world and no other. This is one of the things that distinguishes one writer from another. Not talent. There’s plenty of that around.” — Raymond Carver If you’re copying others, your writing won’t provoke an emotional reaction in the reader. Jill Santopolo, author and associate publisher of Philomel Books, looks for emotion in the books she takes on. “Those books that touch readers are the ones who really sell.” — Jill Santopolo Stay in your own lane. Stop comparing yourself with others. Be brave, take chances, experiment and practice. Listen to your intuition to help you find and hone your voice. “Know yourself. Know what matters. What are your priorities? What will you fight for?” — Julie Strauss-Gabel Where will your writer’s journey take you? 5. Put yourself out there “It is impossible to live without failing at something unless you live so cautiously that you might as well not have lived at all — in which case, you fail by default” — J.K. Rowling, Harvard Commencement address Sharing your work with others is scary. It makes you feel vulnerable. Do it anyway. It’s the quickest way to grow. In her interview with SCBWI on Saturday, Judy Blume suggested looking for someone supportive of your writing journey. She once had a writing teacher who didn’t believe in her, so she left his class. It’s important to get the right people in your corner. 
You want people who are honest and direct, but also encouraging. Who will be on your superhero writing team? Who will challenge you? Who will hold you accountable? Who will celebrate with you when you finish your project? Create your own superhero writing team. Join a writing group. Support others. Help each other grow and develop. When you do, everyone wins. Face your fears. Hit publish. Find readers for your manuscript and deal with your discomfort, because it’s the only way to grow. Conclusion If you want to be a successful writer, write every day, play the long game, read, stay in your own lane, and put yourself out there. You won’t regret it. “It is our choices, Harry, that show what we truly are, far more than our abilities.” — J.K. Rowling, Harry Potter and the Chamber of Secrets Becky Grant is an orphan with the ability to harness the magical powers of gemstones, which she uses to stop evil Emperor Amaru from taking over Zatonia, land of wise condors, jewel-eyed cave dwellers, and vicious boarmen. Oh, wait! That’s the protagonist of her debut middle grade novel, The Stone Seer. Nevermind. Becky is really a boring adult who loves to drink coffee, sing in the car, and live vicariously through her middle grade characters.
https://medium.com/a-novel-idea/how-to-become-a-successful-writer-in-5-not-so-easy-steps-10a2bf6662b1
['Becky Grant']
2020-10-16 15:13:05.081000+00:00
['Productivity', 'Success', 'Writing Tips', 'Inspiration', 'Writing']
Are You In The Wilderness Season?
If we held quiet, we could hear the bears, the crunch of leaves and branches underfoot. The soft sounds the cubs made. We watched them streak black through the trees. We didn’t expect to see them. My pop and I took a cabin in Vermont — ages ago, it seems — and the closest we’d ever come to a wild animal were the thoroughbred yearlings he broke. We lived in New York, after all. But there they were, not a mile from our cabin nestled deep in the woods, and we crouched down low and didn’t dare breathe. I wondered if they could smell us, how we sweated through layers of clothes in terror, awe, and fear. I held my pepper spray, ready. As if a little tube could protect me from a mother charging. My pop rolled his eyes, but asked if I had an extra. By then, we were shaking because they were close. Is it strange to say we could fear their weight in the distance, their hulk? We pressed our stiff bodies into the earth. The ground was cold and yielding, like a grave. I could taste the dirt; it stained my lips black. I remember the salt in my teeth. We lay like that for a few minutes and then they were gone. Disoriented, we got lost on the way back, and by the time we could collapse onto the rugs on the wood floors, the day had folded into black. On the drive back, we kept retelling the story, adding color and contouring the details as my pop and I were prone to do, but a part of me didn’t feel the story belonged to me. We didn’t belong in the woods. We didn’t know its language, couldn’t navigate the terrain. We were tourists, and it wasn’t until we got back to Long Island that I understood the depth of our foreignness. While we had our guides, maps, and compasses, we could still get lost. There will always be places where our discomfort snuffs out that which is warm and familiar. I couldn’t shake the cold or the bears out of my bones, and I remember a few weeks later riding the subway all day because this was what I knew. I knew every stop on every train from Brooklyn and Queens to Manhattan and the Bronx. Stations that had remnants of 70s and 80s grit and grime, and stops that were bleached clean and whitewashed new. This is my country. Sometimes, it seems there’s nothing more monstrous than forcing someone to stew in sadness. We’re desperate to flee the unknown and the possibility of a pain or loss that has no end. We anesthetize and dull the edges. Quiet unnerves us, so we switch on televisions, fans — any form of white noise. We’d rather be uncomfortably comfortable than walk through the wild. We embrace noise and constant velocity because should we pause we face our reckoning. We have to deal with all that we’ve been dodging. When I first moved to Los Angeles, everything felt quiet. I had no subways or distractions or friends. Nothing was familiar, and the bears became expensive cars careening down the 10 or 405. Every day I woke to open-heart surgery. My skin felt like a graft that didn’t take. I had no tube of pepper spray to protect me from the grief of having lost my estranged mother to cancer. A constant sadness that threatened to swallow me whole. Then, the tsunami of questions. What was I doing here? Why did I leave the comfort of New York? Why did I have to start over? What if I lost everything? What if I failed? What if I had to slouch my way back east in defeat? And then another layer deep: Should I have said goodbye to my mother before she died? What kind of woman had I become, and did I like this version of me? What life was I living, and was it what I wanted or intended? 
Should I go on when I can’t go on? I used to think of depression as a dark country. Those who suffered from it had a visa that would permit them entry, and we had no instruments to navigate our way through and out. We were never promised a return ticket. That country, being the imperialist motherfucker it is, began to encompass moments of fear, uncertainty, unrest, anxiety, and despair. I realized I didn’t have to leave my house to find sadness. The unknown is always just beyond my reach. I didn’t need a foreign country or stamps on my passport — I could easily get lost in spaces that once felt familiar. I could lose my way coming home. So, this put me to thinking of my pop and me in the woods in Vermont. How the terror we felt from having gotten lost never lasted as long as we thought it would, or could. The pain is always temporary, even when we’re convinced we’ll never claw our way out. This year, I tumbled into the wilderness season knowing pepper spray (or a simple solution) wasn’t going to save me. There exists no simple or easy way out — but the road, cabin, or clearing does exist. This much I know to be true. I would be lying if I told you the following months don’t make me anxious. I have plans — like I had for 2020 — but there are so many unknowns, wild cards, characters resurrected from the dead, and plot twists. Will I ever be able to leave this country? Will I finally have a semblance of financial security? Will I get a German shepherd? What sustains me, what stops me from jumping out of open windows, is the clearing. Knowing every wilderness has its season. Every shape we take is temporary. And if we hold still and breathe, maybe the bears won’t make a feast of us. Maybe they’ll move on and disappear in the wild, through the trees.
https://felsull.medium.com/are-you-in-the-wilderness-season-bff39e757444
['Felicia C. Sullivan']
2020-12-29 02:12:47.880000+00:00
['Life Lessons', 'Mental Health', 'Self', 'Relationships', 'Writing']
One for the Road
FICTION One for the Road I drank myself senseless on Christmas Eve. I knew I shouldn’t have done that; I knew it was a bad idea; it was a terrible idea even, but I did it anyway. The bearded man at the other end of the bar raised his shot glass; he raised it as ceremoniously as a priest during a Mass, then he froze for an instant, tried to steady his swaying body, even though his hand holding the glass remained impeccably motionless, like that of a crane operator, and declared: “One for the road! Ladies and gentlemen, one for the road!” He downed it in one gulp, and the entire bar — full of patrons and smoke — also downed their drinks with him. I saw the sudden flashes of light, reflected from the bottoms of the raised glasses, flare up all around me, here and there, as if the starry night had crept into this crowded place and taken it over. I remember waiting for them to finish all that collective raising the toast and drinking — I had never been particularly fond of any mass actions, or inactions for that matter. And only then, when the last glass had landed safely on the runway of the counter or a table, did I allow mine to take off. “Don’t be shy, ladies and gentlemen! Don’t be shy!” he would reappear a few minutes later, as shaky and wobbly as before, and yell: “One for the road! One for the road, ladies and gentlemen! And Merry Christmas to you!” It took several of those high-proof farewells to knock him off his feet. Too much cordiality defeated him, apparently; so that, before long, two more or less sober Samaritans had to step up and tow him out of there, tow him back home — his insteps dragging on the ground, like twin turntable needles trying to record something. They left two parallel grooves in the fresh snow — so they had recorded something, after all: his path home. When I got out of there myself — yet without all that dragging and towing — I felt the sidewalk swim beneath me, just as if it tried to catch me off guard and smack me in the face. I saw everything swim and be in constant motion: the snow-caked shops closed for the night; the blazing street lamps forcing me to squint my eyes; the snowplows sailing majestically down the snow-covered streets like the monumental icebreakers that are about to reach and claim the North Pole; the trashcans being discreetly emptied by the warmly dressed garbage men roaming the empty streets — a swarm of nocturnal creatures creeping out of their lairs only after dark. The world, despite the pervading cold, seemed to be in a sudden mood for swimming — so I swam along with it. I swam down the road, my plump and short legs desperately worked beneath me — like the needles in the hands of one knitting a jumper — doing their fat best not to let me down; my long coat grazed and caressed the snowdrifts, like a delicate and passionate lover. Hazily, I saw — through the low-placed windows that I passed on my way — the cheery families gathered around their tables: the late Christmas dinners; I saw the warm and colorful blaze of fairy lights pour from each of the top-floor windows; I felt the festive mood suffusing the air; I saw the Christmas trees sag under the oppressive weight of all the sparkling and glittering nonsense being attached to them, as if they were generals with their chests hung with medals. And then I saw a splash of vomit on my right shoe — I must have stepped into something left by that yeller from the bar: I was a shrapnel victim. I dug the tip of my shoe into the nearby snowdrift — a portable shoeshiner, very convenient. 
Then it occurred to me; then it struck me — I couldn’t go back home like this; I just couldn’t return home empty-handed like this — it was Christmas Eve for crying and howling out loud. I staggered back, back to where the shops and flashy display windows could be found. I staggered back to the shops with their shelves bending and buckling under the overflow of presents, toys, mascots, shiny gift-wrapping papers, and everything that one might wish for on a day like this. I staggered back there, yet only to find them all closed and boarded up — such a cruel lack of compassion on a night like this, on such a special night like this. The tree sellers were gone as well and only the occasional heaps of green needles here and there — impossible to miss on the uncompromising whiteness of the fresh snow, like the blots of blood on the crime scene — marked the spots where they had till recently practiced their trade. Before I realized what I was really doing, I grabbed a spruce growing next to some building — the first one that I had encountered on my way. I kept wrestling and fighting with it, the snow and needles raining on me, on the ground, on the car parked nearby, until the fairly thin trunk gave up and broke with a snap. Eager to walk away from there as fast as possible, from the amputated tree trunk sticking accusatorily from the snow, from the incriminating evidence of my wooden crime, I cleaned it up; I straightened the spruce up, as if I were adjusting a child’s clothes before dropping it off at school. I barely climbed the stairs; it was dark in there. The tree kept brushing against the walls of the narrow staircase, showering the needles all the time, leaving a treacherous trail on the steps — I would deal with it later; I would deal with it tomorrow. I found the door to the apartment to be invitingly ajar — a warm light seeped from there, like the comforting heat from a fireplace. I silently walked in. In the living room, I saw a table; it was all set: full of empty plates; the empty chairs all around it, as if I intruded on a secret meeting of the huddling furniture. Was I too late? Did I miss it? Had they started without me? Was it that bad? Cursing myself, I propped the spruce against the wall; it poked and tilted the picture hanging there, like one fingering a loose tooth. I wiped my forehead — it was hot in there; too hot to my liking. Then I saw it: another Christmas tree, fully decorated, large and proud, sitting in the opposite corner of the room: a glittering impostor — they hadn’t even waited for me to do it. A sudden wave of drowsiness came over me; I could hardly keep my eyes open: I had to lie down. I directed my clumsy and more and more dragging steps toward the bedroom. The door was open; the bed was nicely made up — every tired man’s dream. I didn’t even bother to take off my coat, much less the shoes — it was my bed, after all, I could afford a hint of slovenliness, once in a while at least. It was warm; it was pleasantly soothing; it was fine. Then she appeared in the doorway, like a slice of bread jumping out of a toaster. “Out, out, out of here!” she started screaming right away. “Get out of here, now.” A tall and balding man dressed in a ridiculous sweater with a horizontal diamond pattern — was that the best he could do? — materialized right next to her; the silly robust grocer. “Jeez, not him again. 
Not that guy again,” the baldie squeezed into the room, past the speechless woman; he leaped to my side and started tugging at my coat’s sleeve, like a toothless dog maltreating a trespasser. “Sir, you can’t be here. Sir, you can’t keep coming here.” “Call the police,” the woman demanded. “Kids, call the police. Tell them that this man is here; he’s here again.” I saw a duo of girlish little heads peeking around the doorframe and looking like tiny flowers in a boutonnière. “Sir, it’s not your home,” the baldie went on and on. “Sir, you can’t be here. It’s not your home.” “Kids, call the cops,” the woman kept wailing. “Where’s the phone? Call the cops.” “Sir, get up. Sir, it’s not your bed,” the baldie pleaded. “Please go away. It’s not your home.” “It was my home,” I slurred, burying my head deeper into the pillow — a blissful smile on my face — into the soft bedclothes. “It used to be my home. It was my home once. I only had one for the road.”
https://medium.com/the-nonconformist/one-for-the-road-9b90780fd14e
['F. R. Foksal']
2020-12-29 09:44:40.141000+00:00
['Storytelling', 'Books', 'Short Story', 'Fiction', 'Flash Fiction']
My Top Ten Highest Earning Medium Stories for November 2020
My Top Ten Highest Earning Medium Stories for November 2020 Plus one honorable mention. Photo by Viacheslav Bublyk on Unsplash Last month was my best month yet for my earnings on this platform. I made over $15 — something I did not expect. In addition, I achieved one other milestone I didn’t expect. The story that had been chosen for distribution in October had so many views from Medium readers that I was able to see their interests. So, without any further delay, here are my top ten most popular Medium stories for November 2020, by the amount earned: A Simple Process for Tracking All Your Goals in Google Sheets This story made a whole 10 cents in November and has made $1.30 since I published it in January. In the post, I discuss how to set up a dashboard in Google Sheets so you can track all your goals. It’s received over 1000 views, but most of them came from Google. If you want to check it out, here it is:
https://medium.com/writers-blokke/my-top-ten-highest-earning-medium-stories-for-november-2020-b95acdb87304
['Erica Martin']
2020-12-03 02:59:55.988000+00:00
['Medium', 'Analytics', 'Motivation', 'Reading', 'Writing']
7 Typical Traits of Medically Unsocial People
Almost all people suffering from social anxiety deny it until they start seeing it infecting other parts of their life. It degrades their health, destroys their relationships, and decimates their dreams, leaving nothing behind but an unfillable void. No one wants to live like this, and the majority of people don’t even accept it in their minds. But the reality begs to differ when the root cause of all their misery is right under their noses, unnoticed. We all want to live happily and become successful in life — you may have the right attitude, a burning desire, and an unshakable persistence to do so. But it doesn’t necessarily translate into success. You know why? Because of this one flaw in your personality — Social Anxiety. Obviously, it is curable, but not everyone can fight it easily. Here are the 7 major traits of medically unsocial people. If you find yourself in a similar situation, then it is time to stop denying it and seek some professional help, ASAP.
https://medium.com/mental-health-and-addictions-community/7-typical-traits-of-medically-unsocial-people-675073576b39
['Nishu Jain']
2020-12-02 17:54:25.381000+00:00
['Social Anxiety', 'Personal Development', 'Relationships', 'Mental Health', 'Psychology']
Yes, Post-Vacation Burnout Is a Thing
Yes, Post-Vacation Burnout Is a Thing If a holiday is supposed to leave you refreshed and restored, why are you often more tired than when you left? Photo by Ricardo Gomez Angel on Unsplash Have you ever come back from vacation feeling like you badly needed, well, a vacation? Complaining about how exhausted you are after a week in Cancun isn’t going to win you any sympathy from co-workers, but it isn’t unusual to experience a crash, even after a lovely holiday. It’s increasingly clear that skipping vacation — as more than half of Americans do — is bad for health and productivity, increasing your risk of both depression and heart attacks. It can also contribute to burnout, a syndrome recently defined by the World Health Organization as exhaustion, negativity, and loss of professional efficacy. Multiple studies suggest that detaching from work on vacation makes us more productive and creative. But time away isn’t always relaxing — particularly if you spend it flying with kids, appeasing in-laws, or checking email — and reentry can be brutal. An overflowing inbox and multiple fires to put out can leave you feeling more drained and frazzled when you return to your desk than when you left. With post-vacation burnout, as with most things, prevention is better than cure. Here are some tips to help avoid it. Choose the right vacation First of all, be sure you’ve planned a vacation that actually allows for recuperation. Occupational psychologist Sabine Sonnentag at the University of Konstanz in Germany has identified four ingredients that make a vacation restorative, but this is also about personal taste: A week-long mountain-climbing trip might be ideal for some as an escape from work, and simply exhausting for others. Ideally, schedule your vacation with at least a day’s buffer before you have to go back to work, to give yourself time to settle back in, do laundry, get a good night’s sleep. Set your out-of-office mindset At work before you leave, take some time to complete your most unpleasant tasks, so you don’t spend your whole vacation thinking about them. In addition to setting your email vacation response and Slack status, make sure your colleagues know everything they need to do, and designate someone to address any pressing issues that come up while you’re gone to reduce the chance of getting a panicked message that you need to respond to from your beach blanket. Next, write a detailed, not-too-ambitious to-do list for your first day or two back, so that you can stumble through the first few days of reentry without straining your jet-lagged brain. On vacation, truly relax Plan activities you find both relaxing and pleasurable, like idly browsing a bookstore, doing a jigsaw puzzle, or going for an easy hike. The key is to give yourself a break from trying to achieve anything. Try to maintain some control over how you spend your time and energy. This can be tough on family vacations — especially when traveling with small children — but it’s important to carve out some time to do what you want, even if it means taking turns watching the kids or hiring childcare. If your needs align perfectly with what your partner and family want to do, great. If not, arrange to strike out on your own at least once, whether for an early morning run or a solo museum visit. And relaxation doesn’t have to mean downtime, but it should mean abandoning the need to perform. 
Consider developing a new skill or building on one, such as kayaking or taking a cooking class in the local cuisine. The activity doesn’t have to be physically risky, or even all that hard, just mentally absorbing enough to keep you focused and in a state of “flow.” Activities that help you develop a new mastery help combat the discouragement and inadequacy that signal burnout on the job. Another upside? It’s hard to check your phone if your device is sealed in a drybag or your hands are covered in focaccia dough. As for your work-work, do your best to unplug from it: Nearly 30% of Americans work more than they thought they would on vacation, according to a 2018 study by the American Psychological Association. Constantly checking your email undermines the potential benefits of vacation, and may even negatively affect health and well-being afterward. It also disrupts the potential for creative inspiration that can arise from allowing yourself to be a bit bored, or simply letting your mind wander. Ease your reentry If you’ve managed to schedule a buffer day or two, use it to decompress, catch up on sleep, and savor the experience. Print out photos, record memories in a journal, or download recipes that will remind you of your trip. Write thank-you notes to your hosts and travel companions. Plan your next adventure. Once you return to work, be sure to go to bed and get up at your normal times, and don’t try to make up for being gone by resuming work in double time. And here’s a useful trick to ease yourself back into the pace of office communication: Don’t announce that you’re back at your desk for a day or two, and wait to update your status and out-of-office message. Recognize that a vacation isn’t a cure-all Even if your vacation was blissfully relaxing, don’t be surprised if the mood-boosting effects fade soon after your return — that’s normal. If you don’t feel any better after vacation, or you can’t enjoy it because of work stress, however, you might be suffering from work-related burnout. If you’re struggling with burnout, chances are your vacation was only a temporary fix. The chronic stress associated with burnout syndrome isn’t resolved by a week or two of time off, no matter how perfectly you plan it, notes Irvin Schonfeld, an occupational health researcher at the City University of New York. Schonfeld’s research suggests that people who score high on the Maslach Burnout Inventory may actually have a form of job-stress-induced depression. Emotional exhaustion, which Schonfeld describes as “the core of burnout,“ is also highly correlated with depression, “so it’s tricky to say that burnout is a separate phenomenon from depression,” he says. Finally, remember that vacation isn’t the only time you’re allowed to relax. Find ways to build “deliberate rest” into your day; take opportunities to let your mind wander or be entertained — even at work, or during your commute. And pretend you’re on vacation over the weekends. Research suggests that when people adopt a “vacation mindset” on weekends, they do less housework, and spend more time eating and having sex. They’re also happier on Mondays. Go figure.
https://forge.medium.com/yes-post-vacation-burnout-is-a-thing-ef614bc7d49f
['Emily Underwood']
2019-08-19 11:01:01.175000+00:00
['Vacation', 'Mental Health', 'Productivity', 'Live', 'Burnout']
Andrea Yates and the Cost of Ignoring Mental Illness
Photo courtesy of the Houston Police Dept. When a mother kills her children, it’s condemned as the most heinous crime a person can commit — a cold-blooded act of violence against the most vulnerable victims. But is it always so clear cut? In 2001, the case of Andrea Yates made the world ask that question — and to find the answer, they would have to reckon with the ways that religion, patriarchy, and mental illness can destroy a woman. She was born Andrea Kennedy on July 2, 1962, the youngest of five children in a Catholic family. Friends and classmates remember her as being very active in extracurricular activities and charity work. She was smart, too — a member of the National Honor Society, she graduated valedictorian of her class at Milby High School in Houston, Texas. But what they might not have known was that this driven perfectionist also struggled with depression and bulimia. In her desperate need to appear flawless, she never allowed anyone to know what was going on inside her mind. After graduation, she went on to earn a degree in nursing, then went to work as a registered nurse at a cancer center. It was around this time that she met Russell “Rusty” Yates. Very soon after meeting, they moved in together. Rusty said that she was extremely uncomfortable with her body — dressing and undressing in the closet — and did not enjoy sex. While some might chalk this up to a strict patriarchal upbringing, it stands out as a red flag for a number of mental disorders. Rusty was a devout follower of the itinerant street preacher Michael Woroniecki. Woroniecki would travel around the country with his wife and kids preaching, mostly on college campuses, his fire-and-brimstone message. The religion he espoused was a stark fundamentalist Christianity with particularly regressive rules for women, who, he preached, were naturally evil “witches” because they came from Eve. Women were not to educate themselves, work outside the home, or use birth control. Wives were expected to submit themselves to their husbands in all matters. Children, as well, were expected to be seen and not heard, and disobedience of any kind was to be punished with spankings or whippings. Mothers who didn’t beat their children, he taught, were condemning them to hell. Rusty introduced Andrea to Woroniecki’s teachings, and then to Woroniecki himself. Perhaps his strict fundamentalist teachings didn’t seem so foreign to her, since she had grown up in a Catholic household. Nevertheless, they lived together for two years before getting married, and in February 1994, she gave birth to their first child, Noah. Andrea, now a devout follower of Woroniecki, quit her job and studies to stay home and be a full-time mom. Later, Andrea would admit that after Noah’s birth, she began to have disturbing visions of knives and stabbings, and she even thought she heard the voice of Satan speaking to her. However, she told no one of these troubling visions. During that time, the Yateses and the Woronieckis became quite close, even considering each other family, and the women often watched each other’s children. Soon after Noah was born, they had to move from their four-bedroom home in Houston to a small trailer in Seminole, Florida, for a temporary job Rusty had taken. There, thanks to their anti-contraceptive beliefs, Andrea gave birth to two more sons: John in December 1995 and then Paul in September 1997. During this time, the Yateses kept in contact with Woroniecki and his wife through their newsletter, videos, and letters. 
In their letters, the Woronieckis would often “diagnose” Andrea as being evil. “God knows how wicked you are,” he wrote. “You must accept the reality that your life is under the curse of sin and death . . .” Andrea was subjected to a near-constant stream of hateful messages like this from a man she believed spoke for God. Shortly after Paul’s birth, the Yateses moved back to Houston — this time, they purchased their home from Woroniecki: a used Greyhound bus that had been converted to a motorhome. There, in that 350-square-foot bus, Andrea was consumed with caring for a newborn, a toddler, and a preschooler. Besides the work of cooking for the family, feeding the older two, nursing the newborn, and cleaning up after everyone, she was constantly changing and washing cloth diapers (disposable diapers were not allowed by Woroniecki) and homeschooling the oldest. On top of that, she was caring for her aging father, who had Alzheimer’s. When friends or family members would question Rusty, or try to point out it was too much stress to put on his wife, he would shrug it off as being Andrea’s job. Meanwhile the Woronieckis continued condemning Andrea for not disciplining her children more. Apparently their normal childhood behaviors were seen as “disrespectful” and “not what God wants,” and the Woronieckis insisted that by not forcing the children to be more obedient by whipping them, Andrea was damning their souls to hell. In February 1999, their fourth child, Luke, was born. Four months later, Andrea called Rusty at work, begging for help. He arrived home to see her nearly catatonic, chewing her fingers. His solution was to take her and the kids for a walk on the beach. He claimed she seemed better after that, but the next day she tried to overdose on her father’s trazodone. Rusty took her to the hospital, where she was diagnosed with major depressive disorder and put on the antidepressant Zoloft. However, she had to be released after a short time as her insurance would not cover further inpatient services. After she was sent home, she began seeing a new doctor, who put her on the anti-psychotic drug Zyprexa. But at home, she was back under the spell of Woroniecki, who preached that drugs and medical care were of the devil. Andrea promptly flushed all of her Zyprexa down the toilet. Her mental health spiraled downwards: she was pulling her hair out and leaving bald spots, picking her skin until it bled, and not eating. She began hearing voices telling her to get a knife. One day Rusty came home from work to find Andrea holding a knife to her own throat. He again rushed her to the hospital. The hospital recommended electroshock therapy, which the couple refused. So the hospital sent her home with a combination of drugs, including the anti-psychotic Haldol, in conjunction with weekly visits to a psychiatrist. Thankfully, family members managed to convince Rusty that Andrea and the kids needed to get out of that bus. He purchased a three-bedroom, two-bath home in nearby Clear Lake, Texas. Now that she was out of that cramped bus, under a doctor’s close supervision, and taking the appropriate medication, she seemed to recover. Doctors warned the couple not to have another child, since women who suffer from postpartum depression and psychosis are at a much higher risk with each birth, and the episodes tend to worsen. However, now that Andrea was seemingly back to normal again, the couple decided to have another child. 
Rusty described Andrea’s severe postpartum depression, psychosis, and suicide attempts as being “like having the flu,” saying that if she relapsed, they could just put her back on her meds and everything would be fine. The couple either didn’t know or didn’t care that going off psychiatric drugs can itself trigger severe reactions, and later, make it harder to treat the underlying issue. So in 2000, Andrea stopped taking both her psychiatric drugs and her birth control. In November, their fifth child, Mary, was born. The following March, Andrea’s father passed away. His death hit her hard, and she began showing symptoms of severe depression: lethargy, picking bald spots on her scalp, not drinking liquids. Over the next few months, she was in and out of psychiatric hospitals and clinics and subjected to an ever-changing mixture of psychiatric drugs. Rusty was warned by doctors not to leave Andrea alone. But Rusty still would not face the severity of Andrea’s problems. He arranged for his mother to come to the house to help Andrea regularly, but would leave her alone for short periods of time in order to “make her more independent” and, of course, so she wouldn’t become dependent on him and his mother for her “maternal responsibilities.” One day in May, his mother arrived at their home to find Andrea filling the bathtub at 4:30 in the afternoon. When questioned, she gave vague answers. This scared Rusty’s mother, so Andrea was sent back to the hospital, where she admitted she had thought about drowning herself and her children. On June 18, 2001, Rusty took her back to her doctor because she was not getting any better. He reports that the doctor was frustrated that none of the drugs seemed to be working, so he told Andrea, “You need to think happy thoughts!” Two days later — June 20, 2001 — Rusty left for work around 9. His mother was scheduled to come to the home around 10, leaving Andrea alone with the children for an hour. As soon as Rusty left, Andrea filled the bathtub. She took John, who was 5, into the bathroom where she held him under the water until he was dead. Then she carried him into the master bedroom and carefully laid his body on the bed. She then brought in Paul, age 3, and repeated the process. Luke, age 2, was next. She then drowned 6-month-old Mary, but while she was still floating in the tub, Noah, age 7, came in and asked what was wrong with Mary. He tried to run away, but Andrea caught him, then drowned him, too. She left him floating in the tub, but took Mary and laid her in John’s arms on the bed. She then called 911, insisting they come to the house, but would not answer why. As soon as she got off the phone with 911, she called Rusty and told him to come home right away. He seemed to intuit what had happened, because he asked her, “How many?” and she answered, “All of them.” When the police arrived, her first words were, “I just killed my kids.” Her hair and clothes were still wet. At the station, the court-appointed psychiatrists described Andrea as “the sickest person” they had ever seen. She was nearly catatonic, emaciated, filthy, her scalp checkered with bald spots. Under questioning, she readily confessed to drowning all five of her children. Her reasoning was a delusion built entirely from Woroniecki’s teachings: she said she had killed her children so that they would go to heaven; if she hadn’t “sent them to God” now, they would surely keep “stumbling” and would go to hell. 
She said she knew she was already evil — that the Devil was literally inside of her — and damned to hell. So killing them, in her delusion, wouldn’t make any difference to her eternal soul, but would save her children’s. On July 30, 2001, she was indicted on two counts of capital murder. The prosecution held off charging her for the other three murders as a kind of fall-back: if they failed to get a conviction, they could then bring the other three charges without violating her right not to be tried for the same crime twice (i.e., “double jeopardy”). Andrea pled not guilty by reason of insanity, an extremely risky strategy. Nationally, only about 1 percent of criminal defendants take this plea, and of those, only about a quarter of them are successful. In addition, Texas has some of the strictest qualifications for an insanity defense. Under the state’s standard, known as the M’Naghten Rule, defendants must prove both that they have a mental disease or defect and that they could not tell right from wrong at the time of the crime. Andrea Yates’ trial began on February 18, 2002. While it was clear that she indeed had a mental disease, her ability to tell right from wrong was at the heart of the trial. The steps she took in planning the crime, such as waiting until Rusty was gone and locking up the family dog, were used to prove she knew what she was doing was wrong. It also didn’t help that by the time of the trial, Andrea had been under psychiatric care and seemed more normal: she was lucid in a way she hadn’t been when she committed the crime, and her appearance was clean and well-groomed. As much as it hurt her case, the court psychiatrists could not have ethically withheld treatment. In March 2002, a jury deliberated only three and a half hours before rejecting the insanity defense and finding her guilty. Although the prosecution had sought the death penalty, the jury rejected it. Instead, she was sentenced to life imprisonment with eligibility for parole in 40 years. While in prison, she was placed on suicide watch, and later, hospitalized for refusing to eat. Her attorney filed an appeal, and in 2005, the Texas First Court of Appeals reversed her capital murder conviction. That same year, Rusty divorced her. In 2006, her retrial began. She again pled not guilty by reason of insanity, and on July 26, 2006, she was found not guilty by reason of insanity and ordered into the custody of a state mental hospital. She now resides in the low-security Kerrville State Hospital in Kerrville, Texas, where she receives treatment and counseling. In her free time, she makes cards and aprons that are sold anonymously, with the proceeds sent to a fund to help low-income women access mental health services. Every year, her case is brought up for review, but every year, she waives it. It seems Andrea Yates doesn’t want to be released. Public opinion was, and still is, split between those who see her as a cold-hearted baby killer and those who see her as a victim of her mental illness — and, possibly, manipulation by a maniacal cult leader. Thankfully, in the end, it doesn’t matter — Andrea Yates will most likely never leave the walls of Kerrville State Hospital alive.
https://delanirbartlette.medium.com/andrea-yates-and-the-cost-of-ignoring-mental-illness-b9e2f1f598ae
['Delani R. Bartlette']
2019-05-13 13:01:00.995000+00:00
['True Crime', 'Crime', 'Mental Health', 'Postpartum Depression', 'Psychology']
3 Open Source Tools for Ethical AI
#1 — Deon An ethics checklist for responsible data science, Deon represents a starting point for teams to evaluate considerations related to advanced analytics and machine learning applications from data collection through deployment. The nuanced discussions spurred by Deon can ensure that risks inherent to AI-empowered technology do not escalate into threats to an organization’s constituents, reputation, or society more broadly. With AI, the stakes are monumental, yet the dangers are potentially indistinct. Instances of algorithmic malpractice are not always clear-cut. The Deon checklist from DrivenData spans ethical considerations from data collection to deployment. As an example of a use case where Deon could have been implemented to improve data product governance, consider the influence of Russian-crafted fake news on the 2016 election. Though the threat did not stem from AI, the impact of the intentionally misleading media content was amplified by the recommendation algorithms of social media, where controversy begets interaction, and users are pushed towards increasingly extreme belief systems. This pattern of segmentation is beneficial to the algorithms underlying this technology as it leads to wider decision boundaries between classes — but it is detrimental to society, increasing the potential for foreign actors to sow division and undermine critical institutions such as the sanctity of national elections. Social media companies have come under fire as a result of their failure to detect and reject fake news content. By failing to act, these firms permitted their artificially intelligent recommendation engines to strengthen the destabilizing impact of the deceptive foreign media. If these firms had undertaken a systematic review of the potential ethical and social implications of fake news amplified by their algorithmic systems prior to the 2016 election, this effort might have resulted in a more robust plan to root out systematic disinformation campaigns. With the threat of AI-generated deepfakes looming as an ever more realistic weapon in information warfare, the hazard posed by the current state of unpreparedness is heightened going into the 2020 election.
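To make the checklist concrete, here is a minimal sketch of how a team might generate Deon's default checklist for a project. It assumes the pip-installed deon command-line tool and its documented --output flag; the ETHICS.md filename and the wrapper function are illustrative choices rather than part of Deon itself, so verify the exact invocation against the current Deon documentation before relying on it.

```python
# Minimal sketch: generate Deon's default data-ethics checklist for a project.
# Assumes `pip install deon` has been run; the wrapper below is illustrative,
# not part of Deon's own API.
import subprocess
from pathlib import Path


def add_ethics_checklist(project_dir: str, filename: str = "ETHICS.md") -> Path:
    """Write Deon's default checklist into the given project directory."""
    output_path = Path(project_dir) / filename
    # Equivalent to running `deon --output ETHICS.md` from the project root.
    subprocess.run(["deon", "--output", str(output_path)], check=True)
    return output_path


if __name__ == "__main__":
    checklist = add_ethics_checklist(".")
    print(f"Checklist written to {checklist}; revisit it at each project stage.")
```

A team would typically commit the generated file alongside the code so the checklist items can be reviewed at each stage, from data collection through deployment.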
https://medium.com/atlas-research/ethical-ai-tools-b9d276a49fea
['Nicole Janeway Bills']
2020-12-23 11:48:04.442000+00:00
['Finance', 'Machine Learning', 'Data Science', 'Artificial Intelligence', 'Python']
10 online courses for those who want to start their own business
As options for learning online continue to expand, a growing number of entrepreneurs are using them to keep their staff on the cutting edge, or even for themselves. Using tools for online training, including videos, apps, and webinars, rather than sending employees to expensive training classes or bringing in pricey consultants to train on site, can save startups both time and money. “Small businesses are turning to online training for cost, quality, and access reasons,” says Nate Kimmons, vice president of enterprise marketing at lynda.com. “Gone are the days of sending employees off to a two-day, in-person class. Online training serves as a 24/7 resource that the learner can access anytime, anywhere at their own pace from any device. It’s simple to use.” If you are thinking of trying online training, here are a few things to consider and examples of tools to get you started. Allow for flexibility. With face-to-face training, you usually get one chance to soak it all in. But many online programs are on-demand, meaning learners can move at their own pace and watch presentations again and again if needed. The added flexibility allows everyone to work at his or her own pace and better fit the training into a busy schedule. Go mobile. Online education also allows for flexibility across technology formats. Employees can learn at home, on the job, or anywhere they use their smartphone. Do your research. Not every online course is worth the money. Check out reviews and feedback offered by users of any given online course. Coming up now are examples of just ten courses available online, offering curriculums in entrepreneurship, marketing, marketing psychology, and coding, to name a few. All vary in price from free all the way up to €200, with links to the courses available for further details. 1. Entrepreneurship: Launching an Innovative Business Specialisation · Developing Innovative Ideas for New Companies: The First Step in Entrepreneurship · Innovation for Entrepreneurs: From Idea to Marketplace · New Venture Finance: Startup Funding for Entrepreneurs · Entrepreneurship Capstone Develop your entrepreneurial mindset and skill sets, learn how to bring innovations to market, and craft a business model to successfully launch your new business. Enrol here. 2. Entrepreneurship Specialization · Entrepreneurship 1: Developing the Opportunity · Entrepreneurship 2: Launching your Start-Up · Entrepreneurship 3: Growth Strategies · Entrepreneurship 4: Financing and Profitability · Wharton Entrepreneurship Capstone Wharton’s Entrepreneurship Specialization covers the conception, design, organisation, and management of new enterprises. This four-course series is designed to take you from opportunity identification through launch, growth, financing and profitability. Enrol here. 3. How to Start Your Own Business Specialization Developing an Entrepreneurial Mind-set: First Step towards Success · The Search for Great Ideas: Harnessing creativity to empower innovation. · Planning: Principled, Proposing, Proofing, and Practicing to a Success Plan · Structure: Building the Frame for Business Growth · Launch Strategy: 5 Steps to Capstone Experience · Capstone — Launch Your Own Business! This specialization is a guide to creating your own business. It covers a progression of topics necessary for successful business creation, including mindset, ideation, planning, action, and strategy. Rather than just describing what to do, the focus will be on guiding you through the process of actually doing it. 
Enrol here. 4. Entrepreneurship: The Part Time Entrepreneur Complete Course For people who want to pursue entrepreneurship without giving up their full-time jobs: succeeding with a side gig as a part-time entrepreneur. Identify and take action on part-time opportunities or side gigs that fit your lifestyle. Be ready to launch your new business. “Great course. Focused on Part Time which is nice as other courses are about full time and I am not ready for that. Want to make some money as a freelancer and part-time for now. Educational and Instructor is very motivational and encouraging as well. Highly recommend.” Enrol here. Price: €200 5. SEO for SEO Beginners SEO tutorial for beginners: SEO optimise your site, get to the top of the search results and increase sales! Seomatico Get up to speed with the fundamental concepts of SEO Discover how to find the best keywords for your website — ‘keyword research’ Find out how to increase your site’s visibility in the search engine results — ‘on page optimisation’ Learn how to build more authority in your niche than competitors so Google puts you at the top of the search results Price: FREE In this SEO tutorial for beginners, you’ll learn about the Three Pillars of Powerful SEO: 1. Keyword Research: How to find keywords that attract visitors who want to buy 2. On Page Optimisation: How to increase your site visibility in the search engines 3. Off page optimisation: How to build authority on your site using links so Google knows you have the best content for its users. Enrol here. 6. Twitter Marketing Secrets 2017 - A step-by-step complete guide Discover social media marketing secrets, gain 25000+ true twitter fans & followers, twitter Marketing tips! Reach 25k highly targeted followers in just weeks. Attract real and targeted followers with zero money and 20 minutes a day. Become an influencer on Twitter and sell products and services right away. “1000+ highly satisfied students within just 5 days of the course launch” “Best Twitter Marketing Course on Earth!! This Course will Skyrocket your Twitter Career. I highly recommend taking this course” Price: €120 Enrol here. 7. Marketing Psychology: How to Get People to Buy More & Fast! Learn a set of “persuasion & influence tools” you can use to ethically motivate people to action in marketing & business Create marketing that grabs your customer’s attention, triggers curiosity and interest in your product, and ultimately persuades them to take action and BUY from you. The psychology of capturing attention, and how to get people to think and dream about your brand, or the psychology behind getting people to rave about your product after you think that they’ve gotten sick of seeing it. How to design simple web pages and marketing materials that boost your conversions Enrol here. Price: €200 8. Coding for Writers 1: Basic Programming Learn to both code and write about code The course uses JavaScript, but also provides a survey of other common programming languages. It covers common computer science concepts, such as variables, functions, conditionals, loops, etc. Price: €45 Enrol here. 9. 
Smart Marketing with Price Psychology Improve online marketing success with fundamental psychological pricing research for your business and marketing Price your product or service to maximize revenue Understand how consumers perceive and think about prices Run promotions that make people think they’re getting an amazing deal Think about your full set of products and services in ways that maximize earnings Price: €35 Enrol here. 10. Entrepreneurship: 5 keys to building a successful business Learn the core components of starting a great business from an entrepreneur who made his first million at the age of 24 This course builds an understanding of how successful entrepreneurs think and how to apply that thinking in your own life. The foundations you’ll need to develop a business idea that truly resonates with consumers and addresses an actual market demand. Price: €90 Enrol here. Each of these online courses is very flexible, requiring only a small amount of your time per week. They all have excellent feedback from previous users, and they are all accessible through apps available in the major app stores on mobile devices. They all tick three very important boxes. Launching your own business is very time-consuming and requires your undivided attention. Enrolling in courses like these for your staff, or even for your own benefit, might give your business the insight or kick it needs.
https://medium.com/the-lucey-fund/10-online-courses-for-those-who-want-to-start-their-own-business-a58572b00f1e
['Ian Lucey']
2017-03-30 12:37:39.082000+00:00
['Online Courses', 'Startup', 'Education', 'Entrepreneurship', 'SEO']
Roller Coaster Therapy
Roller Coaster Therapy Strap in. Photo by Marc Schaefer on Unsplash Living with addiction and chronic mental illness and visions of blood-hungry demons gnawing at your heels can be like spending your life on a roller coaster. A one-way trip to nowhere. You get used to the thrills and chills, the highs and the lows. You know once you go up, you’re about to come crashing down. You know what lies around the next bend. And still it’s exhausting. You lose a little of yourself upon the completion of each cycle. It’s a ride that has no exit point. You pulled the lever and hopped on. Or fate pulled it for you and threw you into your seat. Round and round we go. Now you’re whirling and twirling through the twilight sky into oblivion. You scream at the stars. You think about taking off your safety harness, jumping and plummeting to the ground four or five stories below. You couldn’t jump from any lower. The cars are whipping through too fast and you are enshrined in webs of metal. You wouldn’t be the first to jump. Where is the hope? Where is the healing? When and how does it end? You ask, and you are not alone. There are other lost souls traveling with you on other cars. In the beginning they were all screaming and either hanging on for dear life or throwing their arms in the air with reckless abandon. Screaming in excitement or in terror. Now they’ve all gone quiet. Their arms lay limp in their laps. Their eyes are bloodshot and glazed over. They are resigned to the maddening meaningless revolutions. Hope departed from their souls long ago, to be replaced by a leaden weariness, a torpor that has constricted their limbs and anesthetized their minds. They feel no anger, no desperation. Can they still be reached? Can they still be saved? In fact, they may be closer to salvation than most. Still others clamor for control. They think they can regain their agency in a situation that has spun far outside the arena of their thoughts and emotions. They refuse to surrender to a state of graceless passivity. They analyze, calculate, formulate plans. How can we make this work? How can we use this, this and this to our advantage? Where is the weak point? There has to be one. It’s just a ride, after all. I will not do this forever. I’m smart, I’m strong and I will not be rendered a captive observer to my own life experience. If we work together, we can overcome any challenge. All their ideas result in utter failure. They try again and again. They’re a stubborn group, but even they get worn down by the relentless whirling and spinning of the coaster. Time takes its toll. Soon they slump back in their seats, defeated and dejected. But there is still a glint of fire in their eyes. People begin yelling at each other, venting their frustration, their fear and their blackest despair. Blame spreads like wildfire. It is a deadly contagion, sickening everyone who hears it and internalizes it. The living dead say nothing, their bodies slumped and their jaws slack. But everyone else is looking for recourse from someone. They want to know why. Why them? They want to know how. How could this possibly be? What did they ever do to deserve such hellish torture? Where are all the people? And, inevitably, the question is asked, are we all dead? Some chew this over. Ponder it. Others discard it immediately as complete insanity. They want practical answers and practical solutions. The pain is palpable. It seems on the verge of coalescing into a living, breathing organism born of human agony. 
Finally a woman with a quiet but firm voice cuts through the bickering with a question, ‘What if the person or persons who designed this monstrosity wanted to break our minds and spirits so they could create us anew? We’re already a little broken. We’re all dying. What do we have to lose?’ She is instantly rebuffed by a torrent of indignant responses. ‘I’m not broken. How dare you!’ ‘So your solution is just give in to torture?’ ‘I have my dignity to lose, you bitch! And my life!’ ‘They just get to play god with our bodies and minds without our consent? Fuck them and fuck you!’ The woman takes it all in stride and lays back in her seat. Then a tall, solidly built man in overalls and a dark t-shirt emerges from the darkness. No one notices him at first. The coaster is speeding along too fast and the discussions and recriminations between passengers have descended into full-blown chaos. People have even taken off their safety harnesses and started taking swings at each other. The man apparently takes no notice. If he does, there is no trace of it on his placid expression. He walks calmly over to the lever and as the coaster comes swooping down he pulls it and the cars come to a screeching halt. A few people who unstrapped themselves tumble over the side of their cars, but quickly pick themselves back up and dust themselves off. There is great rejoicing. The passengers swarm the tall man, hugging him fiercely, rubbing his shoulders and patting him on the back. When they step further out into the open air they find themselves in the middle of a gigantic deserted theme park, with the rides all lit up and buzzing as if it were opening day. Everyone is confused and disoriented and absolutely exhausted. But they are free. They are free to walk the earth in whatever direction they choose. What a magnificent gift. Some are licking their lips at the thought of a drink. Others are scratching their arms at the thought of a shot of dope. All of them are angry. All of them are depressed. Some want to end it the first chance they get. This was punishment. This wasn’t treatment. The living dead are now animated enough to start stumbling towards the exit. Most of the people try to follow their lead. It seems the best way to go. The best way to continue their journey. To firmly cement their freedom and return to the glory of the wider world, in all its beautiful ruin. But they are quickly corralled by the tall man with the black shirt and overalls. He shakes his head. “You’re not ready yet.” One woman collapses to her knees and begins to shriek at the sky. An older gentleman approaches the tall man. “We’ve done our penance. Enough is enough, sir.” The man frowns. He looks truly sorry. “No, sir. I promise, on my heart and on my word as a good man of God, you will thank me when this ordeal is done.” Then he turns back to the crowd and points his finger to a ride off in the distance. “To the Tilt-A-Whirl!”
https://medium.com/grab-a-slice/roller-coaster-therapy-6be6c8c7a0dd
["Timothy O'Neill"]
2020-01-26 20:55:55.695000+00:00
['Addiction', 'Mental Health', 'Fiction', 'Psychology', 'Horror']
3rd Annual Global Artificial Intelligence Conference [January 23–25th, 2019]
Global Big Data Conference’s vendor-agnostic 3rd Annual Global Artificial Intelligence (AI) Conference is held on January 23rd, January 24th, & January 25th 2019 and covers all industry verticals (Finance, Retail/E-Commerce/M-Commerce, Healthcare/Pharma/BioTech, Energy, Education, Insurance, Manufacturing, Telco, Auto, Hi-Tech, Media, Agriculture, Chemical, Government, Transportation, etc.). It will be the largest vendor-agnostic conference in the AI space. The conference allows practitioners to discuss the effective use of AI across a range of techniques. Join the AIMA Thought Leadership @ bit.ly/AIMA-MeetUp The large amounts of data created by mobile platforms, social media interactions, e-commerce transactions, and IoT provide an opportunity for businesses to tailor their services through the effective use of AI. Proper use of Artificial Intelligence can be a major competitive advantage for any business, considering the vast amount of data being generated. Artificial Intelligence is an emerging field that allows businesses to mine historical data effectively and better understand consumer behavior. This type of approach is critical for any business looking to successfully launch its products and better serve its existing markets. The Annual Global AI Conference has been extended to three days based on feedback from participants. The event will feature many of the AI thought leaders from the industry. The Annual Global AI Conference is an event acclaimed for its highly interactive sessions. This conference provides insights and potential solutions to AI issues from well-known experts and thought leaders through panel sessions and open Q&A sessions. Speakers will showcase successful industry vertical use cases, share development and administration tips, and educate organizations about how best to leverage AI as a key component in their enterprise architecture. It will also be an excellent networking event for executives (CXOs, VPs, Directors), managers, developers, architects, administrators, data analysts, data scientists, statisticians, and vendors interested in advancing, extending, or implementing AI. SPEAKERS Over 100 leading experts in the Artificial Intelligence area will present at our conference. Please send an email to events@globalbigdataconference.com for speaking engagements. YOU GET TO MEET You get to meet technical experts and senior, VC, and C-level executives from leading innovators in the AI space (executives from startups to large corporations will be at our conference). WHO SHOULD ATTEND CEO, EVP/SVP/VP, C-Level, Director, Global Head, Manager, Decision-makers, Business Executives responsible for AI Initiatives, Heads of Innovation, Heads of Product Development, Analysts, Project Managers, Analytics Managers, Data Scientists, Statisticians, Sales, Marketing, Human Resources, Engineers, AI & Software Developers, VCs/Investors, AI Consultants and Service Providers, Architects, Networking Specialists, Students, Professional Services, Data Analysts, BI Developers/Architects, QA, Performance Engineers, Data Warehouse Professionals, Pre-Sales, Technical Marketing, PMs, Teaching Staff, Delivery Managers, and other line-of-business executives. WHAT YOU WILL LEARN You’ll get up to speed on emerging techniques and technologies by analyzing case studies, develop new technical skills through in-depth workshops, and share emerging best practices in AI and future trends. The depth and breadth of what’s covered at the annual Global AI Conference requires multiple tracks/sessions.
You can either follow one track from beginning to end or pick the individual sessions that most interest you. 1. Industry Vertical Use Cases ( Where AI applications are working/not working, What hot Technologies are used to implement AI, How to develop AI applications etc..) 2. Cognitive Computing 3. Chatbot 4. Data Science, Machine Learning & Deep Learning 5. IoT 6. Security 7. NLP 8. Computer Vision 9. Home Assistant 10. Robotics 11. Neural networks 12. Data Mining and Data Analytics 13. Speech Recognition, Image processing, Unsupervised Learning 14. Workshops Conference Location Santa Clara Convention Center, 5001 Great America Parkway, Santa Clara, CA 95054 (Map) CONFERENCE HIGHLIGHTS
https://medium.com/aimarketingassociation/3rd-annual-global-artificial-intelligence-conference-january-23-25th-2019-5ff91ec70467
['Federico Gobbi']
2019-01-08 23:25:44.118000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Marketing', 'Deep Learning']
Microservices — Discovering Service Mesh
Microservices — Discovering Service Mesh Service interactions in the microservices world deal with many non-functional concerns — service discovery, load balancing, fault tolerance, etc. Service Mesh provides the platform to manage these concerns in a more efficient and cleaner way. In this article, we will understand the framework in more detail along with a sample implementation based on Istio and Spring Boot. This is the 9th part of our learning series on Spring Boot Microservices. Photo by Ricardo Gomez Angel on Unsplash Why do we need Service Mesh? The unique proposition of Service Mesh does not lie in “what it offers” instead “how it achieves it”. It solves the problems, originated in the microservices architecture, with a different and mature perspective. It offers the platform where the non-functional concerns related to service interactions, are managed more efficiently. It ensures these operational concerns are not coupled together with the business logic, as was the case with earlier solutions. We already discussed multiple microservice patterns as part of our spring-boot learning series including Service Discovery, Load Balancing, API Gateway, Circuit Breaker, and many others. Before moving further, I assume, you have a basic understanding of these patterns. We will be discussing and referring to them throughout this article. If you do not have the background on them, it will be difficult to understand “What Service Mesh offers”. You can check out our learning series to get insight into these patterns if needed. If we rewind our previous exercises, we will find that the non-functional concerns are tightly coupled with the application logic. For instance, in our service discovery exercise, we implemented the load balancing on the client service side. In our circuit breaker exercise, we implemented the decorators, again on the client service end. These solutions work fine except that they restrict software maintenance, both from infrastructure and business perspectives. Different technologies, standards, and design approaches across the multiple microservices teams create a diversified set of implementations for the same set of problems. This creates a much bigger problem to solve. Assume the scenario, where we have to implement TLS Certificates across all the “service to service communications”. If different teams start working on this, this will become a long-lasting exercise. It's primarily due to the fact that the operation logic is bundled together with the application logic. This increases the implementation complexity multifold. Teams will be busier in resolving the inconsistencies rather than focusing on their core business logic. The primary objective of Service Mesh is to segregate the non-functional concerns, primarily dealing with connecting, securing, and monitoring services, from the application code. With the help of a separate infrastructure layer, we can enable the non-functional features with almost zero impact on the existing services. Consider the case of our e-commerce system. We have a Product Catalog Service responsible for product management and a Product Inventory Service responsible for product inventory management. If our portal is interested in getting the product details the call will look as shown in the figure above. With the traditional approach, the logic of Service Discovery and Circuit Breaker will be implemented along with the application logic of Product Catalog Service. Service Mesh promises to separate this out. 
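To make that contrast concrete, here is a rough sketch of what the traditional, coupled approach can look like inside the Product Catalog Service. It is illustrative only: the class name, annotation values, and fallback method are assumptions of mine and are not taken from the article’s repository, but they show how discovery, load balancing, and circuit breaking end up living in the application code itself.
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// hypothetical sketch of the "traditional" approach: resilience and discovery
// concerns are wired directly into the service's own code
@Service
public class ProductInventoryClient {

    @Autowired
    @LoadBalanced // client-side load balancing configured inside the service
    private RestTemplate restTemplate;

    // circuit breaking declared on the business method itself
    @CircuitBreaker(name = "inventory", fallbackMethod = "inventoryFallback")
    public ProductInventory getInventory(String productId) {
        return restTemplate.getForObject(
                "http://product-inventory/inventory/" + productId,
                ProductInventory.class);
    }

    // degraded response returned when the circuit is open
    private ProductInventory inventoryFallback(String productId, Throwable cause) {
        return new ProductInventory();
    }
}
Every team ends up repeating this kind of wiring in its own codebase, and that duplication is exactly what Service Mesh sets out to remove.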
Enabling operational concerns with the help of a separate layer improves the overall maintainability of the system significantly. Also, the changes can be managed more effectively as different teams can focus on different concerns. Development teams can focus on the business logic whereas the DevOps teams can focus on implementing the infrastructure concerns. How Does Service Mesh Work? There are multiple service mesh technologies, including Linkerd, Istio, Consul, AWS App Mesh, and many more. More or less, they all work on the same proxy-based architecture. Each business service is associated with its own proxy. So in our case the Product Catalog Service will have one proxy and the Product Inventory Service will have another. The proxies reside alongside the services, which is why they are termed sidecar proxies. All the sidecar proxies reside in the data plane. They intercept all the calls to and from the service and enable the operational functionality through it. The list of operational features is long. A few examples are automatic load balancing, routing, retries, fail-overs, access controls, rate limits, automatic metrics, logs, tracing, etc. Most of these features operate at the request level. For instance, if the Product Catalog Service makes an HTTP call to the Product Inventory Service, the sidecar proxy on the Product Catalog end can load balance the call intelligently across all the instances of the Product Inventory Service. It can retry the request if it fails. Similarly, the sidecar proxy on the Product Inventory Service side can reject the call if it’s not allowed, or is over the rate limit. Another important component in Service Mesh is the control plane, which coordinates the behavior of the proxies and provides APIs to manipulate and measure the mesh. It’s responsible for managing the sidecar proxies, ingress/egress gateways, service registry, certificates, and other management aspects. Now that we understand the Service Mesh framework to some extent, let’s see how it works on the ground. We will be using Istio, the leading service mesh framework, for our sample implementation. It’s an open-source technology supported by Red Hat, Google Cloud, IBM Cloud, Pivotal, Apigee, and other technology leaders. Sample Implementation Istio is designed to be platform-independent and supports services deployed over Kubernetes, Consul, or virtual machines. We will be using Kubernetes as the underlying deployment platform. If you are new to Kubernetes, I suggest getting a basic understanding of it. You can visit my article on Working with Kubernetes for a high-level overview of this topic. For a detailed overview, you can visit its official website. Istio provides the platform to enable multiple features in the areas of traffic management, security, and observability. With this exercise, we will focus on enabling the Service Discovery and Circuit Breaker patterns along with a flavor of API Gateway. I have already covered Service Discovery using Netflix Eureka in one of the previous exercises. Similarly, I have captured the Circuit Breaker pattern based on Resilience4j and API Gateway in separate articles. In each of these exercises, the implementation of the patterns is significantly coupled to the application logic. In this exercise, we will implement these patterns independently, outside the service code.
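As a small taste of what this looks like in practice, request-level behavior such as automatic retries can be declared purely in mesh configuration, with no change to either service. The snippet below is a minimal, illustrative Istio VirtualService; the retry values are example numbers of my own and not settings used later in this exercise.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-inventory-retries
spec:
  hosts:
    - product-inventory
  http:
    - route:
        - destination:
            host: product-inventory
      retries:
        attempts: 3              # retry a failed call up to three times
        perTryTimeout: 2s        # give each attempt two seconds
        retryOn: 5xx,connect-failure
Because the sidecar proxies enforce this policy, the calling service never needs to know that retries exist.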
Service Mesh — Sample Implementation We will be implementing these patterns in the context of two Spring Boot based microservices — Product Catalog Service and Product Inventory Service. Assuming an external client is interested in getting the product details, which include product availability as well, we will see how the API Gateway, Service Discovery, and Circuit Breaker patterns are implemented in this call. We will cover the exercise with the help of the following sections. Setting up Installing Kubernetes — We will be installing Minikube, which runs a single-node Kubernetes cluster and is best suited for learning purposes. I am using a Debian machine and used the following commands to download and install the minikube package. You can get the installation instructions for other platforms at https://kubernetes.io/docs/tasks/tools/install-minikube/.
###### downloading package
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
###### installing package
$ sudo dpkg -i minikube_latest_amd64.deb
###### starting kubernetes cluster
$ minikube start
Installing Docker — We will not be able to start the Kubernetes cluster yet, as it needs an underlying virtualization technology such as containers or virtual machines to function. We will be using the most popular container option here — Docker. The following commands install Docker on my machine. You can check other installation options at https://docs.docker.com/engine/install/.
# installing through convenience script for testing purposes
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
# enabling current user to run docker
$ sudo usermod -aG docker $current_user && newgrp docker
Now that we have Docker installed, you can run minikube start to start the Kubernetes cluster. In this exercise, we will be building container images for our Spring Boot based services. If you are new to this topic, you can get a crash course on this here. Installing Kubectl — This is the command-line tool to access the Kubernetes APIs. We will be using it to manage service deployments.
# installing kubectl on Debian
sudo apt-get install -y kubectl
Installing Istio — Here comes our primary framework. We have already installed its pre-requisites, so its installation should be smooth.
######### downloading latest release of istio
$ curl -L https://istio.io/downloadIstio | sh -
######### including istio on path
$ export PATH=$PWD/bin:$PATH
######### installing istio in demo mode
$ istioctl install --set profile=demo
######### instructing istio to automatically inject envoy sidecar proxies
$ kubectl label namespace default istio-injection=enabled
By running the above commands, a lot has happened behind the scenes: We are ready to create and run containerized services with the help of Docker. We are ready with our Kubernetes cluster. Istio has installed ingress and egress gateways to control the incoming and outgoing traffic. Istio is ready to deploy its sidecar proxies. Deploying Services, Enabling Service Discovery It’s time to deploy our services. Let’s get the microservices code from our GitHub repository; this will fetch the code for all the samples. For the purpose of this exercise, we will be dealing with the samples present in the directory spring-boot/istio-example. Before we jump into creating the container images for our services, run the command $ eval $(minikube docker-env) to use the Docker environment available with minikube.
This will ensure that all the local images are stored in this environment and referenced correctly at runtime. Let’s create a container image for the Product Inventory Service. Change your working directory to spring-boot/istio-example/product_inventory. A Dockerfile is already available for this service:
###### product inventory service #####
FROM adoptopenjdk:11-jre-hotspot as builder
ARG JAR_FILE=target/product_inventory-0.0.1-SNAPSHOT.jar
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
Run the following command to build the Docker image. This will update the local Docker daemon accordingly.
##### building container image for product inventory service
$ docker build -t product-inventory:v0.0.1 .
We need to create deployment and service configurations for our Product Inventory Service. The configurations are already available in the root directory of the service: deployment-def.yaml and service-def.yaml. deployment-def.yaml creates a basic deployment configuration based on the container image product-inventory:v0.0.1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-inventory
  namespace: default
  labels:
    app: product-inventory
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-inventory
  template:
    metadata:
      labels:
        app: product-inventory
    spec:
      containers:
        - name: product-inventory
          image: 'product-inventory:v0.0.1'
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
service-def.yaml makes the Product Inventory Service accessible inside the Kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: product-inventory
    service: product-inventory
  name: product-inventory
  namespace: default
spec:
  ports:
    - port: 8080
      name: http
  selector:
    app: product-inventory
  type: ClusterIP
Run the following commands to apply these configurations:
$ kubectl apply -f deployment-def.yaml
$ kubectl apply -f service-def.yaml
The above commands have done quite a few things. Our Product Inventory Service is deployed and is accessible in the cluster. Istio has also installed a sidecar proxy for this service. If you list the container pods by running kubectl get pods, you will see something like this. product inventory service — running pod In the READY column, it displays how many containers the pod is running. If we investigate the running pod with the command kubectl describe pod product-inventory-db9686d7d-7xsz5, we will see that the pod has two containers: product-inventory and istio-proxy. This means that along with our Product Inventory Service, Istio has already installed the sidecar proxy. This proxy has the capability to intercept all the incoming and outgoing requests of the service. Also, by applying service-def.yaml we have instructed Istio to register the service in the service registry with the name product-inventory. Even if we create 10 instances of this service, we can communicate with it by referring to just the DNS name product-inventory. Service Discovery and Load Balancing will continue to happen behind the scenes. Similar to the Product Inventory Service, let’s configure and deploy the Product Catalog Service.
##### creating docker image for product-catalog service
$ cd istio-example/product_catalog
$ docker build -t product-catalog:v0.0.1 .
##### deploying product catalog service
$ kubectl apply -f deployment-def.yaml
$ kubectl apply -f service-def.yaml
With this, our Product Catalog Service is up and running. Let’s take a quick look at how the service is calling the Product Inventory Service.
Open the code for ProductCatalogService.java and check the getProductDetails API.
//get product details api
@GetMapping("/product/{id}")
public Product getProductDetails(@PathVariable String id) {
    Product product = mongoTemplate.findById(id, Product.class);
    ProductInventory productInventory = restTemplate.getForObject("http://product-inventory:8080/inventory/" + id, ProductInventory.class);
    product.setProductInventory(productInventory);
    return product;
}
In this, it’s referring to the Product Inventory Service with the DNS name product-inventory. We have enabled service discovery for both our services, and we should be able to call the getProductDetails API. But wait! To do this, we must enable access to the Product Catalog Service from outside our cluster. And this will be done with the help of the Gateway configuration. Implementing Gateway This is relatively simple. You can find the gateway configuration in the istio-example root directory with the name gateway-config.yaml.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: product-catalog-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-catalog
spec:
  hosts:
    - '*'
  gateways:
    - product-catalog-gateway
  http:
    - match:
        - uri:
            prefix: /product
      route:
        - destination:
            host: product-catalog
            port:
              number: 8080
We are defining a routing rule here, instructing Istio to forward all the requests starting with /product to the Product Catalog Service. Apply this configuration by running kubectl apply -f gateway-config.yaml. Our basic API gateway is ready with the above configuration. We can use it for multiple purposes, but let’s stick to our basic need. Run the following commands to get the service URL for the Product Catalog Service.
##### identifying service host and port
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export INGRESS_HOST=$(minikube ip)
$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
$ echo $GATEWAY_URL ## returns something like 192.168.49.2:30682
##### minikube tunnel facilitates creating a network route
$ minikube tunnel &
Now that we have the URL to access the service, let’s run curl to access the product details for test-product-123. This should return the product details successfully.
$ curl http://192.168.49.2:30682/product/test-product-123
Congratulations, you have successfully implemented API Gateway and Service Discovery for our services. Implementing Circuit Breaker The Circuit Breaker is also easy to configure. You can find the configuration in the root directory as circuit-breaker-config.yaml.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: product-inventory
spec:
  host: product-inventory
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
With this, we are applying the configuration on top of the Product Inventory Service based on a DestinationRule.
By setting maxConnections: 1 and http1MaxPendingRequests: 1 we are instructing to activate the circuit breaker if it receives requests from more than one connection at the same time. The circuit breaker will remain in an open state and will keep rejecting the requests till it starts receiving the requests from one connection only. You can use any performance testing tool to validate this behavior. You can also use the bundled tool called folio under the directory — sample-client . More details on this tool can be found here. Next Steps We successfully implemented API Gateway, Service Discovery, and Circuit Breaker with the help of Istio. We did not update our services to implement these infrastructure concerns. Instead, we used the separate infrastructure layer provided by Istio to enable them. We provided the configurations and Istio did the rest of the magic. We can use Istio to configure other traffic management aspects including request routing, fault injection, request timeouts, and ingress/egress policies. We can use its security layer to configure TLS certificates, authentication, and authorization. Observability is another important area of offering from Istio. We can use it to enable service monitoring, logs, distributed tracing, telemetry, and other visualizations. Check out more on its features at the Istio website. The solution provided by Service Mesh looks clean but as usual, it has some side effects too. It adds additional hops for each call in the form of side-car proxies. In our case, one hop is added at Product Catalog Service and another at Product Inventory Service. The additional hops in the form of proxies need additional resources — CPU and memory. As the microservices increase, this can increase the resource overhead to a good extent. We must keep a watch on this! Additionally, each of the microservices patterns, when implemented through Service Mesh, does limit the control and capabilities over the operational features to some extent. I am sure this concern will subside with time, as the technology matures. For now, this is one of the best approaches to manage your service to service interactions.
https://medium.com/swlh/microservices-discovering-service-mesh-409ed06b5128
['Lal Verma']
2020-10-30 19:30:17.377000+00:00
['Istio', 'Service Mesh', 'Microservices', 'Spring Boot', 'Software Engineering']
Turning a Page
Turning a Page Accepting and adapting to changes I now look out of my eight-floor window, each morning minus the anxiety I felt post-retirement. I feel no guilt about still being in bed, and not out at work as I know everyone else is also at home. photo by John- Mark Smith on Unsplash Whilst they are hurrying and scurrying, preparing for work from home, I too am planning my day. Unlike them, my screen time is limited, and I have the choice to decide what I want to do. It’s liberating to be the master of your own time, and I no longer envy my colleagues, who are now working from home, managing the household chores and their kids without the help they had previously. Most of them have become experts in half dressing for their online classes, waist down still in their shorts or pyjamas. Working from home was exciting for a short period, but most people long to go back to their previous routines. The novelty wore off soon enough, and they realised they missed the interaction going out to work provided. They also discovered that the work hours had increased as bosses realised they were now available twenty-four seven. photo by Helena Lopes on Unsplash The informal exchange between colleagues during coffee breaks that infused them with ideas and energy needed to take on challenging tasks is no longer an option. Continuous screen time, lack of movement and stress will have an unfavorable effect on their long-term health. We have all got accustomed to feeling safer and more secure within the confines of home, socializing with limited friends and family members. Evening walks, jogging, cycling, gardening are now replacing yoga classes and work out routines at gyms. The die-hard enthusiasts have the option of online courses, but most of us need to be out to avoid mental and emotional stress. Funny, how we have finally accepted this new lifestyle. The other day, we went out for one of those now rare dinners to a friend’s house, and it felt strange. Dressing up felt good, though wearing a mask took away some pleasure. After a long gap of six months of eating healthy and on time, the late dinner and the many array of dishes were tempting but gave me heartburn. Returning home past midnight, I wasn’t sure if the changes were all that bad. Our new routine is healthier; one leads a more disciplined life, ensuring that the food we intake is more nutritious rather than tasty. All the immunity boosters taken religiously have made me more aware, and I eat more consciously. My birthday a few days ago was different from the previous years. My spouse pleasantly surprised me with a gift I least expected. I had once expressed a desire for something and then forgot about it as I realized the expenditure was unnecessary, as this was not the time to pamper one’s vanity. My better half who usually forgets my birthday, and avoids making a frivolous expense, outdid himself, by taking me by surprise, going out to purchase the said gift and keeping it hidden till d- day. I haven’t stopped teasing him about his benevolence, wondering if, as I age, he considers me more endangered now!! Or is it his receding hairline making him more appreciative of my hair? photo credits to the author Anita Sud Another new one was the number of friends who attempted to reach out and call. I think all of us have realized the importance and significance of staying in touch and spreading love. The pandemic may have changed the way we live, but it has not affected our spirit. 
We are more in touch now with family and friends than before and do not take relationships and occasions for granted. The unpredictability of life has made us aware of people and their significance in our lives. Today we have only emotional expectations from our friends and family. The competition, healthy and unhealthy, that existed at work is history now. We appreciate small things and are less demanding. We have gained freedom from the noise and clutter of the past. One short outing instead of many trips is good enough now. I marvel at how we packed so much into a day previously. Luckily, we humans can attach and detach quickly. We change abodes, habits and lifestyles promptly, and let’s hope 2021 brings happiness, cheer and positive changes to all our lives.
https://medium.com/this-shall-be-our-story/turning-a-page-95042d280580
['Anita Sud']
2020-09-22 01:01:38.952000+00:00
['Life Lessons', 'Mental Health', 'Self-awareness', 'Relationships', 'Life']
Detect Spam Messages with C# And A CNTK Deep Neural Network
It’s a TSV file with only 2 columns of information: Label: ‘spam’ for a spam message and ‘ham’ for a normal message. Message: the full text of the SMS message. I will build a binary classification network that reads in all messages and then makes a prediction for each message if it is spam or ham. Let’s get started. Here’s how to set up a new console project in NET Core: $ dotnet new console -o SpamDetection $ cd SpamDetection Next, I need to install required packages: $ dotnet add package Microsoft.ML $ dotnet add package CNTK.GPU $ dotnet add package XPlot.Plotly $ dotnet add package Fsharp.Core Microsoft.ML is the Microsoft machine learning package. We will use to load and process the data from the dataset. The CNTK.GPU library is Microsoft’s Cognitive Toolkit that can train and run deep neural networks. And Xplot.Plotly is an awesome plotting library based on Plotly. The library is designed for F# so we also need to pull in the Fsharp.Core library. The CNTK.GPU package will train and run deep neural networks using your GPU. You’ll need an NVidia GPU and Cuda graphics drivers for this to work. If you don’t have an NVidia GPU or suitable drivers, the library will fall back and use the CPU instead. This will work but training neural networks will take significantly longer. CNTK is a low-level tensor library for building, training, and running deep neural networks. The code to build deep neural network can get a bit verbose, so I’ve developed a little wrapper called CNTKUtil that will help you write code faster. You can download the CNTKUtil files and save them in a new CNTKUtil folder at the same level as your project folder. Then make sure you’re in the console project folder and crearte a project reference like this: $ dotnet add reference ..\CNTKUtil\CNTKUtil.csproj Now I’m ready to start writing code. I will edit the Program.cs file with Visual Studio Code and add the following code: The SpamData class holds all the data for one single spam message. Note how each field is tagged with a LoadColumn attribute that will tell the TSV data loading code from which column to import the data. Unfortunately I can’t train a deep neural network on text data directly. I first need to convert the text to numbers. I will get to that conversion later. For now I’ll add a class here that will contain the converted text: There’s the Label again, but notice how the message has now been converted to a VBuffer and stored in the Features field. The VBuffer type is a sparse vector. It’s going to store a very large vector with mostly zeroes and only a few nonzero values. The nice thing about the VBuffer type is that it only stores the nonzero values. The zeroes are not stored and do not occupy any space in memory. The GetFeatures method calls DenseValues to return the complete vector and returns it as a float[] that our neural network understands. And there’s a GetLabel method that returns 1 if the message is spam (indicated by the Label field containing the word ‘spam’) and 0 if the message is not spam. The features represent the text converted to a sparse vector that we will use to train the neural network on, and the label is the output variable that we’re trying to predict. So here we’re training on encoded text to predict if that text is spam or not. Now it’s time to start writing the main program method: When working with the ML.NET library we always need to set up a machine learning context represented by the MLContext class. The code calls the LoadFromTextFile method to load the CSV data in memory. 
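As a point of reference, the data class and the loading call described above might look roughly like this. Treat it as a sketch: the column indices, file name, and header flag are assumptions based on the description of the dataset rather than code copied from the finished project.
using Microsoft.ML;
using Microsoft.ML.Data;

// one record from the spam dataset: a label column and a message column
public class SpamData
{
    [LoadColumn(0)] public string Label { get; set; }
    [LoadColumn(1)] public string Message { get; set; }
}

// ... inside the main program:
var context = new MLContext();

// load the TSV file into a data view
var data = context.Data.LoadFromTextFile<SpamData>(
    "spam.tsv",           // hypothetical file name
    hasHeader: true,      // assumption; depends on the actual file
    separatorChar: '\t');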
Note the SpamData type argument that tells the method which class to use to load the data. I then use TrainTestSplit to split the data in a training partition containing 70% of the data and a testing partition containing 30% of the data. Note that I’m deviating from the usual 80–20 split here. This is because the data file is quite small, and so 20% of the data is simply not enough to test the neural network on. Now it’s time to build a pipeline to convert the text to sparse vector-encoded data. I will use the FeaturizeText component in the ML.NET machine learning library: Machine learning pipelines in ML.NET are built by stacking transformation components. Here I am using a single component, FeaturizeText, that converts the text messages in SpamData.Message into sparse vector-encoded data in a new column called ‘Features’. The FeaturizeText component is a very nice solution for handling text input data. The component performs a number of transformations on the text to prepare it for model training: Normalize the text (=remove punctuation, diacritics, switching to lowercase etc.) Tokenize each word. Remove all stopwords Extract Ngrams and skip-grams TF-IDF rescaling Bag of words conversion The result is that each message is converted to a vector of numeric values that can easily be processed by a deep neural network. I call the Fit method to initialize the pipeline, and then call Transform twice to transform the text in the training and testing partitions. Finally I call CreateEnumerable to convert the training and testing data to an enumeration of ProcessedData instances. So now I have the training data in training and the testing data in testing. Both are enumerations of ProcessedData instances. But CNTK can’t train on an enumeration of class instances. It requires a float[][] for features and float[] for labels. So I need to set up four float arrays: These LINQ expressions set up four arrays containing the feature and label data for the training and testing partitions. Now I need to tell CNTK what shape the input data has that I’ll train the neural network on, and what shape the output data of the neural network will have: I don’t know in advance how many dimensions the FeaturizeText component will create, so I simply check the width of the training_data array. The first Var method tells CNTK that my neural network will use a 1-dimensional tensor of nodeCount float values as input. This shape matches the array returned by the ProcessedData.GetFeatures method. And the second Var method tells CNTK that I want my neural network to output a single float value. This shape matches the single value returned by the ProcessedData.GetLabel method. My next step is to design the neural network. I will use a deep neural network with a 16-node input layer, a 16-node hidden layer, and a single-node output layer. I’ll use the ReLU activation function for the input and hidden layers, and Sigmoid activation for the output layer. The sigmoid function forces the output of a regression network to a range of 0..1 which means I can treat the number as a binary classification probability. So we can turn any regression network into a binary classification network by simply adding the sigmoid activation function to the output layer. Here’s how to build this neural network: Each Dense call adds a new dense feedforward layer to the network. I am stacking two layers, both using ReLU activation, and then add a final layer with a single node using Sigmoid activation. 
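For orientation, the layer stacking described above comes down to something like the following. The Dense helper and the features variable come from the author’s CNTKUtil wrapper and the earlier Var calls, so the exact signatures may differ slightly from this sketch.
// hedged sketch of the 16-16-1 architecture; CNTKUtil helper signatures are approximate
var network = features
    .Dense(16, CNTKLib.ReLU)     // input layer: 16 nodes, ReLU activation
    .Dense(16, CNTKLib.ReLU)     // hidden layer: 16 nodes, ReLU activation
    .Dense(1, CNTKLib.Sigmoid)   // output layer: sigmoid squashes the output into the 0..1 range
    .ToNetwork();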
Then I use the ToSummary method to output a description of the architecture of the neural network to the console. Now I need to decide which loss function to use to train the neural network, and how I am going to track the prediction error of the network during each training epoch. I will use BinaryCrossEntropy as the loss function because it’s the standard metric for measuring binary classification loss. And I’ll track the error with the BinaryClassificationError metric. This is the number of times (expressed as a percentage) that the model predictions are wrong. An error of 0 means the predictions are correct all the time, and an error of 1 means the predictions are wrong all the time. Next I need to decide which algorithm to use to train the neural network. There are many possible algorithms derived from Gradient Descent that we can use here. I am going to use the AdamLearner. You can learn more about the Adam algorithm here: https://machinelearningmastery.com/adam... These configuration values are a good starting point for many machine learning scenarios, but you can tweak them if you like to try and improve the quality of the predictions. We’re almost ready to train. My final step is to set up a trainer and an evaluator for calculating the loss and the error during each training epoch: The GetTrainer method sets up a trainer which will track the loss and the error for the training partition. And GetEvaluator will set up an evaluator that tracks the error in the test partition. Now I am finally ready to start training the neural network! I need to add the following code: I am training the network for 10 epochs using a batch size of 64. During training I’ll track the loss and errors in the loss, trainingError and testingError arrays. Once training is done, I show the final testing error on the console. This is the percentage of mistakes the network makes when predicting spam messages. Note that the error and the accuracy are related: accuracy = 1 — error. So I also report the final accuracy of the neural network. Here’s the code to train the neural network. This should go inside the for loop: The Index().Shuffle().Batch() sequence randomizes the data and splits it up in a collection of 64-record batches. The second argument to Batch() is a function that will be called for every batch. Inside the batch function I call GetBatch twice to get a feature batch and a corresponding label batch. Then I call TrainBatch to train the neural network on these two batches of training data. The TrainBatch method returns the loss and error, but only for training on the 64-record batch. So I simply add up all these values and divide them by the number of batches in the dataset. That gives me the average loss and error for the predictions on the training partition during the current epoch, and I report this to the console. So now I know the training loss and error for one single training epoch. The next step is to test the network by making predictions about the data in the testing partition and calculate the testing error. This code goes inside the epoch loop and right below the training code: I don’t need to shuffle the data for testing, so now I can call Batch directly. Again I’m calling GetBatch to get feature and label batches, but note that I am now providing the testing_data and testing_labels arrays. I call TestBatch to test the neural network on the 64-record test batch. The method returns the error for the batch, and I again add up the errors for each batch and divide by the number of batches. 
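Pieced together from the description above, the testing part of the epoch loop looks roughly like this. The Batch, GetBatch, and TestBatch calls belong to the author’s CNTKUtil wrapper, so their exact signatures are an assumption here; this is an approximation of the flow rather than the literal project code.
// hedged sketch: evaluate the network on the test partition, batch by batch
var testBatches = 0;
testingError[epoch] = 0.0;
testing_data.Batch(batchSize, (data, begin, end) =>
{
    // grab a feature batch and the matching label batch from the test arrays
    var featureBatch = features.GetBatch(testing_data, begin, end);
    var labelBatch = labels.GetBatch(testing_labels, begin, end);

    // measure the classification error on this batch and accumulate it
    testingError[epoch] += evaluator.TestBatch(
        new[] { (features, featureBatch), (labels, labelBatch) });
    testBatches++;
});
testingError[epoch] /= testBatches;   // average error over all test batches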
That gives me the average error in the neural network predictions on the test partition for this epoch. After training completes, the training and testing errors for each epoch will be available in the trainingError and testingError arrays. Let’s use XPlot to create a nice plot of the two error curves so we can check for overfitting: This code creates a Plot with two Scatter graphs. The first one plots the trainingError values and the second one plots the testingError values. Finally I use File.WriteAllText to write the plot to disk as an HTML file. I am now ready to build and run the app! First I need to build the CNTKUtil library by running this command in the CNTKUtil folder: $ dotnet build -o bin/Debug/netcoreapp3.0 -p:Platform=x64 This will build the CNTKUtil project. Note how I’m specifying the x64 platform because the CNTK library requires a 64-bit build. Now I need to run this command in the SpamDetection folder: $ dotnet build -o bin/Debug/netcoreapp3.0 -p:Platform=x64 This will build the app. Note how I’m again specifying the x64 platform. Now I can run the app: $ dotnet run The app will create the neural network, load the dataset, train the network on the data, and create a plot of the training and testing errors for each epoch. Here’s the neural network being trained on my laptop: And here are the results: The final classification error is 0 on training and 0.010 on testing. That corresponds to a final accuracy on testing of 0.99. This means the neural network makes 99 correct predictions for every 100 messages. These seem like amazing results, but notice how the training and testing curves start to diverge at epoch 2. The training error continues to converge towards zero while the testing error flatlines at 0.01. This is classic overfitting. Overfitting means the network keeps fitting the training messages ever more closely without getting any better on messages it has not seen. The model is not sophisticated enough to capture the complexities of the patterns in the data. And this is to be expected. Processing English text is a formidable problem and an area of active research, even today. A simple 32-node neural network is not going to be able to generate accurate spam predictions. What if we increased the complexity of the neural network? Let’s double the number of nodes in the input and hidden layers: The neural network now has 766,881 configurable parameters to train during each epoch. This is a massive network, but what will the results look like? Well, check it out: Nothing has changed. I’m still getting a training error of zero and a testing error of 0.01. And the curves still diverge, now at epoch 1. Let’s go all out and crank up the number of nodes to 512: The neural network now has an astounding 12,515,841 trainable parameters. And here are the results: Again no change. What’s going on here? The reason this isn’t working is that the original neural network was big enough already. The 16 input nodes can check for the presence of 16 different words in a message to determine if the message is spam or not. That’s actually more than enough to do the job. The reason I’m getting poor results is that the meaning of an English sentence is determined by the precise sequence of words in the sentence. For example, in the text fragment “not very good”, the meaning of “good” is inverted by the presence of “not very”. If I simply check for the presence of the word “good”, I get a totally incorrect picture of the meaning of the sentence.
My neural network looks at all the words in a message at once, ignores their order, and simply tests for spam by checking whether certain words appear anywhere. This approach is not good enough for language processing. So what do you think? Are you ready to start writing C# machine learning apps with CNTK?
https://medium.com/machinelearningadvantage/detect-spam-messages-with-c-and-a-cntk-deep-neural-network-a83aca2a209e
['Mark Farragher']
2019-11-19 14:56:22.994000+00:00
['Programming', 'Data Science', 'Artificial Intelligence', 'Csharp', 'Machine Learning']
Why I Started Writing Shorter Articles
I started with an article. And then another. And then another. My brain was on fire that weekend. I didn’t wake up with the plan to just sit and write. All I knew is that I wasn’t going to go out for the day without writing something I could put on my blog. The irony is that I had been trying to focus on longer content, but my hand was a bit forced. I had written an article about quantity over quality which I wanted to live up to. My time had been heavily constrained with work and my baby. I also found it was a lot less pressure to write something when I lowered my minimum word count for what I was willing to write. Aiming Closer My aim was to write articles which were roughly 1,500 to 2,000 words, with a minimum of 1,500. I used the same approach for pretty much any kind of article, with the exception of certain short articles. This kind of structure gave me a concrete goal to keep my writing more consistent. I could write things outside of this, but this was my general benchmark for a “complete” article. One day, I noticed I could knock out a short article which was roughly 1,000 words in less than half the time it took to knock out one which was 1,500 words. Writing two 1,000 word articles also left me feeling less tired than I did after a single 1,500 word article. I had many articles I stalled out on or forced an extra point into to make my minimum. I also had multiple factors from work and life which made it increasingly difficult to allocate time. If I could get the free time, I could knock out more, but that’s not really an option at present. I decided to start shooting for somewhere between 750 to 1,000 words as my minimum depending on the type of content. Certain content I wanted to hit 1,200 words before I felt finished, but if I stopped or ran out of ideas, I ended it and moved on. This strategy has worked amazingly so far. My writing has become more organic, though a bit more volatile. I can feel the growth from the writing process more immediately and I can keep the heat up. I have basically been able to double my output of content without feeling rushed or that I’m missing something. When I want to write more, I do. If I hit a dead end, I wrap it up and move on. Quantity Over Quality Sometimes you just need to do more to get more practice. Instead of obsessing over perfection, drop it and move on. Splitting up a task into smaller tasks means more practice with each individual component. By shrinking the minimum I was aiming for, I could produce more, and it ended up faster. There is more to writing the just writing itself, especially when creating blog content or writing for something like Medium. You have to consider research, planning, writing, rewriting and editing, media production or procurement, title creation and summary, and polishing. Some of these factors have a fixed cost, some grow evenly with the word count, and some can grow exponentially for minimum time required per step. Skill Sets Image by Free-Photos from Pixabay Each of these factors is also its own skill set. Research won’t make your writing itself better, but it provides better evidence and better topics to write about. Media production or procurement just enhances your writing and can help make a better product. Planning makes the writing more coherent and consistent and can give a scaffolding. Rewriting, editing, and polishing are all their own skills which temper writing into something better and better at different steps of the process. 
Title creation and summary writing are their own kind of writing entirely which impact how your writing is received. By shortening the writing cycle, I get more practice on the skills that can help shape my writing as well as my writing itself. I can also test more ideas since the cycle is much shorter and a bad article is less of a hit to my productivity. Repeating the process more means I can focus on how everything goes together instead of trying to kill 200 more words for the sake of a number on my screen. Working Around Time Constraints Image by annca from Pixabay My job has calmed down, but can still take a toll on my time outside the office. As my baby gets bigger and bigger, she gets to need more and more time with me. She doesn’t want to sleep early anymore either. Most tasks have a warm-up period before getting productive, and writing is no exception. I have fewer and fewer blocks of time I can allocate to my writing, so I had to simplify my workflow to make use of what I had. It takes me a lot longer to get into the flow when I have to catch up on a massive amount of text. Smaller articles have a lower associated cost to get back in the flow. I worked as an editor for years, so I have a specific workflow which requires periods of focus. The longer the article, the longer the period necessary. Shortening the writing cycle means I don’t need as much time so I can play fit my editing blocks in gaps of free time more efficiently. My kid may not cooperate to give me productive writing time for days if she’s going through a growth spurt. The more of the process I can fully complete, the easier it is to keep the momentum going. I can knock a shorter article out on a moderately bad night now. If the baby or work don’t cooperate, I don’t get stuck halfway through the process. Lowering Pressure Image by Jan Vašek from Pixabay More practice and lower time constraints on individual steps lead to less pressure. I don’t have to force the article, I can end it when I want. Setting a limit may arguably be restrictive, but I find it gives me structure which makes me write better. If I just sit down without some end in sight, I’ll either ramble or not write much. I may write for the sake of writing, but that doesn’t mean I don’t have a process for writing. If I set a minimum, I feel a need to reach it, but some articles just don’t have 1,500 good words in them. A good writer can arbitrarily hit that (or pretty much any arbitrary standard), but I never said I was good. Setting my standards lower and surpassing them has helped me keep on track and write better. Don’t make the bar low enough to be pointless, but an easy win is still a win and can still provide great feedback. Why It Works I tend to obsess over the ritual for my writing process. The structure makes me pace myself and not burn myself out. If you tell me to run a mile, I’ll sprint until I can barely walk the last 9/10ths (I’m also not a runner), but if you tell me to run for 20 minutes, I’ll jog at a consistent speed. Setting conditions and restrictions forces me to pace myself. Writing is a release for me, and by controlling the release, I am able to get the most out of it for myself. This advice may not be as applicable to you if you have plenty of free time and write organically. By slashing my articles down, I have more time to focus on other aspects of writing and perfect my overall process. I can fit more small sessions in where I can, and I feel a lot less pressure to finish an article. 
Try writing less and see if you don’t get more out of it. Featured image by Jess Watters from Pixabay
https://medium.com/swlh/why-i-started-writing-shorter-articles-a15be0e214e
['Some Dude Says']
2019-11-12 18:28:30.281000+00:00
['Writing Tips', 'Productivity', 'Writing', 'Self', 'Writer']