Dataset columns: title (string, 1–200 chars), text (string, 10–100k chars), url (string, 32–885 chars), authors (string, 2–392 chars), timestamp (string, 19–32 chars), tags (string, 6–263 chars).
Spotisis: Analysis of My Spotify Streaming History
You might be wondering how Spotify built its 2020 Wrapped for every user. After reading this you will see that it is easy, and anybody can do it with their own Spotify account. I'll be showing you details about my Spotify streaming history, and I'll be comparing my music with that of my friend Sreelekshmy.

Spotify lets you download your data. All you have to do is go to the privacy settings in your account dashboard and click "Request data". It is usually available in 3–4 days, even though Spotify says it might take 30 days. I'll be using the JSON file "StreamingHistory0.json" they provided me for this project. You can find the source code at the GitHub link provided at the end of the article. So let's get started.

These are the things I'm going to analyze:
- Timeline of my streaming history
- Day preference
- Favourite artist
- Favourite songs
- Diversity
- Spirit of songs

Part A

The first song I heard on Spotify was Old Town Road (Jessie James Decker Version), which was the first object in my JSON. Even though I heard my first song last November, I didn't use Spotify frequently at first. So the first thing I'll share is my minutes streamed per day. You can see a spike starting to rise from March 25, 2020; you know the reason :D. On August 28, 2020, the graph says I streamed about 260 minutes. (Figure: streaming history.)

I use Spotify mostly when I'm working; it makes me do my work faster. You can see that from the pie chart below.

Favourite artist: Everybody who knows me knows I'm a big fan of A.R. Rahman and One Direction. I have played One Direction songs 352 times and A.R. Rahman songs 254 times. But when it comes to uniqueness, I have played 61 different songs by A.R. Rahman compared to 41 by One Direction. (Figure: the bigger the circle, the more unique songs I have played by that artist.)

Favourite song: In the graph, you can see one song staying way ahead of the others: "The Nights". This is my all-time favourite. The first time I heard it was during my first year at college; I may have heard it more than a thousand times in my entire life.
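The post doesn't inline its code (it's linked at the end), but a minimal pandas sketch of these first analyses would look roughly like this, assuming the fields Spotify's export normally contains (endTime, artistName, trackName, msPlayed):

```python
import pandas as pd

# Load the streaming history Spotify exported
history = pd.read_json("StreamingHistory0.json")

# Minutes streamed per day
history["endTime"] = pd.to_datetime(history["endTime"])
history["minutes"] = history["msPlayed"] / 60_000
per_day = history.groupby(history["endTime"].dt.date)["minutes"].sum()

# Most-played artists and songs, and unique tracks per artist
top_artists = history["artistName"].value_counts().head(10)
top_songs = history["trackName"].value_counts().head(10)
unique_tracks = history.groupby("artistName")["trackName"].nunique().sort_values(ascending=False)

print(per_day.tail())
print(top_artists)
print(top_songs)
print(unique_tracks.head(10))
```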
Spotify assigns each song certain audio attributes; if you're a musician you might be familiar with some of them. The attributes Spotify provides for a song are as follows:

Danceability — how suitable a track is for dancing, based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable.
Energy — a measure from 0.0 to 1.0 representing a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy.
Instrumentalness — predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. The closer the value is to 1.0, the greater the likelihood the track contains no vocal content.
Liveness — detects the presence of an audience in the recording.
Loudness — the overall loudness of a track in decibels (dB). Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.
Speechiness — detects the presence of spoken words in a track.
Valence — a measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track.
Tempo — the overall estimated tempo of a track in beats per minute (BPM). In musical terminology, the tempo is the speed or pace of a given piece and derives directly from the average beat duration.
Mode — indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor by 0.
Key — the estimated overall key of the track.

I'll be comparing five features of my top 5 songs: danceability, instrumentalness, speechiness, energy and loudness.

Song diversity: Do I listen to positive songs? Spotify provides an attribute called valence for this. The valence scale runs from 0 to 1, with 1 being the most positive mood conveyed by the track. When I plotted the histogram of my top 50 songs, it showed that I listen to mostly less positive songs, and the Venn diagram shows that 28 of those songs are low-spirited (valence < 0.5). (Figures: histogram of the valence of my top 50 songs; Venn diagram.)

Part B

In this part, I'll compare my top 50 songs with my friend Sreelekshmy's playlist. I listen to more energetic songs, but she listens to songs with a more positive mood than mine, and the danceability of my songs is higher than hers. I have also compared the tempo and other audio features of our playlists.
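For the valence part, a small Plotly sketch along these lines would produce the kind of histogram described above; the DataFrames here are illustrative placeholders, since the audio features would first have to be fetched (for example through the Spotify Web API) for each person's top 50 songs:

```python
import pandas as pd
import plotly.express as px

# Placeholder values; in practice these hold the valence of each person's top 50 songs
mine = pd.DataFrame({"valence": [0.23, 0.41, 0.55, 0.12, 0.67]})
friend = pd.DataFrame({"valence": [0.72, 0.64, 0.58, 0.81, 0.49]})

mine["listener"] = "me"
friend["listener"] = "Sreelekshmy"
both = pd.concat([mine, friend])

# Histogram of valence; values below 0.5 count as "low spirits" in the article's terms
fig = px.histogram(both, x="valence", color="listener", nbins=10, barmode="overlay")
fig.add_vline(x=0.5, line_dash="dash")
fig.show()
```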
https://medium.com/analytics-vidhya/spotisis-analysis-of-my-spotify-streaming-history-50dc1dbbb6c
['Appu Aravind']
2020-12-07 15:13:40.556000+00:00
['Python', 'Spotify', 'Plotly', 'Data Visualization']
3 Things I’ve Learned After 3 Weeks on the Partner Program
I decided to dive into Medium’s Partner Program three weeks ago. I’ve talked about my long-term plan to build a creative life. A crucial piece of that was to start earning a side income. I’m working on the transition away from my current day job, and I knew that I needed to start building the foundation. I’ve been incredibly fortunate by being curated early in my process. That’s helped kickstart this whole project immensely. It’s both given me a massive boost of encouragement, and far exceeded what I thought I’d achieve this soon. I was expecting to earn a few cents this month. Instead, I broke past ten dollars. Even in my wildest dreams, I wasn’t hoping to make double-digits in my first month on Medium. It’s not just about the money, but the money is a validation that I’m on the right path. I’ve wanted to be a writer for as far as I can remember, and now I’m being paid for it. I’ve made a few minor mistakes so far, nothing major. I’ve also learned three essential lessons. 1. I rediscovered my integrity I try to live with integrity. It’s something I value highly in my day job. I consistently push to do the right thing, not just the quick fix. Sometimes that causes problems for me when I’m too unwilling to bend. I wasn’t aware of how this carried over to my beliefs as a writer. I quickly discovered that I still have the same value when it comes to my writing. I’ve passed on publishing this article twice already. I started putting this together around my 2-week mark while I was waiting for some stories to come out in publications where I’d submitted. I’ve decided to take the high-ground I was getting a bit uneasy about not having something released. I quickly wrote the original draft for this article. Most of it was just simple comments about curation, getting a signal boost from a publication, that sort of thing. There wasn’t anything new there. There was very little of myself in that writing. So I didn’t publish it. I hit the same feeling yesterday and nearly published this article then. It was late on Monday, and I didn’t have anything to release. It wasn’t bad. But it was nowhere near the work I’d recently written. For the second time, I held back on publishing this story.
https://medium.com/the-partnered-pen/3-things-ive-learned-after-3-weeks-on-the-partner-program-eb2ebb362eb3
['Andrew Dacey']
2019-12-03 14:55:10.036000+00:00
['Authenticity', 'Writing', 'Lessons Learned', 'Creativity', 'Integrity']
Silhouette Method — Better than Elbow Method to find Optimal Clusters
Silhouette Method — Better than Elbow Method to find Optimal Clusters

A deep-dive analysis of the Silhouette Method for finding the optimal number of clusters in k-Means clustering. (Image by Mediamodifier from Pixabay.)

Hyperparameters are model configuration properties that define the model and remain constant during training. The design of the model can be changed by tuning the hyperparameters. For K-Means clustering there are three main hyperparameters to set up to define the best configuration of the model:
- Initial values of the clusters
- Distance measure
- Number of clusters

The initial values of the clusters greatly impact the clustering model, and there are various algorithms to initialize them. Distance measures are used to compute the distance from points in a cluster to the cluster center; different distance measures yield different clusters. The number of clusters (k) is the most important hyperparameter in K-Means clustering. If we already know the number of clusters to group the data into, there is no need to tune k; for example, k=10 for the MNIST digit classification dataset. If we have no idea of the optimal value of k, there are various methods to find it. In this article we will cover two such methods:
- Elbow Method
- Silhouette Method

Elbow Method: The Elbow Method is an empirical method to find the optimal number of clusters for a dataset. In this method, we pick a range of candidate values of k, then apply K-Means clustering for each value of k, find the average distance of each point in a cluster to its centroid, and represent it in a plot. We then pick the value of k at which the average distance falls suddenly. (Image by Author: Elbow Method to find the optimal k.)

With an increase in the number of clusters (k), the average distance decreases. To find the optimal number of clusters, observe the plot and find the value of k at which there is a sharp and steep fall of the distance; this will be the optimal value of k, where the elbow occurs. In the above plot there is a sharp fall of the average distance at k=2, 3, and 4, so it is not obvious which of these is the best value of k. In the plot below, observe the clusters formed for k=2, 3, and 4 together with their average distances. (Image by Author: Scatter plot of the clusters formed at k=2, 3, and 4.)

This data is 2-D, so it is easy to visualize and pick the best value of k, which is k=4. For higher-dimensional data, we can employ the Silhouette Method to find the best k, which is a better alternative to the Elbow Method.
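As a minimal scikit-learn sketch of both methods (the blob dataset and the range of k are placeholders, not the author's data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Placeholder 2-D data with four true clusters
X, _ = make_blobs(n_samples=500, centers=4, random_state=42)

for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    # Elbow Method: inertia is the sum of squared distances of points to their closest centroid
    inertia = km.inertia_
    # Silhouette Method: mean silhouette coefficient over all points, higher is better
    silhouette = silhouette_score(X, km.labels_)
    print(f"k={k}  inertia={inertia:.1f}  silhouette={silhouette:.3f}")
```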
https://towardsdatascience.com/silhouette-method-better-than-elbow-method-to-find-optimal-clusters-378d62ff6891
['Satyam Kumar']
2020-10-18 18:34:02.056000+00:00
['Machine Learning', 'Artificial Intelligence', 'Education', 'Data Science', 'Clustering']
How Creative People Work
How Creative People Work I asked a bunch of writers, journalists, and other creatives how they manage their creative systems. Recently, I’ve been doing two things that have really helped my writing: 1. For the past year, I’ve been using a spreadsheet to track my writing, input, and a few other factors. I’ve found it really illuminating to see how streaks of writing daily can benefit me, when I really need to take a break to recharge, how reading/watching in new genres has helped me break out of ruts, and how exercise, unfortunately, is very highly correlated with me writing more, and drinking alcohol is not. This is not about productivity as much as finding a balance that lets me enjoy the act of creating, and make it special and distinct from my other work. Here’s the spreadsheet I use, which is a version of something I saw from writer Leigh Stein (if you haven’t read her excellent satirical novel “Self Care” yet, buy it here!). Feel free to make a copy and give it a try yourself, or adjust the columns, or create something totally new. 2. Since October, I’ve been using a simple word count template (created by author Hannah Orenstein) to track the progress on my novel draft on Instagram. This works for me because as a short piece writer, I was used to creating, submitting, and publishing on a much faster pace, and I missed the engagement with other people as I went. So now I get people weighing in, cheering me on, and giving me little bits of encouragement to keep going. Follow me on IG here to see them, and if you DM me there I’ll share the template with you! These two systems work for me. What hasn’t worked for me? Trello, A paper journal. My Notes app. Voice dictation. Many other things. But as you’ll see below, those tools DO work for a lot of other writers! I asked people to share their creative working systems as well as something they wanted to shout out this year. Check them out, and see if any of them might be something to try in 2021:
https://kunkeltron.medium.com/how-creative-people-work-712fb2bbca9b
['Caitlin Kunkel']
2020-12-16 16:21:47.575000+00:00
['Creative Process', 'Writing Life', 'Writing', 'Writing Tips', 'Creativity']
Ansible For AWS — Manage Your Cloud Infrastructure Easily
Ansible for AWS — Edureka Companies have invested a large amount of time and money developing and installing software to improve their operations. The introduction to cloud computing offered their business to access software on the internet as service which proved to be more efficient and safe. Integrating an IT automation tool like Ansible which will easily provision and manage your cloud infrastructure like AWS is like hitting the jackpot. And that’s what we’re going to talk about in this Ansible for AWS article. Agenda: Why Companies Migrate To The Cloud? Ansible Features Why Use Ansible For AWS Demo: Automate the provisioning of an EC2 Instance using Ansible Why Companies Migrate To The Cloud? As mentioned earlier, Could Computing lets companies access servers like software over the internet. To make it clear, Cloud Computing is like plugging into a central power grid instead of generating your own power. Cloud has become the new normal and this ends up saving a lot of time and money. Let’s have a look at a few advantages of why companies migrate to the cloud. 1. Flexibility: Business growth is never static. Cloud-based services are suitable for growing and fluctuating business demands. A feature to scale up and scale down your deployment based on the requirement makes it very flexible. 2. Disaster Recovery: Every business should have invested in disaster recovery. Every fortune company ends up investing a ton lot on disaster recovery. Startups and low budget companies lack the money and the required skill for this and are unable to have a proper functional disaster recovery trait. Cloud provides disaster recovery solutions for the customers to develop robust and cost-effective plans. 3. Automatic Software Updates: As you already know, the cloud is the service provided by the internet and hence all the servers are out of your reach or rather not your headache. Suppliers take care of them which includes updating when required and running regular security check-ups. This again ends up saving a lot of time and money. 4. Reduced Costs: Establishing a data center from scratch can get expensive. Running and maintaining adds up to the expenses. You need the right technology, right hardware, right staff with the right knowledge and experience which just sounds like a lot of work to me. Also, not very promising, there are a million ways this could go wrong. Migrating to the cloud gives you this plus point. 5. Scalability: The traditional way of planning for unexpected growth is to purchase and keep additional servers, storage, and licenses. It may take years before you actually use them. Cloud platforms allow you to scale up these resources as in when needed. This dynamic scaling goes perfectly for unpredictable growth. 6. Data Security: Most of the times, it’s better to keep your data on the cloud over storing them on a physical device like laptops or hard disks. There are high chances of these physical devices getting stolen or shattered. Cloud allows you to remotely either remove the data or transfer them to another server making sure that data remains intact and safe. 7. Increased Collaboration: Using cloud platforms allows the team to access, edit and share documents anytime, anywhere. They are able to work together hence increasing the efficiency. This also provides real-time and transparent updates. Ansible Features Ansible has some unique features and when such features collaborate with Amazon Web Services, leaves a mark. 
Let's have a look at these incredible features:
- Ansible is based on an agentless architecture, unlike Chef and Puppet
- Ansible accesses its hosts through SSH, which makes communication between servers and hosts feel like a snap
- No custom security infrastructure is needed
- Configuring playbooks and modules is super easy, as they follow the YAML format
- It has a wide range of modules for its customers
- It allows complete configuration management, orchestration, and deployment capability
- Ansible Vault keeps secrets safe

Why Use Ansible For AWS?

Now that we've gone through the benefits of using a cloud platform like AWS and the unique features of Ansible, let's have a look at the magic created by integrating these two.

1. Cloud As A Group Of Services
The cloud is not just a group of servers in someone else's data center, but much more than that; you'll realize that once you've deployed your services on it. There are many services available that let you rapidly deploy and scale your applications. Ansible automation helps you manage your AWS environment as a group of services rather than as a group of servers.

2. Ansible Modules Supporting AWS
Ansible is used to define, deploy and manage a wide variety of services. Even the most complicated AWS environments can be provisioned very easily using a playbook. The best feature is that you create a server-host connection, run the playbook on just one system, and provision multiple other systems, with the option to scale up and scale down as per requirement. Ansible has hundreds of modules supporting AWS, and some of them include:
- Auto Scaling groups
- CloudFormation
- CloudTrail
- CloudWatch
- DynamoDB
- ElastiCache
- Elastic Compute Cloud (EC2)
- Identity and Access Management (IAM)
- Lambda
- Relational Database Service (RDS)
- Route 53
- Security Groups
- Simple Storage Service (S3)
- Virtual Private Cloud (VPC)
- and many more

3. Dynamic Inventory
In a development environment, hosts keep spinning up and shutting down to meet changing business requirements. In such cases a static inventory might not be sufficient, and these situations call for a dynamic inventory. A dynamic inventory lets you map hosts into groups provided by inventory scripts, unlike a normal inventory, which forces you to map hosts manually, which is very tedious.

4. Safe Automation
Assume that you have a team of 5 people and each of them has two subordinates who are not completely skilled. You wouldn't want to give them complete access to the entire deployment process, and that's when you realize the need to restrict authorization. Ansible Tower delivers this feature: you choose who can do what, which makes it easier to moderate. Ansible Tower also encrypts credentials and other sensitive data, and you give the subordinates access only to relevant resources while restricting their access to irrelevant ones.

Demo: Automate The Provisioning Of An EC2 Instance Using Ansible

In this demo section, I'm going to demonstrate how Ansible supports AWS by showing how to automate the starting and provisioning of an EC2 instance. Let's get started.

Step 1: Install Ansible on your server node and set up an SSH connection between your server and the client nodes on AWS. In this case, I have created two EC2 instances: one server on which Ansible is installed, and the other as the client.

Step 2: Now make sure you have all the requirements installed.
According to the documentation, these are the requirements.

Install Python using the following command:
    $ sudo apt install python

Install boto using the following commands:
    $ sudo apt install python-pip
    $ pip install boto

Boto is a Python interface for using Amazon Web Services. You can verify the installation by importing it:
    $ python
    >>> import boto
    >>> exit()

Step 3: You have to configure your AWS credentials. Use the following command:
    $ aws configure
and add your AWS access key id, secret key and default region (which is optional).

Step 4: Write a playbook to start and provision an EC2 instance:
    $ sudo vi /etc/ansible/launch.yml

Mention the below lines:

    ---
    - name: Create an ec2 instance
      hosts: web
      gather_facts: false
      vars:
        region: us-east-1
        instance_type: t2.micro
        ami: ami-05ea7729e394412c8
        keypair: priyajdm
      tasks:
        - name: Create an ec2 instance
          ec2:
            aws_access_key: '********************'
            aws_secret_key: '****************************************'
            key_name: "{{ keypair }}"
            group: launch-wizard-26
            instance_type: "{{ instance_type }}"
            image: "{{ ami }}"
            wait: true
            region: "{{ region }}"
            count: 1
            vpc_subnet_id: subnet-02f498e16fd56c277
            assign_public_ip: yes
          register: ec2

It's good practice to know what the code does before actually executing it, so let me explain this playbook for better understanding.

name: It can be literally anything; a good practice is to use a name that gives a basic description of the task it performs.
hosts: Mentions the name of the host list against which the playbook needs to be executed. In my case it's web.
gather_facts: This parameter tells Ansible whether to gather all the relevant facts, variables and other data for future reference. In our case we've set it to false, because we have no use for collecting facts (IP address, hostname, etc.).
vars: This section defines and initializes all the variables that we'll be using in this playbook. We have four variables here:
- region defines the region in which the EC2 instance needs to come up
- instance_type defines the type of instance we're trying to bring up; in our case, t2.micro
- ami defines the AMI of the instance we're trying to bring up
- keypair defines the key pair to be used for the instance
ec2: This is a module provided by Ansible used to start or terminate an EC2 instance. It has certain parameters that specify the other properties of the EC2 instance we're trying to start. We start by mentioning the AWS access key id and secret key using the parameters aws_access_key and aws_secret_key, and then:
- key_name: pass the variable that defines the key pair being used
- group: mention the name of the security group; this defines the security rules of the EC2 instance we're trying to bring up
- instance_type: pass the variable that defines the type of instance we're using
- image: pass the variable that defines the AMI of the image we're trying to start
- wait: a boolean; if true, the module waits for the instance to reach the desired state before returning
- region: pass the variable that defines the region in which the EC2 instance needs to be created
- count: specifies the number of instances that need to be created; in this case I've mentioned only one, but this depends on your requirements
- vpc_subnet_id: pass the subnet id in which you wish to create the instance
- assign_public_ip: a boolean; if true, as in our case, a public IP will be assigned to the instance when it is provisioned within the VPC

Step 5: Now that you've understood every line in the playbook, let's go ahead and execute it. Use the following command:
    $ ansible-playbook /etc/ansible/launch.yml

Once you've executed the playbook, you'll see that an instance is created. And TADA! You've successfully automated the provisioning of an EC2 instance. In the same way, you can also write a playbook to stop the EC2 instance. This brings us to the end of the Ansible For AWS article. If you wish to check out more articles on the market's most trending technologies like Artificial Intelligence, Python and Ethical Hacking, you can refer to Edureka's official site. Do look out for other articles in this series, which will explain various other aspects of DevOps.
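For completeness, and only as a rough illustration that is not part of the original tutorial: the ec2 module drives AWS through boto, which was installed back in Step 2, and the same launch can be expressed directly in Python with boto3 (the current successor of boto). The AMI, key pair and subnet below mirror the playbook's values, and the security group id is a placeholder:

```python
import boto3

# Sketch of what the playbook's ec2 task does, expressed with boto3.
ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-05ea7729e394412c8",
    InstanceType="t2.micro",
    KeyName="priyajdm",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-02f498e16fd56c277",
        "AssociatePublicIpAddress": True,
        "Groups": ["sg-xxxxxxxx"],  # placeholder id of the launch-wizard-26 security group
    }],
)

instances[0].wait_until_running()  # comparable to the playbook's "wait: true"
print("Launched:", instances[0].id)
```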
https://medium.com/edureka/ansible-for-aws-provision-ec2-instance-9308b49daed9
['Saurabh Kulshrestha']
2020-09-09 11:18:40.391000+00:00
['Amazon Web Services', 'Cloud Computing', 'DevOps', 'Ansible', 'AWS']
On Becoming A Writer
When Jeff Nobbs (my partner) and I returned to San Francisco last September after several months on the road, I fell into a deep panic because I didn’t know what the heck I was going to do next with my life. Naturally, the most frequently asked question of us at that time was: “So, what’s next for you guys?” This anxiety was further magnified because Jeff always had a good answer to the question — he had many options awaiting him, and by October, he’d picked one and pursued it. Yes, yes, I know comparison is the thief of joy, but I felt like I’d regressed to my post-college years, totally clueless as to what my life calling was, what I was good at, what I enjoyed and where I might add value. I was simultaneously paralyzed by the infinite optionality put forth by modern society. At one point, I’d listed out 30+ career paths I would be interested in exploring further, one of which was a “Wholeness Mentor” — like wtf is that? I don’t even know. For the next couple of months, I worked on a project here, a project there, and explored, however briefly, what it would be like to start a business with my best friend. It was fun while it lasted, but I came to realize while I liked the idea of being an entrepreneur, I didn’t actually like being one. The zealous city of San Francisco and its people had primed me into thinking everyone and anyone could be an entrepreneur, including myself, but when it came down to it, I, quite simply, didn’t enjoy the day-to-day of being one, of always having to be “on” and having to answer to everyone and their mothers. I’d deluded myself into believing I wanted to be an entrepreneur, but I quickly realized: 1) I’m not that motivated by problem-solving, and 2) I’m not that ambitious. These are lamentable things to admit to yourself, especially living in a city that often feels like it’s populated by the world’s valedictorians, over-achievers and people who won “Best All-Around” in their high school yearbook, where every single person wants to change the world for the better. To admit that I didn’t care much about making the world a better place without eroding my sense of self-worth was tough, really tough.
https://medium.com/sumofourparts/on-becoming-a-writer-6da293335a3d
['Renee Chen']
2019-04-08 23:04:24.032000+00:00
['Life Lessons', 'Writing', 'Self Improvement', 'Life', 'Creativity']
How to Support Secured Connections inside Micronaut’s GraalVM
Micronaut has great out-of-the-box support for GraalVM. I tried a simple task: create a Micronaut function, handled by Amazon API Gateway, which connects to Amazon RDS PostgreSQL. The following command generates the skeleton of the function handling the API Gateway proxy using GraalVM:

    mn create-app graalvm-function --features aws-api-gateway-graal

See Custom GraalVM Native Runtimes for more information about API Gateway and GraalVM functions. Next, we create a domain class, controller and repository using Micronaut Data JDBC; feel free to copy from the Pet Clinic example. How to use Micronaut Data JDBC goes beyond this post, but there is one important part of the documentation, Going Native with GraalVM. Please follow the Configuration for Postgres section.

Now we come to the part which is not included in any guide yet. If you deploy the function with the deploy.sh command and test it, either from AWS Lambda or by setting up an API Gateway endpoint and executing the HTTP command, you will get the following warning with an exception:

    WARNING: The sunec native library, required by the SunEC provider, could not be loaded. This library is usually shipped as part of the JDK and can be found under <JAVA_HOME>/jre/lib/<platform>/libsunec.so. It is loaded at run time via System.loadLibrary("sunec"), the first time services from SunEC are accessed. To use this provider's services the java.library.path system property needs to be set accordingly to point to a location that contains libsunec.so. Note that if java.library.path is not set it defaults to the current working directory.

    com.amazonaws.serverless.exceptions.ContainerInitializationException: Error starting Micronaut container: Bean definition [javax.sql.DataSource] could not be loaded: Error instantiating bean of type [javax.sql.DataSource]: sun.security.ec.ECKeyPairGenerator.isCurveSupported([B)Z [symbol: Java_sun_security_ec_ECKeyPairGenerator_isCurveSupported or Java_sun_security_ec_ECKeyPairGenerator_isCurveSupported___3B]

The reason is that the function sets up a secured connection to the PostgreSQL instance inside the VPC using the SunEC library. The library is provided by the libsunec.so file inside the GraalVM distribution. There are two steps which need to be accomplished.

First, copy the libsunec.so file into the deployment archive function.zip. In the Dockerfile located in the root of the project, replace the line

    RUN zip -j function.zip bootstrap server

with the following two lines:

    RUN cp /usr/lib/graalvm/jre/lib/amd64/libsunec.so libsunec.so
    RUN zip -j function.zip bootstrap server libsunec.so

Second, replace the last line of the bootstrap script with the following, to let the application find the libsunec.so library file:

    ./server -Djava.library.path=$(pwd)
https://medium.com/agorapulse-stories/how-to-support-secured-connections-inside-micronauts-graalvm-730800d2ed03
['Vladimír Oraný']
2020-01-21 14:26:38.207000+00:00
['Graalvm', 'Java', 'AWS', 'Micronaut', 'Tech']
At the Center of All Beauty
At the Center of All Beauty “The secret to contentment is low overhead.” —Fenton Johnson Photo: Pedro Robredo — Personal files of Fenton Johnson Award-winning writer Fenton Johnson explains that this advice in his new book, At the Center of All Beauty, Solitude and the Creative Life, is a variation on Marianne Moore’s version, “The cure for loneliness is solitude.” His book is balm, validation, even celebration of all “solitaries,” his description for those of us who, like him, actively cherish and thrive in solitude. We solitaries draw our creative and artistic juices from within ourselves, from the boundless field of empty space with which we love to surround ourselves, an open corral in which to let our muses romp and run wild, unfettered, uninfluenced by external stimuli or others. Not to be confused with only recluses or hermits, we solitaries love and delight in our own company and we are often avid social beings. I was reminded of the results of my Meyers-Briggs test (taken in the ’80s before it was dumbed down to psychobabble). I was mildly surprised to place on the spectrum toward introversion, believing myself outgoing and friendly. But the moderator explained that what this metric meant was that I felt drained after some time among people. I needed to recharge my mental and physical energies all by my lonesome. I realized how true that had been all my life. As often as I indulge in my social communities, my many urban tribes, my intimate friendship circles, I need to retreat to the silence of my own little nest. “Silence and solitude set the imagination free to roam, which may be why capitalism devotes itself so assiduously to creating crowds and noise.” Johnson fans will already know what he reveals early on, that he enjoyed the love of his life, a man who died of AIDS in Paris years ago. Since then, Johnson has cherished his rich literary life and as a writing professor been gratified to nurture aspiring writers and to be an icon for the worldwide writing community. At the Center of All Beauty encompasses an engaging weaving of Johnson’s own life as a life-long solitude-lover with the lives of nearly a dozen artists in various disciplines, including Henry David Thoreau, Emily Dickinson, Paul Cézanne, Walt Whitman, Eudora Welty, Nina Simone, Zora Neale Hurston, Rod McKuen, Rabindranath Tagore, and Bill Cunningham. Johnson illuminates the cultural prejudice toward loners and the societal pressure, sometimes counterproductive, to engage in coupling. Note that some of his subjects — Cézanne, Simone, Hurston, Tagore — did couple in marriage, yet still fit the profile of solitary, each fiercely protective of her/his interior space. Of Whitman and Cézanne, Johnson writes, “Each preferred the world of their vast and fertile imagination over the confining world of fact.” Describing what his subjects have in common, Johnson, a Zen Buddhist devotee, says, “each lost the self to find the self,” referring to Thoreau in the woods, Cézanne to his painting, Welty in her art, Tagore in his music and poetry, Simone in her music. “Perhaps . . . what defines my solitaries — a reluctance to sacrifice openness to all for openness to one.” While his book is a torch-bearer for solitaries, it is not a diatribe against coupling/marriage. It merely shines a bright light on the prevalent blind faith in those cultural assumptions, “the avalanche of messages telling us that marriage is our most noble means of self-sacrifice.” Like his subjects, he says, many solitaries “sacrifice ourselves . . . 
not for our individual wealth but for the common wealth.” Dickinson, for example, had an offer of marriage that she turned down, one can argue to humanity’s benefit, given the body of lofty work she left us. She demonstrates, as Johnson proposes, that “solitude, not marriage, is the more selfless choice.” What I personally loved about At the Center of All Beauty was learning more about the author. Wikipedia describes Johnson as the last of nine children in a whiskey-making family. (I wrote him to say I am the fifth of ten in a pasta-making family.) Like me, Johnson was raised in a Roman Catholic family, in his case right next to the Trappist Monastery Gethsemani in Kentucky. I envy his having known Thomas Merton, the Catholic convert whose writings on mysticism are still influential. Johnson acquaints us with his blood family, the rural setting of his youth, the southern foods at their table, much of which they cultivated. He gives us the sense that his parents, notwithstanding a large brood, had solitary lives, as well as an open-door policy, including lots of social activity with the neighborly monks. In his defense of solitaries, Johnson delves skillfully into the hidden subtext of our notions of love being fused to coupling. Commenting on the lifelong bond between Cézanne and the writer Émile Zola, he writes, “That such ecstatic friendship has fallen from our lives and art is due in part to our obsession with labels (gay, straight, married, single), and partly due to our elevation of church-designed, government-sanctioned marriage as the apogee of human relationship. Somewhere, in part in service to capitalism, the notion took hold that to be worthy of celebration, love must be certified by government or church edict, when my experience has that love does not submit itself to logic or reason, calendar or clock — that one may love differently perhaps, but as intensely in a moment as across a lifetime.” As Johnson trains our eye on the artist’s work one hears his religious breeding: “In Cézanne’s painting the sacred becomes flesh and dwells among us.” We hear his mystical vision as he notes how solitaries are some of the most agile at transcending the artifice of time: “Long before quantum physics, Cézanne understood that all moments are present to this moment.” Perhaps only the bona fide solitary can know the opiate-high of the zone, the flow, the gratifying choice of aloneness, where time is irrelevant or as Johnson quotes Albert Einstein: “This distinction between past, present, and future is an illusion, however tenacious.” He writes, “Only the marathon runner, high on endorphins, or the heroin addict, or the besotted lover in the presence of the beloved . . . can understand . . . what it means to live outside time — to live, in fact, not in the past or future but in the mystic eternal now.” Tipping the balance of prejudice toward pro-friendship is crucial, Johnson says, because “. . . the very survival of the species depends on our transcending ties based on blood and marriage . . . the ties of blood which perpetuate and reinforce conflict — recognizing instead the bonds of love, with friendship, not marriage, as the tie that binds.” “ . . . to understand only biological offspring as our children is to shortchange the great human impulse toward magnanimity, toward altruism.” Revealing that he practices “celibacy not as negation . . . 
but as joyous turning inward,” Johnson gives us a pearl to that end from famous solitary Dickinson who poeticized herself as an “Inebriate of air, debauchee of dew.” Johnson lyrically describes the Belle of Amherst as the “most promiscuous of celibates.” In these times of enforced solitude, what better book to shelter in place with, than this one, which squarely places you At the Center of All Beauty.
https://medium.com/nomudnolotus-writer/at-the-center-of-all-beauty-3561a083d732
['Camille Cusumano']
2020-06-09 01:24:31.005000+00:00
['Books And Authors', 'Nonfiction', 'Solitude', 'Writer', 'Creativity']
How To Do Less and Still Grow Your Business
Then one day I realized I couldn’t hold a conversation without popping my ears and twitching my right eye; interesting, I thought. Where the hell is this coming from, and what is my body telling me? I then realized I had been unsatisfied, controlling, and continuously swinging between two different kinds of guilts (not doing enough and doing too much at the same time) for the longest time. I needed a way out, as all this doing wasn’t even supporting my dreams, as I was utterly miserable, and I still struggling to make ends meet. Going back to my values I went back to a very simple question; why I was doing what I was doing and did my behaviors aligned with my values. It was nice to reinforce my whys and realize that I was doing the work that made me feel extremely satisfied and sparked my creativity. It was also hard to face that the way I was acting towards my family, self, and business wasn’t aligning with my values. When did it become ok to put myself care and the quality time spent with the people I loved the most in the backburner? I justified my busyness as I had so many things to tick off daily, but could I shorten the mental list I was relying on? To do so, I had to apply… The Law of the vital few In laymen’s’ term, I had to figure out what I was doing that was beneficial for my business. To give you an example, I struggled running FB ads; for some reason, I found them quite pricey and somewhat tricky to set up. I second-guessed my choice every time I clicked send, and I didn’t know how to read the analytics. Turned out, although I was investing a decent amount on my ads, I wouldn’t receive much in return, only hours of headaches and tons of insecurity. I then stopped running ads and…nothing changed. It showed me that I was investing much of my precious time in an activity I wasn’t clearly good at, and that didn’t spark any joy (more on it on the next point). Writing content, and doing research, instead, makes me tremendously happy, mainly because I know I’m able to reach and support a broader audience. Writing an article wasn’t always the most profitable task, but I knew that it was taking me in the right direction, as I was reaching the life and the hearts of my target audience. I just had to wait for the love to come around, and in the meantime, I had enough articles posted to create a program or two, plenty of webinars and workshops, and never-ending lead magnets. Content creation was my thing. Outsource To go back to the example of the Facebook ads, I wasn’t saying they don’t work (apparently, 85% of clicks on FB come from friends and paid ads, as it is almost impossible to grow in a platform that is clogged by information), but that I didn’t know how to make them work for me. That’s when I decided to outsource, and I also hired an amazing girl to create my IG posts. She would spend 10 minutes on it, whereas it would take me hours to come up with a decent design. Outsourcing is an incredible tool, and I applied it to different areas of my life; I do, for example, hire a cleaner to come and tidy up our home twice per month, so the only thing I have to do is keeping the kitchen clean until her next visit. I know that I come from a privileged place where I have the opportunity to hire a cleaner, but I have also been babysitting for other people to get the same favor in return. Also, I don’t spend a dollar on buying clothes, cosmetics, and shoes; unless I really need to. Find what floats your boat and start asking for the help you need. 
Focus on your mental health As long as I concentrated on doing, instead of being, I had literally no chance to grow. As long as stress and anxiety were taking a massive real estate’s space in my brain, I couldn’t see the light at the end of the tunnel. I used to focus on getting short-lived gains, which lead me to feel overwhelmed in the long term. I used to say yes to everything, for fear of missing out. I used to prioritize everything else but me, with the mentality of “I’d rather do it myself as I can’t be bothered asking.” I had to get to a miserable place of exhaustion before sitting down and focusing on patching up my broken cup, instead of filling it mindlessly. Also, journaling helped. I would love to say that I turned to meditation, and life became magic in less than a week; this would give you a tool that you can utilize and put in place. Unfortunately, meditation didn’t help me as much as it did in the past, but journaling came to rescue, as it was a moment of mindfulness in my weekly routine where I would sit, regroup and recharge. Fix your energetic leak, recharge your batteries, and find the way that works for you. As much as I would love to be able to work 80 hours a week and shine, my body and mind can get easily overwhelmed with 25 hours of week of intense focus. Find out who you are and grow from there.
https://medium.com/an-idea/how-to-do-less-and-still-grow-your-business-6c1586ca2198
['Claudia Vidor']
2020-12-07 06:38:28.121000+00:00
['Money', 'Self Improvement', 'Startup', 'Business', 'Productivity']
How Did Ancient People Deal With Boredom?
Photo by Joshua Rawson-Harris on Unsplash People around the world have been subjected to a number of changes in their lives over the last few months. If we’re lucky, we’re still healthy and employed. But that doesn’t make it easy — we’re stuck apart from our friends, subject to new anxieties about our health and our economic futures, waiting for faraway officials to tell us when and how we will be able to resume our “normal” lives. Perhaps the most common feeling for many of us right now is boredom. It’s why people are engaging in increasingly elaborate internet video challenges or baking complex breads. Frankly, it’s part of why I’m writing this right now. But we’re facing this time of boredom with more resources than anybody has before us — we have the ability to browse the internet, watch or listen to pretty much anything ever made, and order most goods over Amazon. In short, during the most boring month of our lives we still have more ways to entertain ourselves than some of the most privileged people centuries ago. Though the English word “boredom” did not come into use until Charles Dickens used it in Bleak House in 1852, boredom has been a well known phenomenon for most of human history. Indeed, imagining the tedium and repetition involved in pretty much any way of making a living in ancient or medieval times would be enough to terrify most of us in the overstimulated modern world. Imagine spending every day churning butter, or tending a field of crops, or weaving fabric — all by hand. Few people moved beyond their immediate surroundings; one historian estimates that 80% of medieval Europeans, for example, never traveled more than 20 miles from their homes. Most people in the past did the same (often physically grueling) tasks day after day in the company of the same small group of people. Their options for entertainment were slim — most people likely could not read in these societies, and the same myths and stories were repeated over and over. So how did they experience boredom, how did they cope, and what can we learn from them about our predicament? Despite that fact that, from our vantage point, boredom must have been endemic in the pre-modern world, the concept shows up unevenly in records from ancient societies. The ancient Greeks rarely referred to boredom, and didn’t really have a word that maps onto our concept of boredom. The words that they used to describe states like boredom also meant things like “distraction” or “disgust” or “irritation.” Aristophanes describes one character’s boredom at having to wait for the Athenian Assembly to begin through action rather than description — he says, “I groan, I yawn, I stretch, I fart, I don’t know what to do.” You may have been in this situation recently — unable to precisely name your mental state but restless nonetheless. A Roman bust of Aristotle (public domain) Perhaps boredom was such a ubiquitous part of ancient Greek life that it wasn’t necessary to name it precisely. In the same way that we don’t think about the fact that we are breathing all the time, maybe ancient Greeks didn’t identify their constant boredom, just its symptoms. There may have been cultural attitudes involved, as well. Some of Athens’ most prominent philosophers praised their fellow citizens’ ability to enjoy their leisure. Aristotle, for example, saw leisure as absolutely crucial for creating a learned and politically engaged populace. 
In his Politics, he wrote, “we should be able, not only to work well, but to use leisure well; for, as I must repeat once again, the first principle of all action is leisure.” Though it’s unclear how widespread these attitudes were, perhaps Greek cultural ideas about the beneficial effects of leisure made them more comfortable with the conditions that could lead to boredom. Romans, a culture that valued hard work and discipline perhaps more than the Greeks, discussed boredom more frequently, and in ways that map onto our modern experience. Seneca, the Stoic philosopher and playwright who lived between 4 BCE and 65 CE, describes the mix of restlessness and inertia that many of us feel these days: Thence comes that feeling which makes men loathe their own leisure and complain that they themselves have nothing to be busy with. For their unhappy sloth fosters envy, and, because they could not succeed themselves, they wish every one else to be ruined; then from this aversion to the progress of others and despair of their own their mind becomes incensed against Fortune, and complains of the times, and retreats into corners and broods over its trouble until it becomes weary and sick of itself. For it is the nature of the human mind to be active and prone to movement. Seneca emphasizes the ways in which boredom (he used the word taedium) can curdle into more harmful emotions like bitterness and anger if it’s not dealt with properly. Seneca’s remedy for boredom is pretty simple: he says that we need to find something to do. if we have nothing to do around the house, he encourages his readers to engage in public affairs. So Seneca would advocate what a lot of us are already doing — baking bread, repainting the bedroom, and perhaps gearing up to get involved in the upcoming election. “Months of boredom punctuated by moments of terror” became a common description of warfare during the First World War. Rome was known for its military exploits more than pretty much anything else. Its massive armies spent long periods of time away from home, and many soldiers found themselves with long periods of time during which there was nothing to do. Ennius, a Roman playwright, has a soldier character say, “When there is a lazy beginning the mind doesn’t know what it wants… The mind wanders indecisively; we only live sort of a life.” Perhaps this half-living seems very familiar to you right now; boredom can often make the days seem endless and pointless. Plutarch, a Greek historian writing during the Roman period, warns of the problematic effects of boredom on soldiers’ morale. He wrote about Eumenes, a general whose troops were trapped in a siege. Eumenes prescribed his men exercise — he had them walk around a large house over and over again, speeding up with each lap. One of the desert fathers, early monks who battled the sin of boredom (public domain) During the early Christian era, boredom came to seem worse than just an condition of life — it could be a sin. The monastic life, like many religious vocations, was deliberately constructed to include long, low-stimulation periods of silence and contemplation. They, like us, had their freedom and social interaction severely limited; this deprivation was supposed to allow them to focus on getting closer to spiritual truths. So where was the line between virtuous simplicity and boredom? 
Early medieval monks spoke of the vice of acedia, a Greek word originally meaning “indifference” or “lack of care.” Evagrius, an early desert monk, spoke of acedia as the “demon of noontide” that would come to ruin monks’ moral and spiritual commitment. Acedia was labeled one of the deadly sins (the eighth; it was later folded in with sloth). Evagrius’ disciple John Cassian described a monk suffering from acedia this way: He looks about anxiously this way and that, and sighs that none of the brethren come to see him, and often goes in and out of his cell, and frequently gazes up at the sun, as if it was too slow in setting, and so a kind of unreasonable confusion of mind takes possession of him like some foul darkness. It’s this lack of focus that made boredom dangerous for monks. Becoming unmoored in the long, hot hours of the middle of the day made them vulnerable to temptation. Perhaps they would wonder why they had chosen the religious life at all; perhaps they would just lose focus on the spiritual life they were supposed to be living. Life in boredom can seem purposeless, as I’m sure you know. So how should we fight acedia? Some monastic rules prescribed communal reprimand — one’s fellow monks should scold those falling victim to boredom and help them to focus on spiritual contemplation. So you’re bored — I am too! So were most people throughout history! But there are some time-tested solutions. Ancient authors had a number of remedies for the boredom that they faced. Some encouraged bored people to lean into it — to enjoy the leisure they had. Others encouraged the bored to get busy, finding a project or taking up exercise. Others said to lean on their community, allowing those around them to jolt them out of their stupor. Whatever you choose, take refuge in the fact that your boredom is alleviated by modern technology and — most importantly — temporary.
https://medium.com/lessons-from-history/how-did-ancient-people-deal-with-boredom-dfeae9f9aa74
['Historical Insights']
2020-04-29 20:20:10.394000+00:00
['Self Improvement', 'Boredom', 'History', 'Society', 'Coronavirus']
Mental Health Awareness is More Than Just a Meme
I speak candidly from my own experience with PTSD and clinical depression, which were exacerbated by six years of late-stage neurological Lyme disease and the hormonal and emotional upheaval brought on during menopause. I was in a very dark place for years. I could barely muster the energy to get out of bed, and I was miserable every minute of every day from constant, debilitating pain. Without sleep or physical healing, my emotional and mental faculties were overwhelmed, and I became an overly-sensitive, anxious, irritable, melancholy mess. Instead of earning compassion and kindness from those around me, I earned disdain and alienation. Part of me doesn’t blame others. Clearly, I was not a fun person to be around. The truth is — I was drowning, and even I didn’t understand how far I’d fallen into the pit of depression until I no longer recognized myself. Sadly, some of the worst offenders when it came to recognizing my mental illness were people in the medical or therapeutic fields. However, mental illness in real life isn’t always like it is in the movies. We aren’t all banging our heads against walls and running naked through the halls in the Cuckoo’s Nest. These also happen to be the very people who post enthusiastically online about how woke they are when it comes to mental illness. They make sure everyone knows they are ‘warriors’ who donate money and support the mental health community. What I’ve seen from many of these mental health and medical professionals is outdated knowledge gleaned from college courses they took thirty years ago. On stage, they can pontificate about theory and collect awards for their service and valor. They recognize people with extreme mental illness who need heavy-duty medications or hospitalization or constant supervision to survive. However, mental illness in real life isn’t always like it is in the movies. We aren’t all banging our heads against walls and running naked through the halls in the Cuckoo’s Nest. In every-day life, many people, including therapists and clinicians, fail to recognize that millions of people — their friends and family members included — are living and working and struggling with varying degrees of mental illness. Just because we are functioning at high levels and look normal on the outside doesn’t mean we aren’t silently struggling every minute of every day. We’re not just having bad days. We’re not just a Debbie Downer. We’re fighting demons. And sometimes the demons are winning.
https://medium.com/narrative/mental-health-awareness-is-more-than-just-a-meme-78d9a93dadbd
['Lizzie Finn']
2020-10-10 15:08:00.624000+00:00
['Life Lessons', 'Mental Illness', 'Mental Health', 'Depression', 'Health']
How Much Does It Cost To Make a Mobile App Like UberEats and Deliveroo
How Much Does It Cost To Make An Mobile App Like UberEats and Deliveroo Sophia Martin Follow Aug 17 · 10 min read Have you been into the food industry and planning to take your venture to the next level with on-demand food delivery apps? If yes, then your very next question will be, how much does it cost to develop a food delivery app? How to set a task for the developer and what functionalities you need to consider first in your app? Let’s follow the app clone of the top food delivery apps including UberEats and Deliveroo to understand their background and what exactly makes them successful in the competitive market. Before jumping on the features and functionalities of the app, it is worth understanding the background of the food delivery apps and their scope. Why On-Demand Food Delivery Apps Have Become So Demanding? Have you noticed that food delivery companies like UberEats and Deliveroo have become dominating names in the food industry today- just because of convenience and maximalism. And its impact on Americans is, 60% of US consumers order delivery or takeout once a week. In this fast-paced life, food delivery apps are a true blessing for everyone. The on-demand food delivery applications are one of those techniques that have created a platform where customers and chains of restaurants can easily meet to quench their needs. But here are the few reasons that make sense in increasing the popularity of online food ordering apps and making it an ideal choice for the startups. Raise the Business: The restaurants are finding online food delivery apps are a much more straightforward and convenient option to get delivery orders than taking orders from the wild calls. Moreover, 60% of restaurant operators say that offering delivery has lifted up their sales. Improve Customer Relationship Management: 43% of restaurant professionals said that they believe third-party apps help in building a direct relationship between a restaurant/bar/pub and its customers. The seamless online food ordering solutions have actually modernized customer relationship management and can offer all the required services right from food ordering to quick delivery at their home. Enhance Restaurant Business Promotions: Undoubtedly, on-demand applications are integrated with multiple social media platforms like Facebook, Twitter, Instagram and provide a great platform for seamless business promotions. The various types of online promotions help in attracting a large number of people to the app. Expand Customer Base: According to the survey, the customers who place an online order with a restaurant will visit the restaurant 67% more frequently than those who don’t use the app. Moreover, working with a third-party delivery service has been found to raise restaurant sales volume by 10 to 20%. Additional key facts and some insights portraying how powerful online food ordering apps have become: 63% of customers agree that ordering food from the online food app is more convenient than dining out with a family. According to the studies, UberEATs grew by 230% last year, with its average customer spending more than $220 annually. New food delivery platforms are sticky, once users sign up the app, an average of 77% of customers never or rarely leave the platform or switch to another app. Digital Ordering and delivery have grown 300% faster than dine-in traffic since 2014. It is estimated that mobile orders will make up close to 11% of all QSR sales by 2020. 
GrubHub, UberEATS, DoorDash and Postmates have become the most powerful food delivery apps in the industry and have acquired the largest share of the market. Now the central question is: why are food delivery apps like UberEats, Postmates, DoorDash or Deliveroo booming? What Makes UberEATS and Deliveroo a Leading Food Delivery App? There is no doubt that people always go for whatever adds convenience and comfort to their life. And this mantra applies to everything, from live movie streaming apps to music listening apps, and from the games that we play to the food that we eat. There are multiple factors behind the boom of food delivery apps like Deliveroo and UberEATS. So, before learning the process of developing a successful online food delivery app, it is worth understanding the blend of technology that UberEATS has used to add a great level of comfort: Payment integration: UberEATS uses Braintree, Stripe and e-wallets, along with payment in cash. Database storage: UberEATS uses AWS and Google for a better cloud environment, storage and backup. Geo tracking: UberEATS uses the Google Location API for Android and the Core Location framework for iOS, which offer a seamless location-tracking experience. Navigation: UberEATS uses the Google Maps API on Android and MapKit on Apple devices. Listing or menu: To display the menu according to the location, the developer can use any popular API such as Foursquare. Analytics: To review the performance and analytics of your business, UberEATS uses Google Analytics or Mixpanel. In a Nutshell: An UberEATS clone may sound excellent for any food delivery business, but the performance of the app depends upon the mobile app development company you hire. So before collaborating with any mobile app developer, it is worth discussing your app plans and needs. How To Develop A Food Delivery App Like UberEATS and Deliveroo? There is no doubt that developing food delivery apps like UberEATS, Postmates, DoorDash or Deliveroo requires a huge investment. The upfront cost of developing the app from scratch can start from $25,000 per project, which is quite expensive for startups. So, what if you don't have that kind of budget? Should you drop the plan to hire an app developer? Well, everything has a solution. Today even some small businesses are coming up with something similar to UberEATS at a fraction of the cost. Many app development companies make use of existing APIs to shorten the development process and reduce cost. All you need is to hire the right app development team that understands your business requirements and is able to build an app within a limited budget. Here's exactly what you need to focus on while developing a food delivery app like UberEATS: Understanding the Key Components of UberEATS; Key Features to Build an App Like UberEATS; App Technology Used For Android/iOS/Cross-Platform; Monetizing Strategy to Make a Profit From the App. Let's dig deeper into each point for a better understanding: 1.
Understanding the Key Elements of UberEATS Today, UberEATS has become the fastest-growing food delivery app; it has collaborated with more than 50,000 restaurants and provides an enormous number of food options. Moreover, with a $2.8 trillion addressable market, it made up 22% of the company's total bookings in 2019. Now the central question is: how do they manage everything, right from the heavy traffic on the app, orders and couriers to restaurant partners? Well, food delivery apps like UberEATS have 3 major elements: For Customers: This version helps customers choose from an extensive list of restaurants and menus based on their location. All the food deliveries are made at the customer's chosen time. For Courier Partners: This app version is for the Uber drivers who have signed up or registered as part of the delivery network. Once an order is placed, it is allocated to a driver based on location. With the courier app version, both restaurants and customers are notified of the approximate delivery time. For Restaurant Partners: Restaurants get detailed order information and are responsible for updating the order status and sending notifications to the customers and drivers. They have access to the list of current orders made every day. 2. Key Features To Develop An App Like UberEATS When it comes to developing a mobile app, its features and functionality can make a great difference in how well it is received in the market. Moreover, mobile app features and functionalities can easily eat up your budget, so it is important to set your budget and accordingly choose the features that you need to integrate into the application. Since an UberEATS clone is layered with different app versions, we have categorized the major features required for the customer and courier app versions along with the estimated development hours. If you are a startup planning to enter the food industry on the lowest budget, then you can consider developing an app with the MVP or basic features. Here's the breakup of a basic food delivery app: In case you are looking for a food app with advanced features, the development cost of the customer app version will start from $20,000+ and can rise considerably. You can check the breakup below: And here are the necessary features required for the courier app version, which starts from $10,000 and can go to $35,000+ depending upon the choice of features and the complexity of the app. 3. App Technology Used For Android/iOS/Cross-Platform The tech side of the app depends upon the operating system you choose to launch your app on. If you are targeting a native app, then for Android app development solutions it is worth hiring a mobile app developer expert in Java and Node.js, whereas Objective-C and Swift can be an excellent option for iOS app development.
In case you are targeting a large number of customers through multiple platforms, then developing hybrid apps using Flutter or React Native can be an optimum choice. 4. How To Make a Profit From Your Food Delivery App? No matter how amazingly you have designed your app, there is no use in developing a beautiful app or hiring the best software development company for the project if there is no scope for making a profit from the app. So here are a few of the best app monetization methods used by UberEATS: Commission from Restaurants: This is one of the most common and efficient ways to earn a profit from your app. UberEATS earns a 15% to 40% commission on the orders it fulfils. Advertising Income From Restaurant Partners: UberEATS charges its restaurant partners a promotion fee to increase their visibility in the app's listings. And with the increasing number of restaurants, it becomes important for them to invest in marketing to make their name visible to customers on the app. Delivery Fee From Customers: Charging surcharges for food delivery during peak hours, including lunch and dinner time, is another worthwhile way to make a profit from the app. 5. Hire a Development Team To build an app like UberEATS, you need a professional IT team that understands the app clone and is able to develop an app that can help you launch a successful food delivery service. Basically, the mobile app development team should include: Business Analyst, Project Manager, Backend/Frontend Developer, UX/UI Designer, Quality Analyst. There are two ways to hire a team of app developers: either look for an in-house app development team or look for an offshore app development company. On one side, communicating with an in-house mobile app development team is easier, whereas outsourcing to a mobile app development company can be a cheaper option for you. Moreover, you can hire a mobile app developer at a price of $15 to $50 per hour in India, whereas it is around $150 per hour in the US and $100 per hour in the UK. Cost to Build a Food Delivery App Like UberEATS Estimating the overall price of a food delivery app can be a complicated task, as there are a number of factors affecting the development cost. Right from the features, functionalities, complexity and size of the app, through the development team and its location, to the technologies used, there are a plethora of factors influencing the cost of app development. The average cost to develop a food delivery app can start from $15,000 to $25,000+ with the basic features, but the price can rise considerably depending upon the choice of features and functionality. The more complex the app structure, the higher the app development cost will be. So if you want to calculate the exact price of the app development, you can follow this simple formula: Developer's per-hour cost * Total development hours = Total cost of the app development. Conclusion To wrap up this blog, it is worth mentioning that the mobile food ordering business has become a fast-growing trend in the food industry. As the technology behind the entire food delivery app system continues to grow, the value of food delivery apps like UberEATS and Deliveroo will definitely soar higher than expected. If you also want to be a part of this growing market, then it is the best time to get started with an online food delivery app.
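As a rough illustration of the formula above, here is a minimal Python sketch. The hourly rates echo the figures quoted earlier in the article, while the 1,000-hour scope is a hypothetical placeholder rather than a real estimate.

# Illustrative cost estimate using the formula above:
# developer's per-hour cost * total development hours = total cost.
# The rates and hour counts below are hypothetical placeholders, not quotes.

def estimate_cost(hourly_rate_usd: float, development_hours: float) -> float:
    """Return the total development cost for a given rate and effort."""
    return hourly_rate_usd * development_hours

if __name__ == "__main__":
    # Example: a basic customer + courier app scoped at ~1,000 hours,
    # priced at typical offshore vs. US/UK hourly rates.
    for region, rate in {"India": 30, "UK": 100, "US": 150}.items():
        print(f"{region}: ${estimate_cost(rate, 1000):,.0f}")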
To help you understand the app clone, we have tried to cover every major aspect of the food delivery app, but if you still find yourself stuck anywhere in the process, then it is recommended to get in touch with experts to discuss your app plan and get the right solution. Moreover, as far as the cost of app development is concerned, we have given rough estimates based on various surveys. But, again, we suggest getting real cost estimates from a mobile or software development company after discussing your app idea and business needs.
https://medium.com/flutter-community/how-much-does-it-cost-to-make-an-mobile-app-like-ubereats-and-deliveroo-df3a5cd52733
['Sophia Martin']
2020-08-17 06:12:23.078000+00:00
['Mobile App Development', 'Mobile Apps', 'Technology', 'Startup', 'Business']
Data visualization using Pandas
Data visualization using Pandas This article will help you use the built-in Pandas methods for visualizing data and drawing insights. How to import the packages? The NumPy and Pandas packages are imported. Along with this, the magic function ‘%matplotlib inline’ is used to make sure that the plots are displayed in the notebook. >>> import numpy as np >>> import pandas as pd >>> %matplotlib inline For the purpose of understanding, a dataset is taken which has random values. >>> df1 = pd.read_csv('dataset2.csv') >>> df1.head() How to create a histogram plot? A histogram plot can be generated by using the ‘hist’ method on a column of a dataframe. The number of bins can also be specified. >>> df1['b'].hist(bins=15) In case you want to view the plots in the seaborn style, import the seaborn package, set the style and run the code again. >>> import seaborn as sns >>> sns.set_style('whitegrid') >>> df1['b'].hist(bins=15) How to deal with general plot kinds? You can also call the ‘plot’ method off the dataframe and mention the kind of plot needed. >>> df1['c'].plot(kind='hist',bins=10) You can also call the hist method directly through the plot method. >>> df1['c'].plot.hist() How to create area plots? This plots the area of the dataframe columns. The alpha keyword can be specified to tweak the transparency. >>> df1.plot.area(alpha=0.7) How to create bar plots? The ‘bar’ method plots the bar plot. >>> df1.plot.bar() In case you do not want the bars to be plotted separately and instead want them stacked on top of each other, pass the value True to the stacked keyword. >>> df1.plot.bar(stacked=True) How to create line plots? A line plot can be drawn by calling the line method and passing the x and y values. >>> df1.plot.line(y='d') The figure size and line properties, such as linewidth, can also be changed. >>> df1.plot.line(y='a',figsize=(5,3),lw=2) How to create scatter plots? The scatter method is called and the x and y values are passed. >>> df1.plot.scatter(x='b',y='d') If you want to compare against another column, pass that column name as the ‘c’ value. You can also change the color of the plot using the cmap keyword. >>> df1.plot.scatter(x='b',y='d',c='a',cmap='rainbow') How to create box plots? The box method can be called to create box plots. >>> df1.plot.box() How to create hex plots? This is just like the scatter plot, but the data points are represented as hex cells. >>> df1.plot.hexbin(x='b',y='d',gridsize=15) How to create a KDE plot? KDE stands for kernel density estimation. The kde method can be called to plot it. >>> df1['d'].plot.kde() >>> df1.plot.kde() For more detailed information on Pandas data visualization, check the official documentation here. Refer to the notebook for the code here.
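Pulling the snippets above together, here is a minimal, self-contained sketch you can run end to end. Since the article's 'dataset2.csv' is not included, it substitutes a random DataFrame with the same column names as an assumption.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# 'dataset2.csv' from the article is not available here, so we build a
# random DataFrame with similar column names ('a'..'d') as a stand-in.
rng = np.random.default_rng(42)
df1 = pd.DataFrame(rng.random((100, 4)), columns=list("abcd"))

df1["b"].hist(bins=15)                      # histogram of one column
df1.plot.area(alpha=0.7)                    # stacked area plot
df1.plot.bar(stacked=True)                  # stacked bar plot
df1.plot.line(y="a", figsize=(5, 3), lw=2)  # line plot with figure size/linewidth
df1.plot.scatter(x="b", y="d", c="a", cmap="rainbow")  # scatter colored by column 'a'
df1.plot.hexbin(x="b", y="d", gridsize=15)  # hexbin plot
df1.plot.kde()                              # kernel density estimate for all columns

plt.show()  # display all figures when running outside a notebook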
https://medium.com/nerd-for-tech/data-visualization-using-pandas-cfcde72807b1
['Jayashree Domala']
2020-12-27 02:51:56.204000+00:00
['Pandas', 'Python', 'Data Science', 'Data Visualization', 'Exploratory Data Analysis']
Dear APEX Community Members
Updates — Technology and enterprise pilots We are on track to have 4–5 enterprise pilots this year that will use APEX Network in production mode. We cannot yet disclose the names of the enterprise pilot users, but there are three piloting enterprises from two different industries that are well into the process and already beginning to see value: one previously announced budget Chinese airline (similar to Southwest in the US), one top-5 Chinese car brand (a Tesla competitor), and one high-end Chinese airline. We expect the first pilot experiments on the official APEX Network mainnet as soon as 1–2 months after launch. We are also experimenting with a hybrid model where user data is stored on a private or alliance-chain version of APEX Network, but cross-enterprise transactions occur on the main network. Pilots are actively pushed by the partnership development team if they assess the enterprise to be a good fit. At least 70% of our enterprise base is aware of our blockchain technology offerings. Federated Learning, which the team introduced earlier, is one of the latest blockchain technology features we are developing, and it will eventually be available for use both on the main network and on private/hybrid chains. As previously stated, the goal is ultimately to have it run on the public chain. Indeed, FL at scale only makes sense at the public-chain level, though to ease adoption and reduce hesitance I'm sure certain enterprises would like to test it at the private level first.
https://medium.com/apex-network/dear-apex-community-members-b3378f2b075a
['Jimmy Hu']
2020-03-16 17:42:25.664000+00:00
['Big Data', 'Technology', 'Blockchain', 'AI']
6 Tips to Stay Motivated on Your Side Projects
1. Define the MVP This is my most important rule when it comes to staying motivated on side projects: the MVP. MVP stands for Minimum Viable Product. It is the minimum set of features required for the product to be functional so that users can actually use it. Why do you need to do this? This is extremely important when working on a side project because it is the first major goal, or the major goal, you are trying to hit. It is what keeps you on track during your project and keeps you motivated to finish, since you have a clear idea of what needs to be done. Write down all the features you want to be implemented for the first iteration of your project. Ask yourself the following questions: What is my application trying to accomplish? Is this a must-have in order for my application to work? Or is this a nice-to-have? Once you release the MVP, you can iterate on top of it, introducing more features that would enhance the user experience. Some side projects never see the light of day because tasks keep making their way into the backlog, fighting an endless battle of scope creep. For my projects, I define what the v1.0.0 release of the application I am trying to build should contain, and iterate through the weeks (v0.1.0, v0.2.0, etc.) until I hit that prized v1.0.0 version that I consider the MVP.
https://medium.com/better-programming/6-tips-to-stay-motivated-on-your-side-projects-903432041644
['Eric Chi']
2020-10-02 16:11:39.389000+00:00
['Programming', 'Technology', 'Motivation', 'Software Development', 'Software Engineering']
I Strived to Be an Instagram Influencer, Now I Write
More content, more brand. I once hosted an Instagram Live morning show for my commercial real estate business. Each Friday, while the rest of the office enjoyed a jeans day, I showed up to work in a blue suit and tie, reserved a conference room, and recorded myself regurgitating real estate news. “Nationwide Insurance moved into their new 400,000 square foot headquarters, and office rent prices rose to all-time highs” This was before COVID, office space was actually leasing then. Optimistically, I wanted to be the Jimmy Fallon of commercial real estate, but sounded more like a professor addressing students cramming for a final. I posted ten episodes, grew a small audience, then decided, I hate this. Why do it at all? I was looking for something to spark sales and stumbled upon the world of content marketing and the influencers who champion such a strategy, notably Gary Vaynerchuk. If you don’t know him, his message goes like this: Unless you want to sit Shiva while your business peels, you better post daily on Instagram, Facebook, Snapchat, Twitter, LinkedIn, and TikTok. Gary says people don’t post because they’re worried mean girls from high school will DM poop emojis. A pleasant way of saying, insecure. Of course he’s right. I’m no Jimmy Fallon, but I wasn’t about to give up. I ditched the morning show and started documenting my work life. I filmed myself cold calling, took pictures of clients on building tours, and even created a “day in the life” TikTok. I posted about twenty behind-the-scenes clips, grew a small audience, and decided, I hate this. After twenty pieces of content, things became repetitive. Plus, just because I approached work like Hearts of Darkness didn’t mean my coworkers or clients felt as much. I received many stink eyes and several “what do you think you’re doing?” Clearly, I’m not Jimmy Fallon or Francis Ford Coppola.
https://medium.com/swlh/i-strived-to-be-an-instagram-influencer-now-i-write-5becc1996149
['Cal Axe']
2020-10-22 18:44:14.728000+00:00
['Marketing', 'Content Marketing', 'Writing', 'Business', 'Influencer Marketing']
5 Things You’ll Never Hear From a Successful Entrepreneur
Entrepreneurs come in all shapes and sizes, from a great diversity of backgrounds and with many different philosophies and approaches. This is evident in the spread of company cultures and growth trajectories among startups. That being so, if you ask 10 different entrepreneurs what the most important factors for entrepreneurial success are, you'll probably get 10 different answers. Nevertheless, there are fundamental qualities that almost every successful entrepreneur shares. They are passionate, imaginative, and undaunted by the inevitable challenges of starting a business from scratch. That's why you'll never hear a successful entrepreneur say one of these five things: 1. I don't want to hear it. The most successful entrepreneurs are open to new ideas and inspirations no matter where they come from. They're willing to listen to customer complaints and incorporate that feedback into later models. They're open to talking with mentors and peers about different approaches and different ways of doing things. They're eager to hear from their team to discover new perspectives about the challenges faced by the business. Listening to others' thoughts and opinions, even if you don't agree with them, is essential for achieving any kind of meaningful growth. Our individual perspectives are limited, no matter how much we'd like to think otherwise. Entrepreneurs who are open-minded enough to hear others out tend to be far more successful than those who aren't. 2. That's impossible. Possibility is relative. What might be impossible to one group of people in one set of circumstances might be entirely possible to another. When someone says "that's impossible," what they often mean is "I'm not capable of doing this right now." Successful entrepreneurs don't view the world with this type of artificial limitation. Instead of seeing how a challenge can be overcome by their current abilities and current resources, they think of how it can be overcome by any possible set of abilities or resources. For example, if something is "impossible" in the moment, the successful entrepreneur might imagine that it's not impossible with the addition of two new team members and an extra week added to the timeline. Alternative solutions drive innovation, and successful entrepreneurs are always willing to experiment to get the results they want. 3. It's good enough. Some people stroll through their entire careers with a "good enough" mentality — they put in just enough effort to see a favorable result, and make decisions based on minimum criteria for success. There's nothing inherently wrong with this; for most people in most careers, good enough really does mean good enough. But in entrepreneurship, competition is much fiercer and you're in far greater control of your own destiny. Too many competitors and volatile factors are bearing down on you for you to settle for "good enough." When you first launch your core business, or your core product, your mind will be racing with ways you can improve upon it. Even after years and multiple generations, you'll still be driven to experiment and find ways to improve. This constant denial of satisfaction can be maddening, but it's what drives these entrepreneurs to success. 4. I'm too busy. Most people don't know the meaning of "busy" until they get started as an entrepreneur. You'll be wearing so many hats, taking on so many different responsibilities, and making so many decisions each day you won't know what to do with yourself.
But at the same time, you’ll be exhilarated to be in such a position. To successful entrepreneurs, the position of business owner isn’t a burden; it’s a thrill. It’s not a job; it’s a passion. There will be moments where you feel overwhelmed, of course, but if you’re truly committed to what you do, you’ll never be “too busy” for that extra conversation or that one additional responsibility. 5. I give up. There will be times when you question whether you have what it takes to be a successful entrepreneur, and times when you question whether all your sacrifices were worth it. There will be challenges you face that will threaten to collapse your entire business. This is normal; it is part of the process, and the successful entrepreneurs of the world are the ones who encountered these moments and decided to keep going. The minute you give up, on your business or entrepreneurship in general, your journey is over, and there’s no going back. Entrepreneurial success starts with the right frame of mind. You have to have an innate drive and a passion for what you do, and you can’t let the unavoidable complexities and trials of business ownership get in the way of your ultimate vision. Take inspiration from these taboo phrases and set your own course for entrepreneurship; just don’t let your doubts get the better of you. For more content like this, be sure to check out my podcast, The Entrepreneur Cast!
https://jaysondemers.medium.com/5-things-youll-never-hear-from-a-successful-entrepreneur-58f8de808ed3
['Jayson Demers']
2020-08-27 20:17:14.364000+00:00
['Startup', 'Startup Lessons', 'Business', 'Entrepreneurship', 'Entrepreneur']
Powering iOS, Android and web experiences with a backend-for-frontend
All of our seller reporting data is stored in a brand new “data warehouse” service. This service is populated with data through an event-driven system (specifically, SQS) from an upstream “source of truth” service we call “transactions”, as our sellers make sales through the iZettle Food & Drink POS (an iPad point of sale system). We started building the web experience first, alongside the data warehouse. At first, things were fine, but as the service grew and we started to build out our iOS and Android experiences, we started to hit some problems relating to what’s included in each of the service endpoints, and how they evolve. The mobile apps would end up receiving data that they don’t need, which is problematic when sellers are frequently on mobile internet. Further, the release cadence of the website and our mobiles apps are necessarily different per platform. The website can be delivered quickly, with changes taking only a few minutes to go out to production, but our mobile apps have a two week release train for new code, because of how app stores work. Deployed versions of mobile apps must also be supported for longer, causing frustration when we’d like to change or deprecate an endpoint that was previously only used by the web. Ultimately, with our data warehouse, we want to strive for two things: The time from selling something, to the data appearing correctly in a seller’s reports, should be as low as we can reasonably make it with a complicated distributed system Sellers should be able to request data over time periods that are helpful to them, including over the previous year, and have it appear on their chosen platform as quickly as we can reasonably get it to them We knew that, unless we changed something about how we get data to our clients, our problems would only get bigger as we added more endpoints, features, and clients. Working towards a solution As with all the problems we have to solve, we did some investigation and came up with a few possible solutions: Have frontend developers own the presentation of data within the existing data warehouse service, requiring them to learn Go Investigate and implement a brand new GraphQL service Investigate and implement a “Backend for Frontend” service, potentially in a language more suited to frontend skills The team has a lot of micro-service and REST-ful service experience. The cost, for our team, of setting up new services, is relatively small. Most of the iZettle Food & Drink backend is written in Go. It’s a good language, with great performance characteristics (especially when dealing with large amounts of data), but it’s difficult for frontend team members to learn, contribute to, and context switch between when delivering their other work, which is written largely in Typescript, Kotlin, and Swift. Other sections of the business use Kotlin to build their backend services already, and I personally have a lot of Kotlin experience, so we felt like that was a good bet. Our team has little experience with GraphQL and less appetite to own and maintain a brand new paradigm in our area for getting data to clients, when a more typical REST service will get the job done and let us ship quicker. It did look promising, and it is used elsewhere in the business, but we decided it wasn’t the right fit for us at the time. Given all the above, we decided to try making a “Backend for Frontend” service, written in Kotlin, to focus on getting data quickly and efficiently to our clients with great native experiences. 
It would be owned and maintained by the frontend team, with help from backend team members to make sure everything was up to standard. 💪 Building the BFF reporting service We picked Ktor as a service framework because I had some experience from other projects, and it can be as lightweight as you want it to be. To make spinning up this new kind of service nice and simple, and inspired by my colleagues' work on a Go-based service chassis internally called izettlefx, I put together a Ktor-based service chassis, exemplified by one of its tests. I'm really impressed with how easy Ktor makes unit testing services. With a small amount of glue code, we have separation between our "request contexts" and "response handlers", meaning we can thoroughly test our code at multiple levels. withTestApplication in Ktor lowers the cost of doing a more integration-style test enough that we have many of them for the service chassis, and we can be much more confident that the integration of the features is working as intended. In the BFF reporting service itself, request handlers often aggregate data from upstream services, combining seller information like their business details, and seller data from several different endpoints exposed by the data warehouse. We built the service in a way that frontend team members are comfortable with. We use ReactiveX a lot, so it made sense for us to use this tech for wrangling upstream requests to make our mobile landing page. Relatively complex upstream request combinations become simple. Finally, requesters built to fetch upstream data from services can be reused for different endpoints and platforms — the requester that provides payment type information to our web-based Sales Report, in component-form, is reused to provide the same information in the mobile sales reports. Conclusion Ktor has been great to work with. It has a really solid set of building blocks with which we built a small service framework. The new service is very much focused on solving a particular set of problems, without being a kitchen sink. When we need another piece, we add to the service framework incrementally. Trust within the team is really important — everybody took the time to listen and discuss solutions when we started to have problems building new experiences with the existing data warehouse service. Folks helped throughout with backend best practices, and the service is deployed, monitored, and managed just like all our other services, with frontend and backend team members contributing to its ongoing development. Our data warehouse service stays lean, and the team builds fast endpoints for extracting seller data as we want to. The BFF service worries about the aggregation and presentation, and clients get the data they need in a format that suits the platform. 🚀
https://medium.com/izettle-engineering/powering-ios-android-and-web-experiences-with-a-backend-for-frontend-e198d55a21cc
['Skye Welch']
2020-09-11 11:13:12.056000+00:00
['Reporting', 'Kotlin', 'Engineering', 'Software Engineering', 'Food And Drink']
The Four Words That Made Her a Billionaire
This is The Story… of a woman who changed a nation… by running an illegal business out of her 258 square foot apartment. And now… onto The Story The knock on the door made her jump. She wasn’t expecting anyone, and her heart began to race. As she tiptoed over to the peephole, she peered through. It was the police. Again. She glanced around at the inside of her tiny apartment. Every available surface was covered with the pictures and files of her customers. It was all illegal. She cracked the door and asked if anything was wrong. They calmly informed her that she would have to come with them to answer some questions. She sighed, then slid out the door, locked it, and followed the police to the car. Thirty years earlier, World War II was raging. In Japan, everyone was part of the war effort. In the early mornings, the sound of wooden practice swords echoed through every town. The swords were being swung by young children. Children as young as nine were learning to fight and kill. The older teenagers were all at work in factories. Just like in America, they were told that their patriotic duty was to work long days to make supplies, guns, bullets, and bombs. Propaganda was all over the papers, the radio, and on every street corner. Everyone had to be ready to fight and sacrifice. That meant everything and everyone would work tirelessly until the war was won. One young girl couldn’t figure out why there was such an obsession with death and destruction. When she was six years old, her father died. The loss crushed her, and it wasn’t long until she hated the war effort. What she hated even more was that everyone around her seemed to like preparing for it. Her father was an admired, well-respected school headmaster, and her family had been dependent on her father’s income. His untimely death instantly threw them into poverty. Now the little girl’s world was dark. What would her family do without her father? How would they get the money to eat and survive? Her mother told her not to worry, but it was hard not to. After that, it wasn’t long until the war effort fell apart. The girl tried to make money and help her family however she could, but it was almost impossible. The economy was in shambles. And then Hiroshima and Nagasaki happened. After that, it was all over. The little girl survived through the complete societal collapse. They were told that the enemy would arrive and kill them all, but that day never came. She didn’t know how anyone continued to function under that pressure, but her mother did. Despite the madness during and after the war, her mother continued to work as a midwife. She worked every single day. Each day, the young girl would wonder if her mother would return, or simply disappear like so many of the other adults. But she always returned. Her mother would continue to mourn the loss of her father and would never remarry. The girl watched her mother’s iron will and became determined. She respected her mother deeply, and like so many children, wanted to please her. When she graduated high school, her mother was beaming with pride. She had single-handedly raised a child in the middle of one of the most catastrophic wars in history. She had managed to keep her alive… and help her graduate from high school. When a man asked her to marry him, her mother again beamed with pride. The young woman accepted. There were no other options she could see, and worried a “no” might crush her mother. Her marriage worked for awhile, but soon she grew restless. She didn’t love her husband. 
To be blunt, she didn’t even like him. The only thing she was allowed to do was a narrow range of housewife tasks… nothing else. She longed for a challenge, but it was forbidden. Divorce looked like the only way out. She knew that her mother would never allow it. And neither would the rest of society. In those days in Japan, women didn’t initiate divorces. But a man could get a divorce anytime he wanted with a simple three line letter. The price for being a divorced woman in Japan was steep. It would be almost impossible to get a job, her family would be shamed, and she wouldn’t be welcome at most social engagements. Besides, could she really put her mother through anything else? Despite her fear, she trusted her gut, and pursued the divorce. Her husband was shocked, but accepted. Just like she suspected, her mother was shocked too. After the divorce, the feeling of freedom was real. It was intoxicating and the young woman revealed in it. But soon she found that she wasn’t doing anything except lounging around at home. Her mother had sacrificed to give her a chance at a better life… and now she had freedom! But she was wasting the opportunity. Finally, she summoned the courage to go out into the world on her own. She was broke, needed a job, and wanted to see if she could make any of her big dreams happen. But something was still holding her back. All throughout her years growing up, she was taught that women in Japan worked the “boring jobs.” The boring jobs tended to be the soul-crushing, repetitive work that no one wanted to do. The young girl wanted to do something exciting. She wanted to be a part of something bigger than herself, a job that let her explore her talents. But when she went out to look, there were no opportunities in Japan. She’d been trained to hate the enemy, but now news was trickling into the country and there were whispers. Not all of their enemies were bad. Some of them were decent people. And besides, there were rumors that the Americans were pouring money into rebuilding Europe. The call to adventure beckoned, and the young woman planned a trip to Europe. Her mother begged her to reconsider, but the girl had to go out and see the world for herself. After working the boring jobs long enough, she had scrounged enough money for her Europe trip. Soon the day came and she left. The stories she heard about these people weren’t true. Yes, they were strange, but Europe was fascinating, and so was England. During her travels, she came into contact with hundreds of new ideas. As she looked around, she realized she was swimming in opportunity. The standard business practices here were nothing like those in Japan. In Europe, the jobs they considered “boring” were fascinating. And the young girl found that there was a business called a temp agency that would allow her to go from job to job. She couldn’t believe it. She was going to get paid to work and learn at jobs at the cutting edge of all kinds of different industries. What she really couldn’t believe was that all the European’s hated these jobs. In Japan, people were expected to have the same job for life. Why didn’t these Europeans realize how lucky they were? It wasn’t long before she was the star of the temp staffing agency, and all kinds of temporary job offers came her way. She accepted all the ones that sounded interesting, and got to try out a variety of different industries. Once she had some money saved up, she moved from England to Australia. 
Once again, she experienced a radically different work environment than Japan. After awhile, something was pulling her back to Japan. She knew exactly what kind of business she was going to start. Inspired by her time abroad, she was confident that her ideas would resonate with other Japanese people. Back in Tokyo, she rented a 258-square-foot apartment and setup a part-time work agency. Technically, it was an illegal business. In Japan, it was expected that you worked at a company… for life. The idea of temporary employment terrified the government. But she didn’t care. She had seen the future abroad, and knew it was only a matter of time until Japan modernized. But the cultural change was slow, and so was her business. Other Japanese women weren’t enthusiastic about the concept of being a temporary employee. Disappointed but still hopeful, she began teaching nighttime English classes to pay the bills and keep her dream going. After five long years in that tiny apartment, she was finally able to move her business into its first office space. Before she moved to Europe, Japanese women had been trapped in an ill-fated cycle. Most of them quit their jobs after marrying because they weren’t comfortable continuing their careers past a certain age. This young woman’s company clearly addressed this issue. She provided Japanese women with the opportunity to become temps, rather than fighting for the limited number of specialized career paths that they had to stay on for their whole life. In those early days, she only hired female workers. It was the 1980s, and she noticed that the company’s sales were slowing down. Many of her employees were uncomfortable going out and seeking new business leads. They were worried they’d be fined or arrested for spreading the idea of temporary work. Temporary work was still against the law despite her lobbying efforts for change. The woman was frustrated. Stagnation was unhealthy — she’d learned that after her divorce. But some of the women at her firm simply refused to budge. How could they carefully grow the business? She did not want to get herself or her employees arrested after all. Determined to continue making progress, the woman decided to begin hiring men. Soon she had a company culture where the women and men were in perfect balance. Despite her success, lifetime employment continued to be the norm in Japan. The government continued to advertise that under the law, temping by private companies was illegal. On one particular day, the knock came to her door. When she went to the peephole, it was the police. She glanced around at the inside of her tiny apartment. Every available surface was covered with the pictures and files of her customers. It was all illegal. She cracked the door and asked if anything was wrong. They calmly informed her that she would have to come with them to answer some questions. She sighed, then slid out the door, locked it, and followed the police to the car. She knew this day would come, and as the police walked her to the car, she laughed to herself. When she went into the police station to plead her case, she somehow managed to talk her way out of it. After that, she was frequently summoned by the police, questioned, and then let go. Each time she got released, she grew bolder. She had seen the future. Her entire country might believe that what she was doing was wrong, but she knew she was right. And she knew that one day, there would be a tidal wave of those who woke up and agreed with her. 
Sometimes she would lay awake in bed and wonder when she’d be thrown in jail for good, but fortunately that day never came. Eventually, after years of work, lobbying, and arguing with the government, she won. The law was changed. Temporary employment became legal in Japan. Little did the woman know, but she had positioned herself perfectly for a macro-economic tidal wave of opportunity. It was the 1990’s, when Japan entered what is known as the “Lost Decade.” Businesses went bust, and every single business needed one thing. Temp workers. The woman’s business, Temp Holdings was now large enough to give them exactly what they needed. It wasn’t long before Temp Holdings went public in 2008, and soon expanded around the globe. The little girl who craved freedom, had sought it out in the world. She found it, saw the future, and brought it back and shared it with her culture. The woman who paved the path was none other than Yoshiko Shinohara. Yoshiko became Japan’s first self-made woman billionaire. She says that there is one personal trait above all others that helped her become the first female billionaire in Japan. In our modern day, when everyone wants a complicated formula, Yoshiko’s four words of how she did it are a reminder that it doesn’t have to be complicated. Yoshiko says: “I hate to lose.” She trusted in her desire for freedom, and it led her on a path directly to it. Not only did Yoshiko Shinohara blaze a trail for others to follow, but her business has helped millions of women explore what it’s like to be more free and independent. She saw the future, and realized that eventually it would arrive. She didn’t wait until she had a glamorous office, or until the government gave her the thumbs up. It’s easy to complain that things aren’t fair. It’s hard to start trying to fix them from your 258 square foot apartment, and struggle alone for five years. It’s even harder to have to risk jail time to do it! As Emerson famously said: “If you are right, you are a majority of one.” Yoshiko’s story is a reminder that if you know you’re right, place a bet on your idea and yourself. You might be a majority of one. That’s her story. What’s yours going to be?
https://medium.com/the-mission/the-four-words-that-made-her-a-billionaire-351b539fd5fa
[]
2018-04-17 18:47:12.752000+00:00
['Storytelling', 'Business', 'History', 'Podcast', 'Entrepreneurship']
Configuring Web Server in Docker Inside a Cloud Instance
Configuring Web Server in Docker Inside a Cloud Instance How to configure a web server in a Docker container, which is launched in a cloud instance. Hello Geeks, I hope you are here to learn about web servers and Docker, so let's get started with the blog… In this blog, we will explain how we can launch container technology inside a cloud instance and configure a web server inside that same container. The tasks we are going to complete: Launch and start a Docker container in an EC2 instance; Configure an HTTPD server on the Docker container; Set up a Python interpreter and run Python code on the Docker container. Here is some basic information about the technologies we will be using in the blog. Container Technology: Container technology is a method of packaging an application so it can be run with isolated dependencies. Containers have fundamentally altered the development of software today due to their compartmentalization of a computer system. Docker: Docker is a platform-as-a-service product that uses OS-level virtualization to deliver software in packages called containers. Containers are isolated and bundle their own software, libraries, and configuration files; they can communicate through well-defined channels. Apache HTTP Web Server: The Apache HTTP Server, colloquially called Apache, is free and open-source cross-platform web server software, released under the Apache License 2.0. Apache is developed and maintained by an open community of developers under the Apache Software Foundation's auspices. Launch and start a Docker container in an EC2 instance First, we launched our EC2 instance and connected to it using PuTTY. Now we are inside the system. As the system is new, we don't have any software or programs installed. We have to install Docker inside our OS first. To install Docker, we need to use yum, and since we don't have a Docker repository configured, we have to configure yum first. We went to the path /etc/yum.repos.d/ and used the dnf command to download the Docker software onto our system.
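As a companion to the CLI steps described above, here is a minimal sketch using the Docker SDK for Python (the docker package) to launch an Apache httpd container. This is an illustrative alternative to the shell workflow in the article, and the image tag and port mapping are assumptions, not values from the original post.

# Minimal sketch using the Docker SDK for Python ("pip install docker").
# It launches the official httpd image and maps container port 80 to host
# port 8080 -- an illustrative stand-in for the CLI workflow in the article.
import docker

client = docker.from_env()  # talks to the local Docker daemon

container = client.containers.run(
    "httpd:2.4",              # official Apache HTTPD image (assumed tag)
    detach=True,              # run in the background, like `docker run -d`
    ports={"80/tcp": 8080},   # expose the web server on the host
    name="demo-httpd",
)

print(container.status)          # e.g. "created" or "running"
print(client.containers.list())  # verify the container is up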
https://medium.com/swlh/configuring-web-server-in-docker-inside-cloud-d46fbf60ccf5
[]
2020-11-27 15:57:12.390000+00:00
['Docker', 'Centos', 'Python', 'Apache Httpd', 'AWS']
Rough Seas
Friend ships can be sailed
Through rough seas and quiet storms
Scarred — never broken

First of all, you and Siri weren't stupid. You were a couple of kids doing what kids do — learning. I'm so glad you both escaped further harm, and that you built such a bond with one another.
https://medium.com/survivors/first-of-all-you-and-siri-werent-stupid-db927cef145c
['Toni Tails']
2020-09-10 07:47:16.835000+00:00
['Poetry', 'Relationships', 'Mental Health', 'Creativity', 'Life']
Web scraping with Python
https://medium.com/dados/web-scraping-com-python-45531a6138c9
['Wesley Watanabe']
2019-07-10 12:12:04.304000+00:00
['Data Science', 'Web Scraping', 'Python', 'Analysis', 'Data']
Kannada-MNIST:A new handwritten digits dataset in ML town
Class-wise mean images of the 10 handwritten digits in the Kannada MNIST dataset TLDR: I am disseminating 2 datasets: Kannada-MNIST dataset: 28 × 28 grayscale images: 60k train | 10k test Dig-MNIST: 28 × 28 grayscale images: 10240 (1024 x 10) {See pic below} Putting the ‘Dig’ in Dig-MNIST The Kannada-MNIST dataset is meant to be a drop-in replacement for the MNIST dataset 🙏, albeit for the numeral symbols in the Kannada language. Also, I am disseminating an additional dataset of 10k handwritten digits in the same language (written predominantly by non-native users of the language) called Dig-MNIST that can be used as an additional test set. Resource-list: GitHub 👉: https://github.com/vinayprabhu/Kannada_MNIST Kaggle 👉: https://www.kaggle.com/higgstachyon/kannada-mnist ArXiv 👉: https://arxiv.org/pdf/1908.01242.pdf If you use Kannada-MNIST in a peer-reviewed paper, we would appreciate referencing it as: Prabhu, Vinay Uday. “Kannada-MNIST: A new handwritten digits dataset for the Kannada language.” arXiv preprint arXiv:1908.01242 (2019). Bibtex entry: @article{prabhu2019kannada, title={Kannada-MNIST: A new handwritten digits dataset for the Kannada language}, author={Prabhu, Vinay Uday}, journal={arXiv preprint arXiv:1908.01242}, year={2019} } Introduction: Kannada is the official and administrative language of the state of Karnataka in India, with nearly 60 million speakers worldwide. Also, as per articles 344(1) and 351 of the Indian Constitution, Kannada holds the status of being one of the 22 scheduled languages of India. The language is written using the official Kannada script, which is an abugida of the Brahmic family and traces its origins to the Kadamba script (325–550 AD). Kannada stone inscriptions: Source: https://karnatakaitihasaacademy.org/karnataka-epigraphy/inscriptions/ Distinct glyphs are used to represent the numerals 0–9 in the language, and they appear distinct from the modern Hindu-Arabic numerals in vogue in much of the world today. Unlike some other archaic numeral systems, these numerals are very much used in day-to-day affairs in Karnataka, as evinced by the prevalence of these glyphs on the license plates of vehicles, captured in the pic below: A vehicle license plate with Kannada numeral glyphs MNIST-ized renderings of the variations of the glyphs across the modern Kannada fonts This figure captures the MNIST-ized renderings of the variations of the glyphs across the following modern fonts: Kedage, Malige-i, Malige-n, Malige-b, Kedage-n, Malige-t, Kedage-t, Kedage-i, Lohit-Kannada, Sampige and Hubballi-Regular. Dataset curation: Kannada-MNIST: 65 volunteers were recruited in Bangalore, India, who were native speakers of the language as well as day-to-day users of the numeral script. Each volunteer filled out an A3 sheet containing a 32 × 40 grid. This yielded filled-out A3 sheets containing 128 instances of each number, which we posit is large enough to capture most of the natural intra-volunteer variations of the glyph shapes. All of the sheets thus collected were scanned at 600 dots-per-inch resolution using a Konica Accurio-Press-C6085 scanner, which yielded 65 png images of 4963 × 3509 pixels each. Volunteers helping curate the Kannada-MNIST dataset Dig-MNIST: 8 volunteers aged 20 to 40 were recruited to generate a 32 × 40 grid of Kannada numerals (akin to 2.1), all written with a black-ink Z-Grip Series | Zebra Pen on a commercial Mead Cambridge Quad Writing Pad, 8–1/2" x 11", Quad Ruled, White, 80 Sheets/Pad book.
We then scanned the sheet(s) using a Dell S3845cdn scanner with the following settings: • Output color: Grayscale • Original type: Text • Lighten/Darken: Darken+3 • Size: Auto-detect The reduced size of the sheets used for writing the digits (US-letter vis-a-vis A3) resulted in smaller scan (.tif) images that were all approximately 1600 × 2000. Comparisons with MNIST were made along four dimensions: (1) mean pixel-intensity distributions, (2) morphological properties, (3) PCA analysis, and (4) UMAP visualizations. Some classification benchmarking: I used a standard MNIST-CNN architecture to get some basic accuracy benchmarks (see the figure below): the CNN architecture used for the benchmarks, (a) train on Kannada-MNIST train and test on Kannada-MNIST test, (b) train on Kannada-MNIST train and test on Dig-MNIST. Open challenges to the machine learning community We propose the following open challenges to the machine learning community at large. To characterize the nature of catastrophic forgetting when a CNN pre-trained on MNIST is retrained with Kannada-MNIST. This is particularly interesting given the observation that the typographical glyphs for 3 and 7 in Kannada-MNIST hold an uncanny resemblance to the glyph for 2 in MNIST. To train a model on purely synthetic data generated using the fonts (as in [1]) plus augmentation, and achieve high accuracy on the Kannada-MNIST and Dig-MNIST datasets. To replicate the procedure described in the paper across different languages/scripts, especially the Indic scripts. With regards to the Dig-MNIST dataset, we saw that some of the volunteers had transgressed the borders of the grid, and hence some of the images either contain only a partial slice of the glyph/stroke or have an appearance where it can be argued that they could potentially belong to either of two different classes. With regards to these images, it would be worthwhile to see if we can design a classifier that will allocate proportionate softmax masses to the candidate classes. The main reason behind us sharing the raw scan images was to foster research into auto-segmentation algorithms that will parse the individual digit images from the grid, which might in turn lead to higher-quality images in upgraded versions of the dataset. To achieve MNIST-level accuracy by training on the Kannada-MNIST dataset and testing on the Dig-MNIST dataset without resorting to image pre-processing. [1] Prabhu, Vinay Uday, Sanghyun Han, Dian Ang Yap, Mihail Douhaniaris, Preethi Seshadri, and John Whaley. “Fonts-2-Handwriting: A Seed-Augment-Train framework for universal digit classification.” arXiv preprint arXiv:1905.08633 (2019). [ https://arxiv.org/abs/1905.08633 ]
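The paper's exact CNN configuration is shown only in the figure referenced above, so here is a minimal, assumption-laden Keras sketch of a "standard MNIST-style" CNN for 28 × 28 grayscale digits. Layer sizes and the optimizer are illustrative placeholders, not the architecture from the paper.

# A minimal MNIST-style CNN in Keras for 28x28 grayscale digits.
# Layer widths, dropout rates and optimizer are illustrative assumptions,
# not the exact architecture benchmarked in the Kannada-MNIST paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mnist_style_cnn(num_classes: int = 10) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: train on the Kannada-MNIST training split and evaluate on its test
# split, or on Dig-MNIST as an additional out-of-distribution test set.
# model = build_mnist_style_cnn()
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
# model.evaluate(x_test, y_test)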
https://towardsdatascience.com/a-new-handwritten-digits-dataset-in-ml-town-kannada-mnist-69df0f2d1456
['Vinay Prabhu']
2019-08-12 07:02:31.581000+00:00
['Computer Science', 'Data Science', 'Artificial Intelligence', 'Machine Learning', 'Computer Vision']
Effects of Internalized Homophobia
Internalized homophobia is something most LGBTQ+ people have found to be a battle at different points in their lives. It mostly stems from the homophobic, heterosexist, discriminatory culture that we have been taught by society as we grew up. Most of us were taught negative ideas about being homosexual. It was considered to be something wrong, bad and immoral, and in my community, it is considered a taboo to even think about it. It is mentioned in whispers and immediately termed as demonic. Constantly hearing such demeaning and negative depictions of LGBTQ+ individuals can lead to us internalizing these ideas and words. We carry them inside our hearts and minds and start viewing ourselves through the lenses of a homophobic society. Sometimes we don't even realize that we are doing this to ourselves. Internalized homophobia is a serious issue because it can affect the LGBTQ+ individual's mental and physical health. Like every other issue affecting the world, internalized homophobia should not be left behind. People are still holding on to the ideologies of a homophobic society, forgetting that it is up to us to give them a real and positive view of the LGBTQ+ community. Some people within the LGBTQ+ community have ended up feeling contempt for themselves and other LGBTQ+ individuals. Others suffer from low self-esteem, negative body image and feelings of unworthiness. There is also the shame they feel for being queer and not living by societal expectations. As this gets worse, they end up alienating themselves from other LGBTQ+ people because they don't want to be associated with them, lest someone question their sexuality. Others choose to numb the pain by getting lost in substance abuse, practicing unsafe sex, getting into trouble with the law or being emotionally unavailable. In severe cases, some have denied their sexual orientation and lead a life that makes them miserable because they fear they will be cast out if they admit the truth. They attempt to pass as heterosexual in hopes that they will gain social approval or ‘be cured.’ They will constantly monitor their behaviours and mannerisms for fear of being discovered and try as much as possible to align their beliefs and ideas with what society states is ‘right.’ In the worst-case scenario, some end up committing suicide. This just goes to show the tremendous impact internalized homophobia has on our mental health and the influence it has on our thoughts, feelings and mannerisms. A recent study revealed our daily lives have also been affected by internalized homophobia. It is linked to several negative outcomes in romantic relationships and non-romantic relationships in LGB individuals. In most cases, they end up struggling with long-term and committed relationships. They can be afraid of being in stable relationships and will subconsciously self-sabotage. In most scenarios, it also affects relationship quality, and many of them find themselves in abusive and unfaithful relationships. It also comes with the burden of numbing discrimination and inequality. Because most of them cannot manage to deal with these negative global attitudes, they end up conforming to the dominant heterosexual culture while suppressing their own individual expressions. On a positive note, recent findings have been useful for counselors interested in interventions and treatment approaches to help individuals cope with internalized homophobia and relationship problems. It is important to change our personal views of ourselves.
Therapy is an important tool in combating internalized or outward homophobia. It can help break down negative internal thoughts. Openly talking about this will help us understand that there is nothing wrong with us; we need to be comfortable with who we are and embrace and celebrate our sexuality. It will make us understand that internalized homophobia is something we can work through, and that we should not be afraid of opening up and admitting we need support. There is a lot of peace in self-acceptance, and we all deserve to be ourselves. Personally, I wish someone had told me this earlier on in my life; things would have certainly been easier. Lastly, the society around us should step up with a more mindful approach. They say that it takes a whole village to raise a child. The world is a global village, and everyone has a responsibility to create an inclusive environment for all forms of sexuality and provide a warm environment for everyone.
https://medium.com/matthews-place/effects-of-internalized-homophobia-11606f39204a
[]
2020-07-02 20:34:05.684000+00:00
['Internalized Homophobia', 'Mental Health', 'Homophobia', 'LGBTQ', 'Health']
Estimating AI Project Costs & Timescales: 4 Rules of Thumb
But first, a reminder of some of the basics of IT project estimating. Estimating AI project costs is an evolution and adaptation from that. The Fundamentals of Reliable IT Project Estimating IT work has an often-deserved reputation for massive over-runs and overspends, especially large, multi-year projects. However, over the last couple of decades, the reality has changed. IT projects still run late, but huge, eye-catching failures are far less common. There have been several reasons for this change. One is the prevalence of more flexible development processes and shorter project cycles. Another is the adoption of project quality metrics and continuous improvement. Carnegie Mellon's CMM work was instrumental in the second and is still invaluable reading for IT newcomers. There is still heated debate about IT processes and project management methods. However, a request for IT project quotes generates broadly comparable estimates from most vendors, given the same information. This will include similar breakdowns of how the overall work will split into phases or activities, e.g. requirements. This is relevant for AI projects because we're currently where general IT estimating was many years ago: inconsistent and often unreliable. A big cause is that AI development processes vary, as do opinions on what quality means for AI deliverables. So discussions about estimates are typically driven by personal experience or project constraints rather than something more objective. Creating consistency in how you build AI will result in a more objective basis for estimating future AI projects. This will also allow you to improve and optimise your AI work, and hold your own in vendor discussions. Estimating AI Projects: Rule of Thumb 1 Don't Plan Big Projects Without Credible Benchmarks, Ideally Your Own No matter what you're promised or the figures indicate, you should embark on big AI projects judiciously. If you're experienced in AI work, you probably have some benchmarks on what's involved. These are the best way of validating large predicted AI project costs. Even so, it's worth asking for other objective data to support estimates, ideally ones you can validate. Client references are particularly helpful. If you're experiencing your first taste of AI, the risks of starting with a large project are greater. I'd invariably suggest starting with a short, relatively low-cost exercise. With that, your teams and partners will have experience of AI in your organisation. This will be a real experience of developing AI solutions that use your data and connect to your IT systems. This is a more realistic basis for future estimates of AI project costs and timescales than generic or industry figures. After a second or third small project, you'll have a more reliable starting point for estimating more ambitious AI work. When starting out with AI, useful small projects can be as short as 2–3 months, rather than quarters or years. Realistic results are possible with budgets in the tens or low hundreds of thousands of pounds/dollars/euros; costs in the high hundreds of thousands or millions create much greater pressure on results. And a core team of single-digit size can be more effective and productive than one of dozens. If people advising you say otherwise, then you may want to look again at the choice of scope or ambition. And of course, there may be organisational or commercial factors influencing views.
Estimating AI Projects: Rule of Thumb 2 Accept Some Phases are "Experimental" in Nature, But Keep Them On a Leash A sometimes misunderstood feature of AI work is that it's normal for days' or weeks' work to be "wasted". This isn't necessarily a sign of team inexperience or lack of competence — although of course productivity improves with practice. It's a reflection of underlying AI techniques, which require experimentation and a degree of trial and error. A challenge in estimating AI project costs & timescales is that they can become — perhaps inadvertently — "blank cheques", especially for AI vendors. The trick is finding the right balance between enough latitude for effective results, and work that's allowed to "drift". Most AI courses, including non-technical ones, refer to the concept of EDA (or some equivalent). This is Exploratory Data Analysis, a crucial early part of most AI projects. Tips for Handling Estimates for "Experimental" Work One common approach is to separate estimates for AI project costs and timescales into two parts. A firm estimate is possible for EDA, with a more tentative one for the rest of the project. The second estimate is only confirmed at the end of EDA. By then, there's a better understanding of the data, business problem and wider IT environment. Another option is to use IT development techniques like timeboxing to limit the duration of the experimental work. This, in turn, constrains the rest of the project to what was discovered in the timebox. This touches on a discussion of development methods like Agile. It's not appropriate to get into the details of that here, other than to observe that Agile isn't an excuse for poor estimating. How you approach the "experimental" aspects of an AI project is less important than recognising their characteristics. For good AI project estimating, the trick is to limit the duration of experimental phases without excessively compromising their usefulness. Ideally, by using such phases well, your team discovers unavoidable errors or dead-ends after days or weeks, not months. Estimating AI Projects: Rule of Thumb 3 Decide How You'll Balance the Tension Between "Good" and "Good Enough" A key role of data scientists is exploring different ways of solving business problems, evaluating alternatives to identify the best. The role of business members of an AI team, by contrast, is to maintain focus on the business results of AI work. From that perspective, AI is a means to an end, to be reached as quickly and cost-effectively as possible. This can lead to two viewpoints on what makes a task "complete", or in need of more effort. For example, if an algorithm hits 95% "accuracy", data scientists may consider this poor, but business users may believe otherwise. The right answer, of course, depends on what the algorithm is being used for. This includes understanding the value of getting an answer 95% "right", and the cost/risk of 5% inaccuracy. This is where terms like Precision, Recall and F1 score become relevant and lead to discussions such as the relative importance of false positives vs false negatives. What Does This Balance Look Like in Practice? To understand this a little more, consider an example from medical diagnosis, say cancer detection. The meaning of "5% inaccuracy" is ambiguous. One possibility is that 5% of the patients screened are told they don't have cancer, but actually do. Conversely, it may mean 5% of patients flagged as having cancer are actually all clear.
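To make the false-positive/false-negative distinction concrete, here is a minimal Python sketch (an illustration with made-up numbers, not part of the original article) that computes precision, recall and F1 for a hypothetical cancer-screening confusion matrix:

```python
# Hypothetical screening results for 1,000 patients (illustrative numbers only).
true_positives = 90    # patients with cancer that the model correctly flagged
false_negatives = 10   # patients with cancer that the model missed (one kind of "5% inaccuracy")
false_positives = 50   # healthy patients incorrectly flagged (the other kind)

precision = true_positives / (true_positives + false_positives)  # how many flags are real cases
recall = true_positives / (true_positives + false_negatives)     # how many real cases are caught
f1 = 2 * precision * recall / (precision + recall)

print(f"Precision: {precision:.2f}, Recall: {recall:.2f}, F1: {f1:.2f}")
```

The same headline "accuracy" can hide very different precision/recall trade-offs, which is why the business value of each error type matters.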
The decision to pursue a more accurate algorithm or stop at 95% accuracy requires an understanding of such differences. Ideally, there's also an awareness of the associated business value. For more accuracy, further work tuning the model may be the answer; if 95% is approaching the limit of how well the algorithm can perform, other algorithms may be required; a third option might be revisiting options to source or prepare better data; or the right answer may be to accept 95% accuracy, even if better is possible. This is just one example. There can be several such potential conflicts on the "right" answer during AI work. Each will impact AI project estimates. A role of AI project leadership is to balance these two mindsets. It's tricky enough during the project. Even with the facts in hand, there may be compelling arguments for both viewpoints. It's difficult — arguably impossible — to accurately allow for such possibilities before they arise. So when estimating AI project costs and durations at the outset, allowance needs to be made for this. This includes agreeing within the team how to handle such decisions. Estimating AI Projects: Rule of Thumb 4 Have Realistic Expectations & Contingencies for Data "Gruntwork" The fourth rule of thumb arises because AI results usually require substantial data preparation work. This work is often under-emphasised, and sometimes overlooked, in project estimates. Getting enough appropriate data to "feed" an AI solution can be a black hole in AI work, especially for the unwary. Data preparation can suck up inordinate amounts of effort, and sometimes may not even be part of vendor quotes. One challenge is knowing how much data gruntwork is really needed for your circumstances. The other is making sure it's been included appropriately in estimates. Look especially carefully at vendor estimates for issues here. Data preparation can sometimes be missing from quotes completely, buried in proposal assumptions. Another approach is to allow a token few days, knowing full well it will expand immediately. Estimating Data Preparation Work To cover such possibilities, your team needs to understand what relevant data is available. This includes how complete, accurate and reliable it is, and what may and may not be done with it (e.g. legal, regulatory, ethical constraints). Without this understanding, it's unlikely anyone can reliably predict the effort and costs of preparing data. This understanding covers what data there really is, what state it's in (completeness, accuracy, etc.), and the work needed to prepare and clean it. It's another tricky part of estimating AI project costs and timescales. If your organisation has done AI or analytics work in the area you're considering, this information may be available. It may not be in formal documents, but hopefully, there will be people with relevant knowledge to help size the data preparation work. If you don't have the knowledge available to understand data preparation, you'll need a scoping exercise, called Exploratory Data Analysis (EDA). If vendors do this, you should expect clarity on data preparation activities and estimates as a deliverable. Estimating AI Project Costs & Timescales: Wrap-up AI Processes & Benchmarks are Key to Long Term Estimating Accuracy Estimating AI project costs and timescales is harder than for regular IT work. In part, this is due to the lack of AI industry processes and benchmarks. The result is more reliance than ideal on the experience, judgement and opinions of your AI experts and vendors.
If AI is something you’ll do often, consider creating your own approach to AI work over your first few projects. You’ll be capturing AI project cost and effort data for your own organisation. This will be more reliable for you than industry figures, and provides benchmarks for continuous improvement. It isn’t necessarily a difficult exercise and doesn’t add much overhead. In fact, it should pay for itself quickly through savings in future work. Estimating Smaller Projects is Easier, Especially Early On In the meantime, starting small is usually the way to go when getting started with AI projects. Regardless of project size, there are some obvious areas where estimates may be excessive or over-optimistic. This is especially the case if you’re relying on vendors to deliver your work. Consider Estimating Each Phase As You Go Effective use of EDA can be very helpful, along with good project management practices — especially iterative approaches like Agile. You’ll also need to balance the instincts and priorities of different team roles, especially during “experimental” phases of work. This article was first published on www.aiprescience.com .
https://towardsdatascience.com/estimating-ai-project-costs-timescales-4-rules-of-thumb-707ccf49a768
['Was Rahman']
2020-04-30 20:55:23.945000+00:00
['Machine Learning', 'Project Management', 'Artificial Intelligence', 'AI', 'Data Science']
Q & A with Marian Dörk on the UN’s Habitat III conference and the role of the data visualization for sustainable urban futures
Photo: Matthew Tobiasz Marian Dörk is a Research Professor for information visualization at the Urban Futures Institute for Applied Research and the Urban Complexity Lab at University of Applied Sciences Potsdam, Germany. This October, the UN will hold its biggest ever summit on the future of cities. Why have cities become such a hot topic at this point in history? Marian: By 2050, more than 70% of the world's population will be living in cities. Cities also produce most of the world's GDP and greenhouse gas emissions, yet they are the key to a more sustainable future. The future of humanity lies in cities. Ecological and social crises affect both urban and rural areas, but cities are a laboratory where we can better understand the complex causes of today's grand challenges and find holistic approaches to addressing them. Why are you going to Quito, and what do you hope to achieve there? Marian: We're going to Quito to demonstrate that data visualization can be an important partner for designing the future of cities. For that, we've teamed up with Future Earth and the International Council for Science to build Habitat X Change. This will be an event and exhibition space, where people from diverse backgrounds with a common interest in science, visualization, and sustainability of cities can take part in an exciting program of talks, workshops, and panel discussions at the intersection of these topics. Furthermore, we are sharing the results from an open call for city visualizations that teams of scientists, developers, and designers have submitted in the run-up to Habitat III. We will also exhibit a working prototype of a visualization framework (with the working title "vis tent") that blends physical city models with digital data visualizations of three cities. So Habitat X Change is a collaboration between science and visualization — could you explain the basis for the collaboration? Is it a common partnership? Marian: All partners in Habitat X Change share the recognition that complex challenges such as climate change are difficult to communicate. In order to inform decision-making at various levels, especially in cities, we need more research and design to develop new ways to bridge the gap between knowledge and practice. Data visualization (a scientific field itself) has in recent years become popular in the media, in particular to communicate scientific findings or when stories are complex and daunting to convey in text alone. The next challenge is for visualization to step up its role as a natural ally in communicating science to decision makers in business and civil society. The "Vis Tent" — could you talk a little more about this? What is it, how can city stakeholders engage with it, and what can they learn from it? Marian: The "Vis Tent" — we're still looking for a better name for it — is a hybrid visualization framework for cities that we're building at FH Potsdam with some support from the mapmakers at HERE. The visualization brings together the traditional city model in physical form with projections of different urban data patterns. The physical model differentiates between water, land, streets, and buildings, while the projection represents various dynamic aspects of the city such as air quality, traffic, and population density. This allows one, for example, to easily see where and at which times of day and week traffic is particularly busy. By incorporating multiple data dimensions one can also analyze how certain dimensions may correlate — such as traffic and air quality.
For Habitat III we are preparing visualizations of three cities: Bogotá, Cape Town, and Singapore. We have just recently launched cf. city flows, a comparative visualization on bike sharing, which has demonstrated the potential of juxtaposing and contrasting multiple cities. Visitors will be able to see these three cities next to each other and examine urban data patterns. How about the open call for visualizations? Is there a way to share the winning entries and their unique insights with policymakers and city stakeholders attending Habitat III? Marian: Yes, we will have a dedicated event at Habitat X Change, during which we present the winning entries and give an overview of the latest trends in visualizing cities. Throughout Habitat III we will also exhibit a broad range of city visualizations submitted by visualization groups from around the world. These have been reviewed by an international programme committee of researchers in data visualization, urban sciences and communication. We are planning Q&As with the people behind the winning entries either live during the presentation or afterwards on our blog. What happens after Quito? Marian: Quito is just one step on the road towards a much more integrated approach to visualization and science. Our core activities at FH Potsdam involve research and teaching on urban futures, so we will continue our work on visualization and other related topics in cooperation with city administrators, but also with partners from industry and civil society. The results of the visualization call will be shared on a web platform that is planned to be a continuously evolving resource for those interested in the science and visualization of cities. We are planning international workshops and summer schools on visualizing cities with scientists and stakeholders in the months and years to come.
https://medium.com/sustainable-urban-futures/q-a-with-marian-d%C3%B6rk-on-the-uns-habitat-iii-conference-and-the-role-of-the-data-visualization-557bcd9ec437
['Habitat X Change']
2016-09-12 16:11:00.755000+00:00
['Cities', 'Design', 'Sustainability', 'United Nations', 'Data Visualization']
The Mysterious Person Who Earns $49,090/Month on Medium
The Mysterious Person Who Earns $49,090/Month on Medium My detective "hypothesis" has never failed me Photo by timJ on Unsplash It's funny how you clicked on this story because of the title. That's proof that stories with dollar signs in titles and clickbait are actually manipulative. But this isn't clickbait. My 237 IQ brain is overheating as it undergoes an analysis of Medium's recent changes. This isn't a story with complaints like the 63 articles I've read all through October 2020. This is a story of curiosity and concern, questions raising suspicions that something is going on with Medium. This is working out well for me because after figuring this out, I knew I could be the next one to make the $49,090 this month or next month. Oh, I'm sorry — I didn't get the dollar figures exactly right because it's too stressful to check. My point is, we all know there's some mysterious being who makes $4$$$$ in a month, and $6000 and something on a single story. I thought someone who made that amount had a story trending. I mean, one of the 6 trending stories on the homepage, or popular on Medium across the topics. No. Their story isn't either of those. We also know this person, but at the same time, we don't know this person. We've been seeing this person's name, say once a day or once in 5 days depending on your niche. But we've just never noticed this person; they're on lowkey mode. I have a suspect and it's not who is on your mind. It took me one glance at the profile page to suspect. They're not an editor for any mighty publication with 4 million followers, they own a publication, and they have a specific niche. It's not a self-improvement niche that everyone's squeezing into. It's a niche no one pays attention to. They don't write listicles. Never! They don't write self-help stuff, probably because they seem clueless about self-help, awareness, life lessons, and the like, just like some of the rest of us writers, myself included. They're not a jerk writing stories about "3 things to do to change your life forever." Nope! Never! They just don't. They don't write one-sentence paragraphs that everyone wants to adopt because most top writers do it. Their paragraphs are moderately long with little or no formatting. I mean, they don't use block quotes and pull quotes in 135 places in a single story. They've never published in a Medium-owned publication, just theirs. They've also not highlighted, clapped, responded, or sent a private note to correct a writer on his/her typos. Their titles are the most boring ever; they're not clickbaity in any way, they're unique, just like newspaper headlines — exactly what Medium wants. Oh! I almost forgot. They don't write the hail and hearty "how to make money on Medium" or "how much I earned in 3 months" or "how to make $78nslwo96 blogging on Medium" or "the mysterious person who earns $49,090/month on Medium" or "54 ways to earn $1 on Medium" or "How to get curated in 87 topics." In other blatantly truthful words, they do not lie to their readers, just like Nicole Akers said. Why? Because honest writers don't lie to their readers. Hold on, there's more. They don't link-bomb their stories; Ryan Fan has an insightful story on why link-bombing won't save you. You know, the embed thing that connects your other stories at the end. They barely… barely ever do that. Also, they don't do the "get my newsletter" thingy at the end of each story they write. Their newsletter info is sweetly tucked in their profile bio. It always glows.
Everyone sees it after reading their story. You know, after each story Medium has a wonderful feature or design, whichever — it contains the author's profile photo, the author's bio, and most of all, the follow button. The publication's underneath too. It even glows in the dark. Are you wondering who this is? Honestly, you know who this person is, you've just never noticed. It's funny how someone could be so unnoticed with so much quality content, and even worse, this person's stories have an average amount of claps, not 703k. My detective brain said it's because people sometimes forget to clap when they read a very interesting and captivating story. Psychology suggests people don't show appreciation for their most loved gifts, but rather appreciate their least loved ones the most. Just saying. They don't care about claps, dwell on them, or beg for them in Facebook groups. With these hypotheses, I've finally decided that this person isn't me. It certainly isn't you. Except it is — but that's only if you're innocent of these suspicions. If you read this, you're not, because the mysterious person has better things to do than read such a satirical/ode(ish) piece about themselves.
https://medium.com/wreader/i-know-who-makes-the-mysterious-49-090-on-medium-2c62c1a1d91c
['Winifred J. Akpobi']
2020-10-30 18:30:01.392000+00:00
['Sarcasm', 'Writing', 'Humor', 'Creativity', 'Satire']
Tutorial: Get alerted when feature flags change, via AWS Lambda and Webhooks
The Problem At Optimizely, we're always looking for ways to eat our own dogfood. Once we added feature flags to Optimizely Full Stack, we started adopting flags to remotely configure our own application. In the last few months, we've found this has been especially helpful for our end-to-end testing. Being able to toggle features on and off for each language we test makes running these tests much faster. Our SDK team is using a Full Stack project to configure our E2E tests. We do this by setting up our changes as feature flags in Optimizely, and creating an audience for each SDK we manage (e.g. when we developed the workflow for feature flags, we set this up as feature="feature_management", audience="node_sdk"). This allows us to turn off features for certain SDKs or all SDKs during testing. Below is a snapshot of the feature layout. We've seen good results from using feature flags to manage these tests. But recently, we hit a problem. With many different engineers working on E2E tests all at once, what happens when someone else turns off a feature that you may also be working on? In order to track this better, I wanted to get a Slack notification to our developer channel telling us what changed in our datafile (the configuration file capturing the state of all our experiments and feature flags). I created an AWS Lambda with API Gateway to accomplish this. The API Gateway endpoint is used to register a webhook with Optimizely. When the datafile is changed, the webhook is fired. The lambda function reads the datafile from the CDN and then from the DynamoDB instance if it exists (if not, it creates the DB table and stores the current version of the datafile before publishing a "no difference" message to your Slack channel). The function compares these two JSON datafiles and tries to send a human-readable diff. Below is an example diff using the feature above that someone toggled from on to off. There's room for readability improvements. But, from the above I can tell that the rollout for feature_management has changed and that the feature has been disabled, since featureEnabled was set to false and status went from Running to Paused. Getting started was easy. It was trivial to set up the Lambda and the webhooks. The hardest part was figuring out the permissions for the lambda. Since the datafile is not changed often and the lambda only publishes to a Slack channel, this implementation is a low-cost solution. You could use this type of setup to notify your developer channel of project file changes or even QA for staging and debugging before release. For example, the developers may have gated features and initially had them set to false. When it's time to flip on the features that need to be tested, your QA team could be notified in a channel and begin testing. This could actually be dogfooded company-wide, not just for QA. Using the webhook, you could also post notifications to your servers via some service such as AWS Simple Notification Service. This would work nicely within AWS. Your Elastic Beanstalk instance spins up and registers for an SNS topic. When a notification comes in, your server instance would just read the latest version of the file from DynamoDB or directly from the SNS message. But, that's a blog for another time. Implementation This document assumes you already have an AWS account and know how to create an Optimizely project. Once you log into the AWS account console you can easily find the Lambda section.
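The tutorial's lambda itself is written in Node.js; purely as an illustration of the "compare two JSON datafiles and produce a human-readable diff" step, here is a simplified recursive sketch in Python. The function name and output format are mine, not the author's:

```python
def diff_json(old, new, path=""):
    """Collect human-readable differences between two JSON-like dicts.

    Simplified sketch: the real datafile also contains lists, which a
    production version would need to handle as well.
    """
    changes = []
    for key in sorted(set(old) | set(new)):
        where = f"{path}.{key}" if path else key
        if key not in new:
            changes.append(f"removed {where}")
        elif key not in old:
            changes.append(f"added {where} = {new[key]!r}")
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            changes.extend(diff_json(old[key], new[key], where))
        elif old[key] != new[key]:
            changes.append(f"changed {where}: {old[key]!r} -> {new[key]!r}")
    return changes

# A toggled feature would surface as something like:
#   changed rollouts.status: 'Running' -> 'Paused'
#   changed rollouts.featureEnabled: True -> False
```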
The areas we will touch in the AWS console are the Lambda function, DynamoDB, the IAM console, and CloudWatch for logs. Above is a basic breakdown of what the webhook lambda will look like. This is the Design view of the lambda. When you click on the individual components, the views below it change to the appropriate module (such as the lambda showing the code). In the example below we can talk to the API Gateway, access logs, and connect with DynamoDB. The first thing we need to do is create the webhook forwarder. To do this, you can use a lambda template. Go to the AWS Lambda Management Console and create a function using the blueprint microservice-http-endpoint. The microservice-http-endpoint blueprint gives you a setup for API Gateway and DynamoDB using Node.js. Name your lambda and create a new role with "Simple Microservice permissions." Also, create and name your new API endpoint. Don't worry about the code right now. We'll just create the lambda to start. Edit your DynamoDB permissions by accessing the IAM console and adding the "Create table" permission. It may be a good idea to create the DB beforehand and just use that DB instead of allowing the lambda to create it. I decided to have the lambda create the DynamoDB table. Ok, now we can look at the code. The template code provided by AWS shows you how to update, delete, and insert into a DynamoDB instance using various HTTP methods (POST, GET, PUT). We only care about making POST calls, so let's start there. First, change the example code so that you are only switching on POST (or, of course, the default case). Then, let's just print out the webhook project id and url from the Optimizely webhook payload. Next, create a test using the API Gateway test template (upper right hand corner next to the Test button) and replace the following as your body property of the payload: Validate that your endpoint is working correctly. Now, let's register our webhook. Rather than run through the whole process, this help article here covers how to register your webhook with Optimizely. Your webhook URL is your API Gateway Invoke URL, which is available by clicking on the API Gateway button in the Design view of your lambda. Now, you can test this by actually updating your project and looking at your logs. You should see your URL there. Next, you need to add your secret key and test for it in the payload. So, create an environment variable and add your secret key there. You will see environment variables below your coding view. Below is a snippet of code from our lambda showing the test for the secret key: if the secret key and the header don't match, don't honor the request. Keep in mind that if you are servicing multiple projects then you need a way to either have multiple secret keys or not use secret keys. One way to handle multiple keys would be to store them as projectId:secret key value pairs in multiple environment variables or a single environment variable. This document includes the gist of the lambda so you can look through all the code at one time. So, we have our URL and we can tell if an incoming event is legitimate; now, let's process the request. When a request comes in we will: Load the new datafile from the webhook payload. Look to see if the datafile exists in DynamoDB. If it does exist, use it to find the latest differences, otherwise, store the current copy in DynamoDB and say nothing has changed yet. Finally, we do a diff, update the datafile in DynamoDB, and publish the difference to the Slack channel.
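As an illustration of that secret-key check (the original snippet is Node.js and lives in the index.js linked at the end of the tutorial), here is a minimal Python sketch. The WEBHOOK_SECRET variable name and the X-Hub-Signature header are assumptions on my part; check Optimizely's webhook documentation for the exact signature scheme:

```python
import hmac
import os

def is_authorized(headers: dict, raw_body: bytes) -> bool:
    # Hypothetical environment variable holding the webhook secret.
    secret = os.environ["WEBHOOK_SECRET"].encode("utf-8")

    # Assumes the secret is sent as an HMAC-SHA1 signature of the request body,
    # in a header of the form "sha1=<hexdigest>".
    expected = "sha1=" + hmac.new(secret, raw_body, "sha1").hexdigest()
    received = headers.get("X-Hub-Signature", "")

    # Constant-time comparison; if they don't match, don't honor the request.
    return hmac.compare_digest(expected, received)
```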
First, the way I implemented the diff of the datafile might not be best. It uses recursion since we know the datafile is relatively small. It tries to print messages that tell what actually changed. You may want to replace that with an npm JSON diff package or tweak the existing code to your needs. I'm not going to go through each function. But I would like to discuss how we set up the webhook publish to Slack. If you are logged into Slack it's really easy. You can just open up a browser and go to https://api.slack.com/incoming-webhooks. Or, you can go directly to https://my.slack.com/services/new/incoming-webhook/. Enter your channel and create a Slack webhook. In the sendWebhook call in our lambda, you will add that webhook to the appropriate area. Notice that the send portion is just the last part of the URL. Conclusion Setting up a webhook to notify the team of project configuration changes helped me to know when we were potentially stepping on each other's toes. The lambda webhook can also be used to notify your servers, as well as send a Slack message. Using lambda functions for webhooks is a powerful tool in project management. I hope that this document makes it easy to understand and create a webhook lambda you can digest through Slack. Finally, you can find all of my index.js code here on GitHub. You can simply replace your code with the provided index.js and add your YOUR_KEY_HERE from Slack and you're ready to go. Don't forget to add your secret key as an environment variable. Happy Coding!
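For completeness, posting the diff to a Slack incoming webhook is a small HTTP call. The author's implementation is the Node.js sendWebhook in the linked index.js; the sketch below is only a rough Python equivalent, with a placeholder webhook URL:

```python
import json
import urllib.request

def send_to_slack(webhook_url: str, message: str) -> None:
    """Post a plain-text message to a Slack incoming webhook."""
    payload = json.dumps({"text": message}).encode("utf-8")
    request = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # Slack answers an incoming-webhook post with the body "ok" on success.
        print(response.read().decode("utf-8"))

# Example (placeholder URL -- use the one Slack generates for your channel):
# send_to_slack("https://hooks.slack.com/services/YOUR/WEBHOOK/URL",
#               "Optimizely datafile changed: feature_management toggled off")
```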
https://medium.com/product-experimentation/tutorial-get-alerted-when-feature-flags-change-via-aws-lambda-and-webhooks-2b2ba1cb2447
['Tom Zurkan']
2018-07-31 01:16:45.775000+00:00
['Feature Flags', 'Optimizely', 'Experiment', 'AWS', 'Startup']
10 Efficient Ways to Use Python Lists
Copy List by Value There are many ways to copy a list, but using an assignment operator isn't one of them. Let's confirm this: >>> a = [1, 2, 3, 4, 5] >>> b = a >>> id(a) 4345924656 >>> id(b) 4345924656 The assignment just creates a reference to the list a. This means both variables now point to the same list in memory, and any change made through one of them would affect the other. Following are some possible ways to create a standalone "shallow" copy of a Python list, ranked from the most efficient to the least in terms of speed: b = [*a] b = a * 1 b = a[:] b = a.copy() (Python 3 — shallow copy) b = [x for x in a] b = copy.copy(a) (Python 2) While the differences in speed are small, sometimes doing a deepcopy (which is obviously the slowest and most memory-hungry approach) is unavoidable. Unlike a deep copy, a shallow copy doesn't clone the nested objects; instead, it just copies references to them. Let's look at the following example to validate this: >>> a = [[0,1],[2,3]] >>> b = [*a] >>> a[1][0] = 5 #Output of b: [[0, 1], [5, 3]] Updating the nested list element a[1][0] = 5 changes the list b as well. In such scenarios where we aren't using a 1D list, the following ways work best for doing a deep copy of all the list elements:
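The article's own list of deep-copy options is cut off here. Purely as a stand-in illustration (not the author's original list), the standard-library copy.deepcopy clones nested elements so the copy no longer shares them:

```python
import copy

a = [[0, 1], [2, 3]]
b = copy.deepcopy(a)   # recursively clones the nested lists as well

a[1][0] = 5
print(a)  # [[0, 1], [5, 3]]
print(b)  # [[0, 1], [2, 3]] -- unaffected, unlike the shallow copy above
```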
https://medium.com/better-programming/10-efficient-ways-to-use-python-lists-f6e7e666708
['Anupam Chugh']
2020-04-13 16:08:22.391000+00:00
['Software Engineering', 'Software Development', 'Python', 'Data Science', 'Programming']
The Not-So-Smelly Truths of Washing Like the French
It’s easy to fall for the City of Love. Just don’t expect it to always smell like a fresh bouquet of roses. This is especially true in the summertime, on crowded metros when the smell of body odor mixed with cigarette smoke can knock you over faster than you can say “Mon Dieu, d’où vient cette odeur??” (translation: where on God’s green earth is that rank smell coming from) As Americans, it’s hard for us to understand (naive teenagers aside) how anyone can stand body odor. Who showers better? An AOL Health poll conducted in 2009 revealed that 65 percent of Americans shower or bathe every day, while 4 percent shower more than once every day. Contrast this with the French, who, while not too far off from the American average, still manage to make showering less of a thing. In a recent poll conducted in France, only 57% of respondents reported showering daily. From the local climate to energy costs to advertising, there are numerous factors as to why one culture showers more than another. The Atlantic Journalist James Hamblin recently reported that we waste nearly two years of our lives washing: “12,167 hours... That’s how much life you use, if you spend 20 minutes per day washing and moisturizing your skin and hair (and you live to be 100, as we all surely will). That adds up to nearly two entire years of washing every waking hour.” If you consider that the average American shower, according to Home Water Works, takes 17.2 gallons of water, you begin to better understand the French perspective. As one American teacher who regularly teaches study abroad in France pointed out in a New York Times Opinion Piece: “Having spent a number of summers teaching… [I’ve] witnessed enough conflicts between students and the [French] families they live with to know that cleanliness is an endless source of cultural misunderstanding. [American] students feel that one can’t shower enough and that clothing should never be worn more than once, whereas their French lodgers worry about water bills and can’t understand why anyone would want to shower every day.” While the occasional bout of body odor might be disagreeable in France, it’s preferred to not being able to afford your energy bill at the end of the month. Should we shower every day? Sweat, believe it or not, is actually odorless. It is pumped out of your body in two different types of irrigation systems: eccrine glands and apocrine glands. Eccrine glands are found all over your body. As your body temperature rises, these glands act as an internal water cooler, releasing fluids to cool off your body. If ever you start sweating in areas you didn’t know you could sweat, you can thank the eccrine glands. In general, eccrine glands do not contribute to body odor. It’s your other irrigation system, the apocrine glands, that make the big stink. Apocrine glands are found on the parts of your body where you naturally have hair (namely your armpits and groin). If it helps, think of them as the trickling streams running through forests of body hair. These trickling streams flow freely with a milky fluid when your body is under stress (emotional, mental, and physical). Again, this milky fluid by itself is largely odorless. The foul-smelling reaction occurs when the sweat comes in contact with the bacteria on the skin. As you can tell from the forest of hairs analogy, I’m not a doctor nor do I play one on Medium. 
That being said, I'm willing to bet that people who shower only once every two days, like 24% of the French people you meet, are going to have more bacteria on their bodies. Which, contrary to what your nose might tell your brain, is not necessarily a bad thing. Our skin is designed to maintain a layer of oils, bacteria, and other microorganisms. When we hop in the shower or bath, we wipe our skin clean of these positive protective influences. The side effects, as reported by Harvard Medical School, can be far more damaging than an occasional bout with body odor: Skin may become dry, irritated, or itchy. Dry, cracked skin allows bacteria and allergens to more easily creep in, allowing skin infections and allergic reactions to occur. Antibacterial soaps can actually kill off normal bacteria, upsetting the balance of microorganisms on the skin. Frequent baths or showers throughout a lifetime may reduce the ability of the immune system to do its job. Perhaps instead of recoiling at the smell of body odor on the French metro and elsewhere, we could recognize the possible factors leading that person to smell the way they do: including saving the planet, being healthier, or affording rent at the beginning of next month. We don't have to fully embrace them (especially if they're sweaty), but we should learn to appreciate them.
https://medium.com/mindtrip/an-american-nose-in-paris-9a9dc3ae84af
['Dave Smurthwaite']
2020-02-11 07:35:41.315000+00:00
['Travel', 'Wellness', 'France', 'Health', 'World']
Are You an Abusive Person?
Most of us don’t want to be an abusive or toxic person. We don’t want to hear someone we care about say that we hurt them. We don’t want to end up estranged from those we care about. No one wants to read an email, letter or blog post detailing how just being themselves has hurt and driven away someone they love. Being an abusive person will mean you hurt the people that are closest to you. Imagine destroying the love someone has for you by being an abusive nightmare. Who wants to face up to that? Being the toxic person in a relationship or family is like playing pass the parcel with a hot brick. No one wants to be left holding it. Still, abusive people exist, so it’s got to be someone. You hopefully already know that physical abuse is bad but people frequently hurt others without getting physical. If you value your relationships you should be worried if you regularly display any of these behaviors: You don’t care how your behavior affects others. If certain political topics make people around you uncomfortable you still bring them up relentlessly. Why should you pander to people when it comes to what you think? If you’re asked not to make derogatory comments on someone’s weight, you go out of your way to do it. Someone asks you not to swear? Fuck that. If someone politely asks you to modify your behaviour and you flat out refuse; that should tell you something. You don’t understand the word “inappropriate”. Most of us worry that we’ve done something inappropriate from time to time. A red flag for abusive behaviour is that people are constantly telling you that you’re being inappropriate. They constantly tell you and you constantly don’t care, so you never change. This is a sign that you like to do things your way and anyone who disagrees with you is wrong. This attitude could quite easily lead you to be abusive towards people. You blame others for your bad behavior. You raised your voice because someone annoyed you. You caused a scene at that graduation dinner because no one talked about what interests you. You insulted your daughter’s boyfriend because his dress sense irritated you. You threw away your wife’s photo albums because she doesn’t keep the kitchen tidy enough. Do you get into these kinds of arguments with the people close to you? This is a double whammy of abusive behavior. You’re not only upsetting someone but when challenged you insist they brought it on themselves. This can start out as you just wanting to avoid the unpleasant truth that you’ve upset someone but it can very quickly become controlling behaviour. If everyone around you does as they’re told everything will be OK. If they don’t then they sort of asked for you to get angry. Treating someone badly in order to punish them for something you think they did wrong is abusive. There is nothing healthy about taking punitive action against someone you are supposed to care about. You’re basically telling the other person that if they slip up in your eyes they get hurt in some way. This kind of attitude will make you toxic to be around, it doesn’t matter if you’re making excuses or you really believe your justification. The effect on your loved one will still be damaging. If this point applies to you then it’s a worrying one. It’s a sign that you have very poor communication skills and struggle to manage your emotions. The potential for you to be or become abusive is quite high. When someone tells you how they feel, you feel confused. This one is quite simple. 
If you can't understand other people's feelings it's a sign that you're very fixated on your own. "Do you mind not turning every conversation around to the election? I'm getting tired of it." "Why not? I enjoy it." "Making rude remarks about my weight hurts my feelings." "Well it's all true. If you can't handle the truth that's your problem." "You know I'm vegetarian, I don't want to eat that." "Being vegetarian is a stupid fad, I cooked meat, eat it or go without." "That's a terrible thing to say." "You're too sensitive, you're just being silly." If other people's feelings do not compute for you then that's a big red flag. Do you think how you feel about something should dominate another person's life? If so, you're likely causing offense, upset and resentment on a very regular basis. You see emotional expression as weakness. Toxic and abusive people don't like it when people express how they feel. They see this as weak rather than as an attempt to create peace and harmony in the relationship. If you carry on long enough with this attitude you alienate your loved ones. They won't bother trying to connect with you on an emotional level. Your relationships with them will feel strained and distant. Bear in mind, they'll still form emotional connections with people, just not with you. Showing emotional vulnerability, as well as giving and receiving emotional support is integral to a close relationship. However, no one said it was easy. If you want happy, close relationships you can't opt out of vital aspects of them. You think your abusive background was fine. "It never did me any harm." Whenever someone utters this phrase they make it plainly obvious that whatever they're talking about did in fact do them harm. This attitude is a really strong indication that people around you are going to end up suffering because of you. Firstly this phrase is almost always a response to someone explaining that they are upset by something. This phrase then totally shuts them down. That's not conducive to a good relationship. Secondly this train of thought is usually followed up with justification by comparison. "You didn't give the kids lunch." "I used to go days at a time without food, it never did me any harm." "I can't believe you ignored me for the whole day." "My father once didn't talk to me for a week." This kind of poor communication turns an attempt to discuss and resolve something into a competition you always win. You're not really saying that you think it's fine to not give the kids their lunch. You're saying you refuse to discuss the matter and you'll do what you want. ~ Good relationships are the cornerstone of a happy life. If you recognize yourself in some of the points here then you need to take action. Don't be that person that harms loved ones and ultimately pushes them away. Maybe you aren't good at communication or understanding emotions. No one is good at everything and all of us can learn. If you are on the receiving end of some of the behaviors listed here then you need to recognize this could be abuse. If the person displaying these behaviors doesn't agree to change then you need to step away from them in order to protect yourself.
https://medium.com/swlh/are-you-an-abusive-person-3a448dc0d02f
['Stef Hill']
2020-02-27 12:04:16.509000+00:00
['Self-awareness', 'Relationships', 'Mental Health', 'Self', 'Abuse']
Why I’m Saying Goodbye to Caffeine
Why I'm Saying Goodbye to Caffeine I've quit alcohol and tobacco, but can I kick caffeine too? Photo by Sebastián León Prado on Unsplash This year I'm aiming for a goal that I never expected to set: going completely caffeine-free. How Did I Get to This Point? I've been a proud caffeine drinker for most of my life. It started with soda in high school, but in recent years coffee has joined the mix too. My caffeine consumption really took off three years ago when I quit drinking alcohol. Before getting sober, I spent my evenings with beer bottles practically attached to my palms. After quitting, it helped to have non-alcoholic drinks on hand to replace the habit. Unfortunately, most of those drinks were caffeinated. Although caffeine is nowhere near as bad as alcohol (and not even in the same ballpark), I certainly can't deny that I had started drinking unhealthy amounts of it. As of last year, I was up to about 4 cups of coffee and 4 sodas per day. That's around 500 mg of caffeine — not an insane amount, but 100 mg more than the Mayo Clinic's recommended maximum. I could also tell that I had at least a minor addiction to caffeine, because I felt like I needed it to start my mornings, and would get headaches in the afternoon when I skipped it. Despite that, this time last year, I still hadn't even considered quitting. I thought that caffeine was just such a minor problem compared to alcohol that it was ridiculous for me to think twice about it. I also had a third, more serious addiction to contend with: cigarettes. Last year, instead of worrying at all about caffeine, I was entirely focused on quitting cigarettes. I went on and off of nicotine patches for months, and finally quit all forms of nicotine last September. In the end, it was quitting smoking that actually led me to this year's goal of quitting caffeine. After quitting smoking, my sleep habits became a total mess. I was staying up late, sleeping only a few hours some nights and 10 hours or more others. I started waking up multiple times a night, a problem that I hadn't had since my drinking days. At first I wrote all this off as nicotine withdrawal symptoms, but after it lasted for a couple of months, I looked into it more seriously. I learned that caffeine was the most likely culprit. Nicotine causes the body to process caffeine more quickly, so smokers end up feeling less of the effects of caffeine than non-smokers. When I had quit smoking, I had actually started drinking a little more coffee than usual. To add to that, my body was no longer processing caffeine at an accelerated rate, so each coffee was having a stronger effect than I was used to. That's certainly enough to throw off my sleep. I immediately decided to cut down on my caffeine, and I started seeing improvements in my sleep from the very first day. From Cutting Down to Quitting I cut down on caffeine to address the sleep problems I experienced after quitting smoking, and — sure enough — it fixed them. So why didn't I stop there? Why did I decide to quit completely? I had always thought of caffeine as a benign addiction, but while reading up on how it was affecting my sleep, I discovered that it actually has much more harmful side effects than I ever realized. My starting place for learning about caffeine was simply reading anecdotes of people who had quit caffeine on Reddit.
Although anecdotes aren't necessarily scientifically reliable, my experiences with quitting drinking and smoking have taught me that reading about others' addictions can be an important learning tool. To my surprise, one of the most commonly reported symptoms of caffeine use is high levels of anxiety. Anxiety is something that I'd struggled with for most of my life, and it never once occurred to me that caffeine could in any way be related. Over years of going to therapy, I don't remember a therapist ever asking me how much caffeine I drank. Reddit users described experiencing anxiety for years, then having it disappear (or at least decrease) after they quit caffeine. Some said the changes took a few months to take effect, while others reported noticing a difference in just days. Reading through these stories, I was hopeful, but also a bit skeptical. A forum for people quitting caffeine was bound to be a biased source, and I wondered if the placebo effect was playing a role as well. I looked for actual research studies to back up the claims I was reading on Reddit. In short, the studies agree: caffeine causes anxiety. Does that mean quitting caffeine will eliminate all anxiety? Of course not, but it could potentially help. After reading anecdotes about the improved anxiety levels, and finding a few research studies to back it up, I decided I owed it to myself to at least try a life without caffeine. Although I don't expect my anxiety to disappear entirely, even a small reduction would make this experiment worth it. Quitting and Withdrawal I'm now on day four without any caffeine. Almost everything I read recommended weaning off caffeine instead of quitting cold-turkey, so that's exactly what I did. I had already been down to about three cups of coffee a day (and no soda). I noticed that the new year was right around the corner, so I timed my reduction so that I'd hit my first zero-cup day on January 1st. For two days I drank two cups of coffee a day, then for two days I drank one cup a day. Even by the time I was down to one cup of coffee a day, I was already feeling a bit of confusion. My mind was so used to operating on constant caffeine that having just one cup in the morning was really throwing me off. On January 1st, my first day without caffeine, the confusion got much worse, and I had a very bad headache as well. These both got even worse on the second day, but mostly cleared up by day three. Today, day four, the withdrawal feelings are essentially gone. Now that it's over, I can say it wasn't a fun experience, but it wasn't too hard at all compared to quitting alcohol or nicotine. From what I've read, if I had done a more gradual weaning schedule, I could have avoided withdrawal side effects entirely. I think I just got a little too carried away with the idea of quitting right on New Year's Day. As far as cravings, I've been feeling mild ones all day. Again though, it's really nothing at all compared to quitting alcohol and nicotine. Those experiences definitely left me over-prepared for this one. My goal is to last at least this entire year without caffeine, but ideally I'd like to just stay off of it for good. I think my most likely cause of failure will be just not taking it seriously enough. Caffeine isn't nearly as detrimental as my other vices had been, and so I don't feel as strongly about quitting. It feels like the stakes just aren't high as far as caffeine is concerned.
But, with that said, I really do think that the benefit to my anxiety could be huge, and I’m trying to keep that in mind. Even if caffeine isn’t the worst habit I’ve ever had, it was certainly not doing me any good. The Benefits So, after four days without caffeine, have I noticed any reduction in anxiety? Maybe this is just the placebo talking, but yeah, I really have. I actually feel great today, in a way that I haven’t in months. I know it’s too soon to tell for sure whether this is the lack of caffeine or just a coincidence, but things are off to a great start.
https://medium.com/the-ascent/why-im-saying-goodbye-to-caffeine-dcd7013e22e6
['Benya Clark']
2020-01-09 13:11:01.345000+00:00
['Addiction', 'Mental Health', 'Anxiety', 'Health', 'Lifestyle']
How to Outsmart a Plague
The Skeptics — "You're not going to get sick and nobody you know is going to get sick." Trump is one of many downplaying the size of the outbreak based on current infection numbers. People are pointing to the still-quite-low total numbers of cases of infection as a reason to write off the virus as non-newsworthy. However, what they're failing to account for is COVID-19's high level of contagion, difficulty of detection, inability to test effectively, and lack of a vaccine. All of these factors increase the overall danger of this pandemic, especially now that it is being reported in most of the world's countries. And, counterintuitively, the fact that it doesn't have the nightmare-inducing mortality rate of other diseases of recent memory actually means it is more effective at spreading to massive numbers of people. If it doesn't kill you or confine you to your bed, you're out in the world and much more able to share the bug with your fellow citizens. Notice that coronavirus has already jumped to the second most lethal on this list overall, and the outbreak is only just beginning. (source) What's more, in comparing coronavirus to the common seasonal flu, which itself kills tens of thousands of Americans each year, coronavirus is estimated to be 1.5–2.3 times more infectious and 10–50 times more lethal. Please take a moment to let that sink in. As of March 11, 2020, these are the countries reporting cases of coronavirus. (source — CDC website) This thing is just getting started, and the window to stop the spread is narrowing very rapidly, if it is even still open at all. Harvard epidemiologist Marc Lipsitch thinks there's no way to stop this thing from spreading far more significantly, and he predicts that, without strong countermeasures, "between [20–60%] of the world's adult population could end up infected with coronavirus." That's a number in the billions. Even with a lower-end mortality rate estimate of 1%, that means tens of millions of deaths, at minimum. If this thing isn't slowed down, and if just about everyone gets this bug, you will almost certainly know people who will die. Essentially, left unchecked, we could see the number of cases multiply by a factor of 10 every two weeks. That adds up to a very large number very quickly. Maybe that seems far-fetched. You might point, as Trump did in the tweet above, to the fact that we're still at only about a .001% infection rate for the global population, and even less in the United States. But one must understand, here, how exponential growth works. This video provides an incredibly helpful visualization of this phenomenon. One bit of hope comes from the possibility that the coming warmer weather might help slow or delay the outbreak, but even that is highly uncertain. It all comes down to the growth rate, as in how quickly and unrestrictedly the virus is allowed to spread. We're in a situation of when, not if, this deadly disease spreads, and to what extent. At this point, many more people will die regardless of what we do, but the video above should hopefully make it crystal clear just how incredibly important efforts to contain and slow the virus are.
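To see how quickly "multiply by a factor of 10 every two weeks" adds up, here is a tiny Python sketch with a made-up starting point of 1,000 cases (illustrative arithmetic only, not a forecast from the article):

```python
# Unchecked growth: cases multiply tenfold every two weeks.
cases = 1_000  # hypothetical starting point
for week in range(0, 13, 2):
    print(f"Week {week:2d}: about {cases:,} cases")
    cases *= 10
# At this rate, week 12 already shows about 1,000,000,000 cases,
# which is why even small changes to the growth rate matter so much.
```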
https://medium.com/basic-income/how-to-outsmart-a-plague-8c55c442eae4
['Conrad Shaw']
2020-03-21 15:25:42.289000+00:00
['Economics', 'Disease', 'Health', 'Basic Income', 'Coronavirus']
The Cardio of Audio
STRUCTURED vs UNSTRUCTURED DATA Structured data usually lives in an RDBMS or another database where you can easily search records, read the numbers, and compare them. For example, a record can have names, an id, a date of birth, a salary, an address, etc. The data is arranged in a tabular format and it’s simple to work with. Unstructured data comprises audio, text, images, etc. Around 80% of enterprise data is stored in an unstructured format. It is not as easy to work with because we can’t directly use the data stored in an image or an audio file. In this article, we will be mainly focusing on audio data. AUDIO DATA The human brain is continuously perceiving the audio around us. We hear the birds chirping, cars racing down the road, the air blowing, and people speaking. We have devices to store all this data in various formats like mp3, wav, WMA, etc. Now, what else can we do with this data? For working with unstructured data like this, deep learning techniques are your best bet. First, let us see what audio looks like. Audio is represented in the form of waves, where the amplitude of the wave varies over time. AUDIO SAMPLING It is important to understand sampling: sounds are continuous analog signals, and when we convert them into a digital signal we keep only discrete data points from the signal. This process is called sampling, and the rate at which sampling is done is called the sample rate. It is measured in Hz (Hertz). Audio with a 48kHz sample rate means the audio was sampled at 48,000 data points per second. A little of the information is lost during sampling. LibROSA LibROSA is a popular Python package for music and audio analysis. It has the building blocks needed to create music information retrieval systems. To install the package with pip you can run this command in your terminal. pip install librosa Load an Audio File We will load a 23-second audio file of a dog barking. import librosa data, sample_rate = librosa.load("dog bark.wav") The load method of librosa takes the path of the audio file and returns a tuple containing the sampled audio data and the sample rate. The default sample rate is 22,050 Hz. You can also specify a custom sample rate as an argument. To use the original sample rate we pass sr=None: data, sample_rate = librosa.load("dog bark.wav", sr=None) Let us see what is in the data: print(data.shape, data) print(sample_rate) Output: (1049502,) [ 0.00019836 -0.00036621 0.00016785 …. 0.00099182 0.00161743 0.00135803] 44100 The data is a numpy array with 1049502 data points. The original sample rate is 44100 Hz. Scaling down the sample rate reduces the amount of data, so operations run faster, but scaling down too much will also result in some information loss. Displaying the Audio Data Librosa has a display module that plots a graph of the data: import librosa.display librosa.display.waveplot(data) Output: This is what the barking of a dog looks like. Now, with the sampled data and the sample rate, we can extract features from the audio. Feature extraction for Machine Learning There are various methods and techniques to extract audio features. These are: Time-domain Features Zero-Crossing Rate - If you look at the waveform above, the sampled data lies between -1 and 1. The zero-crossing rate is the rate at which the signal changes sign, i.e., crosses from a negative value to a positive value or vice versa. It is used heavily in speech recognition and music information retrieval.
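As a minimal sketch of this time-domain feature (reusing the same "dog bark.wav" clip loaded earlier; the framing parameters are simply librosa's defaults):

```python
import librosa

# Load the clip with its original sample rate, as above
data, sample_rate = librosa.load("dog bark.wav", sr=None)

# Frame-wise zero-crossing rate: the fraction of sign changes in each analysis frame
zcr = librosa.feature.zero_crossing_rate(data)

print(zcr.shape)          # (1, number_of_frames)
print(float(zcr.mean()))  # average zero-crossing rate over the whole clip
```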
Spectral Features Spectral Centroid - It indicates the “brightness” of a given sound and represents the spectral center of gravity. Suppose you are trying to balance a pencil on your finger: if the spectrum were the pencil, the spectral centroid would be the frequency at the point where your finger touches it when it’s balanced. Spectral Rolloff - Spectral roll-off is the frequency in Hz below which a predefined percentage (roll_percent, 85% by default in the librosa library) of the total spectral energy lies. This feature is useful for distinguishing voiced signals from unvoiced signals. It is also good for approximating the maximum or minimum frequency by setting roll_percent close to 1 or close to 0. Perceptual Features MFCC — Mel-Frequency Cepstral Coefficients - Each individual voice sounds different because it is filtered by the shape of the vocal tract, including the tongue, teeth, etc. That shape decides how the voice sounds, and by determining the shape accurately we can characterize the sound it will produce. The job of the MFCCs is to describe the shape of the vocal tract by summarizing the short-term power spectrum. MFCCs are the most widely used feature in audio and speech recognition. They were introduced in 1980 and have been the state of the art ever since. CONCLUSION Unstructured data makes up a huge share of the data on the internet. It is not an easy task to analyze, since we have to perform a lot of transformations on the data to extract features. Audio features fall into three categories: time-domain, spectral, and perceptual features.
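To make the spectral and perceptual features described above concrete, here is a minimal, illustrative sketch (again reusing the "dog bark.wav" clip from earlier; the values passed are just the library defaults, not recommendations):

```python
import librosa

data, sample_rate = librosa.load("dog bark.wav", sr=None)

# Spectral centroid: the "center of gravity" of the spectrum, per frame
centroid = librosa.feature.spectral_centroid(y=data, sr=sample_rate)

# Spectral roll-off: frequency below which roll_percent of the spectral energy lies
rolloff = librosa.feature.spectral_rolloff(y=data, sr=sample_rate, roll_percent=0.85)

# 13 Mel-frequency cepstral coefficients per frame, a common starting point
mfccs = librosa.feature.mfcc(y=data, sr=sample_rate, n_mfcc=13)

# Each feature is a 2-D array: (number_of_coefficients, number_of_frames)
print(centroid.shape, rolloff.shape, mfccs.shape)
```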
https://towardsdatascience.com/the-cardio-of-audio-cbe310d94b48
['Rinu Gour']
2019-11-23 15:18:58.858000+00:00
['Big Data', 'Python', 'Data', 'Data Science', 'Machine Learning']
The Shaw Alphabet and Other Quixotic Solutions I Love
From Wikimedia Commons Some, like me, have a Quixotic love for rational solutions. A rational love for rational solutions is a love for rational solutions that work, but some love a rational solution that would work if only people were not so damn stubborn and set in their ways. Let’s call such people “Panzas.” Panzas are able to see that a solution, though fascinating, is not really accepted, but they love it anyway. My name is Michael. I am a Panza. Here I tell of some of my loves, which I’ll list in groups: Those that have not yet taken off (thinking positively here); Those that have found a niche (some on the way down, some on the way up, some just holding steady, but all secure within the niche); Those that are starting to achieve lift-off; Those, still beloved by Panzas, that seem to be losing ground; and A success story: from the outer edges, when it was known only to (and loved only by) Panzas, but now is mainstream You will surely have your own nominees, but these are ones I’ve followed. Have not yet taken off Panzas live in hope (our namesake, you will recall, did indeed finally have his hopes realized and received his promised island to rule), but in some cases hope struggles for breath. Here are some of those for which hopes are somewhat dim. The Shaw alphabet was created per a bequest in George Bernard Shaw’s will to develop a phonetic alphabet, distinct from the Roman alphabet, of at least 40 characters. A competition was held, overseen by Pitman (of Pitman shorthand), and the final result was developed based on the designs of the four contest winners. One book, The Shaw Alphabet Edition of Androcles and the Lion, was written with parallel text, Roman alphabet and Shaw alphabet. The Shaw alphabet is more compact, taking up about 1/3 the room of the Roman transliteration of the same text. The book was published in 1962 and not since. I bought a handful of copies and sent them to friends, whom I then plagued with letters written using the Shaw alphabet. Of late, the title of the establishing book has been ignored in favor of the “Shavian” alphabet, presumably to parade one’s education. It’s a lovely alphabet, but phonetic alphabets don’t work well when regional pronunciations vary so widely. Shaw did say that the text of Androcles and the Lion should be written using the British Received Pronunciation as spoken by King George VI. Since I come from a region where “pen” and “pin” are pronounced the same, I fear I sadly mangled the spelling. But I loved the idea (which some pronounce “idear”). The HK G11 assault rifle is another lovely idea that unfortunately was not adopted. It is a robust weapon that uses caseless ammo, which means that a soldier can carry many more rounds of G11 ammo than regular ammo — because without brass shells, ammo weighs, round for round, much less. Moreover, with caseless ammo, no shells are ejected. Ammo is loaded from magazine into firing chamber with a rotating mechanism, which is fast: 3-round bursts fire at the rate of 2000 rounds per minute (thus a 3-round burst takes 90 milliseconds), with recoil felt after the burst is fired. A magazine holds 45 rounds, and the rifle carries one magazine in firing position and two alongside, ready for quick loading: 135 rounds ready to go, in effect.
The rifle initially had some problems with heat causing cook-off of the caseless ammo during sustained fire (brass shells on conventional cartridges provide some insulation), which was fixed, and a later problem of the propellant block being too fragile for field conditions. The big problem, I suspect, was that the rifle was unconventional. The military tends to be extremely conservative and resistant to change (cf. British tactics in the Great War: having troops run across open fields to charge machine-gun emplacements). [Update: another reason for the G11’s not being adopted: a change in the geopolitical situation.] The Dvorak keyboard layout impressed me so much that for my children I ordered Smith-Corona portable typewriters that had the Dvorak layout — the true Dvorak layout, in which “?” does not require using the shift key. (The committee developing the ANSI-standard Dvorak layout were told that they had to stick with the existing keycaps, which have “?” as an up-shift of “/”.) The eldest did indeed learn and use her Dvorak typewriter and found it a great advantage in college because none of her fellow students tried to borrow her typewriter. But even she converted to QWERTY in time. So it goes. Still, people continue to work to improve the QWERTY layout, and one — the Workman keyboard layout — seems quite interesting (though the “?” requires using the shift key, for reasons unclear to me — possibly the existing-keycaps curse). The big benefit of the Dvorak keyboard is increased typing comfort, with less finger travel and a better balanced workload for the hands. The Fitaly keyboard layout is optimized for one-finger typing (as on a touchscreen) or for typing using a stylus. I used the Fitaly layout for some years when I had a Palm Pilot. It worked extremely well. For a touchscreen keyboard in, say, a public library, that is used constantly by first-time users, the QWERTY layout is better because it’s familiar, but for a keyboard layout used repeatedly by the same person, the Fitaly is much better because, once it’s learned, it results in much faster and more accurate entry. I loved it. With practice, many people could type 50wpm with the Fitaly keyboard, with the best reaching 84wpm using the Fitaly layout on a Treo Thumboard. I so wish the Fitaly were available for the iPhone, and I really don’t see why it isn’t, except it has the usual curse: it’s different. Esperanto, a language constructed to be easy to learn and to serve as a common second language for all (with no political overtones from being a national language), is still active, but I think most will agree that it has not sustained the momentum it had until interrupted by the Great War. Still, it’s around (and easily learned on-line), and it has proven to work, both for communication among speakers whose only common language is Esperanto and as a first foreign language, since learning Esperanto as one’s first foreign language greatly facilitates learning a subsequent foreign language. For example, one study in Finland (where German is commonly taught in school) showed that students who studied a year of Esperanto followed by two years of German knew more German and were more fluent in German than students who studied German for three years. And it’s a fun language. This post about Ithkuil — another fascinating constructed language that looks considerably less fun, serving as a testbed for a philosophical test of language capabilities — begins by explaining the five reasons Esperanto works so well.
So why is Esperanto not commonly taught in schools that teach students a foreign language? We Panzas want to know. Esperanto as an introductory foreign language perhaps belongs in the next category, but its real goal was to be a universal second language. Forth is a programming language that includes, among other commands in the language, compiler commands, so that within your program you can define new commands and use them like any other command in the language. Indeed, except for a core of hard-coded commands, most of Forth is written in Forth. You use your new commands along with existing commands to define additional commands, until finally you define a command that is the program — executing that command does the job. Forth uses a push-down stack to hold data, and arithmetic operations use Reverse Polish Notation: (3+5)*4 in a Forth command is 3 5 + 4 *. A handful of commands allow you to manipulate the stack. Forth is fast (since it works in a computer’s native language, which consists of addresses) and large Forth programs take less memory than assembly language equivalents, so its main use nowadays (insofar as it is used) is embedded programming for microprocessors. It is very easy to implement and often is the first high-level language brought up on a new microprocessor. And for an individual programmer using the language, it grows (through commands the programmer adds) to be very powerful in addressing the applications the programmer most frequently develops (because the added commands are tailored to those applications). Iterative development is natural: define a command, execute it on the spot, and revise as needed. The drawback is the name explosion (since command definitions are typically brief, you end up with a lot of command names), which makes Forth not so good for team projects, now the norm. But back in the day, it was great even though (strangely) not widely accepted. Partly, I think, that is because Forth was developed in the field, by programmers, not within a supporting institutional/academic structure. In a niche Italic handwriting (aka chancery cursive) is much better than printing or traditional cursive (since italic letter shapes hold together when written at speed, and since italic handwriting is beautiful). When I was teaching at a private elementary day-school, I introduced italic handwriting into the curriculum, and it was a hit. When parents saw the beauty of their children’s handwriting, they felt that their tuition money was showing some results — plus the students loved it. Italic handwriting takes a little practice, but a fountain pen with an italic (or stub) nib does most of the work for you. Italic handwriting is still definitely around and is taught in some schools, but not like back in the day (say, in Elizabethan times — the earlier Elizabethan times). Traditional shaving, using true lather (made from shaving soap with a shaving brush and water) and a razor that has only one blade (a double-edge safety razor or a straight razor), has fallen from popular favor but still has a cadre of adherents, who like it because their shaves are better than when using canned foam and a multiblade cartridge (or — shudder! — an electric razor) and better in two ways: a better result (smoother, easier on the skin) and a better experience (a shave that’s actually enjoyable) — well, three ways if you include that I spend less than $5 per year on blades. 
Starting each day with a pleasurable ritual that improves your mood and appearance and makes you feel squared away and shipshape has a cumulative positive effect. Why don’t all men shave this way? I suspect the answer is “clever (and well-funded) marketing.” The marketing effort costs many millions of dollars, but that’s fine with Gillette and its cohorts since the men buying the products pay the marketing costs. It is true that a traditional shave with true lather and a double-edge safety razor takes a little more time — at first, enough so that novices shave in the evening, when they don’t have to rush (rushing a shave is a Bad Idea). But with experience a complete shave (wetting brush under hot water, loading brush with soap (10 seconds so far), lathering one’s face, doing a three-pass shave (with the grain, across the grain, against the grain — except for in-grown-prone areas, where the third pass is across the grain in the other direction), lathering before each pass and rinsing after, drying the face and splashing on aftershave) takes a total of 5 minutes. That’s perhaps 2 minutes longer than a shave done with canned foam and a cartridge razor, but in return for the 2 minutes you begin the day doing something you enjoy that improves your mood and thus the character of the day. Recumbent bikes are definitely around, but not so popular as one would expect. They make so much sense — the sight line is easier on your neck, for instance, and you can exert more force on the pedals than merely your body weight. [Full disclosure: Not only did I have a recumbent bike, I also had a Moulton bike.] Recumbent bikes did not fall into a niche; they started in a niche and in a niche they remain, but perhaps they will break out. Crokinole solves a particular special problem: to find an active indoor game easily enjoyed by players of varying skill (such as one might find at a party), is fun to watch, and moves quickly so that many get a chance to play. Ping-pong (table tennis, if you’re a purist) takes too much room and doesn’t accommodate well players of different skill levels. Also, ping-pong games take too long to complete, so that waiting your turn is a drag. Crokinole takes but a table-top — a card table is fine. Even a novice can enjoy playing and the mix of luck and skill makes novice/expert matches still fun. It seems to me ideal for parties, which gives you an idea of what sort of parties include me. Somehow the sliding of the crokinole pieces across that smooth maple board is so satisfying, with luck and skill nicely balanced. And it’s proved to have legs: it’s been played since 1876, when it was invented by Eckhardt Wettlaufer in Ontario, Canada. Crokinole is in fact popular (in parts of Canada), but any Panza can see that it should be more popular everywhere. Beginning (perhaps) to burgeon Go/Weichi/Baduk (Japanese/Chinese/Korean names), like Crokinole, solves the problem of having fun when the players differ in ability and experience, with Crokinole being a game of physical activity and Go a strategic board game like chess — but also not like chess. For players who differ in ability and experience, chess doesn’t work well at all, particularly if the difference is great, since in chess the handicap changes the character of the game. In Go, the handicap is integral to the game and doesn’t alter the nature of play — and the handicap can be finely tuned. 
When two people play repeatedly with each other, very soon every game is tense and close-fought with a narrow victory (for one) or loss (for the other). Here’s how: if one player wins 3 games in a row, the handicap is adjusted by one stone in the opponent’s favor, so winning the next game becomes slightly more difficult for the player on the streak. If a player loses 3 in a row, the handicap is adjusted one stone in his favor, so winning becomes slightly easier. Very soon there are no more 3-game winning streaks for either player, and every game is a hard-fought toss-up. A regular game on a 19x19 board might take 40 minutes to an hour, but the game play feels much the same if the board is reduced to 13x13 or even 9x9, with the game taking less time as the board shrinks. So if you want to play a game over lunch, the 13x13 board makes sense. If you want to play a quick game, 9x9 is the answer. Add in the aesthetics of the game — the board and stones and sounds and tactile pleasure — and a Panza doesn’t understand why the game is not as popular in the West as it is in the East. That may be changing. Go got a big boost with the publicity from DeepMind’s AI AlphaGo Zero, which taught itself to play by starting with only the rules and then playing game after game against itself until, after 3 days of self-play, it was better than the original AlphaGo, which had beaten the human world champion Lee Sedol 4 times in a 5-game match. Always popular in Japan, Korea, and China — well, not always, but for 2500 years — the game now is increasingly popular in North America. Watch the movie The Surrounding Game to get an idea why. After my first game as an undergraduate, I had no idea at all what went on, and I thought, “Never again. Why do people play this game?” Then I started playing in graduate school, saw why, and wanted to go to Japan to delve deeper into it. (I didn’t.) The whole-food plant-based diet and time-restricted eating are both gaining popularity as people become more aware of studies that demonstrate the positive health effects, try them, and discover they work. Still, many people are reluctant. They conjure up imaginary difficulties (“What about protein?”), and they draw back from feelings of awkwardness, ignorance, and confusion (feelings that often arise from plunging into something totally new), not realizing those feelings can be enjoyed. Two books by Michael Greger MD FACLM, How Not to Die and How Not to Diet, have helped people see the benefits, and the documentary The Game Changers shows elite athletes thriving on the diet. Evidence from nutritional studies is also convincing — to take one example, from the New England Journal of Medicine, “Effects of Intermittent Fasting on Health, Aging, and Disease”: Preclinical studies consistently show the robust disease-modifying efficacy of intermittent fasting in animal models on a wide range of chronic disorders, including obesity, diabetes, cardiovascular disease, cancers, and neurodegenerative brain diseases. Periodic flipping of the metabolic switch not only provides the ketones that are necessary to fuel cells during the fasting period but also elicits highly orchestrated systemic and cellular responses that carry over into the fed state to bolster mental and physical performance, as well as disease resistance. Still, neither whole-food plant-based diets nor time-restricted eating has yet been widely adopted in (say) the U.S., as shown in obesity trends in men and in women. But at least more people are talking about it, and some will edge into it.
Losing ground This is the unburgeoning category: skills and solutions that seem to be dwindling and withering. Home cooking and kitchen skills seem to be fading due to over-scheduled lives, resulting in many losing the knowledge and practice of daily cooking — not fancy feasts, just getting good and tasty food on the table efficiently and enjoyably. The slack is being taken up by prepared meals, pizza, fast food, and other foods that are CRAP (calorie-rich and processed). (See charts at links above.) Human interaction is falling victim to busyness and automation. David at Raptitude has a good column describing the descent in which he notes Human interaction probably isn’t in danger of extinction, but it is quietly losing great swaths of its natural habitat. Technology is making real interaction less necessary at work, home, and everywhere in between, which must mean there’s simply less of it in the world than there was a decade ago. Read the column for more. The trend is dire: humans are social animals and need human interaction for their health — mental, physical, and spiritual. On-line interactions do not deliver the benefits. The solution is clear: interacting in person with others. Note that Crokinole and Go do involve interacting with others (unless you play Go against an AI). Popular now beyond Panzas Science-fiction was once very very niche. It’s been around for centuries — Gulliver’s Travels, Jules Verne, and so on — because it’s a good solution to making pointed criticisms of one’s society even when strong social forces don’t want criticism. As Gulliver’s Travels demonstrates, science-fiction lets a satirist criticize politics and society at a safe remove, by presenting some distant and fantastical society that (coincidentally) reflects and puts in high relief the foibles of the writer’s own society. Nowadays science-fiction is taken for granted. Philip K. Dick and Kurt Vonnegut (both strong social critics) once were fringe; now they’re mainstream. Science-fiction movies are big box office. Panza can relax. They get it now.
https://medium.com/age-of-awareness/the-shaw-alphabet-and-other-quixotic-solutions-i-love-8e6a28ebe28f
['Michael Ham']
2020-02-03 21:10:57.711000+00:00
['Health', 'Artificial Intelligence', 'Games', 'Language']
Self-employed vs. Employed: Pros & Cons
This is not going to be some deeply controversial rant about why one way of making a salary is better or worse than the other. This is not a verbose way to toot my own horn and make you spend your time and energy reading about how great I am. This is not an article filled with cat pictures (sorry). Because the question of “Which is better?” is an unfair question. The “right” answer is completely subjective. I am currently self-employed and I love it, though of course there are drawbacks. I previously worked for 10 years in corporate America and it was also both good and bad. There are “dream jobs,” of course, but the reality is that even your dream job has drawbacks and days that suck sometimes. Both self-employment and traditional employment have advantages and disadvantages, it is truly about what is best FOR YOU as an individual and for your family. As Quora user Kelven Swords points out: Pros: YOU make the decisions, no one else… and you thus reap the rewards. YOU control the finances, no one else… and you thus reap the profits. YOU determine who is on staff, no one else… and you thus control the social structure. Cons: You make the decisions… thus have no one else to blame for your errors. You control the finances… thus have no one else to blame for any wasted money. You determine who is on staff… thus you have no one else to blame for any parasitic staff members who poison the well. Let’s take it a step further than what Kelven has described above. There are obvious advantages to working for yourself. You can set your own working hours. You choose who to work with…and who NOT to work with. You have significantly more control over processes, contracts, clients, work, time, and everything else. You can work in your pajamas — and even sleep in! You get to build great relationships with your clients because you’re steering the ship and choosing how to cultivate those relationships. There are some obvious disadvantages, as well. You have no one else to rely on. You do not have a manager setting tasks or deadlines, so all deadlines are self-imposed, which can be difficult for some to manage and stick to. Time management becomes extremely important, which is hard for many. No company insurance or other benefits. No sick time, paid vacation time, or maternity leave. Less stability in terms of income. You will find yourself working far more than 40 hours most weeks. You do not have coworkers and it can be sometimes lonely and isolating. You are probably not an expert in every single thing a business needs: processes, sales, closing sales, marketing, website building and maintenance, creative stuff, contracts, organization, admin work, etc. Higher potential for burnout/overworking. Doing your taxes is harder. When it comes to working for a company, you are getting some very specific advantages, in terms of a stable, dependable income, medical and other benefits, having people to ask when you need help, and being told what you should be doing. Something people rarely think about when dreaming of being self-employed is the lack of structure and organization. You have to create your own schedule, keep yourself on task, make sure work gets done, track deadlines, invoices, payments, all business expenses, and create a structure to your day. It is incredibly easy to lose track of time or lose focus and end up spending half your day on social media when no one is watching! There are many tools out there to help you get organized and create a structure for your day. 
Some are free and some cost money — which you need to keep track of so that you can make sure to deduct it on your taxes as a business expense. Taxes are different and a bit more difficult when you work for yourself, and you have to save some of your income to pay it, and it WILL be a difficult check to write. If you have personal assets, you’ll need to consider if it makes more sense for you to be a sole proprietor, LLC, S-Corp, or several other options, each with their own benefits and drawbacks. There is much research involved in starting your own business! For Me Being my own boss has been fun, challenging, interesting, and lonely. I love being a writer and being able to choose what I write and who I work with, and I created a business model which works well for me. I also continuously refine and evolve my business offerings, update my own website, look for clients, maintain my social media accounts, and blog regularly. All of which is part of running my business, but is ultimately unpaid work. I love my business and what I do, but I also enjoyed my work as a Business Development Director in the recruitment industry. I had a great boss, cool coworkers, a stable and dependable paycheck, and a set end time to my workday, none of which I now have. However, I have the freedom to do the work I want, charge the rates I want, and am much more flexible with my schedule. I can go to the gym in the middle of the day, run errands whenever I want, work in the middle of the night if I am so inclined, and pet my cat all day. For You It’s about what works best for you. Don’t put pressure on yourself to be one way or the other or let people tell you one is “better” or more “right” for you than the other. Make plans, do research, interview people, and figure out what is best for you and make sure you have a clear idea of both the advantages and disadvantages so you are well informed!
https://jyssicaschwartz.medium.com/self-employed-vs-employed-pros-cons-d97b4bdc4f70
['Jyssica Schwartz']
2020-02-04 16:23:57.033000+00:00
['Life Lessons', 'Freelancing', 'Entrepreneurship', 'Writing', 'Business']
Joining a Professional Association as a Freelancer
Personal benefits Most professional associations require members to keep continuing professional development (CPD) records and therefore offer plenty of training opportunities. The greatest benefit of belonging to an association is, no doubt, networking. Working for ourselves, we don’t usually get to meet colleagues during the day, so networking events are an excellent chance to meet them in the flesh. Getting to know and interacting with colleagues is essential and may even lead to fruitful business relationships. Being a member of a professional association is also a great opportunity for you to get involved. Volunteer, join a committee, take on tasks such as editing their member magazine or helping with their website. This will get your name out there, put you in touch with colleagues and generally allow you to do something for the profession, which will benefit us all.
https://medium.com/the-lucky-freelancer/joining-a-professional-association-as-a-freelancer-aa74b82d4922
['Kahli Bree Adams']
2020-07-06 23:43:10.031000+00:00
['Freelancing', 'Entrepreneurship', 'Business', 'Startup', 'Small Business']
A Microbial-Based Explanation for Cooling Human Body Temperatures
A Microbial-Based Explanation for Cooling Human Body Temperatures Could changes to our gut microbial landscapes be responsible for cooling human body temperatures? The value of 98.6° F (37° C) for the standard human body temperature was first proposed by German physician Carl Reinhold August Wunderlich in 1851. This reference point is likely inaccurate, however, as recent studies have shown that human body temperatures generally run lower than the accepted norm. 37 degrees Celsius has been the traditionally accepted value for normal body temperature, photo by orelphoto on Adobe Stock Indeed, average human body temperatures have decreased since the Industrial Revolution, according to the results of a recent Stanford study, which concluded that the average American’s body temperature is about 0.58° F (0.32° C) lower for women and 1.06° F (0.6° C) lower for men than it was in the 19th century. On average, human body temperatures have fallen by 0.05° F (0.03° C) per decade. The researchers attributed these changes to a reduction in metabolic expenditure, reduced inflammation, and a lowered incidence of infectious diseases in modern times. In their study, the Stanford team examined three different datasets, which included records from the Union Army Veterans of the Civil War from 1860 to 1940, the National Health and Nutrition Examination Survey I from 1971 to 1975, and the Stanford Translational Research Integrated Database Environment from 2007 to 2017. The team combed through all 677,423 temperature measurements, accounting for variables such as age, height, weight, and potential differences in temperature measurement accuracy, to arrive at their conclusions. Microbes provide warmth to their hosts I would like to propose that our reduced body temperature measurements may be the result of a loss of microbial diversity and rampant antibiotic use in the Western world. Indeed, a small study of healthy volunteers from Pakistan reported higher mean body temperatures than those encountered in developed countries, where exposure to antimicrobial products is greater. Heat provision is an under-appreciated contribution of microbiota to their hosts. Microbes produce heat as a byproduct when breaking down dietary substrates and creating cell materials. Previous reports have estimated specific rates of bacterial heat production at around 168 mW/gram. From these findings, we can extrapolate that an estimated 70% of human body heat production in a resting state is the result of gut bacterial metabolism.
https://medium.com/medical-myths-and-models/a-microbial-based-explanation-for-cooling-human-body-temperatures-4746a3a9868
['Nita Jain']
2020-02-26 08:26:33.452000+00:00
['Health', 'Ideas', 'Microbiome', 'Science', 'Education']
Azure — Deploying Vue App With Java Backend on AKS
Azure — Deploying Vue App With Java Backend on AKS A step by step guide with an example project AKS is Microsoft Azure’s managed Kubernetes solution that lets you run and manage containerized applications in the cloud. Since this is a managed Kubernetes service, Microsoft takes care of a lot of things for us, such as security, maintenance, scalability, and monitoring. This lets us quickly deploy our applications into the Kubernetes cluster without worrying about the underlying details of building it. In this post, we are going to deploy a Vue application with a Java backend. First, we dockerize our app, push that image to the Azure Container Registry, and run the app on Azure AKS. We will see how to build the Kubernetes cluster on Azure AKS, access the cluster from outside, configure kubectl to work with the AKS cluster, and much more. Example Project Prerequisites Install Azure CLI and Configure Dockerize the Project Pushing Docker Image To Container Registry Creating AKS Cluster Configure Kubectl With AKS Cluster Deploy Kubernetes Objects On Azure AKS Cluster Access the WebApp from the browser Summary Conclusion Example Project This is a simple project which demonstrates developing and running a Vue application with a Java backend. We have a simple app in which we can add users, display them with a count at the side, and retrieve them whenever we want. Example Project If you want to practice on your own, here is the GitHub link to this project. You can clone it and run it on your machine as well.
https://medium.com/bb-tutorials-and-thoughts/azure-deploying-vue-app-with-java-backend-on-aks-a938eaed0cf4
['Bhargav Bachina']
2020-12-20 06:10:05.345000+00:00
['Azure', 'Cloud Computing', 'JavaScript', 'Kubernetes', 'Web Development']
If You’re Not in a Developer Community Then You’re Missing Out
Help Along the Way This one is fairly obvious, but it’s also extremely understated. No matter how much you offer to the developer community, it will always give you more in return. (My favorite online community right now is IndieHackers.) When I first started to show interest in code, I met one of the developers at the company I work for through a mutual friend. I expressed my interest in learning code, and he immediately offered some advice. He suggested a few resources and learning platforms to get my feet wet. I maintained a connection with him throughout my learning journey, and as soon as I felt comfortable with the basics of what I was learning (at the time it was HTML and CSS), he offered for me to take some of his smaller freelance jobs. This was huge! Even today as I have moved on to new technologies and am learning different things, my friend continues to offer suggestions and guidance. As of late, I have been learning Laravel and I reached out about a bug I kept running into. We jumped on a call and he helped me fix the bug I was stuck on. The value he’s been able to offer me has been tremendous in my journey to learn code. I can honestly say that I would not be as far as I am today without his help. And that is just one person! The developer community is full of people just like my friend who are eager to help and offer guidance. I know that people like him and others who I have met through online communities like IndieHackers will continue to be a source of help as I learn and grow as a developer.
https://medium.com/better-programming/if-youre-not-in-a-developer-community-then-you-re-missing-out-50471426a37e
['Jesse Nieman']
2020-08-14 14:01:32.554000+00:00
['Programming', 'JavaScript', 'Python', 'Community', 'Startup']
To smarter emails & efficient service, presenting the AI case
I am sure you’ve all been hearing about artificial intelligence. You see it on new devices, the cool Alexa that you can speak with, a phone support system that’s instantly capable of listening and responding to your voice, and of course the media and investment community has been going crazy looking for the next big thing. Us being us, we thought we’d try and make some sense of it in the marketing and sales function. I went about looking for the latest innovations in some of the most established tactics of digital marketing and here’s what I found: · Email marketing Email marketing has been an ongoing topic for a while, but we haven’t seen the true value it can provide until the rise of AI in the email marketing analytics space. A notable service we found was Nova. By utilising AI to scrape through a person’s online identity, it generates a personalised paragraph that sales representatives can add to their sales proposition. How does it work? Dump a batch of email addresses in, as well as the text of your pitch. Nova then screens the contacts and pulls information from sources published publicly online and on social media accounts to create a personalised pitch. That’s great, right? · Customer service Now, when you think of customer service, do you picture a bot serving you? Or, the real question these days, do you prefer it? A study of 5,000 consumers worldwide, conducted by LivePerson, showed that more than 50% of consumers preferred a human representative, and found only 38% of those surveyed had positive perceptions of this technology. They also found that some factors, such as country and industry, had an effect on the receptiveness of consumers to these technologies. Additionally, the nature of customer conversations differs inherently between industries. The fast food industry only really needs to engage in simple conversations with its customers. Domino’s, for example, implemented a chatbot feature, “DRU”, for its customers to easily choose their pizza base, toppings, dressing and sides, then order it. This was objectively efficient, and even impressive. Was it a success? YES. Chief executive Don Meij even stated, after realising the benefits of AI, that they are beginning to shift the philosophy of the company from “mobile first” to “AI first”. New initiatives are expected to come out, such as drone deliveries, a Facebook chat that helps consumers find vouchers and coupons, and soon enough DRU Manager, which helps Domino’s store owners automate rosters and order stock. However, chatbots won’t be so easily implemented for customers asking about, say, life insurance. Again, the nature of the questions and conversations is important. In saying this, it’s also important to ensure that the tone and intonation of the chatbot are reflective of the brand. Amazon’s Alexa is a good example of this. She was friendly the majority of the time, but there were a few times the chatbot was perceived as judgemental. Another testament to DRU’s success was that it conveyed the Domino’s brand well and built a closer connection with the customer than the point-and-click interface. As I continue on my journey to explore the applications of artificial intelligence and how it can help make a real difference, I will be coming back soon with more interesting technologies we’ve tested and enjoyed.
https://medium.com/drizzlin/to-smarter-emails-efficient-service-presenting-the-ai-case-b4810cc4fe0
['Andrea Virrey']
2017-12-15 12:24:37.842000+00:00
['Customer Service', 'AI', 'Artificial Intelligence', 'Chatbots', 'Digital Marketing']
Data Journalism Crash Course #5: Advanced Data Search — Google
Image by the author Google is not limited to the search engine, and the search engine is not limited to simple search. Many users do not take advantage of the full potential of Google applications, and for journalists it is particularly important to know how to use them well. Search filters In a simple Google search, you can filter results by country, language, publication date, and city. It is also possible to choose between displaying all available results, only pages that have been visited previously, only unvisited pages, or verbatim results. The verbatim option displays pages with the terms exactly as they were typed, with all the words together and in the same order. Advanced search and search operators The Google search engine has search operators that can refine your search efficiently and practically. Here are some examples: Search for exact word or phrase Use quotation marks: “Every idea needs a Medium” shows only results that contain exactly this sentence, not results that contain the four words used at different points on the page. Delete a word Add a minus sign (-): idea -medium shows results that address products other than Medium. Search on a single website or domain Searching for site:medium.com only shows results that are on pages hosted on the Medium site. Search files Searching for filetype:xls shows only results contained in spreadsheets in XLS format. Related Sites Searching for related:medium.com shows other websites similar to Medium itself, in addition to some that cover subjects related to the ones it addresses. Related doesn’t just work with websites. You can also search for terms and find other sites that talk about a topic that interests you. Terms within the body of the text Forget the title of the article: with this search, no matter the main theme of the site, Google will search “inside” the content. To do this, type intext: and, after the colon, type the keywords, as in the following example: intext:big data for beginners Terms in page titles Searching for intitle:medium shows only results whose titles contain the term. This is a simple way to quickly find out what content already exists on the internet before producing your own, or to analyze competing content on the same subject as your article. See all the Google search tricks here. Some can be done using Google’s advanced search, without the need to memorize operators or consult the help page. Search for images Google Image Search — Image by the author Google’s image search can filter results by image dimensions, color, type (drawing or photo, for example), and publication date. When selecting a result, the user has the option to visit the image source website and obtain more information about it. You can also use an image as a search term. When uploading an image to Google Images, the result can be returned with a suggestion for the image name, pages containing that image, similar images, and the same image in other dimensions. The only condition is that the image already exists on a website. It is possible to add a keyword next to the uploaded image to make the search easier. Search operators also work on this search. To upload an image to Google Images, simply click on the camera icon inside the search box and upload an image from your computer or paste the address of a photo online.
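To recap how the text-search operators described above compose into a single query, here is a small, purely illustrative Python helper (the build_query function is hypothetical; only the operators themselves are Google's):

```python
def build_query(terms="", exact=None, exclude=None, site=None,
                filetype=None, intext=None, intitle=None):
    """Compose a Google query string from the operators described above."""
    parts = [terms] if terms else []
    if exact:
        parts.append(f'"{exact}"')            # exact phrase, in quotation marks
    if exclude:
        parts.append(f"-{exclude}")           # minus sign removes a word
    if site:
        parts.append(f"site:{site}")          # restrict results to one domain
    if filetype:
        parts.append(f"filetype:{filetype}")  # restrict results to a file format
    if intext:
        parts.append(f"intext:{intext}")      # term must appear in the body text
    if intitle:
        parts.append(f"intitle:{intitle}")    # term must appear in the page title
    return " ".join(parts)


# Example: phrase search restricted to spreadsheets hosted on medium.com
print(build_query(exact="big data for beginners", site="medium.com", filetype="xls"))
```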
Google Trends — Image by Google Google Trends allows you to assess the popularity of a term over time and compare it to the popularity of other terms, and you can also filter results by country, date, category, and Google product (Images or YouTube, for example). In addition, Google Trends highlights the top terms searched on Google over a given period. Google Public Data — Image by Google The little-known Google Public Data presents, in the form of graphs, important statistics from the databases of large institutions, such as the World Bank and Eurostat. This information can be compared and analyzed using the various filters offered by the application. Google Crisis Response — Image by Google An initiative that gathers maps created from data on disaster situations, making information more accessible. Google Earth — Image by Google Google Earth is one of Google’s most famous tools, with animated satellite imagery in three dimensions. This functionality ends up creating a more interactive experience than Google Maps, especially in the paid version, Google Earth Pro. Satellites take around 14 days to photograph the entire planet. Google Street View — Image by Google Google Street View offers panoramic views of 360 degrees horizontally and 290 degrees vertically. This feature of Google Maps and Google Earth has been available since August 2007. Street View is often used in “before and after” situations when a location undergoes a significant change. Since 2012, Google has also used Trekker, a kind of high-tech backpack with a 360-degree camera. With this equipment it is possible to get images of places where cars cannot go, such as inside museums, theme parks that are difficult to access, and even cemeteries. Through a Trekker loan program, in 2013, volunteers began to collaborate with images for Street View. Cameras were lent so that each person could collect 360-degree images of the places they know well. Currently, anyone can help Google Street View, whether contributing normal photos or panoramic photos taken with a smartphone. With the use of Trekker, Google has been able to register places it had not previously been thought possible to map: cemeteries around the world, museums in Latin America, the Grand Canyon, the Amazon rainforest, deserts, and even the bottom of the sea. Through Street View it is also possible to “visit” famous movie sets, the White House, a submarine and even CERN. The feature allows anyone to experience famous places on the planet without leaving their home.
https://medium.com/datadriveninvestor/data-journalism-crash-course-5-advanced-data-search-google-3e2a40a2ac52
['Deborah M.']
2020-10-30 13:38:02.101000+00:00
['Data Journalism', 'Journalism', 'Google', 'Data Science', 'Technology']
Why Productivity is Killing Us All
Why Productivity is Killing Us All And How Eastern Medicine & Philosophy Can Save Us How often do we rest? I mean TRULY rest. Without input, stimulation, without guilt that we should be doing this or that? No phone or computer in sight; no emails to be sent, appointments to schedule? In fact, how many of us are running around filling every second of potential down time with shit just so we can stay busy and productive? I know I’m guilty of this. As an acupuncturist and practitioner of Chinese Medicine, I do my best to practice what I preach. I know for a fact that stress is a top causative factor of illness and disease and that our modern lifestyles are literally killing us. As an entrepreneur who has been raised by baby boomers within the paradigm of the American dream and the “you only get ahead if you work your ass off” mentality, I struggle, A LOT. I feel like I’m fighting a constant battle against the deeply-ingrained ideals of a culture driven by productivity and capitalism. It doesn’t leave much space for rest without guilt. Several years ago when I began studying Eastern Medicine and Philosophy it quickly became apparent to me how vast the dichotomy truly was between Eastern and Western ideals and lifestyles. I use the term “Western” loosely because after spending time living in Europe, I can truly say that the United States is on another level when it comes to stress, productivity, burn-out, and quality of life. I see this with my patients daily. Throughout the years of looking for root causes and how to address them with acupuncture & herbal medicine, I’ve learned one thing is certain: being too productive is killing us. I get all kinds of complaints and symptoms walking in my office door. I often get the off-the-wall, random mystery symptoms and the patients who have been to every doctor and specialist and who have been on every medication and still feel like they’re not getting anywhere. It’s provided an interesting vantage point to assess some of the common threads and cultural health trends we face in modern times. The conclusion I’ve come to: I’m never just battling back pain, migraines, autoimmune disease and the like; rather, I am always treating the side effects of stress, over-work, and the chronic state of inflammation that comes with them. This is not to say that there aren’t other contributing factors to these issues and illnesses, but there is ALWAYS a stress component. On a physiological level, we know what stress does to the body. The sympathetic nervous response (or “fight or flight”) is a mechanism that is intended to be engaged for a short period of time so we can literally survive a life-threatening situation. All of our other systems, like respiration, digestion, and bladder function, are inhibited to create the optimal chance for survival. The problem is that our modern lifestyles keep our sympathetic nervous system constantly engaged in a way that we are not built for. When this mechanism is activated over a longer period of time it starts to wreak havoc on the body. These other systems stop functioning optimally and our threshold for the stress response becomes much lower. This means that our bodies start registering even minor non-threatening occurrences as threats, and we end up in a chronic state of fight-or-flight and inflammation. Photo by Sage Friedman on Unsplash So where does Eastern Medicine and Philosophy fit into this? And how can it create value for us here in the West? One of the core tenets of Chinese Medicine is the concept of yin and yang.
It’s a term we’ve all vaguely heard of but that a lot of us don’t truly understand or perceive as relevant to our lives. I beg to differ; I think that this simple yet profound concept is a key to leading a life of longevity and vitality in modern times. Let me explain. The concept of yin and yang states that there are two opposing yet interdependent energies in all of life. These lie within us and around us. The way I relate these to our modern lifestyles is that yang can be defined as the exertion or output of energy while yin is the input, restoration, or reception of energy. At any given time, both yin and yang are present. They operate in a constant state of balance with one another. We are all always dancing between input and output; action and passivity. The problem for most of us is that we are always exerting WAY more than we are restoring. This idea of “filling your cup” and the fact that “self-care” is a buzzword are indicative of this. My question for you is this: how can we exert energy at an optimal state if we aren’t refilling our tanks enough? Much like the gas tanks in our cars get low and we must refill them constantly, our bodies work in a similar fashion. This has to happen on a regular basis in order for us to maintain balance and create longevity and vitality in our health and lives. Just like a car, optimal function and increased longevity occur with proper maintenance and regular re-fueling. This can be applied in the workplace for companies as well. Employee satisfaction, retention, and productivity can directly be tied back to creating a culture that values input and restoration as much as it does action. This is why creating initiatives that allow employees to disconnect and recharge (e.g. disabling company email at off-work times, providing incentives for taking vacation days, etc.) is crucial to optimal productivity. If a company is constantly demanding output and productivity from its employees without providing sufficient opportunities and space for input and restoration to occur, how can optimal results be achieved? We have seen this play out recently in Japan, where Microsoft piloted a 4-day work week for its employees and saw a 40% increase in productivity. This would no doubt positively impact a company’s bottom line in a significant way. We are addicted to being busy. And it has to stop. If there’s one thing Eastern medicine and philosophy has taught me, it is that the times of rest are just as much part of the action as the action itself. How can we create, innovate, and be productive if we’re never recharging? If we’re never creating space and we’re just constantly going and exerting our energy, how can we show up in the world in an optimal way? We can’t. It’s time to start placing equal value on restoration and preservation, before we kill ourselves.
https://allysonschurtz.medium.com/why-productivity-is-killing-us-all-dfd9e1c38372
['Allyson Schurtz L.Ac']
2019-12-04 02:19:15.744000+00:00
['Productivity', 'Corporate Wellness', 'Workplace', 'Personal Development', 'Health']
How to Design RESTful Web Services with Dropwizard
Creating a new Dropwizard application Now that we have understood what Dropwizard is and a few of its internal libraries, let us create a Dropwizard application to understand these concepts further. Dropwizard dependencies are exposed as Maven dependencies. In this section, we will develop a Maven project and add the relevant dependencies. Step 1: Creating a Maven Project Open your favorite IDE and create a new Maven project with the archetype selected as maven-archetype-quickstart. Add the following dependency: <dependency> <groupId>io.dropwizard</groupId> <artifactId>dropwizard-core</artifactId> <version>2.0.0-rc9</version> </dependency> This single dependency ensures that all related components are downloaded and our application is ready to run. Step 2: Creating a Dropwizard Configuration file Each Dropwizard application has its own subclass of the Configuration class which specifies environment-specific parameters. These parameters are specified in a YAML configuration file which is deserialized to an instance of your application’s configuration class and validated. Create the following configuration class. Custom Configuration file The above configuration class extracts the firstName and lastName parameters from the supplied YAML file. We will create the YAML file at a later stage; it will have the following contents: firstName: "John" lastName: "Doe" Step 3: Creating a Jersey Resource Jersey resources are the meat and potatoes of a Dropwizard application. Each resource class is associated with a URI template. For our application, we need a resource which returns new Person instances from the URI /hello, so our resource class looks like this: Jersey resource We are performing the following activities in the above class: @Path annotation indicates that this class is a JAX-RS resource @Produces indicates that this resource produces JSON data @GET indicates that the resource can be accessed over HTTP GET @Timed lets Dropwizard automatically record the duration and rate of invocations as a Metrics Timer The method arguments indicate that if data is supplied in the request query parameters, then it will be used. Otherwise, the default values configured in the YAML file will be used The response returned by the getPerson() method is a Person instance and will be mapped by Jackson. Following is the Person class representation: Person class Step 4: Creating a HealthCheck Dropwizard strongly recommends providing health checks for a Dropwizard application. In fact, if health checks are not configured, it warns the user at application startup. We have created the following health check for this application: Dropwizard health check Step 5: Creating an Application Class Combined with the Configuration subclass defined earlier, Dropwizard’s Application subclass forms the core of a Dropwizard application. The Application class pulls together the various bundles and commands which provide the basic functionality of the application. Following is the application class: Dropwizard application class Step 6: Building the executable jar We are now done with our Dropwizard application development. We have created the barebones components and added a JAX-RS resource with an endpoint. Let us add the following Maven Shade plugin in order to build the executable JAR file: Note the mainClass parameter in the configuration. It must be the application’s main class.
https://medium.com/swlh/how-to-design-restful-web-services-with-dropwizard-d5681a127cba
['Somnath Musib']
2019-12-02 09:01:02.012000+00:00
['Coding', 'Programming', 'Java', 'Software Engineering', 'Technology']
AI In The Shipping Industry
Artificial Intelligence has been re-shaping the world as we know it. Not in the way that we saw in the famous movie the Terminator (SkyNet is not going to take over your phone), but it has transformed our everyday lives by improving processes that we perform regularly. The same holds true for the shipping industry. AI has enabled the implementation of IoT devices that gather information the AI can learn from, improve upon, and use to make automated decisions. Automation may be the biggest and most beneficial way AI is used in shipping because it enables anomaly detection, reduction in waste, improved quality control, and decreased shipping times. The combination of IoT devices and AI has brought unparalleled improvements to quality control, fuel consumption, and safety. One example of how AI is aiding the shipping industry is how it improves quality control. With the installation of cameras and IoT devices in cargo containers, shipping companies can monitor the environment of goods to prevent damage and spoilage. This is very prevalent in the food transport industry. AI monitors data received from cameras and IoT devices about data points such as temperature, humidity, shock (falls), light exposure, and even vibration. This provides a richer bank of information on the quality of goods not only at the beginning and end of transport but also during it. Through automated alerts and solution suggestions, shipping companies can adjust containers to remedy problems in real time and prevent the waste or degradation of goods. Artificial Intelligence is also allowing shipping companies to manage their fuel consumption and utilize the most efficient shipping routes. The accurate monitoring of how fuel is being used leads to a reduction in fuel spend and brings environmental benefits by decreasing emissions. AI is also being used to improve the physical routes that shipping companies take. Based on historical data on weather patterns like water and wind currents, as well as traffic through certain areas and ports, shipping companies can plan routes that leverage all of this information to reduce fuel consumption and reduce trip times. Lastly, AI can be used to improve safety by monitoring ship systems and the environment around the ship. An example is a ship image recognition system being developed by tech company SenseTime and Japanese shipping firm Mitsui OSK Lines (MOL). They are developing an image recognition system to identify ships in the surrounding area and monitor shipping lanes. This makes transport within bays, from entering to departing ports, much safer, not only for your ship but for the surrounding ships as well. The advent of cheaper and faster computing power, combined with improvements in the accuracy and efficiency of AI models, has brought about a lot of improvements to the shipping industry. Reduction in waste, efficient fuel consumption, and improved safety features are just a few of the ways the shipping industry is becoming faster, safer, and more efficient in the present and future. About the Author: Josh Miramant, CEO of Blue Orange Digital. Image Credit: Intel
https://medium.com/datadriveninvestor/ai-in-the-shipping-industry-6b35a0cadb3f
['Blue Orange Digital']
2020-09-29 15:00:45.243000+00:00
['Supply Chain Solutions', 'Supply Chain', 'AI', 'Artificial Intelligence', 'Ai In Supplychain']
It’s Time to Rethink What It Means to Be Healthy
There are many metrics that are currently used to assess human health that aren’t based in sound science, and yet they persist. The body mass index (BMI) is one of them. Even the U.S. Centers for Disease Control and Prevention (CDC) says that the BMI “can be used to screen for weight categories that may lead to health problems but it is not diagnostic of the body fatness or health of an individual.” Writer Annaliese Griffin spoke to health experts and came up with five new metrics to assess your health that have nothing to do with measurements like weight or calories and everything to do with reframing your relationship with health. They include questions like: How much green stuff are you eating? What did your body do for you today? And are you sleeping enough? Use the coming new year as an opportunity to embrace better health habits that are based in science and also take into account your well-being and mental health. Read how below.
https://elemental.medium.com/its-time-to-rethink-what-it-means-to-be-healthy-4835432ff8c3
['Alexandra Sifferlin']
2020-12-29 06:32:39.979000+00:00
['Nutrition', 'Body', 'Health', 'Life', 'Science']
Grammar Mistakes That Medium’s Copy Editors Really Don’t Want You to Make
Common errors David: When a word or phrase is written by a majority of online users the wrong way: Like “everyday.” Instead of “I drink a ton of coffee every day,” they write “I drink a ton of coffee everyday.” I hate that. Sam: A small error I see frequently is “that” instead of “who,” such as “we need a president that shows empathy.” Tana: Using “which” when it should be “that,” like “a mindset which contributes to more incarceration” versus “a mindset that contributes to more incarceration.” Sam: Unnecessarily capitalized words — especially when referring generally to the president of the United States. Tana: Agreed! Unnecessarily capitalized words, particularly position or job titles, like “President of the HOA” or “Chief Security Officer for the company.” Tiffany: I agree with Tana and Sam that unnecessary capitals are getting to me lately. Also, why are people trying to capitalize the internet!? Tana: Comma splices, like “half of the users are police, the other half are private citizens.” Should be a semicolon. Iris: Errors like $1 million dollars. [Ed.: This is redundant. It’s either 1 million dollars or $1 million.] David: Hyphenated adverbs ending in -ly when modifying another word. A no-no in my book. [Ed.: AP Stylebook agrees.] Common typos Iris: The infamous “missing L in public” typo. Sloane: The equally infamous its/it’s, there/their/they’re, than/then typo. Pet peeves Iris: Impact vs. affect. This year, “impact” as a verb is showing up a lot. Impact makes a great noun, but it can be a problematic verb or just a bit too much, particularly when the less intense “affect” can take its place. When I read impact in a context like “how the pandemic impacted the workplace,” I might even replace it with a stronger verb: “how the pandemic dismantled/altered/forever changed the workplace.” Here’s an example I just read in my local news outlet where impact shows up as an appropriately impactful noun: “the pandemic’s impacts on learning.” That said, I will sometimes leave impact as a verb when the context is especially intense, like so many things 2020. Sloane: Language redundancies like “my own” + noun. Phrases like “my own mind” or “ my own thoughts” are redundant; “my mind” or “my thoughts” works just fine. Iris: Definitely an online thing, but super-long in line links bug me, especially when they run onto a second or even third line. Or when stuff like quotation marks or trailing punctuation marks are linked when they don’t need to be. Tidy linking makes me happy. Sloane: Me too! Long in-line linked copy — ugh, no. Just link the relevant word or short phrase, but not the word “here” or “said.” Iris: Overuse of “not only/but also.” Tana: Unnecessary commas: 1) “That a crime goes unsolved is not due to lack of effort by law enforcement, but lack of evidence.” 2) “Biases could be used to implicate someone in a crime, or in any variety of other legal but uncomfortable situations.” 3) “Thessen didn’t connect the dots at the time, but realizes now that this new bylaw was an act of subterfuge.” Iris: Overuse of “from X to Y to Z.” Putting commas in there really gets my goat (from X, to Y, to Z). Sloane: Overuse of the em dash kills me; however, when used well, it’s a delight. This piece by Peter Rubin is a good em-dash explainer. Technically not wrong but try to avoid Iris: Looooong run-on sentences for cute effect. This was a huge writing trend for a while. It’s okay and sometimes preferable in small doses, but I’ve seen a few pieces where it’s nearly every sentence. 
That’s asking a lot from a reader. Sloane: Similarly, overly hyphenated word phrases are also an older trend that is still hanging on. Tana: I’m not sure if this annoys anyone besides me, but it’s 100% a cross-platform thing [Ed.: Meaning other media publications are okay with this usage]: tucking in an unnecessary “the” before a profession and a name. Examples: 1) “The writer Maya Angelou says…” 2) “I spoke with the biologist Tedros Adhanom.” 3) “According to the scientist Jennifer Doudna.” Bonus things we love Sam: Love when linking is nice and tidy — not having the entire sentence linked. Iris: I will love a writer forever when they demonstrate a solid grasp of semicolons and en dashes. David: I love it when a writer caps the first word of a complete sentence following a colon, which is correct usage. Sam: Also, I love when names are all spelled correctly! Like when the writer has clearly checked those. David: I love seeing “minuscule” spelled correctly.
https://medium.com/creators-hub/grammar-mistakes-that-mediums-copy-editors-really-don-t-want-you-to-make-6b9eee0c7e3e
['Sloane Miller']
2020-12-02 19:18:25.122000+00:00
['Writing', 'Editing', 'Copywriting', 'Creativity', 'Writing Tips']
Dancing in the Dark
Dancing in the Dark What happens if the Prefect Cloud API goes down? “What happens if your API goes down?” This is an understandably common question from Prefect’s enterprise customers, who depend on Prefect Cloud to automate mission-critical workflows. I always explain that because of Prefect’s unique Hybrid Model, an API outage is not nearly as disruptive as what they probably expect from a SaaS service, and in some cases its effects can be entirely mitigated. Last Friday, my claim was put to the ultimate test: Cloudflare experienced a DNS issue causing many websites and services to become inaccessible for a short period, including the Prefect Cloud API. We predictably saw a large spike in outstanding task runs that became “zombies” and lost their connection to the backend (more on this later). Despite this unpleasant situation, the moment the issue was resolved, work continued as scheduled and all affected workflows were easily (and in most cases automatically) resumed. People who naively glance at our Hybrid Model might conclude that it is purely about separation of concerns (execution environment vs. platform environment), but as with most things at Prefect, it is the product of careful consideration to ensure that even in a worst-case event we are still proactively working on users’ behalf and providing value. In particular, thanks to its innovative design, even if our API is down: your business-critical data is not lost or affected; your work is still being scheduled; a record of all outstanding jobs is maintained and curated (including sending notifications); and work will resume when API access is restored. Why do we have such confidence in our approach? Because it was designed to put resilience first. This is another example of how we designed Prefect as an insurance product — most useful when things go wrong. Through careful design of each component, we created a system that delivers value and recovers resiliently from failure even when a substantial portion of the global internet is down. Scheduling The Prefect Cloud scheduler service is an always-on, horizontally scalable service that is constantly parsing all flow schedules. Its job is simple: to create new flow runs and place them in a Scheduled state (with the appropriate future start time) for every flow that needs scheduling. Once a run is placed in a Scheduled state, it is added to a work queue and stays there until a Prefect Agent picks it up at the appropriate time via an API query. This design ensures that scheduled work is never lost — if no Agents can communicate via the API, then at worst some runs begin late. We are currently working on a feature that will allow users to send notifications both on late flow runs and when agents stop communicating with the API, ensuring they are alerted that something might be awry (Note: because of this design, these alerts will be triggered even if the API is down!). Impacts to in-flight work Prefect flows have configurable executors that manage all dependency resolution of the tasks within a given flow (this is critical to the scale that Prefect enables). In normal operation, this is sufficient to ensure all work is visited and completed. However, in extreme circumstances, it is possible for task and flow runs to end in a half-completed state. For example, Kubernetes preemption events can shut down work without warning. Similarly, an API outage means that tasks cannot confirm their final state with the backend and consequently the flow run cannot complete.
Prefect Cloud has multiple services running behind the scenes that monitor for these types of situations. Two of the most visible are: Zombie Killer Service: this service looks for task runs that are in a Running state but haven’t sent a heartbeat in the last 2 minutes; when found, the service either places the run into a Failed state or a Retrying state (if the task has configured retries). If no activity occurs on the retrying tasks, the Retrying states eventually make their way into the work queue for agents to pick up. Advanced users will be able to configure zombie behavior separately from task-level retries. Lazarus Service: this service looks for distressed flow runs and task runs that don’t appear to be making any progress. When found, this service places them into the work queue for Agents to retry. If Lazarus visits the same flow 3 times in a row, it will conclude that it is fundamentally broken and automatically mark it as Failed, triggering any configured Cloud hooks to fire and send the appropriate notification. As a concrete example, the most common Lazarus event is a flow run that has been Submitted by an Agent but has not entered a Running state after some time, for example if the Agent is unable to deploy it into an execution cluster. These services (along with many others) guarantee that in the extreme event wherein work stops communicating with the API, items are re-added to the work queue for completion once API communication is possible again. Critically, a record of the event becomes both easily discoverable and apparent in your UI dashboard, from which you can choose to manually restart or inspect further. Data Availability Last but not least is the issue of data — more often than not, enterprises are concerned about their ability to access data during an outage. Independent of the outage we’re discussing here, Prefect’s Hybrid Model ensures that no proprietary data is ever stored in Prefect Cloud’s database. This means that your ability to access your business-critical data is completely unaffected by Prefect Cloud API’s availability. We say that the Hybrid Model provides “cloud convenience with on-prem security,” and indeed, it ensures that your code remains fully on-premise and in your control. This means that in an absolute worst-case event, you could call flow.run() yourself to guarantee your data is updated. Our work continues! All aspects of what I’ve described above are in a continual cycle of improvement, as we constantly seek to strengthen our guarantees. Prefect’s ultimate mission is to eliminate negative engineering by ensuring that data professionals can confidently and efficiently automate their data applications with the most user-friendly toolkit around. Our design goal is to be minimally invasive when things go right and maximally helpful when they go wrong; what better proof than a global internet failure?
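To make the retry and restart behaviour described above concrete, here is a minimal sketch of a flow written against the Prefect 0.x Python API that this post refers to. It is not Prefect's internal service code; the task names, retry settings, and data are illustrative assumptions.

```python
# Minimal illustrative sketch using the Prefect 0.x Python API.
# Task names, retry settings, and data are hypothetical.
from datetime import timedelta

from prefect import Flow, task


@task(max_retries=3, retry_delay=timedelta(minutes=2))
def extract():
    # If this task loses contact with the backend (e.g. during an API outage),
    # services like the Zombie Killer can place it into a Retrying state so an
    # Agent picks it up again from the work queue.
    return [1, 2, 3]


@task
def load(rows):
    print(f"loaded {len(rows)} rows")


with Flow("resilient-etl") as flow:
    load(extract())

if __name__ == "__main__":
    # Worst case, with the Cloud API unreachable, the flow can still be run
    # locally because the code and data never leave your infrastructure.
    flow.run()
```

With a definition along these lines, an interrupted run can be resumed by an Agent once API access returns, or executed locally with flow.run() in the meantime.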
https://medium.com/the-prefect-blog/dancing-in-the-dark-b4cc0e240ba7
['Christopher White']
2020-07-20 18:37:51.305000+00:00
['API', 'Site Reliability', 'Python', 'Data Engineering', 'Workflow']
Teens, Brains, and Tetrahydrocannabinol
My academic studies have taken me on a tour of addiction and mental health. While I believe THC has beneficial properties, I also believe we must be mindful of the risks. The younger the user, the higher the risks. Here is a small exploration of teens and THC as adapted from my academic paper. Teens and THC From their parents’ medicine cabinets to the person selling marijuana on the sly near schools, youth find avenues to use; they desire to change their current reality. Here’s a quick video explaining how THC affects the brain: With the use of marijuana, the adolescent brain, which is still developing, tends to fall short of the ability to make sound decisions. The use of THC tends to decrease wise choices in an already underdeveloped brain. Tetrahydrocannabinol (THC), the chemical responsible for most of marijuana’s psychological effects, affects brain cells throughout the brain, including cells in circuits related to learning and memory, coordination, and addiction (SAMHSA, 2018). The prefrontal cortex (PFC) is still in the development process where decisions and ideas are weighed based on risk assessment. During this sensitive time of brain growth, healthy decision-making is halted. The brain’s limbic system, which develops first, creates memory and emotional responses, so instant gratification and quick choices that disregard consequences take over, rather than the focused, logical, planning part of the brain. Unfortunately, one of the reasons teens make emotionally triggered choices has to do with an undeveloped prefrontal cortex. For teens who find access to drugs, the decision derives from the limbic system; soon, the ease of access transfers to action (Hart & Ksir, 2014). Teens who use prior to their brain’s full growth tend to stunt the maturity of reason and logic. The slide toward trying other illicit drugs happens rather quickly. For instance, the use of ‘blunts’, which are cigars that are hollowed out, filled with marijuana and crack, and then smoked, increases the dangers of the drugs. The developing brain receives a double dose of drugs, increasing the damage taking place across the blood-brain barrier. Validating the shift from one drug to another, Melberg, Jones, and Bretteville-Jensen suggest, “Our findings demonstrate, first of all, that there is a gateway effect and the hazard of taking up hard drugs increases substantially after the initiation of cannabis” (p.586). Another concerning drug comparable to marijuana is on the rise. A synthetic drug called “Spice,” which is “sold over the counter in many states — particularly in gas stations, convenience stores, and head shops — has synthetic chemical components of marijuana sprayed onto shredded plant material that is then smoked” (Wadely, 2014, p.2). The drug is dangerous. Thankfully, it is on NIDA’s list of drugs whose use decreased during 2014. It is usually used by those around 12th grade rather than by younger grades. Unfortunately, “Other drugs, which use remained unchanged in 2014 include Ritalin and Adderall — both stimulants used in the treatment of ADHD — as well as LSD, inhalants, powder cocaine, tranquilizers, sedatives, and anabolic steroids. However, most of these drugs are now well below their recent peak levels of use according to the investigators” (Wadely, 2014, p.4).
https://medium.com/publishous/teens-brains-and-tetrahydrocannabinol-d29244cb7e7b
['Pamela J. Nikodem']
2020-02-02 16:36:01.152000+00:00
['Growth', 'Life Lessons', 'Health', 'Mental Health', 'Addiction']
What Would Democrats Do?
Terry H. Schwadron Oct. 11, 2018 Even as President Trump was whipping up about 9,000 people at an Iowa rally with cries that Democrats are an “angry mob,” intent on “policies of anger, division and destruction,” actual Democrats in Washington are thinking through what will happen if they actually do flip the currently Republican-majority House in November. There was something very odd about the juxtaposition. “You don’t hand matches to an arsonist and you don’t give power to an angry left-wing mob, and that’s what the Democrats are,” Trump said. In an op-ed for USA Today, the president argued, for example, that the plan would threaten seniors and represented “radical socialism.” You’ve got to squirm at the constantly more pointed language marking our political races. Meanwhile, TheHill.com went about a very sober journalistic task: assessing the agenda for a potential Democratic-majority future. Their reporters actually asked those who would be incoming House committee chairs what they actually want to do. The most amazing thing you learn is that Democrats might actually take a ten-minute break from fund-raising to pay attention instead to matters of governing. That said, the ambitious agenda for seeking legislation to address items of governance and budget that have gone unaddressed includes a healthy dose of closer questioning of Trump officials. After eight years in the minority, Democrats want a wide variety of bills, from shoring up ObamaCare and Dodd-Frank financial rules to protecting “Dreamers” and the integrity of elections. Of course, the likelihood is that even if the House turns blue, the Senate will hold its Republican majority, meaning that none of these bills have a hope of becoming law. Nevertheless, it is good to see a different view of why we even have a federal government. Under Trump and Republican leadership, the clear outline has been to shrink social services, build up the military, offer tax cuts built for a smaller version of government and to vastly reduce regulation to let corporations and entrepreneurs flourish. Of course, Democrats also are vowing to be aggressive in investigating the actions of the Trump administration — an oversight role Democrats contend was virtually abandoned by Republicans. Here’s a summary of TheHill.com’s findings by committee: Appropriations. Rep. Nita Lowey, D-NY, would focus efforts on increasing support for social service programs, including a labor-health spending bill with a $1 billion increase over 2018 levels for medical research, maternity care, home-heating subsidies, nutrition and education programs, and funding to fight the opioid crisis. Lowey said an aim would be to reinstate a system of passing the various spending bills separately. Armed Services. Rep. Adam Smith, D-WA, would seek to revisit particular initiatives, including the spending to overhaul nuclear weapons. Smith said deployment of special forces would be a primary interest, particularly operations in Africa and other hotspots. And he would revisit a ban on transgender people in the military. Budget. Rep. John Yarmuth, D-KY, wants to expand the scope of the panel to include overarching assessments of how specific issues, in the broadest terms, impact the federal budget. That means new attention on the impact of tax cuts, immigration, health care and climate change. Energy and Commerce. The priority for Rep. Frank Pallone Jr., D-NJ, is health care and shoring up what has been cut in ObamaCare, a target clearly opposed by the president and Republicans.
He proposes action on reducing drug costs, elimination of income caps on tax credits, and limitations on rising Medicare payments for drugs. The committee also would take a bigger oversight role toward energy issues. Financial Services. Rep. Maxine Waters, D-CA, a fierce Trump critic, is in line to take on a consumerist agenda that has been set aside by Republicans, including a lot of oversight investigation. Democrats also want to bolster Dodd-Frank restrictions on Wall Street as well as the Consumer Financial Protection Bureau (CFPB), and the federal flood insurance program. Homeland Security. Rep. Bennie Thompson, D-MS, wants the committee to conduct deep dives into election security, Trump’s travel ban, the administration’s uneven response to Hurricane Maria and the screening methods adopted by the Transportation Security Administration. He would lead questioning of enforcement on the southern border as well as policies allowing family separations and the Wall. Intelligence. It is easy to see Rep. Adam Schiff, D-CA, going hard after the special counsel investigation, possibly opening an examination of Russia’s potential financial ties to Trump’s global business empire. Judiciary. Rep. Jerrold Nadler, D-NY, outlines a busy program of questioning policies affecting immigration, guns, voting rights and, of course, impeachment. Nadler has lashed out at the administration for refusing to defend certain ObamaCare insurance protections from outside lawsuits; for separating immigrant families at the southern border; for backing the National Rifle Association in opposition to tougher gun laws; and for defending states that have adopted tougher voting restrictions. Natural Resources. Rep. Raúl Grijalva, D-AZ, hopes to revisit the Democrats’ upset over environmental issues by serious questioning of policies and actions by Interior Secretary Ryan Zinke on climate change, oil drilling, shrinking national monuments and selling off mineral rights. He would seek to strengthen the Endangered Species Act and the National Environmental Policy Act. Oversight and Government Reform. Rep. Elijah Cummings, D-MD, would lead investigations of all sorts — voting rights, elimination of pre-existing conditions in health care, attacks on the FBI, the media and other institutions. As a forecast of what might come, Cummings and Oversight Democrats have submitted more than 50 subpoena requests for administrative documents on topics ranging from Trump’s efforts to dismantle ObamaCare and officials’ use of chartered flights to the president’s travel ban and the use of private email in the White House. Republicans have denied every request. Transportation and Infrastructure. Rep. Peter DeFazio, D-OR, wants the infrastructure package that Trump promised but never delivered. Ways and Means. Rep. Richard Neal, D-MA, wants to revisit the tax cuts, including hearings on what the cuts never delivered to the middle class. They also want to reinstate a state and local tax deduction, known as SALT, that was eliminated in the GOP tax law. He also targets shoring up retirement savings, protecting multi-employer pension plans and infrastructure. Does any of this sound “radical”? www.terryschwadron.wordpress.com
https://terryschwadron.medium.com/what-would-democrats-do-969e9b88675f
['Terry Schwadron']
2018-10-11 11:28:39.816000+00:00
['Democrats', 'Health', 'Politics', 'Environment', 'Congress']
Donald Trump Is Smarter Than We Ever Gave Him Credit For
Donald Trump Is Smarter Than We Ever Gave Him Credit For Ladies and gentlemen, we’ve been played. Photo by Charles Deluvio on Unsplash It was all an act. Back in February, when everyone was trying to warn him about an impending pandemic, Donald Trump under-reacted. He didn’t seem to get it. He dismissed the experts. He called it a Democratic hoax. In response, Democrats mocked him as a fool. Now we’re finding out the truth. The entire time, Trump knew how deadly the virus was. He’s on record acknowledging its potential to become deadlier than any virus we’d seen in a century. And yet, he continued to shrug it off as “Kung Flu,” and played politics with masks and ventilators. He knew. And instead of trying to save lives, he conspired with his administration to sabotage cities and states with Democratic majorities. He did this in hopes that it would weaken bastions of liberal progressivism, and turn the election in his favor. Three books show us exactly who Donald Trump is: You don’t even have to read these entire books to see the real Donald Trump, the one his inner circle knows when the cameras are off. They show different sides of the president, but they all agree. We should’ve been much more afraid of this man.
https://medium.com/the-apeiron-blog/donald-trump-is-smarter-than-we-ever-gave-him-credit-for-996c493f6492
['Jessica Wildfire']
2020-09-10 16:01:01.860000+00:00
['Books', 'Politics', 'Society', 'News', 'Culture']
How to Become Friends With Your Anxiety
Photographer: Michelle (Fisher) Bulla — Model: Allison Crady Tension in your head, soreness along your upper back, your body slowly curves inward, and your mind races with unkind thoughts. You feel out of control, uncomfortable, and disoriented. You have anxiety. I’m pretty sure I said something awkward, and I can’t get it out of my head — They must think I’m a psycho. Who the hell am I? Why do I say things like that? Will this ever stop? I have a lot of anxiety, and it sucks. Sitting with discomfort and uncertainty was not part of my life training. Our emotional pain is real. Becoming friends with anxiety means slowing down, acknowledging your pain, and listening to your body with self-compassion. “You can’t stop the waves, but you can learn to surf.” — Jon Kabat-Zinn We can hear ideas over and over, then one day someone says it a certain way, and it clicks. By sharing my anxiety journey — from trying to “fix it” to somatic healing to self-compassion — I hope something clicks. 1. Acknowledging Anxiety We all have anxiety as part of our human journey, a reminder of our shared humanity. We have to start by acknowledging and welcoming its presence. Pushing past anxiety is like telling yourself that it’s not okay to feel or be your whole self: numbing and unhelpful. We like to push away anxiety with vices. I once drew a cartoon of myself watching TV with a thought bubble, “Haha, I don’t have to deal with my emotions.” Pushing aside unpleasant emotions does not heal them. 2. Sitting with Anxiety Most situations are not nearly as bad as we make them out to be. Our stories and judgments cause us pain and anxiety. Anxiety activates our core emotions and fears, the need for control, survival, and love. I found a few guided anxiety meditations that have helped — a 5-minute meditation and a 10-minute meditation. After grounding, we can separate the situation from our response and see the situation more clearly. Being with your anxiety feels difficult, and we don’t want to start. It feels much more appealing to run to comfort or to try to “fix it.” I try to logic my way through anxiety — Okay, my thoughts are racing. I seem to be feeling unsafe or inadequate. I’ll repeat a mantra, take deep breaths, do meditation or yoga, and it will go away. We need to combine a logical approach with the felt experience. Anxiety manifests as trapped energy in our bodies. We need to slow down and release the anxious energy. Easier said than done. I get frustrated. I’m doing all the right things, so why isn’t this working? Why can’t I be calm already? I’ve been taking deep breaths for a while now. It takes time and practice to build faith in your ability to feel and heal through anxiety. Our technology culture gives us instant gratification, so we expect everything to happen quickly. Our bodies function in a slower, more flowing way. Our bodies require more listening, acceptance, and intentional breathing to feel balanced. 3. Listening to Anxiety Our anxiety alerts us to important information. When we slow down and listen, we can ask ourselves what we need to feel better about the situation. Talk to the anxious energy in your body — “What more is there for me to know?” My body often reminds me to slow down and take deeper breaths. I remind myself that I am well-resourced to handle this situation. I felt anxious about writing this blog. Watching my thoughts, I realized I felt scared of sharing my darkness and feeling not good enough. 
By listening to my body, I learned that I need to let myself be vulnerable and embrace the creative process. I can also take steps to make the process more enjoyable and create safety for my artist child. 4. Building Confidence Anxiety comes from feeling out of control. We feel like a situation, or our emotions, have become unmanageable, and that scares us a lot. Anxiety is intelligent, blocked energy in our bodies. We can talk to the parts of our bodies where we feel tense, and we can move the blocked energy through our bodies. My somatic life coach walked me through an exercise: notice the places in your body where you feel tension when you are anxious. Then, identify what confidence feels like in your body and where. With awareness, you can gently move the energy back into a place of confidence. I learned that my body’s wisdom knows how to handle every situation that I encounter. We need to become very relaxed and present to tap into our inner wisdom. Using somatic processing, along with a logical approach, has helped me feel more in tune with my body and embrace anxiety. 5. Befriending Anxiety Anxiety is part of us, and we need to value the ways it helps us. We need to care for our bodies, tell a new story about anxiety, and find strategies. For many of us, self-compassion does not come naturally. We grew up in critical, hyper-masculine environments that have made us judgmental of ourselves. In her self-compassion talk, Dr. Kristen Neff shares an exercise: imagine a friend was having a hard time. How would you respond? Then imagine yourself having a hard time. Compare the responses. Most of us are harsher with ourselves, using a judging tone. We are more willing to be kind and supportive of our friends in their hard times. By understanding our anxiety, caring for our bodies, and listening to our needs, we can develop self-compassion, making us healthier, more resilient, more likable, and more balanced human beings.
https://medium.com/an-injustice/how-to-become-friends-with-your-anxiety-bd756e4526b6
['Allison Crady']
2020-12-16 01:18:43.420000+00:00
['Anxiety', 'Mental Health', 'Creativity', 'Compassion', 'Feminism']
Why Flutter is the Future Trend in Mobile App Development?
Make Your Business Successful with a Flutter Mobile App Why Flutter is the Future Trend in Mobile App Development? In this blog, you will get an overview of Flutter and why it is more efficient for startups. Among startups, there is confusion about which cross-platform mobile app development framework will be more efficient for rapid growth in the competitive market. Many startups fail by choosing the wrong mobile application platform. The quick solution is Flutter. With the right choice of technology, startups can survive in the competitive world for a long time with greater efficiency. So in this blog, we will discuss the reasons why Flutter is the right choice for cross-platform mobile application development. Brief Introduction of Flutter: I know that learning about a technology can feel boring, but believe me, Flutter is easy to understand and quite interesting. Flutter builds applications for both Android and iOS from a single codebase. It is a free, open-source, cross-platform framework with high performance. It was launched in 2018 by Google, so it is trustworthy. It enables faster app development: Flutter’s hot reload feature saves time by letting you change the codebase and see the results instantly. Developers can build the app without compromising performance, and Flutter apps are highly customizable and attractive. Key points of the blog: Why Flutter is the Best Platform? Why Flutter is the Development Trend What is the Scope of Flutter? Amazing Apps Using the Flutter Framework Conclusion Why Flutter is the Best Platform? React Native, Angular JS, and Xamarin are other mobile frameworks available besides Flutter. So when the time comes to decide, many developers and owners ask why Flutter is the best platform for mobile app development. Refer to the image below for a clear comparison: Flutter is developed and supported by Google, so its long-term maintenance is better than that of the other frameworks. Look at some benefits of building your mobile app in Flutter. Cost-effective: It is cost-effective, so for startups it is the best option for mobile app development. Fewer developers: There is no need to hire separate developers for Android and iOS because Flutter requires fewer developers. With one small team of Flutter developers, you can build a cross-platform mobile app quickly. Faster code development: With Flutter, you can develop your app faster than with other frameworks. It increases developers’ efficiency and saves your business time. Go beyond mobile: Flutter has the potential to go beyond mobile, which leads to more growth for your business. Before choosing any technology, researching the pros and cons of each framework is necessary. Thus, after knowing these benefits of Flutter, you can decide that a Flutter mobile app is the more efficient choice for your startup’s next mobile app development. Why Flutter is the Development Trend: Quickly look at some reasons why Flutter is the growing trend in mobile app development. By editing one codebase for both iOS and Android apps, we can easily adjust the UI. You spend less time on app development, and changes appear instantly without losing the present application state. Flutter’s performance is close to that of native apps. What is the Scope of Flutter? The future scope of Flutter is as long as Google’s. Flutter has beaten React Native in the market. Let’s have a look at the scope of Flutter in mobile app development.
Flutter uses fewer resources compared to other frameworks; with less money and investment and fewer developers, you can build the app. Flutter is easy to learn and is growing in popularity among developers and in the market. Flutter enables excellent, pixel-perfect design. It is written in Dart, which lets developers read, replace, remove, and change operations in an easier way. Constantly updated Dart libraries and higher code quality in Flutter create more precise, accurate, and less bulky apps. Amazing Apps Using the Flutter Framework: With Google Ads, users can view their campaigns on a smartphone; the app provides campaign details, alert notifications, and suggestions, and also allows calling a Google expert. You can add, edit, and remove the keywords of a particular campaign and more. So this Flutter app helps to manage all the activity of your campaigns anywhere, without a desktop. Alibaba is the world’s largest e-commerce company, connecting dealers around the world. The Alibaba app is a wholesale marketplace and global trade app that lets users buy products from suppliers across the world on mobile. Birch Finance is a credit card rewards app that allows users to manage their existing cards and provides various ways to earn and redeem rewards. Coach Yourself, for the German-language market, is a meditation app that helps users with personal development. Other showcased apps provide daily news, videos, and lotteries, or cover tour locations in New York, Chicago, London, and more. Watermaniac is a healthcare app that lets users track the amount of water they drink. Using this app, users can set reminders and alerts about drinking water; it is a highly customizable app that helps users set and achieve a daily water goal. Conclusion:
https://medium.com/devtechtoday/why-is-fluter-the-future-trend-in-mobile-app-development-26596c84296b
['Binal Prajapati']
2020-03-19 12:26:46.629000+00:00
['Mobile App Development', 'Technology', 'Startup', 'Business', 'Flutter']
Confidence intervals for permutation importance
Confidence intervals for permutation importance A new theoretical perspective on an old measure of feature importance Feature importance helps us find the features that matter. Introduction In this post, we explain how a new theoretical perspective on the popular permutation feature importance technique allows us to quantify its uncertainty with confidence intervals and avoid potential pitfalls in its use. First, let’s motivate the “why” of using this technique in the first place. Let’s imagine you just got hired onto the data science team at a major international retailer. Prior to your arrival, this team built a complex model to forecast weekly sales at each of your dozens of locations around the globe. The model takes into account a multitude of factors: geographic data (like local population density and demographics), seasonality data, weather forecast data, information about individual stores (like total square footage), and even the number of likes your company’s tweets have been getting recently. Let’s assume, too, that this model works wonders, giving the business team advance insight into future sales patterns weeks in advance. There is just one problem. Can you guess what it is? Nobody knows why the sales forecast model works so well. Why is this a problem? A number of reasons. The business folks relying on the model’s predictions have no idea how reliable they would be if, say, Twitter experienced an outage and tweet likes decreased one week. On the data science team, you have little sense of what factors are most useful to the model, so you’re flying blind when it comes to identifying new signals with which to bolster your model’s performance. And let’s not forget other stakeholders. If a decision based on this model’s forecast were to lead to bad results for the company, the board will want to know a lot more about this model than “it just works,” especially as AI continues to grow more regulated. So what can we do? A great first step is to get some measure of feature importance. This means assigning a numerical score of importance to each of the factors that your model uses. These numerical scores represent how important these features are to your model’s ability to make quality predictions. Many modeling techniques come with built-in feature importance measurements. Perhaps you can use the information-gain-based importance measure that comes by default with your xgboost model? Not so fast! As your teammates will point out, there is no guarantee that these feature importances will describe your complex ensemble, and besides, gain-based importance measures are biased [1]. So what can we do instead? We can use “randomized ablation” (aka “permutation”) feature importance measurements. Christoph Molnar offers a clear and concise description of this technique in his Interpretable ML Book [2]: The concept is really straightforward: We measure the importance of a feature by calculating the increase in the model’s prediction error after permuting the feature. A feature is “important” if shuffling its values increases the model error, because in this case the model relied on the feature for the prediction. A feature is “unimportant” if shuffling its values leaves the model error unchanged, because in this case the model ignored the feature for the prediction. Background Where did this technique come from? Randomized ablation feature importance is certainly not new. 
Indeed, its inception dates back to at least 2001, when a variant of this technique was introduced as the “noising” of variables to better understand how random forest models use them [3]. Recently, however, this technique has seen a resurgence in use and variation. For example, an implementation of this technique will be included in the upcoming version 0.22 of the popular Scikit-learn library [4]. For a more theoretical example, consider the recently-introduced framework of “model class reliance,” which has termed a variant of the randomized ablation feature importance “model reliance” and used it as a core building block [5]. A new theoretical perspective While working with this technique at Fiddler Labs, we have sought to develop a clear sense of what it means, theoretically, to permute a column of your features, run that through your model, and see how much the model’s error increases. This has led us to use the theoretical lens of randomized ablation, hence our new name for what is commonly called permutation feature importance. In a recent preprint released on arXiv, we develop a clear theoretical formulation of this technique as it relates to the classic statistical learning problem statement. We find that the notion of measuring error after permuting features (or, more formally, ablating them through randomization) actually fits in quite nicely with the mathematics of risk minimization in supervised learning [6]. If you are familiar with this body of theory, we hope this connection will be as helpful to your intuition as it has been to ours. Additionally, our reformulation provides two ways of constructing confidence intervals around the randomization ablation feature importance scores, a technique that practitioners can use to avoid potential pitfalls in the application of randomized ablation feature importance. To the best of our knowledge, current formulations and implementations of this technique do not include these confidence measurements. Confidence intervals on feature importance Consider what might happen if we were to re-run randomized ablation feature importance with a different randomized ablation (e.g. by using a different random seed), or if we run it on two different random subsets of a very large dataset (e.g. to avoid using a full dataset that would exceed our machine’s memory capacity). Our feature importances might change! Ideally, we would want to use a large dataset and average over many ablations to mitigate the randomness inherent in the algorithm, but in practice, we may not have enough data or compute power to do so. There are two sources of uncertainty in the randomized ablation feature importance scores: the data points we use, and the random ablation values (i.e. permutation) we use. By running the algorithm multiple times and examining the run-to-run variance, we can construct a confidence interval (CI) that measures the uncertainty stemming from the ablation used. Similarly, by looking point-by-point at the loss increases caused by ablation (instead of just averaging loss over our dataset), we can construct a CI that measures the uncertainty stemming from our finite dataset. Example: forecasting the price of a home To demonstrate the use of randomized ablation feature importance values with CIs, let’s apply the technique to a real model. To this end, I used the Ames Housing Dataset [7] to build a complex model that estimates the sale price of houses. The full code for this example is available in a Jupyter notebook here. 
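Before turning to the housing example, the following minimal sketch shows how the first kind of confidence interval, the run-to-run variance over K repeated ablations, might be computed. It is not the paper's reference implementation; the model, loss function, and number of repetitions are assumptions chosen only for illustration.

```python
# Illustrative sketch of randomized-ablation (permutation) feature importance
# with a repetition-based confidence interval. Not the paper's reference code.
import numpy as np


def permutation_importance_ci(model, X, y, loss_fn, n_repeats=30, seed=0):
    """Return {feature_index: (mean_increase, ci_lower, ci_upper)}."""
    rng = np.random.default_rng(seed)
    base_loss = loss_fn(y, model.predict(X))
    results = {}
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Ablate feature j by permuting (shuffling) its column.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            increases.append(loss_fn(y, model.predict(X_perm)) - base_loss)
        increases = np.asarray(increases)
        mean = increases.mean()
        # Normal-approximation 95% CI over the K repetitions
        # (captures uncertainty from the random ablation only).
        half_width = 1.96 * increases.std(ddof=1) / np.sqrt(n_repeats)
        results[j] = (mean, mean - half_width, mean + half_width)
    return results


if __name__ == "__main__":
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error

    X, y = make_regression(n_samples=500, n_features=5, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X, y)
    for j, (mean, lo, hi) in permutation_importance_ci(
        model, X, y, mean_squared_error
    ).items():
        print(f"feature {j}: {mean:.3f} [{lo:.3f}, {hi:.3f}]")
```

The point-by-point interval described in the paper, which also captures uncertainty from the finite dataset, would instead aggregate the per-example loss increases; the sketch above only illustrates the repetition-based variant.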
To show the importance of confidence intervals, we run randomized ablation feature importance using just 100 points, with just K=3 repetitions. This gives us the following top-10 features by score, with a 95% confidence interval indicated by the black error bars: Randomized ablation feature importance for 100 points after 3 repetitions. As we can see from our error bars, it is uncertain which feature is actually the third most important over these 100 points. Re-running randomized ablation feature importance with K=30 iterations, we arrive at much tighter error bounds, and we find with confidence that a house’s neighborhood actually edges out its total basement square footage in importance to our model: Randomized ablation feature importance for the same 100 points after 30 repetitions. However, it turns out that a larger source of uncertainty in these feature importance scores actually stems from the small size of the dataset used, rather than the small number of ablation repetitions. This fact is uncovered by using the other CI methodology presented in our paper, which captures uncertainty resulting from both ablation and the size of the dataset. Running this other CI technique on another 100 points of our dataset (with just one repetition) we observe the following wide CIs: Randomized ablation feature importance for 100 points with point-by-point CIs. By increasing the number of points to 500 instead of 100, our confidence improves significantly, and we become fairly confident that neighborhood is the third most important feature to our model overall (not just in our limited dataset). Randomized ablation feature importance for 500 points with point-by-point CIs. Conclusion Feature importance techniques are a powerful and easy way to gain valuable insight about your machine learning models. The randomized ablation feature importance technique, often referred to as “permutation” importance, offers a straightforward and broadly-applicable technique for computing feature importances. We also showed here how, through a new way of theorizing and formulating the “true” value of randomized ablation feature importance, we are able to construct confidence intervals around our feature importance measurements. These confidence intervals are a useful tool for avoiding pitfalls in practice, especially when datasets are not large. If you liked this post, you can find more like it on Fiddler’s blog, and if you want a deeper dive into CIs for randomized ablation feature importance, be sure to check out the full paper. Don’t worry, it’s only four pages long! References [1] Parr et. al. Beware Default Random Forest Importances (2018). https://explained.ai/rf-importance/ [2] Molnar, Christoph. Interpretable Machine Learning (2019). https://christophm.github.io/interpretable-ml-book/feature-importance.html [3] Breiman, Leo. Random Forests (2001). https://www.stat.berkeley.edu/%7Ebreiman/randomforest2001.pdf [4] Scikit-learn Contributors. Permutation feature importance (2019). https://scikit-learn.org/dev/modules/permutation_importance.html [5] Fisher et. al. Model Class Reliance (2019). https://arxiv.org/abs/1801.01489 [6] Merrick, Luke. Randomized Ablation Feature Importance (2019). https://arxiv.org/abs/1910.00174 [7] De Cock, Dean. Ames, Iowa: Alternative to the Boston Housing Data as an End of Semester Regression Project (2011). http://jse.amstat.org/v19n3/decock.pdf
https://towardsdatascience.com/confidence-intervals-for-permutation-importance-2d025bc740c5
['Luke Merrick']
2019-10-08 18:07:06.602000+00:00
['Feature Importance', 'AI', 'Artificial Intelligence', 'Explainable Ai', 'Machine Learning']
APEX Supernode Candidate Selections
Supernode Candidates, ongoing efforts & reimbursement continuation It should be noted that each and every Community Supernode Candidate will be expected to maintain a certain level of activity and ongoing support for the project, as their efforts up to this point form a large part of the basis for why they have been selected. This is not solely for the benefit of APEX Network, but will be a requirement to maintain support and garner the votes necessary from the community of CPX holders to retain active production status in the future. Every Supernode Candidate will receive a special “Supernode Candidate” tag in the main and tech support chat. Supernode Candidates will receive continued reimbursement for their hardware rental costs until further notice. Reimbursement would naturally halt before or close to the start of staking on the mainnet.
https://medium.com/apex-network/apex-supernode-candidate-selections-858f6db85dbf
['Apex Team']
2020-01-15 11:01:17.594000+00:00
['AI', 'Technology', 'Blockchain', 'Big Data']
Everything You Know About Productivity is a Lie
We live in a world of empty slogans and meaningless mantras. But, of all the empty phrases in our culture, none are as damaging as productivity. Productivity sounds nice. It sounds technical, efficient, and powerful. Most of us have bought into the idea that to be successful you have to be productive. You probably end most of your days looking at your to-do list and scolding yourself for being so unproductive. Productivity used to be a term of art in the discipline of economics. Now, it is a multi-billion-dollar cash cow for the self-help industry. What does productivity mean? The most common answer is the equally vapid phrase, “getting shit done.” But what shit are you getting done? In the knowledge economy, productivity is meaningless. It is a leftover from the first industrial revolution. It might have mattered how many widgets you could crank out on the assembly line in 1950. But measuring the amount of thought you put into an article, design, or line of code is impossible. Productivity is an unmoored metric in the knowledge economy. It doesn’t measure anything worth tracking. Productivity in Economics In economics, productivity is the ratio between the output volume and the input volumes. It is a measure of efficiency. Economists look at productivity to see how efficiently countries and corporations are using capital and labor. In economic terms, the more productive you are, the more value you can create for the economy. Productivity only makes sense as a ratio. The problem is that it has become increasingly difficult in the knowledge economy to measure the inputs that go into creating an output. Traditionally, economists look at hours worked. Payroll records are the most common way to assess the amount of labor used to create a given product or service. This works well for most physical products. But it fails miserably with the kind of work lawyers, writers, designers, consultants, coders, and engineers do. If thinking is a major part of your job, traditional measures of productivity do not accurately capture your inputs. They also likely fail to measure the value of your output accurately. It might take me one hour to type out an article. If that is all I produce that day, have I been productive? If I sell the article for $400, does that mean I was more productive than if I only sell it for $100? What if I have been thinking about the article and writing it in my head for months? What if I wrote the entire thing from concept to final edits in an hour? How does productivity measure my efficiency? There are too many variables in the creation of knowledge work for economic productivity to accurately calculate its economic efficiency. Productivity is even more nebulous when you leave economics and venture into the world of self-help. Productivity in Our Self-Help Culture Not surprisingly, the definition of productivity in the realm of self-help is not as precise as the one used by economists. Most self-help gurus think of productivity only in terms of output. Input is rarely even an afterthought. It is all about how much did you get done that can be shipped out today? There are thousands of self-help productivity books, and ten times that many productivity gurus. Our hustle culture loves to promote the idea of getting shit done. But, as a knowledge worker, what does it even mean to get shit done? If you are a painter, are you more productive if you finish twenty miniature portraits in a week than if you take three months to paint a mural in a public park? 
If you write one line of code that solves the one bug that has kept the product from shipping, are you less productive or more productive than all the coders who spent weeks writing the rest of the code? If you are a writer, is writing 10,000 words more productive than writing 1,000 words? Does it make you ten times more productive? I know many freelance writers that make less writing 10,000 words than I make writing 500 words because I work with businesses, and I charge high rates for my labor. Self-help productivity might leave me feeling like a loser and the 10,000-word writer making pennies per word deeply confused. They might wonder, if they got so much shit done, why are they still broke? Again, the notion of productivity also fails to capture all the time I spend thinking about my work. I write in the shower every day. I outline articles and write complete introductions. I end up publishing or selling many of those articles. Many of them never go anywhere because, upon deeper reflection, I realize they are garbage. How does productivity measure that? Today while playing cards with my kids, I had an interesting idea pop into my head. I jotted it down and will return to it later. Does that mean playing cards was productive? Or, am I only as good as the number of projects I finish in a day? Like other knowledge workers, I draw upon my life experience and the media influences I am exposed to every day in creating my work. There is no way to measure the inputs I require to produce something. Additionally, not everything I produce has an absolute economic value. I have sold articles to magazines, written content and sales copy for clients, and independently published articles and books. Some of the pieces I have published as an indie have earned far more over several years than they would have made if I had sold them to a traditional publisher like a magazine or if I had written them for a client. But, for most things I write, I earn the most money writing for private clients. Sometimes I have published articles independently and only made a couple of bucks. However, occasionally an article I have published myself or placed with a magazine or website under my own name goes on to bring new clients to me. It is impossible to see a finished article and book and know what its economic value will be. If you are a knowledge worker, you should stop focusing on productivity and focus on something else instead. Process Over Productivity How you work is a better measure of your progress than your pile of completed tasks. If you want to maximize your work time, you need to create a process for doing your work. If you focus on process over tasks or goals, you will be happier and more successful. As a knowledge worker, you usually cannot control how well your idea or finished product is received. You can control your process. I do not write every day. Every day I spend time doing some combination of these activities: Writing Reading Thinking Outlining Editing Observing the world around me Throwing away bad ideas I do plan ahead to make sure I can meet client deadlines. I have a writing process. I don’t worry about how many words I write a day or how many projects I finish in a day. Instead, I focus on my process. Over the past eight years, I have learned that I will make the money I want if I do certain things consistently. When I fail to follow my process, my business flails. I don’t have to-do lists. My only goal is to be better today than I was yesterday. 
I don’t panic about efficiency or stress about how much shit I got done. I can measure my process. It takes into account everything I need to create my best work. It also takes into account my crazy life. Some days that means I only read and think. Some days I produce 500 words, and other days I produce 5,000 words. Some days I don’t write any words. It doesn’t matter as long as I am following my process. I do care about how many projects I finish and ship. But, writing and other creative endeavors are about much more than the end result. The process determines the quality and quantity of work I create. A daily tally of checkmarks doesn’t make me a more prolific writer. Knowledge workers of the world, it’s time to unite and kill our senseless obsession with productivity.
https://medium.com/escape-motivation/everything-you-know-about-productivity-is-a-lie-4e2a703f806e
['Jason Mcbride']
2020-08-01 23:13:50.550000+00:00
['Life Lessons', 'Business', 'Productivity', 'Freelancing', 'Writing']
Inside Out: Repository Pattern for Data Layer
Inside Out: Repository Pattern for Data Layer A perfect place to put your Domain logic for Data Models outside the Entity definition Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects. — Edward Hieatt and Rob Mee in Patterns of Enterprise Application Architecture by Martin Fowler Before We Begin The Object-Relational Impedance Mismatch* In industry, Relational Databases (e.g., OracleDB, MySQL, PostgreSQL) are the ones most commonly used as the persistent data storage of an application. On the other hand, Object-oriented Programming is the dominant programming paradigm, so it is often the case that your application will be written in one of the languages that support Object-oriented Programming patterns, whereas your database will be relational in nature. The Object-oriented programming paradigm is an evolution of programming techniques that originated in the Software Engineering domain, whereas the Relational Database, or rather relational mapping, is grounded in proven mathematical deduction, found its application in the data storage domain, and from there made its way into Software Engineering. As expected, the two do not really go together; there are several areas with subtle differences and gaps in communication between them. This is termed the Object-Relational Impedance Mismatch. Luckily for us, years of software development have produced more than enough solutions to this problem. Database Connectors and Transaction Script Pattern* Obviously, every enterprise application has to find a way to query and mutate the data stored inside the database, based upon application behaviour. The basic and naïve technique is to open a TCP connection to the database and operate on it directly. Every database vendor offers an SDK library in almost every popular programming language to open a connection to the database port exposed over the network (e.g., JDBC). And every SQL database has to honour the standard query specification defined by SQL. Note that although every database vendor has to abide by the SQL spec, they can, if they want, extend it to make the best use of their database and gain an edge in product comparisons. So, in order to perform any application behaviour that requires an operation on the database, the application has to open one or more connections to the database, write SQL as per the behaviour requirements and execute it inside the database itself. The SDK must offer a cursor to the current execution context inside the database, which the application can use to perform core database operations (e.g., COMMIT, ROLLBACK, etc.). As is evident, this comes with its own set of problems. Now that the SQL query is stored inside the application, it is strongly coupled with the application code. Not only that, any change in requirements will force the engineer to rethink the implementation on the database side as well. Stored Procedures and Triggers* Extending the above discussion a bit more, Stored Procedures and Triggers are quite common non-standard features that SQL databases offer. The purpose of a Stored Procedure is to define a set of instructions that is to be performed for one single operation expected from the application. It could be the addition of a Foreign Key to an entity in a different Table in response to an INSERT mutation on the original Table. Now that the complexity has been moved out of the application, where you just invoke the procedure via an SQL prepared statement, the domain-specific implementation lives inside the database itself. This increases coupling at a cost that must be avoided in any changing system. Triggers can also be helpful tools, but again they should be used for database-specific operations, not domain-specific ones.
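To make the problem concrete, here is a minimal, hypothetical sketch of the connector/transaction-script style described above, written in Python with the standard library's sqlite3 module. The orders and audit_log tables and the business rule are assumptions made purely for illustration; they are not taken from the article.

import sqlite3

def mark_order_as_paid(order_id):
    # Raw SQL is written and executed straight from application code,
    # so the schema and the business rule are coupled to this function.
    conn = sqlite3.connect("app.db")
    try:
        cur = conn.cursor()
        cur.execute("UPDATE orders SET status = ? WHERE id = ?", ("PAID", order_id))
        cur.execute("INSERT INTO audit_log (order_id, event) VALUES (?, ?)",
                    (order_id, "order marked as paid"))
        conn.commit()  # core database operation exposed through the connection
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

Every new behaviour means more embedded SQL like this, which is exactly the coupling the patterns below try to remove.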
Domain Model Pattern* The first layer of abstraction that we can use is the Domain Model pattern. A Domain Model is an object model of the domain that incorporates both behaviour and data, so a Domain Model can be thought of as a set of states (attributes) and methods (behaviour). It may seem evident to represent a model in the Object-oriented paradigm as a Class with a One-to-One mapping to a database table. The entity attributes of the database table are represented by attributes that are either initialised in the constructor or can be set by anyone (i.e., via a public accessor) on that Class instance. Each method of that Class can then represent one form of database operation, Query or Mutate, related to that model. Compared to the Transaction Script pattern, where all the logic resides in the Application Controller layer that handles the service requests of the application, the database-specific operations are now moved inside the model implementations, providing a fine layer of abstraction between business logic and database logic. Data Mapper* With the Domain Model, we separate Model behaviour from Controller behaviour, but that does not solve the Object-Relational Impedance Mismatch mentioned at the beginning. For example, you cannot do inheritance in databases, whereas it is a common practice in Object-oriented programming. To solve such inconsistencies, another layer is introduced between the Domain Model (in-memory Entity) and the Database Table row (persistent Entity). The responsibilities of this layer include keeping in-memory references in sync with the Database Table row, and rippling back any change that is made to the in-memory reference. In particular, the Data Model Class and the Database Table are not aware of each other; the Data Mapper holds the knowledge of how to map them and transfer information from one to the other. Object Relational Mapper/Entity Relational Mapper Extending functionality one more level over the Data Mapper is the Object Relational Mapper. A Data Mapper only maps between a Domain Class instance and the corresponding entity row in a Database Table. But it is not aware of any relations that exist among entities, which is often the case. To solve this inconsistency, the Object Relational Mapper is introduced. An Object Relational Mapper can not only map between a Domain Model and a Database Entity, but can also define and Query/Mutate other Domain Models and Database Entities that are related to it. This abstracts away all the database operations from the Model implementation by offering a standard set of APIs to Query and Mutate, which are then used within the Model Class to perform the specific persistence operation without having to write any raw SQL. There are multiple patterns involved to achieve this — Identity Field, Association Table Map, Foreign Key Map and Dependent Map. Repository Pattern Entity Entity is a single instance of a Domain Model Class that represents an entry in a table in the Database, i.e., an Entity has a One-to-One mapping with a Database Table row. So for a committed Entity, the data attributes of that instance can safely be assumed to be persisted inside the database. Now, every Entity Relational Framework, or simply Entity Framework, offers a set of functionality to perform the standard Query and Mutate operations.
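As a rough illustration of the Domain Model and Data Mapper ideas described above, here is a minimal sketch in Python. The User model, its fields and the users table are assumptions made for the example only (the article does not define a concrete model), and error handling is omitted for brevity.

import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    # Domain Model: plain in-memory state for one row of the users table
    id: int
    name: str
    email: str

class UserMapper:
    # Data Mapper: knows how to move data between User objects and the users table;
    # the User class itself knows nothing about SQL or the database.
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn

    def find(self, user_id: int) -> Optional[User]:
        row = self.conn.execute(
            "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return User(*row) if row else None

    def insert(self, user: User) -> None:
        self.conn.execute(
            "INSERT INTO users (id, name, email) VALUES (?, ?, ?)",
            (user.id, user.name, user.email),
        )
        self.conn.commit()

    def update(self, user: User) -> None:
        # Ripple in-memory changes back to the persistent row
        self.conn.execute(
            "UPDATE users SET name = ?, email = ? WHERE id = ?",
            (user.name, user.email, user.id),
        )
        self.conn.commit()

A full ORM would go one step further and also resolve relations between mapped entities, as described above.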
But often, if not always, enterprise application models require more than primitive operations to be performed, and that usually involves some amount of business logic being coupled with the Model Class implementation. For a small-scale application, that might be enough and even the preferable option. But as the requirements keep adding up and/or changing, it becomes tedious to maintain that inside a Model object. Also, the more business logic depends on the Entity Framework, the more tightly coupled the application becomes with that Framework, which in turn makes it hard to replace or modify. Repository Repository is a collection of a particular Entity type. Repositories will always have a One-to-One correspondence with Entities; in other words, a Repository has a composition relation with its Entity. A Repository can be thought of as a Table or Entity Set in a relational database, whereas an Entity is a row in that Table with a set of attributes of its own, identified by a key. Now that we have separated the Domain Model implementation from the specific business rules, we can put the latter inside the Repository. In the Repository pattern, a Model is allowed to have only the data attributes that will be pushed to the Database via the Data Mapper, plus static/hook methods that perform pre-processing of those attributes during the object lifecycle. The rest of the functionality expected of the Data Layer, i.e., the business logic around the Model, can safely be implemented inside the Repository Class.
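Continuing the hypothetical User example from the sketch above (again an illustrative assumption, not code from the article), the Repository wraps the mapper and becomes the home for domain-specific queries and rules, so controllers only ever talk to the Repository's interface:

class UserRepository:
    # Repository: a collection-like interface over User entities.
    # Business logic about users lives here, not in the User class.
    def __init__(self, mapper: UserMapper):
        self.mapper = mapper

    def get(self, user_id: int) -> Optional[User]:
        return self.mapper.find(user_id)

    def add(self, user: User) -> None:
        self.mapper.insert(user)

    def rename(self, user_id: int, new_name: str) -> None:
        # A domain rule expressed once, reusable by any controller (DRY)
        user = self.mapper.find(user_id)
        if user is None:
            raise ValueError("unknown user")
        user.name = new_name
        self.mapper.update(user)

Because a controller depends only on the interface exposed by UserRepository, the mapper (or an ORM behind it) can be swapped without the change rippling into the rest of the application.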
Recap A Domain Model is an Object-oriented representation of a Database Table Entity. A Data Mapper takes an in-memory Domain Model instance, maps it to a particular Database Table Entity, and is responsible for keeping both in sync. An Object-relational Mapper extends the behaviour of the Data Mapper and associates related Database Table Entities with the Domain Model object. A Repository is a collection of Entities and hence can be thought of as a representation of a Database Table; it is where all the business logic related to the Model resides. Advantages By separating the Business Logic from the Domain implementation, the application no longer has a hard dependency on the Entity Framework. Hence, depending upon the use case, the Entity Framework can be changed, modified or upgraded. By encapsulating domain-specific logic inside the Repository, we can reuse queries following the DRY methodology. We can also apply Object-relational behaviour patterns (e.g., Unit of Work, Identity Map, Lazy Load) within the scope of the Repository to optimise the number of database operations and hence the overall application performance. By providing a solid separation between the Application Controller (or Web Controller) and the Domain Model, tight coupling between modules can be eradicated, and all Queries and Mutations at the Controller layer can be done through the interfaces exposed by the Repository only. Encapsulation of the Domain Model and the Repository helps avoid unnecessary bugs from exposing sensitive information to the outside world. It also helps maintain stable relations among Domain Objects and perform JOIN operations smoothly, abstracted away from the top layers. Bibliography Patterns of Enterprise Application Architecture — Martin Fowler, David Rice, Matthew Foemmel, Edward Hieatt, Robert Mee, Randy Stafford Domain-Driven Design: Tackling Complexity in the Heart of Software — Eric Evans, Foreword by Martin Fowler Agile Database Techniques: Effective Strategies for the Agile Developer — Scott W. Ambler I found this repository to be a great starting point for understanding the Repository Pattern, with a very simple relational example: https://github.com/w3tecch/express-typescript-boilerplate
https://medium.com/swlh/inside-out-repository-pattern-for-data-layer-5eca4dd0e7d4
['Progyan Bhattacharya']
2020-06-07 15:55:15.135000+00:00
['Database', 'Architecture', 'Software Development', 'Design Patterns', 'Software Engineering']
The AWS Shell
Anyone developing applications and infrastructure in AWS will at some point make use of the AWS Command Line Interface (CLI), either interactively at a shell command line, or by integrating the CLI into shell scripts. While it is a powerful avenue to access, create and manage AWS resources, it can get cumbersome to remember all of the possible commands and arguments for each of the services needed. We have relatively simple commands like $ aws ec2 describe-subnets which doesn’t need any arguments to retrieve a list of subnets for the default VPC in your account. On the other hand, there are a large number of CLI commands which require one or more arguments to get a response and the data you are interested in. The AWS Shell is a GitHub project which creates another interactive interface, which is capable of guiding what you need to do next for the command. aws-shell is a python based interface, easily installed using pip. pip install aws-shell After installing aws-shell, the first execution takes a little longer as the autocomplete index is built for all of the AWS commands. The autocomplete index is important as we shall see in a moment. In addition to the autocomplete index, a complete documentation set is also indexed and displayed at various times. Before you can use the shell to access AWS resources, you must configure your AWS access and secret key in the same manner as you would for the AWS CLI. Start aws-shell, and enter configure as the command. aws> configure AWS Access Key ID [****************NEWP]: AWS Secret Access Key [****************w7NK]: Default region name [us-east-1]: Default output format [json]: aws> All of the commands you would normally use in the CLI are available in aws-shell, except no more typing that aws command, and you get autocomplete to help you execute the command successfully. Profile Support One thing I find tedious about the CLI is using profiles to change which access and secret key I am using. This is much simpler to take advantage of in aws-shell. First, we need to set up a profile if we don’t already have one. Let’s look at what we have configured already (output has been formatted to fit the view). aws> configure list Name Value Type Location ---- ----- ---- -------- profile <not set> None None access_key ****NEWP shared-credentials-file secret_key ****w7NK shared-credentials-file region us-east-1 config-file ~/.aws/config Now, let’s add a profile called test, and then list the credentials we have configured: aws> configure --profile test AWS Access Key ID [None]: ........ AWS Secret Access Key [None]: ........ Default region name [None]: us-east-2 Default output format [None]: json aws> configure list Name Value Type Location ---- ----- ---- -------- profile <not set> None None access_key *****NEWP shared-credentials-file secret_key *****w7NK shared-credentials-file region us-east-1 config-file ~/.aws/config What? Where is our new profile? To see the new credentials, we have to specify the profile argument. aws> configure list --profile test Name Value Type Location ---- ----- ---- -------- profile <not set> test manual --profile access_key *****YAMY shared-credentials-file secret_key *****C7iv shared-credentials-file region us-east-2 config-file ~/.aws/config aws> Just as we can use this profile with the AWS CLI like: $ aws ec2 describe-instances --profile test which gets tedious very quickly, we can set the profile we want to use in the aws-shell. 
Using Profiles With your profile created, you can either start aws-shell with a profile definition using the command $ aws-shell --profile test Alternatively, you can also set and change your profile from within the aws-shell. aws> .profile Current shell profile: no profile configured You can change profiles using: .profile profile-name aws> .profile test Current shell profile changed to: test aws> .profile Current shell profile: test aws> As you know from working with the AWS CLI, changing profiles changes the access and secret key used to execute the commands in the associated account. Using the aws-shell The aws-shell takes over the terminal window, and displays some key sequences at the bottom, which can be toggled to suit your preference. The options which can be toggled and the key sequences are: F2 — Turn “fuzzy” on or off F3 — Change the key layout between emacs and vi F4 — Multi-column — provide command and sub-command hints in one or two columns F5 — Turn Help on or off F9 — set the focus F10 — exit aws-shell Fuzzy Search “Fuzzy” refers to fuzzy searching for the commands you type. This means you can get to the command you want without typing the full name. For example, we can type EC2 drio, and aws-shell shows ec2 describe-reserved-instances-offerings as the first option as drio are the first letter of each of the words in the command. Similarly, typing r53 shows the list of Route53 commands. Pressing the ‘F2’ key to turn off fuzzy searching, means you must type the command, sub-command, and options exactly. This feature is best left enabled. Keys This alters the key bindings used by aws-shell. The choices are vi and Emacs Multi-Column When aws-shell shows the list of commands, you can control if it is a single or multi-column list. It is a personal preference, but using multi-column with commands which have a long list of sub-commands can make it easier to find what you are looking for. Help By default, aws-shell displays help on the command and sub-command as you type them. If you find this distracting, you can disable the help display by pressing F5. Working in the aws-shell Aside from several features specific to the aws-shell, executing commands is like working in the AWS CLI, with the benefit of not having to remember precisely the name of the commands, sub-commands, options, etc. This can be a time saver. There are several other useful commands offered by aws-shell, called dot commands as they are prefixed by a . before the command. The .profile command allows changing the profile, meaning the access and secret keys used to execute the CLI commands. We saw this command earlier in this article. It is possible to change the working directory using the .cd command. aws> .cd invalid syntax, must be: .cd dirname aws> .cd ~ aws> !pwd /Users/roberthare aws> .cd /tmp aws> !pwd /private/tmp aws> It isn’t possible to see your current directory using a dot command, but this leads to the next feature, executing shell commands directly within aws-shell by prefixing the command with a !. Not only can we execute arbitrary shell commands within aws-shell, but we can use pipes (|), to send the output of the aws-shell command to a shell. aws> ec2 describe-subnets --output table | grep CidrBlock || CidrBlock | 172.31.48.0/20 || || CidrBlock | 172.31.80.0/20 || aws-shell keeps a history of all commands executed in the file ~/.aws/shell/history, so you can see what commands you have executed. 
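As a quick illustration (a hedged example that assumes only the history file location mentioned above), you can peek at that file with an ordinary shell command, either from a regular terminal or from inside aws-shell using the ! prefix described above:

$ tail -n 5 ~/.aws/shell/history
aws> !tail -n 5 ~/.aws/shell/history

Both simply show the last few entries the shell has recorded; the exact file format is an implementation detail of aws-shell.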
There is no history command per se in the aws-shell, but you can take advantage of another feature to see and interact with the list. The .edit command retrieves your shell history in an editor, allowing you to both view your command history, and create a shell script from the aws-shell commands. The last dot commands are .exit and .quit, which have the same effect as pressing the F10 key, that of ending your aws-shell session. Conclusion If you spend any amount of time interacting with the AWS CLI, then you know how tedious it is always having to type those three extra letters. It doesn’t sound like a big deal, but if you are like me, occasionally you forget the “aws” part of that long command line. The aws-shell makes it simpler to interact with the AWS CLI, especially with the dynamic display of sub-commands options. References The AWS Shell The AWS CLI The AWS CLI Command Interface About the Author Chris is a highly-skilled Information Technology AWS Cloud, Training and Security Professional bringing cloud, security, training and process engineering leadership to simplify and deliver high-quality products. He is the co-author of more than seven books and author of more than 70 articles and book chapters in technical, management and information security publications. His extensive technology, information security, and training experience makes him a key resource who can help companies through technical challenges. Copyright This article is Copyright © 2020, Chris Hare.
https://labrlearning.medium.com/the-aws-shell-1792361a0c89
['Chris Hare']
2020-01-14 05:16:49.005000+00:00
['Aws Cli', 'Technology', 'Python', 'System Administration', 'AWS']
In Psychedelic Therapy, Don’t Forget the ‘Therapy’
In Psychedelic Therapy, Don’t Forget the ‘Therapy’ A wide gulf lies between what we ‘see’ on psychedelics and what we do with what we saw Image: nutcat/Getty Images Twenty minutes late, Matt (whose name was changed for privacy) stumbles into my therapy office, dives onto my green couch, stretches out like a Freudian pro, and buries his face in his hands. Matt, a renowned New York wellness entrepreneur, had found me through the intersection of Burning Man and the plant medicine communities — where most of my clients come from. This niche has found me organically and inevitably; I’ve had a private practice as a marriage and family therapist for 13 years, and for 10 of those, I’ve been on my own parallel personal journey of exploration both as a burner and a psychonaut (sailor of the mind). “Stupid. Stupid. Just stupid!” Matt groans. “What?” I ask. “I came to you to talk about psychedelics, but now you know my dirty secret. I can’t believe I came in today. I haven’t even slept. Fucking cocaine.” Matt had come to me to talk about his experience on ibogaine, a natural psychoactive medicine derived from the West African shrub iboga and increasingly used by patients trying to kick drug addictions. It clearly hadn’t worked for him, and it occurs to me that he might still be high on coke. And if he’s high, it’s unethical to treat him. But at this moment, other issues take precedence: the shame spiral he’s in, the fact that we’re already 25 minutes into a 50-minute session, and the reality that I couldn’t move his huge body off my sofa if I tried. Above all, a more universal question is nagging me: As more people turn to psychedelic-aided therapy, why are so many forgoing the necessary follow-up to process these experiences? That follow-up, known as psychedelic integration, is how we metabolize the supranormal phenomena we’ve experienced while tripping and fold it back into “normal” life. Integration takes real time and is likely to be bumpy. Old traumas can suddenly surface hours or days after a trip; we can be suddenly flooded by unmet needs or pain we thought we’d left in the past. This is something I know firsthand. Ayahuasca helped me become a better mother, a better ex-wife, and a better therapist. The “medicine” reflected back to me the power of compassion and how to cultivate it. Iboga, a medicine that originated in Gabon, showed me my own darkness and reconnected me to my creativity. Psilocybin has added depth and perspective to my meditation practice. And I’m still processing my one and only LSD trip that I had a little over a year ago. I know that without proper resources in place to guide them through the aftermath, people often struggle to reconcile life-changing “medicine experiences” with their actual lives. That is, a wide gulf lies between what we “see” on psychedelics and what we do with what we saw. Mind the gap, as we say in London, where I’m from. Psychedelics are proliferating both in and out of therapeutic settings; they have transcended hippies’ “tuning in and dropping out” and become increasingly accepted as mental health treatments. 
As the Multidisciplinary Association for Psychedelic Studies (MAPS) website puts it, “With both MDMA and psilocybin on the precipice of approvals as mainstream medicines, and several leading universities opening dedicated psychedelic research facilities, the story of the last 10 years has been one of profound breakthrough.” Psychedelic wellness solutions are featured in Michael Pollan’s bestseller How to Change Your Mind, Gwyneth Paltrow’s The Goop Lab, and Anderson Cooper’s segment on the John Hopkins study of psilocybin for mental health. The best thing about psychedelics may also be the worst: Psychedelics tear down your defenses fast. A study by Scientific American shows a 223% rise in LSD use among 35- to 39-year-olds between 2015 and 2018. That startling number only proves what I’m seeing in my own psychotherapy practice in Los Angeles: a striking influx of people who have taken psychedelics for therapeutic purposes. Too many of them, I’ve found, are seeking help weeks or months or years after life-changing experiences on ayahuasca, psilocybin, or iboga. And I’m increasingly concerned. So many patients are struggling. A therapeutic trip can trigger a dramatic shift — and that’s the point. People break addictions, heal traumas, and mend relationships immediately after psychedelic experiences. But over time, many become jaded, disappointed, confused, and destabilized. The best thing about psychedelics may also be the worst: Psychedelics tear down your defenses fast. People often finish a trip and leap into life-changing decisions — divorcing their spouses, leaving their jobs, or conceiving children. Many people don’t regret those decisions, but inevitably, after time has passed, some do. Which brings us back to my patient Matt, the wellness entrepreneur who can’t stop snorting cocaine. “What did iboga show you about yourself?” I ask him. Considered the “grandfather of psychedelics” (ayahuasca is the “grandmother”), iboga is a traditional West African root bark used in low doses to retain alertness while hunting and in high doses to cause near-death experiences for the purpose of spiritual awakening. Administered legally in sobriety clinics throughout Mexico, iboga is also one of the most powerful addiction interrupters we know of. It successfully got Matt off coke a couple of times, but he avoided the after-care protocols, including therapy and group support, and his abstinence didn’t stick. He had planned to take iboga again, he tells me, except two weeks ago, he discovered he’s a heart attack candidate, so it’s no longer safe. My own 48-hour iboga experience was a mixture of torture and revelation. I relived the shadows of my childhood, hatred, and rage over my parents’ divorce and my father’s addictions — all ugly emotions my upbringing had trained me to hide. Iboga revealed a split in me: the part of me that was on board with my life and the part of me that wanted out. It was the most confronting psychedelic experience I’ve ever had — wildly disorienting, exhausting, nauseating, and full of all the traumas I hadn’t been able to look at square in the face despite years of therapy. When it finally ended, I was overwhelmed with gratitude, as if I’d been given a permission slip to break out of the old family narratives and rewrite my future. We can open our hearts, be in community, and rip the lid off our failing civilization. But then we return home, and the real work begins: We must integrate. 
Experience has taught me what to do after a trip, and privilege has allowed me to do those things. I have the resources to receive therapy, get a massage, or do a meditation retreat. I have the know-how and education to keep researching. I’ve gotten to know the cutting-edge thinkers of the psychedelic community through conferences, networking, and reading (not that any of that guarantees a lack of turbulence). Even among those of us who have the material resources and time to process our experiences, many of us don’t have the communal models of indigenous communities. In plant medicine circles in the U.S., in contrived ceremonial settings, we may get a glimpse of what life could look like. We can open our hearts, be in community, and rip the lid off our failing civilization. But then we return home, and the real work begins: We must integrate. Matt turns to me, his face locked in unexpressed emotion. I know that look: the dam before it bursts. Then he starts sobbing so hard, he can’t speak and can’t stop. He covers his heart with his hand, his whole body shaking. “It showed me this,” he says. And for a minute, I get to see it: the excruciating vulnerability that is so often at the center of personal chaos. Matt has a vulnerable heart, figuratively and literally under threat of attack. Iboga has helped expose this reality, but what should he do about it? I outline a treatment plan for Matt that doesn’t involve iboga. To quit cocaine and find more healthy coping mechanisms, he would most likely need to enter a sobriety program or curate a team of mental health professionals to help him stay clean. Most challengingly, Matt would need to commit to sobriety. Even then, there would be no guarantee of success. Psychedelics are immensely helpful in getting under our defenses and providing a simplicity of vision that adults can’t normally access. In this sense, they are more powerful than the greatest therapists. But when they wear off, there’s not only what’s been exposed but the rest of life to deal with also: the to-do lists and taxes and physical needs. As Buddhist practitioner and mindfulness expert Jack Kornfield put it in the title of his book: “After the ecstasy, the laundry.” Or, in Matt’s case, rehab. For anyone new to psychedelic healing, my advice is to plan for integration before even starting the journey. I’d recommend thinking carefully through three categories: set, setting, and support. Set: Mindset is all-important. Having a clear and positive intention can be very helpful. It makes you more of an empowered participant in your journey, inviting in more purpose as well as a sense of a home base to return to — which could be a mantra of some kind or a question that you lead with. Music also makes a huge difference to a journey. If you are selecting the music yourself, choose tunes that speak to your heart and relax your nervous system. Mindset is all-important. Having a clear and positive intention can be very helpful. It makes you more of an empowered participant in your journey, inviting in more purpose as well as a sense of a home base to return to — which could be a mantra of some kind or a question that you lead with. Music also makes a huge difference to a journey. If you are selecting the music yourself, choose tunes that speak to your heart and relax your nervous system. Setting: Who you have your experience with and where is important. Pick a location that feels safe and quiet and that, ideally, you have a sense of connection to. 
If you’re sensitive to other people’s energy, do not do psychedelics in large groups. Being in or near nature can feel very supportive even if it just means being close to a plant or a tree. If you’re journeying away from home, make sure you have a solid plan of how to get there and back. (Needless to say, it should not involve you driving.) Who you have your experience with and where is important. Pick a location that feels safe and quiet and that, ideally, you have a sense of connection to. If you’re sensitive to other people’s energy, do not do psychedelics in large groups. Being in or near nature can feel very supportive even if it just means being close to a plant or a tree. If you’re journeying away from home, make sure you have a solid plan of how to get there and back. (Needless to say, it should not involve you driving.) Support: Research the shaman or therapist who will act as your guide. Make sure there’s someone else you’re checking in with on either side of your experience — a good friend or a wisdom figure in your life. It’s always valuable to have another perspective and not make your shaman or therapist your only source of feedback and or authority. Make sure you have time carved out on the back end of a journey to go slow and talk with a counselor if you can. Journaling, meditation, and spending time in nature all help with processing psychedelic downloads as well as allowing new neural pathways to be reinforced. A trip is like any voyage: It requires preparation, time for proper digestion. For further information and support, InnerSpace Integration hosts a network of resources along with psychedelic integration circles. Tam Integration offers a collection of trip sitter manuals and guides. For those who want to go deeper, Psychonautdocs.com has curated a wealth of essays and studies on a range of psychedelics. Most importantly, hold off on big decisions after life-altering experiences. A trip is like any voyage: It requires preparation, time for proper digestion, and a willingness to not only surrender to the experience but to allow the necessary time to change your life.
https://elemental.medium.com/in-psychedelic-therapy-dont-forget-the-therapy-1d40f886ba45
['Jane Garnett']
2020-12-17 23:29:47.999000+00:00
['Psychedelics', 'Mental Health', 'Brain', 'Therapy', 'Life']
What Does Coronavirus Do to the Body?
This Is How Your Immune System Reacts to the Coronavirus And what it means for treatment Photo: Bertrand Blay/iStock/Getty Images Plus People infected with the novel coronavirus can have markedly different experiences. Some report having nothing more than symptoms of a mild cold; others are hospitalized and even die as their lungs become inflamed and fill up with fluid. How can the same virus result in such different outcomes? Scientists are still perplexed by the novel coronavirus. But it’s becoming increasingly clear that the immune system plays a critical role in whether you recover from the virus or you die from it. In fact, most coronavirus-related deaths are due to the immune system going haywire in its response, not damage caused by the virus itself. So what exactly is happening in your body when you get the virus, and who is at risk for a more severe infection? In fact, most coronavirus-related deaths are due to the immune system going haywire in its response, not damage caused by the virus itself. When you first become infected, your body launches its standard innate immune defense like it would for any virus. This involves the release of proteins called interferons that interfere with the virus’s ability to replicate inside the body’s cells. Interferons also recruit other immune cells to come and attack the virus in order to stop it from spreading. Ideally, this initial response enables the body to gain control over the infection quickly, although the virus has its own defenses to blunt or escape the interferon effect. The innate immune response is behind many of the symptoms you experience when you’re sick. These symptoms typically serve two purposes: One is to alert the body that an attack has occurred — this is thought to be one of the roles of fever, for example. The other purpose is to try and get rid of the virus, such as expelling the microscopic particles through cough or diarrhea. “What typically happens is that there is a period where the virus establishes itself, and the body starts to respond to it, and that’s what we refer to as mild symptoms,” says Mandeep Mehra, MD, a professor of medicine at Harvard Medical School and chair in advanced cardiovascular medicine at Brigham and Women’s Hospital. “A fever occurs. If the virus establishes itself in the respiratory tract, you develop a cough. If the virus establishes itself in the gastrointestinal mucosal tract, you’ll develop diarrhea.” These very different symptoms emerge depending on where in the body the virus takes hold. The novel coronavirus gains entry into a cell by latching onto a specific protein called the ACE2 receptor that sits on the cell’s surface. These receptors are most abundant in the lungs, which is why Covid-19 is considered a respiratory illness. However, the second-highest number of ACE2 receptors are in the intestines, which could explain why many people with the coronavirus experience diarrhea. “Because the virus is acquired through droplets, if it comes into your mouth and enters your oropharynx, it has two places where it can go from there. It can transition into the lung from the oropharynx when you breathe in, or if you have a swallow reflex, it’ll go down to your stomach,” Mehra says. “That’s how it can affect both sites.” The goal of the innate immune defense is to contain the virus and prevent it from replicating too widely so that the second wave of the immune system — the adaptive, or virus-specific response — has enough time to kick in before things get out of hand. 
The adaptive immune response consists of virus-specific antibodies and T cells that the body develops that can recognize and more quickly destroy the virus. These antibodies are also what provide immunity and protect people from becoming reinfected with the virus after they’ve already had it.
https://elemental.medium.com/this-is-how-your-immune-system-reacts-to-coronavirus-cbf5271e530e
['Dana G Smith']
2020-11-13 19:44:30.797000+00:00
['Body', 'Covid 19', 'Coronavirus', 'Immune System', 'Health']
OpenGenius, every business and individual has the power to use innovation to progress
OpenGenius has raised £1.1M in total. We talk with Chris Griffiths, its CEO. PetaCrunch: How would you describe OpenGenius in a single tweet? Chris Griffiths: OpenGenius are global experts in creativity, productivity and innovation strategy — we believe every business and individual has the power to use innovation to progress. We’re trailblazing the way for companies to ‘work creative’ with our pioneering app, Ayoa. PC: How did it all start and why? CG: Before I started OpenGenius, I was CEO at one of Europe’s fastest-growing ed-tech companies; I knew I wanted it to grow into something more and weave innovation into the fabric of the company, but the board only wanted to focus on ed-tech, so I resigned. Being true to myself, I knew I believed in the power of creativity and innovation — I wanted to start a company that could bring those things to other people through software and training. Alas, 6 months later I founded OpenGenius. Today, our mission of innovation is established company-wide and over 1.5 million people have used our software and services to drive their innovation-based progress. PC: What have you achieved so far? CG: We have achieved a lot — teams and individuals from companies such as Disney, Nasa, Apple, Coca-Cola, Nike and McDonalds have all used our software solutions. The Ayoa app — which was launched in June this year — is something we’re particularly proud of. It’s a tool that really overhauls modern society’s broken approach to productivity; it stops people app-switching, and puts an emphasis on using creativity to uncover the right ideas so you can then do the right tasks. We also have a global network of innovation professionals who are helping to spread the innovation message around the world. Our OpenGenius team is continuing to grow; we’re based in beautiful Penarth in the tech-hub of Tec Marina which I founded with my wife Gaile to foster a creativity-focussed workspace. The whole company is excited to see what we can achieve going forward. PC: How will you use your recent funding round? CG: We are using our funding to expand our international customer base — we’re doing that by being more aggressive in our marketing and sales tactics. We’re growing the teams in both these areas, as well as continuing to develop Ayoa which has a very exciting roadmap laid out for the next six months. PC: What do you plan to achieve in the next 2–3 years? CG: We’re ambitious and we firmly believe we have created a disruptive software product with Ayoa — over the next 2–3 years, we expect to see a huge increase in our user base and the awareness of our brand. People are excited by what we are doing here. We’re proud to be the first Welsh company ever accepted onto the London Stock Exchange ELITE Accelerator Programme, and we have our sights set on flotation in the future. We’re not following market trends, but instead, we’re driving change with the Ayoa tool which offers something no other software does — all I can say is, watch this space.
https://medium.com/petacrunch/opengenius-every-business-and-individual-has-the-power-to-use-innovation-to-progress-dedea6518353
['Kevin Hart']
2019-08-24 21:21:01.002000+00:00
['Startup', 'Innovation Management', 'Innovation', 'Progress', 'Creativity']
SigNet (Detecting Signature Similarity Using Machine Learning/Deep Learning): Is This the End of Human Forensic Analysis?
SigNet (Detecting Signature Similarity Using Machine Learning/Deep Learning): Is This the End of Human Forensic Analysis? My grandfather was an expert in handwriting analysis. He spent all his life analyzing documents for the CBI (Central Bureau Of Investigation) and other organizations. His unique way of analyzing documents using a magnifying glass and different tools required huge amounts of time and patience to analyze a single document. This is back when computers were not fast enough. I remember vividly that he photocopied the same document multiple times and arranged it on the table to gain a closer look at the handwriting style. Handwriting analysis involves a comprehensive comparative analysis between a questioned document and the known handwriting of a suspected writer. Specific habits, characteristics, and individualities of both the questioned document and the known specimen are examined for similarities and differences. As this problem consists of detecting and analyzing patterns, Machine Learning is a great fit to solve it. A handwritten document captures a lot of detail. (https://unsplash.com/photos/AbQNy5Vvpjc) Why and How? Why: My grandfather’s unique way of analyzing documents using a magnifying glass and different tools required huge amounts of time and patience to analyze a single document. This is back when computers were not fast enough. I remember vividly that he photocopied the same document multiple times and arranged it on the table to gain a closer look at the handwriting style. While I agree that we cannot replace that job with an A.I. with 100% accuracy, we can certainly build a system capable of aiding human beings. How: To build our Signature Similarity network, we will utilize the wonders of Deep Learning. We will go through three approaches to extract the similarity between our handwritten signatures. For our initial data, we will use the Handwritten Signatures dataset from Kaggle. Requirements For this project we will require: Python 3.8: The Programming Language TensorFlow 2: The Deep Learning Library Numpy: Linear Algebra Matplotlib: Plotting images Scikit-Learn: General Machine Learning Library The Dataset The dataset contains real and forged signatures of 30 people. Each person has 5 genuine and 5 forged signatures. The Directory Structure of our data. For loading the data, I have created a simple load_data() that iterates through the dataset folders and extracts real and forged signatures with labels of 1 and 0 respectively. In addition to this, I have also created a dictionary of tuples consisting of images and labels (to be used later in the project). def load_data(DATA_DIR=DATA_DIR, test_size=0.2, verbose=True, load_grayscale=True): """ Loads the signature data.
Arguments: DATA_DIR: str test_size: float Returns: (features, labels, features_forged, features_real, features_dict, x_train, x_test, y_train, y_test, x_val, y_val) """ features = [] features_forged = [] features_real = [] features_dict = {} labels = [] # forged: 0 and real: 1 mode = "rgb" if load_grayscale: mode = "grayscale" for folder in os.listdir(DATA_DIR): # forged images if folder == '.DS_Store' or folder == '.ipynb_checkpoints': continue print ("Searching folder {}".format(folder)) for sub in os.listdir(DATA_DIR+"/"+folder+"/forge"): f = DATA_DIR+"/"+folder+"/forge/" + sub img = load_img(f,color_mode=mode, target_size=(150,150)) features.append(img_to_array(img)) features_dict[sub] = (img, 0) features_forged.append(img) if verbose: print ("Adding {} with label 0".format(f)) labels.append(0) # forged # real images for sub in os.listdir(DATA_DIR+"/"+folder+"/real"): f = DATA_DIR+"/"+folder+"/real/" + sub img = load_img(f,color_mode=mode, target_size=(150,150)) features.append(img_to_array(img)) features_dict[sub] = (img, 1) features_real.append(img) if verbose: print ("Adding {} with label 1".format(f)) labels.append(1) # real features = np.array(features) labels = np.array(labels) x_train, x_test, y_train, y_test = train_test_split(features, labels, test_size=test_size, random_state=42) x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42) print ("Generated data.") return features, labels, features_forged, features_real, features_dict, x_train, x_test, y_train, y_test, x_val, y_val def convert_label_to_text(label=0): """ Convert label into text Arguments: label: int Returns: str: The mapping """ return "Forged" if label == 0 else "Real" features, labels, features_forged, features_real, features_dict, x_train, x_test, y_train, y_test, x_val, y_val = load_data(verbose=False, load_grayscale=False) Visualization of the data The images are loaded with a target_size of (150,150,3). A snapshot of the data loaded followed by the label. (1 represents real and 0 represents forged) Approach #1: Similarity in images (signatures) using MSE and SSIM. For this approach, we will compute the similarity between images using MSE (Mean Squared Error) and SSIM (Structural Similarity). As you can see, the formulas are pretty straightforward, and fortunately the scikit-image library provides an implementation of SSIM (structural_similarity). def mse(A, B): """ Computes Mean Squared Error between two images. (A and B) Arguments: A: numpy array B: numpy array Returns: err: float """ # sum of squared differences: sum((a-b)^2) err = np.sum((A - B) ** 2) # mean of the sum (r,c) => total elements: r*c err /= float(A.shape[0] * B.shape[1]) return err def ssim(A, B): """ Computes SSIM between two images. Arguments: A: numpy array B: numpy array Returns: score: float """ return structural_similarity(A, B) Now let us take two images from the same person, one of them real and the other a fake. First Image Second Image Results for MSE and SSIM As you can see, the MSE error does not have a fixed bound, whereas SSIM has a fixed bound between -1 and 1. A lower MSE indicates more similar images, whereas a higher SSIM indicates more similar images. Approach #2: Building a classifier using CNNs that can detect forged or real signatures. With this approach, we will try to come up with a classifier (using CNNs) to detect forged or real signatures. As CNNs are known to detect intricate features in images, we will experiment with this classifier. We are bound to encounter overfitting as we do not have enough data. We will probably use Image Augmentation to generate more training data.
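Before moving on, here is a rough sketch of what a small CNN classifier of this kind could look like in TensorFlow/Keras. This is an assumed architecture given purely for illustration; the article presents its actual architecture only as an image, so the layer choices below are not the author's exact model.

import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),                   # regularisation against overfitting
    tf.keras.layers.Dense(1, activation='sigmoid')  # forged (0) vs real (1)
])
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# cnn.fit(x_train / 255., y_train, validation_data=(x_val / 255., y_val), epochs=5)

With only 300 images in total (30 people, 10 signatures each), a network like this overfits quickly, which is what motivates the transfer-learning and Siamese approaches that follow.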
Our Model Architecture On training our model, we are bound to encounter overfitting, and even after applying techniques to overcome the problem, the model did not improve. Model Training Our Model’s loss Approach #2.1: Transfer Learning using Inception To improve our model we will use transfer learning and fine-tune the model for this particular problem. The InceptionV3 Model For this approach, we will load pre-trained weights and add a classification head at the top to cater to this problem. # loading Inception model2 = tf.keras.applications.InceptionV3(include_top=False, input_shape=(150,150,3)) # freezing layers for layer in model2.layers: layer.trainable=False # getting mixed7 layer l = model2.get_layer("mixed7") x = tf.keras.layers.Flatten()(l.output) x = tf.keras.layers.Dense(1024, activation='relu')(x) x = tf.keras.layers.Dropout(.5)(x) x = tf.keras.layers.Dense(1, activation='sigmoid')(x) net = tf.keras.Model(model2.input, x) net.compile(optimizer='adam', loss=tf.keras.losses.binary_crossentropy, metrics=['acc']) h2 = net.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5) Our Model Training Model’s loss Model’s accuracy These two approaches show that if we use transfer learning, we get much better results than with a plain CNN model. Keep in mind that these approaches do not learn the similarity function; they focus on classifying whether the image is forged or real. There are still many ways we can improve our model; one is by augmenting the data. Approach #3: Siamese networks for image similarity With our third approach, we will try to learn the similarity function. We will use something called Siamese networks (due to the nature of our data, i.e., fewer training examples). In this approach, we will use Siamese networks to learn the similarity function. Siamese means ‘twins’, and the biggest difference from normal NNs is that these networks try to learn the similarity function instead of trying to classify (fitting the function). We first create a common feature vector for our images. We will pass two images (positive and negative) and use a contrastive loss function (a distance metric, L1 distance), and in the end, we squash the output between 1 and 0 (sigmoid) to get the final result. Siamese network (Image from [1]) Our Feature Vector Model # creating the siamese network im_a = tf.keras.layers.Input(shape=(150,150,3)) im_b = tf.keras.layers.Input(shape=(150,150,3)) encoded_a = feature_vector(im_a) encoded_b = feature_vector(im_b) combined = tf.keras.layers.concatenate([encoded_a, encoded_b]) combined = tf.keras.layers.BatchNormalization()(combined) combined = tf.keras.layers.Dense(4, activation = 'linear')(combined) combined = tf.keras.layers.BatchNormalization()(combined) combined = tf.keras.layers.Activation('relu')(combined) combined = tf.keras.layers.Dense(1, activation = 'sigmoid')(combined) sm = tf.keras.Model(inputs=[im_a, im_b], outputs=[combined]) sm.summary() Our complete siamese network Dataset Generation To generate the required dataset, we will try two approaches. First, we will generate data on the basis of labels. If two images have the same label (1 or 0), then they are similar. We will generate data in pairs in the form (im_a, im_b, label). Second, we will generate data on the basis of a person's number. According to the dataset, 02104021.png represents the signature produced by person 21 (i.e., real). Data generation Approach #1: Here we are assuming similarity on the basis of labels. If two images have the same label (i.e., 1 or 0) then they are similar.
def generate_data_first_approach(features, labels, test_size=0.25): """ Generate data in pairs according to labels. Arguments: features: numpy labels: numpy """ im_a = [] # images a im_b = [] # images b pair_labels = [] for i in range(0, len(features)-1): j = i + 1 if labels[i] == labels[j]: im_a.append(features[i]) im_b.append(features[j]) pair_labels.append(1) # similar else: im_a.append(features[i]) im_b.append(features[j]) pair_labels.append(0) # not similar pairs = np.stack([im_a, im_b], axis=1) pair_labels = np.array(pair_labels) x_train, x_test, y_train, y_test = train_test_split(pairs, pair_labels, test_size=test_size, random_state=42) x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42) return x_train, y_train, x_test, y_test, x_val, y_val, pairs, pair_labels x_train, y_train, x_test, y_test, x_val, y_val, pairs, pair_labels = generate_data_first_approach(features, labels) # show data plt.imshow(pairs[:,0][0]/255.) plt.show() plt.imshow(pairs[:,1][0]/255.) plt.show() print("Label: ",pair_labels[0]) Preview of our dataset Training the network with Dataset Generation #1 Now we will train the network. Due to computational limitations, we only train the model for a single epoch. # x_train[:,0] => axis=1 (all 150,150,3) x_train[:,1] => axis=1 (second column) sm.fit([x_train[:,0], x_train[:,1]], y_train, validation_data=([x_val[:,0],x_val[:,1]], y_val),epochs=1) Siamese Network’s result The metric is calculating the L1 distance (MAE) between y_hat and y. Due to computation limitations, we only train it for one epoch This represents a very simple siamese network capable of learning the similarity function. Data Generation Approach #2 In this approach, we try to set up a dataset where we cross-multiply each signature with the other signatures of the same person number. The inputs and the outputs must be the same size. def generate_data(person_number="001"): x = list(features_dict.keys()) im_r = [] im_f = [] labels = [] # represents 1 if signature is real else 0 for i in x: if i.startswith(person_number): if i.endswith("{}.png".format(person_number)): im_r.append(i) labels.append(1) else: im_f.append(i) labels.append(0) return im_r, im_f, labels def generate_dataset_approach_two(size=100, test_size=0.25): """ Generate data using the second approach. Remember input and output must be the same size!
Arguments: features: numpy array labels: numpy array size: the target size (length of the array) Returns: x_train, y_train """ im_r = [] im_f = [] ls = [] ids = ["001","002","003",'004','005','006','007','008','009','010','011','012','013','014','015','016','017','018','019','020','021','022', '023','024','025','026','027','028','029','030'] for i in ids: imr, imf, labels = generate_data(i) # similar batch for i in imr: for j in imr: im_r.append(img_to_array(features_dict[i][0])) im_f.append(img_to_array(features_dict[j][0])) ls.append(1) # they are similar # not similar batch for k in imf: for l in imf: im_r.append(img_to_array(features_dict[k][0])) im_f.append(img_to_array(features_dict[l][0])) ls.append(0) # they are not similar print(len(im_r), len(im_f)) pairs = np.stack([im_r, im_f], axis=1) ls = np.array(ls) x_train, x_test, y_train, y_test = train_test_split(pairs, ls, test_size=test_size, random_state=42) x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.25, random_state=42) return x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls x_train, y_train, x_test, y_test, x_val, y_val, pairs, ls = generate_dataset_approach_two() # show data plt.imshow(x_train[:,0][0]/255.) plt.show() plt.imshow(x_train[:,0][1]/255.) print("Label: ",y_train[0]) Represents a forged signature. (0) Training the Network with Dataset Generation #2 Training the network (Due to computational limitations, we train the model for a single epoch) The biggest difference between dataset generation #1 and #2 is the way the inputs are arranged. In dataset #1 we select random signatures according to their labels, but in #2 we select signatures from the same person throughout. Conclusion To conclude, we present a plausible method to detect forged signatures using Siamese Networks, and most importantly we show how we can easily train a Siamese network with only a few training examples. We also see how we can easily achieve great results using transfer learning. References [1] https://arxiv.org/pdf/1709.08761.pdf GitHub: https://github.com/aaditkapoor/SigNet
https://medium.com/swlh/signet-detecting-signature-similarity-using-machine-learning-deep-learning-is-this-the-end-of-1a6bdc76b04b
['Aadit Kapoor']
2020-08-17 17:58:22.362000+00:00
['Machine Learning', 'Artificial Intelligence', 'Python', 'TensorFlow', 'Data Science']
Reasons why developers make apps for Android rather than iOS
Android vs iOS What is the most popular operating system for mobile platforms? From small startups to big enterprises in information technology, this is the paramount question and the litmus test that decides which mobile platform is to be developed first. Why not all at the same time, you may ask? If you run a big, well-rounded development team and possess all the resources and tools for mobile development, then you can kick off all the platforms at once. However, it is very important to have a detailed risk analysis and a robust budget. Why Android is Worthy We are taking a look at the technical and commercial advantages of Android over iOS. These light the path: the main reasons why an Android app is a great choice to come first in the development process. Importance of Android The Market Share This is no doubt a pointer; even if you ignore all the other reasons mentioned later, numbers don’t lie! Statistics from IDC clearly show that Android leads the number of smartphones shipped worldwide with an 86.1% market share. “In 2018, around 1.56 billion smartphones were sold worldwide. In the first quarter of 2019, around 88 percent of all smartphones sold to end users were phones with the Android operating system.” Source: www.statista.com/ Portability Java, as the primary programming language for native Android apps, provides a special advantage: the code is easily ported to other mobile operating systems. Besides, Android apps have a reach that extends to Chrome OS and Windows devices. Android Studio Android Studio has the beauty and strength of a very potent IDE, based on IntelliJ IDEA. As an IDE, Android Studio was designed and customized for Android app development. Speed and efficiency are key with Android Studio. It allows the setup of a new Android project for different types of Android apps in a few seconds. This, of course, was a relief from the old era when Android app development was done with Eclipse and the Android Developer Tools plugin. Android Studio is powered with the following features: a Gradle-based build system; real-time app layout rendering with live layout; preview of multiple screen configurations and layouts while editing; build variants and multiple APK file generation; lint tools for catching issues with version compatibility, usability, performance, etc.; development of Android Wear, TV and Auto apps; and integration with Google Cloud Platform, App Engine, and Google Cloud Messaging. Coding in Java Java has proven itself to be a programming language of choice for coding various devices and operating systems, including Android. Java code also carries over to other operating systems, including Windows and Linux. In contrast, Apple’s languages, Objective-C and Swift, are really only used for developing Apple products on iOS and OS X, and cannot be easily ported to other operating systems, except for Swift, which is open-sourced with Linux tooling. Quick to App Store Apps deployed to Apple’s App Store can take weeks before they are available for users to download, but on the Google Play Store it takes just a few hours. The Google Play Store also allows easy updates; you can push updates multiple times a day depending on the urgency. On the Apple App Store, by contrast, even a simple bug fix goes through the same lengthy protocol as a fresh deployment.
Play Store Monitor If you plan your app release well, you can control precisely what percentage of users receives an update, which allows you to track feedback and crash reports. You can then increase the percentage of users who receive further updates. This is possible because the Play Store allows an app to be released in both alpha and beta releases that can be made available to an exclusive group of testers. The advantage here is that initial access goes to a subset of users, and the feedback received can be used to fine-tune the app before the final release. It also allows a staged and gradual update rollout. Cost of Android phones The iPhone is often seen as high-end and expensive, so fewer people use it compared to Android on the basis of cost. The notion that iPhone users can afford more makes Android apps look like the cheaper option. Although this might have been true in the past, in the present day Android apps have been surpassing iPhone apps in some categories, both for the initial app and for in-app purchases. This has been shown by increasing profit from in-app adverts, which are cheaper on Android, and from mobile app games. Your Road to Android As a product owner, if you are still unsure which platform to develop first, be it for your business automation or for the use of the general public, the smart move, backed by economics, is to go for Android since it has a larger reach. Are you a new business startup, an established business owner, or a product manager? Do you want to create, tweak, or manage an app? iTwis is willing to see you through this process. Our well-skilled and experienced mobile development team can successfully facilitate your app production process from scratch and bring it to a significant product release. Contact us today for a consultation.
https://medium.com/itwis/reasons-why-developers-make-apps-for-android-rather-than-ios-c345c8b1c198
['Ayo Oladele']
2020-07-02 02:41:49.251000+00:00
['Mobile App Development', 'Java', 'Software Development', 'Kotlin', 'Android']
How the U.S. got Today’s Uncle Sam
Alfred Leete's 1914 poster Lord Kitchener Wants You (above) was a key promotional tool to encourage British men to volunteer for the army. It featured Great Britain's Secretary for War Lord Kitchener, a man with a serious mustache and an even more serious stare. According to The Conversation, hundreds of thousands of British men volunteered to fight in World War I after the poster was released. Many other factors led to the high levels of volunteerism amongst British men, such as social pressure, peer pressure, and an aggressive recruitment campaign. Still, the poster of Kitchener was popular and effective. The Lord Kitchener poster, especially his pointing finger, influenced the 1916 Uncle Sam poster's illustrator. James Flagg — Library of Congress James Montgomery Flagg is the illustrator of the iconic Uncle Sam, I Want YOU for U.S. Army poster, but he also bears a striking resemblance to his creation. Look at his face (above); if you aged it until the hair turned white, added a white-starred hat with blue trim and a white goatee, you would see none other than the iconic Uncle Sam. Flagg based Uncle Sam's face on his own. According to Travis Andrews of the Washington Post, Flagg's self-influenced depiction of Uncle Sam was so effective it was printed at least 4 million times in the final year of World War I. It became so popular it was used again to recruit troops during World War II. Including the Uncle Sam poster, Flagg designed 46 propaganda posters for the U.S. during World War I. Flagg's Uncle Sam, aside from the new face, was also more muscular and powerful than the previous depictions. Flagg's Uncle Sam fits how many Americans see America: strong, fervent, and patriotic. Context Many Americans wanted to enter World War I soon after it began in 1914. On May 7, 1915, the Lusitania's sinking increased those desires, but President Woodrow Wilson stayed committed to peace. President Wilson was reelected on a peace platform in 1916, but it was clear war loomed on the horizon. Flagg's reimagining of Uncle Sam deepened feelings of patriotism during the key months leading up to the U.S. officially entering the war. It had an effect similar to the one Leete's poster of Lord Kitchener had on British citizens. President Woodrow Wilson appeared before a Joint Session of Congress on April 2, 1917. He said: "It is a fearful thing to lead this great peaceful people into war…but the right is more precious than peace, and we shall fight for the things which we have always carried nearest our hearts, — for democracy, for the right of those who submit to authority to have a voice in their own governments…for a universal dominion of right by such a concert of free peoples as shall bring peace and safety to all nations and make the world itself at last free." Uncle Sam represents President Wilson's words. He is strong enough to stand up and fight against tyranny to preserve democracy and freedom. Final Thoughts Today, Uncle Sam is not only used to promote patriotism and recruitment for the military. He is used throughout American culture. I recently saw Uncle Sam wearing a mask and gloves to promote safety during the coronavirus pandemic. Now, when I look at Uncle Sam, I see Wilson, Lord Kitchener, and Flagg, as well as the distinguishable personification of the U.S. that I value as an American.
https://medium.com/frame-of-reference/how-the-u-s-got-todays-uncle-sam-b954d1bd4242
['Samuel Sullivan']
2020-11-17 18:02:11.062000+00:00
['Culture', 'Nonfiction', 'History', 'Marketing', 'Art']
My top three Fermi Paradox solutions
It is not easy to dismiss the Fermi paradox. Either technological civilizations or even life are extremely unlikely or too short-lived, or some exotic theory is the case, of which there are many. For instance, aliens may have already conquered the galaxy and are among us, or they retreated into some digital simulated life rather than venturing into space, or perhaps we live in a computer simulation with Earth being the only simulated planet with life. Here, just for the record, I briefly go over the three explanations I find most likely (at least today). One important requirement for an explanation is that it should not rely on undue assumptions about other civilizations. "Other civilizations don't wish to expand like we do" won't do: some of them may not, but some may. It only takes one civilization with a mentality similar to ours to expand across the galaxy. If we are the first, and if we don't destroy ourselves, most likely we will eventually expand. So here are my top three explanations: A technological great filter lies ahead of us. I find it likely that science and technology require a certain free spirit of innovation, exploration, and individuality. Across our history, the leverage of an individual to cause damage has been increasing steadily. Ten thousand years ago, a strong and mean individual could kill a few people and bring down a hut or two. Today, a bad actor can cause much more harm. What if there is a technology that will unavoidably be invented, one that gives anyone the ability to instantly and irreversibly destroy the civilization? For example, an exotic and easily tapped energy source, or downloadable code for grey goo. If such a technology inexorably lies ahead of us, which is plausible, it is difficult to imagine how we could prevent every single individual from deploying it. How about other civilizations: could a collectivist civilization akin to an ant colony avoid such doom? Brains are expensive; in a collectivist civilization that confers no evolutionary advantage on individual intelligence, "free-riders" will get rid of their brains, so it is conceivable that every technological civilization consists of competing individuals, and that in every single one of them one individual eventually and inexorably triggers the doomsday machine. One catch to this explanation: for "best results" the doomsday machine must be triggered before exponential space exploration commences. Aliens are among us. The first civilization to develop space travel, if similar to us in mindset, will likely want to expand at least defensively across the galaxy and beyond. If nothing else, to prevent future aggressor civilizations from expanding. Or perhaps because it is aware of the destructive abilities of even inferior civilizations (think: grey goo) and wants to monitor the galaxy. A defensive expansion is more likely — a no-brainer — compared to a rapid colonization, which has the downside of creating potential future competitors. A civilization that interconnects into a big internet-brain may have little use for distant colonies and expand at a rate much lower than 1% of the speed of light. In the defensive expansion scenario, the civilization will still rapidly send robot factories to build drones that will monitor all interesting planetary systems, and be ready to unleash destructive force against anything that looks threatening. Incidentally, UFOs are becoming mainstream.
If UFO reports are to be believed (OK, a big IF), then the reported UFOs are acting exactly as expected from drones that inspect things, are unconcerned about us, and are ready to engage in case anything they deem threatening appears. Which raises the important question of what they might deem threatening. Or perhaps aliens are among us in the quantum realm or in some other unexpected physical form. Exponential technological progress has to reach one or a few phase transitions, after which all bets are off. To advanced aliens, components such as neurons or silicon transistors will seem hopelessly bulky and inefficient as computational building blocks. Hence, as a colleague pointed out, SETI is severely outdated, using the technology and reasoning of the 1950s to search for aliens, and should broaden its scope and methods. I bet Carl Sagan — my childhood hero and pioneer of SETI — would agree. Technological civilizations are unlikely. This is the explanation I find least likely (rather, I leave room for an entirely different explanation, such as a specific and compelling hypothesis of why a sufficiently advanced civilization finds the visible universe uninteresting or explores it invisibly). Intelligence has most likely only evolved once on Earth in terms of nervous system; however, higher intelligence has evolved independently multiple times. Orangutans and chimps, dolphins and whales, elephants, ravens and crows, kea and African Grey parrots, and very independently octopuses and squids, have remarkable intelligence. Many species use tools. We are the first to develop technology on Earth, but isn't it a stretch to assert that if we weren't around no other species on Earth would develop technology in the next 100 million years? Or 1 billion years? What if life is vanishingly unlikely? Again, I don't think that's a robust explanation. The first step of life cannot be unlikely: while liquid water appeared on Earth 4.4 billion years ago, the first evidence of life may date back to 4.3 billion years ago, which hints at life originating quickly in geological terms once conditions are right. If any step in the evolution to intelligence was vanishingly unlikely, that step would most likely have taken a disproportionately long time on Earth. That is not what we observe: the last universal common ancestor appears about 3.5 billion years ago (bya) after a steady evolution of basic biomolecular functions; photosynthesis appears 3 bya; land microbes 2.8 bya; cyanobacteria's oxygen photosynthesis 2.5 bya; eukaryotes 1.85 bya; land fungi 1.3 bya; sexual reproduction 1.2 bya; marine eukaryotes 1 bya; protozoa 750 million years ago, and so on, steadily evolving into intelligent species in the past few hundred million years. The coarse-grained breakdown of evolution's steps in the early billions of years reflects our lack of data on the ancient progression of molecular biology rather than any single vanishingly unlikely event. Photo by Aziz Acharki on Unsplash Incidentally, I want to urge against jumping to the anthropic principle and stating that there is nothing puzzling about seemingly being alone because the sole intelligent civilization necessarily is puzzled about being alone. The anthropic principle is quite unsatisfying to begin with in cosmology. However, at least in that case we have a single observed event to explain — the universe and its cosmological properties — and no expectation of observing other similar events (i.e., other universes).
In the case of the Fermi Paradox, because there may be yet unobserved civilizations lurking around, we have to weigh any theory of us being alone against some prior probability of it being true. Given our observations on Earth, the prior probability we assign to technological civilizations cannot be vanishingly small — everything points to steady biochemical and then organismal evolution from formation of water all the way to intelligent tool-using species — therefore we have to make every effort to completely exclude other explanations before we jump to the conclusion that we are alone. So where does this leave us? I hope (1) is false. (2) is no good news either. (3) is wishful thinking, or perhaps scary too. I would love to see a better explanation. If you have your favorite explanation in mind, or thoughts to share, please comment below!
https://medium.com/the-mission/my-top-three-fermi-paradox-solutions-10d598a86197
['Serafim Batzoglou']
2019-06-25 03:39:02.577000+00:00
['Aliens', 'History', 'Cosmos', 'Science', 'Future']
TLDR Migrating a 130TB Cluster from Elasticsearch 2 to 5 in 20 Hours with 0 Downtime by Fred de Villamil
TLDR Migrating a 130TB Cluster from Elasticsearch 2 to 5 in 20 Hours with 0 Downtime by Fred de Villamil Replicate it using built-in ES capabilities, then reindex the delta from your main data source. Also — it's so cool to have everything saved in ~kafka~ a log. A 77-node cluster, with 200TB storage, 4.8TB RAM, 2.4TB allocated to Java, and 924 CPU cores. It is made up of 3 master nodes, 6 ingest nodes, and 68 data nodes. The cluster holds 1137 indices, with 13613 primary shards and 1 replica in a second data center. Option 1 — Cluster Restart: close every index, upgrade the software version, start the nodes, then open the indexes again. Downside — maintenance downtime =( Better option — expand the cluster 2x, replicate everything 2 more times, split the cluster into two, then catch up with changes: add 90 new servers running the same ES 2.3. "number_of_replicas: 1" becomes "number_of_replicas: 3" and ES takes care of copying every index and shard onto the new servers. Transferring 130TB of data at up to 4Gb/s puts lots of pressure on the hardware. The load on most machines was up to 40, with 99% of the CPU in use. Iowait went from 0 to 60% on most of the servers. Mitigate the problem of serving clients from busy hardware by using ES "zones" to split the data into cold and hot parts and dedicating some resources to serve the hot part to clients without reduced QoS. Split the cluster: shut down the new servers, disconnect them from the old cluster in terms of auto-discovery, and start them separately; close all the indexes on the new cluster and upgrade ES to 5.0; reopen the indexes and catch up with changes by reindexing the delta from the kafka source; switch ES clients to use the new cluster. Profit!
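As a rough sketch of the replica trick described above (the cluster address, index name, and the use of Python's requests library are my assumptions for illustration, not details from the original write-up), bumping number_of_replicas goes through the standard index settings API, and recovery can be watched via cluster health:

import requests

ES_HOST = "http://localhost:9200"  # hypothetical cluster address
INDEX = "logs-2016-01"             # hypothetical index name

# Ask Elasticsearch to keep 3 copies of every shard; it then distributes
# the extra replicas onto the newly added data nodes on its own.
resp = requests.put(
    f"{ES_HOST}/{INDEX}/_settings",
    json={"index": {"number_of_replicas": 3}},
)
resp.raise_for_status()

# Watch recovery progress while the additional copies are being built.
health = requests.get(f"{ES_HOST}/_cluster/health").json()
print(health["status"], health["relocating_shards"], health["initializing_shards"])

The same settings call can target every index at once (for example by using _all in place of the index name), which is what triggers the massive copy described above.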
https://medium.com/some-tldrs/tldr-migrating-a-130tb-cluster-from-elasticsearch-2-to-5-in-20-hours-with-0-downtime-by-fred-de-65270f5ca894
['Pavel Trukhanov']
2018-03-01 13:47:06.769000+00:00
['Tldr', 'Elasticsearch', 'DevOps', 'Engineering', 'Big Data']
The Motivation To Exercise Starts From Cues.
The Motivation To Exercise Starts From Cues. Willpower is out. Cues are in. Photo by Ethan Elisara on Unsplash Very often, we rely too much on our willpower to get ourselves up for exercise. It could be a trip to the gym, to the pool, to the tennis court, or even just lacing up for an evening jog. The power of our will is, very often, not very powerful. That is probably why we cannot really depend on it. Ask any smoker who has tried to quit smoking on sheer willpower. Or any fried-food junkie trying to lose weight. If it is not willpower, then what can we rely on? For me, the answer is "cues". And I am a big fan of cues. For instance, if I know that I want to head for a swim tomorrow, this is what I am going to do. I will pack my swimming equipment the evening before the session. Then I will put spare goggles on my working desk at home, on my working desk in the office, and in the shared locker where I deposit my bag at the office. I constantly remind myself of my swim session using cues (goggles) strategically placed where my eyes cannot miss them. That is swimming. Running is a different ballgame. With running, motivation comes from proximity. I find that I have a tendency not to run when I have to travel far to the start line. It is much easier for truckloads of excuses to infiltrate and take over my mind when it takes a long while to get from preparation to start. So I do this. I keep a set of running clothes near the front door of the apartment before I head to work. When I come home, I immediately change into my workout gear and start warming up the moment I shut the gate. If I am heading for track training, I search for the nearest stadium to the office and deposit my running attire and shoes there before heading to work. That way, no matter what happens, I will have to make my way to the stadium to collect my gear before I head home. This eliminates all excuses from my end. As with all things in life, a little deliberation goes a long way. Don't you think so?
https://medium.com/illumination/the-motivation-to-exercise-starts-from-cues-b8599e841373
['Aldric Chen']
2020-09-22 02:43:24.438000+00:00
['Self Improvement', 'Motivation', 'Health', 'Healthy Lifestyle', 'Short Story']
Top Spring Framework Tutorials For Beginners — Learn Spring Framework [Updated 2021]
This course describes by example how to build cloud services via the use of object-oriented design techniques, Java Servlets, the Java Spring Framework, and cloud computing platforms, such as Amazon Web Services. In this course you will learn: Understand the details of the Hypertext Transfer Protocol Be able to develop cloud services using the Java Spring Framework Understand basic issues in scaling cloud services Be able to use the Java Persistence API to integrate databases into cloud services Due to the importance of building secure and scalable mobile/cloud platforms, this MOOC will not only show you how to build cloud services, but how to do so securely, scalably, and efficiently. Security and scalability topics will be woven into discussions of cloud service creation so that students learn, from the start, how to create robust cloud services. Get a comprehensive overview of Spring in this intermediate-level course. The course includes: Spring Overview Configuring the ApplicationContext Component Scanning The Bean Lifecycle Aspect-Oriented Programming The course develops applications and web services with Spring, and shares knowledge about configuring the ApplicationContext (the interface for accessing components, loading files, publishing events, and more), as well as the beans (objects within the Spring IOC container). It demonstrates a modern Java configuration workflow and explores the Spring lifecycle in depth, so you can extend the framework and better troubleshoot any issues you have with your applications. Plus, learn how to use aspect-oriented programming to add behaviors to your apps in a reusable way. The course follows a systematic approach to Spring and breaks down the entire subject into systematic sections for easier understanding. The course also includes practical work (or homework) which will help you actually grasp how to work in Spring, instead of just a theoretical approach or following the instructor. In this course, you will learn: Introduction to the Spring Framework Take a look at the core Spring Framework Detailed introduction to Dependency Injection Work with the MVC (Model-View-Controller) pattern in Spring How the Spring Framework can help simplify building apps that utilize the web Take a look at some JSP basics, the visuals of the application Work with REST APIs: what they are, how they work, etc. How to configure a logger in the application This course is designed to give you a solid foundation in Spring MVC. The course covers the most recent approach of using both contained and exported WAR deployments. All configuration is done using the Java approach instead of XML. The course includes: What Is Spring MVC? Creating Your First Spring MVC Application Understanding the Structure of Spring MVC Applications Creating Controllers in Spring MVC Creating Views in Spring MVC Applications Using Java Server Pages with Spring MVC View Using Thymeleaf in Spring MVC Views Validating Objects in Spring MVC Applications Using Client-side JavaScript in Spring MVC Applications In this course, Spring Framework: Spring MVC Fundamentals, you will gain a solid understanding of creating web applications with Spring MVC. First, you will learn Spring's architecture. Next, you will discover controllers and navigation. Finally, you will explore how to create views. When you are finished with this course, you will have the skills and knowledge of Spring MVC needed to create web applications.
Learn the hottest, most in-demand Java web framework, including web programming with Spring MVC and Hibernate. Lifetime access with no subscription on Udemy. An introduction to the widely-used Java Spring framework. Discover how to wire together your Java objects using Spring and dependency injection. You'll learn how to set up your system for Spring development, how to use Maven, how to work with databases using Spring and Hibernate, and how to create web applications with Spring MVC. We'll also look at managing user accounts with Spring Security, JDBC, working with web forms, Apache Tiles for building modular web pages, aspect-oriented programming (AOP), and using Log4J and JUnit. Learn Spring with Core, MVC, JDBC, MySQL; Upcoming: Spring 5, Spring Boot 2, Thymeleaf, Security, Hibernate, JPA & more. Java, the world's leading programming language, is used to develop Spring applications. The Spring Framework is the most popular and widely used Java Enterprise Edition (JEE) framework. Spring is an open source, lightweight framework that handles all the infrastructure. Spring makes life easy by allowing developers to focus on the business logic while it takes care of the low-level "plumbing". Spring is super lightweight, which gives you faster deployment. That's because it advocates the POJO programming model, which means you don't need a dedicated server for deployment. It is highly modular, which means you pick and choose which modules you need. Testing Spring Framework applications is easy because of this. This course assumes you know at least a little of the basics of Java. If you don't know Java or want a refresher, then I suggest you take my Complete Java Masterclass first before this Spring Framework course. But that's optional. You can still get a lot out of this course, even with a little Java knowledge. New content to be released includes: Spring MVC in-depth (Forms and validation): Drilling further into Spring MVC — handling web forms and validation. Spring AOP — Here's where you'll learn about Spring's Aspect Oriented Programming (AOP). AOP helps to address cross-cutting concerns such as logging, security, etc. Spring Security — This topic covers Spring's security feature that helps to make Spring-based web apps more secure and robust. Spring with Hibernate — You'll learn Spring integration with Hibernate, one of the most popular Object Relational Mapping (ORM) frameworks. Spring with JPA — This is where you'll learn Spring integration with the Java Persistence API (JPA), which helps to make Spring applications database and ORM agnostic. Spring Data — Spring Data unifies and makes it easy to access different kinds of persistence stores, both relational database systems and NoSQL data stores. Spring with Apache Tiles — Apache Tiles is a free open-source template engine for Java web frameworks. You'll learn its integration with Spring. Spring Web Flow — Spring Web Flow builds on Spring MVC and allows implementing the "flows" in a web application. Spring & Testing — In this section you'll learn how to carry out unit testing of Spring applications with testing frameworks such as JUnit. Learn the magic of Spring Framework in 100 Steps with Spring Boot, Spring JDBC, Spring AOP, JUnit, Mockito and JPA. This is an excellent basic introduction to Spring, SpringBoot & JPA. Easy to follow and seems to cover all the basic concepts with a good potted history to explain why certain techniques have evolved as they have. Learn the magic of Spring Framework.
From IOC (Inversion of Control), DI (Dependency Injection), Application Context to the world of Spring Boot, AOP, JDBC and JPA. Get set for an incredible journey. In this course, you will learn the features of Spring and Spring Modules — JDBC, AOP, Data JPA with hands-on step by step approach. You will get introduced to Spring Boot, Unit Testing with JUnit and Mockito, talking to the database with Spring JDBC and JPA, Maven (dependencies management), Eclipse (IDE) and Tomcat Embedded Web Server. We will help you set up each one of these. You will learn about Spring step by step — in more than 100 steps. This course would be a perfect first step as an introduction to Spring. You will learn about Basics of Spring Framework — Dependency Injection, IOC Container, Application Context and Bean Factory. Spring Annotations — @Autowired, @Component, @Service, @Repository, @Configuration, @Primary…. Spring MVC in depth — DispatcherServlet , Model, Controllers and ViewResolver Spring Boot Starters — Spring Boot Starter Web, Starter Data Jpa, Starter Test Basics of Spring Boot, Spring AOP, Spring JDBC and JPA Basics of Eclipse, Maven, JUnit and Mockito Basic concept of a Web application step by step using JSP Servlets and Spring MVC Unit testing with JUnit and Mockito using XML and Java Spring Application Contexts Level 1 : Spring Framework in 10 Steps Level 2 : Spring in Depth Level 3 has 3 steps on Unit Tests with Java and XML Contexts Level 4 : Spring Boot in 10 Steps Level 5 : Spring AOP Level 6 :Spring JDBC and JPA Best Spring Courses — Java Application Framework Enterprise class use of Spring Framework 4 and Spring Boot. Spring Core course is intended to be a predecessor to this course. In Spring Core, I gave you a solid foundation in working with the Spring Framework. In this course, I build upon that foundation expanding your skills with the Spring Framework. The skills taught in this course are skills you will need for enterprise application development using the Spring Framework. Topics Include: Spring Data JPA Form Validation in Spring MVC Externalized messages Using Spring Security Aspect Oriented Programming Spring Application Events Scheduled Tasks Advanced Spring Configuration The course is started by showing students how to replace the traditional JPA DAO structure we created in the Spring Core course, using Spring Data JPA. It continues building upon concepts learned in the Spring Core course by showing students how to use Command objects in Spring MVC and how to perform server-side property validations. Next, we get into using Spring Security. Spring Security is one of the most widely used modules of the Spring Framework. It shows how to add Spring Security to our existing Spring MVC web application. We configure Spring Security to read user information from our database, and then secure URLs to authenticated users and users with specific security roles. Aspect-Oriented Programming (AOP) is a really cool programming paradigm, and it is supported by the Spring Framework. In the module on AOP, I show you how to use AOP to log login activity in Spring Security. By using AOP, we don’t need to change any of the Spring Security code. The Spring Framework has a very mature events framework we can use for application events. I show you how to create a custom application event, then how to set up an event handler to take action on specific application events. In Spring Core and in this course, the project we’re working on uses Spring Boot as its foundation. 
Spring Boot is doing a lot of automatic configuration for us. In the last module of this course, we will remove Spring Boot from the project. This will require us to configure all the objects and data sources being provided by Spring Boot manually. In doing so, students will gain insight into all the automation being provided by Spring Boot, and how to manage a more advanced Spring Configuration. A deep-dive into the Microservice architectural style, and how to implement it with Spring technologies. Microservices with Spring Cloud is an online workshop designed to help you learn the Microservices architectural style, and how to implement it using Spring technologies. This course provides a good, solid introduction to the topic of the Microservices architectural style, and combines this with practical experience gained by working through the exercises featuring Spring Cloud. Along the way, this course will provide a brief introduction to Spring Boot and Spring Data (enough to get you familiar with these technologies if you have not been immersed in them already). The course provides exercises that give you hands-on experience working with the various components of Spring Cloud. The goal of this course is to serve as a practical guide through the Spring Cloud projects, so you can see how they are used to implement microservice-based architecture. By the time you finish this course, you will have gained the ability to articulate what the Microservices architectural style is all about, including its advantages and disadvantages. You will gain familiarity with Spring Boot, and you'll see how to use it to build web interfaces, REST interfaces, and how to use Spring Data and Spring Data REST. You will gain the ability to build microservice-based applications utilizing Spring Cloud technologies. You will learn about Centralized, versioned configuration management using Spring Cloud Config Dynamic configuration updates with Spring Cloud Bus Service discovery with Spring Cloud Eureka Client-Side Load Balancing with Ribbon Declarative REST Clients with Feign Software Circuit Breakers with Hystrix. Deploy Spring Boot Applications to the Cloud on AWS. The Spring Framework is very popular with large companies. In fact, Spring is the most popular Java framework. A typical company will deploy its Spring Framework application in at least 3 different environments. Having a development, test, and production environment is common. The problem developers face is that each environment is different. Different server names. Different databases. Different user accounts. Different passwords. In this course, you will learn how to use Spring's IoC container to deploy your application in many different environments. Through Inversion of Control, your Spring application can wire itself for the needs of each environment. You'll start the course learning advanced configuration options of the Spring Framework. Next, the course takes a DevOps approach. You'll see how to deploy Spring Framework applications in different environments. In development, it's common to use an H2 in-memory database. Of course, this database is only temporary. Not something you'd want to use for your production deployment. Do you want to see how to flip a switch and use MySQL? Flip another switch and your app can be using an RDS database managed by Amazon. You can do this with no code changes. The course also looks at the best practices used in enterprise software development. Using a continuous integration server is a best practice.
Jenkins is the most popular CI server. You will learn how to install Jenkins on a Linux server. A server you provisioned in the AWS cloud. Once you have Jenkins running on your AWS server a best practice is to set up Jenkins on a friendly URL. Jenkins is a Java application running on port 8080. You don’t want to be typing some IP colon 8080 into your browser to reach Jenkins. Docker is an exciting technology. You will see how to leverage Docker to host your own Artifactory Maven repository. We’ll use Artifactory to manage build artifacts produced by Jenkins. Just for fun, we will also use Docker to set up a MySQL database server. We’ll do this by provisioning a Linux server on AWS, installing Docker on it, and then deploy MySQL in a Docker container. It will also provide an application server we can use to run our Spring Boot application. You will pull the Spring Boot jar right from Artifactory and tell it to connect to a database server. Amazon AWS also has managed MySQL databases. This is their RDS service. You will see how to provision your own RDS database. We’ll then reconfigure our Spring Boot application to connect to the RDS database. There is a lot of fun and challenging content in this course. You will learn: How to manage Spring properties. Why you want to encrypt sensitive properties, such as passwords. How Spring Profiles are used. Using YAML to configure Spring. To provision servers on Amazon AWS. Logging into your servers via SSH. How to use the yum package manager to install software on Linux. How to configure your own Linux service. How DNS works, and how to use Route 53 to setup your own hostnames. How to use webhooks in GitHub to trigger your builds immediately. Why you don’t want to use root accounts for your application. Configure Jenkins to perform a Maven build. Use Jenkins to deploy build artifacts to Artifactory. This is a very hands-on course. To get the most out of this course, you will need an account on AWS. You should be able to use the AWS free tier to complete the course assignments. To get the most out of this course, you will need a domain name. You will need to have control of the domain. Without this, you will not be able to configure subdomains in Route 53. The course does leverage AWS for its cloud services. The skills you learn on the AWS platform will transfer to most corporate environments. AWS is used to mimic the typical company. Spring Boot gives you all the power of the Spring Framework without all of the complexity. Start writing apps today. Spring Boot and the Spring Framework makes it easy to create both powered and production-grade applications and services that run on their own and can be maintained with a minimum fuss. It also provides production-ready features such as metrics, health checks, and even externalized configuration. It is software designed to run anywhere, meaning you can create executable JARs, which is one of the most favorable features of this type of program. While learning this type of application might seem like a daunting task, this course structures Spring Boot and Spring Framework learning in an easy to comprehend fashion. Featuring topics like an introduction into the Framework as well as step by step guidelines into creating your first application, this course is perfect for almost any user. 
The only requirements in order to excel at this courses’ teaching of Spring Boot are some familiarity with Java and Groovy programming languages, some web development experiences as well as a computer that is capable of running both Java + Intellij or Eclipse. Besides this course offering lifetime access to all eighty featured lectures and over ten hours of teaching content, it also offers you the opportunity to create Spring MVC applications and also tutorials on how to connect to various databases using Spring Data. This course will be extremely beneficial to students who are new to Spring Boot, students who are unfamiliar with Spring Framework or those who are looking into writing their own apps. This course applies to all of these cases. Spring Framework, Hibernate & Java: Programming, JPA, OCA Java SE, JDBC, Oracle, Database App, SQL & MySQL For Beginners. Spring Framework course will show you the exact techniques and strategies you need to develop a full CRUD app with Hibernate, write unit tests with XML, Java application contexts, build web applications and do programming. In This Spring Framework Training, You’ll Learn: Aspect-Oriented Programming Setting Up Spring Environment Java Development Kit (JDK) Setup Installation of Apache Common Logging API Eclipse IDE Setup The Necessary IOC, BeanFactory & Application Container The Application Context Container The Singleton and Prototype Bean Scope Bean & Life Cycle Initialization & Destruction Callbacks Default Initialization And Destroy Methods Dependency Injection Injecting Inner Beans & References Autowiring Modes & Constructor JDBC Framework Configuring Data Sources Data Access Object Executing SQL and DDL Statements Local and Global Transactions Programmatic and Declarative, Transaction Management Logging with LOG4J Jakarta Commons Logging (JCL) API Get Ready for Your Spring Interview with Spring, Spring Boot, RESTful, SOAP Web Services and Spring MVC. Spring Framework is the most popular Java Framework ever. It continues to evolve with changing architectures. Spring Boot is one of the most popular Spring projects. Spring Boot is the most used Java framework to develop RESTful Services and Microservices. Preparing for Spring Interview is tricky. There are a wide variety of Spring Modules and Spring Projects you would need to recollect and be prepared to answer questions on. You would need to get a good understanding of the new features of Spring and have a firm grasp of the concepts you implemented in your projects. This course helps you prepare for Spring Interview with code examples covering 200+ Spring Interview Questions and Answers on Spring, Spring Boot, Spring MVC, Spring JDBC, JPA, AOP, RESTful Services and SOAP Web Services. You will learn below topics Spring Spring MVC Spring Boot Database Connectivity — JDBC, Spring JDBC & JPA Spring Data Unit Testing AOP SOAP Web Services RESTful Web Services Spring Framework 5: Learn Spring Framework 5, Spring Boot 2, Spring MVC, Spring Data JPA, Spring Data MongoDB, Hibernate. Learn Spring with the most modern and comprehensive course available for Spring Framework 5 and Spring Boot 2. You will see how to build multiple real world applications using Spring Framework 5. 
The in demand technologies you will use to build Spring Framework applications, include: Spring Framework 5 Spring Boot 2 Spring Data JPA Spring MVC Spring MockMVC Spring WebFlux Spring Data MongoDB Spring Security (Coming in Q1 2018) Hibernate Project Lombok MapStruct Maven Gradle In addition to teaching you Spring Framework 5, you will learn about modern best practices used in enterprise application development.As we build the applications, you’ll see me using Test Driven Development (TDD) with JUnit and Mockito. Using Mockito mocks keeps your Spring Framework unit tests light and fast. You’ll also see how the Spring context can be used for more complex integration tests. These techniques are best practices used by companies all over the world to build and manage large scale Spring Framework applications. Spring MVC and Hibernate have long been cornerstones of the Spring Framework. You will learn how to use Spring MVC, Spring Data JPA and Hibernate to build a real world web application. You’ll learn about Hibernate configuration, and about the mapping of JPA entities. Spring MVC has a lot of robust capabilities. I start you off showing you how to build recipe application (using TDD, of course). Initially, it’s all happy path development. We go back and add custom exception handling, form validation, and internationalization. In the course you will also learn how to use Spring MVC to create RESTful APIs. A big theme of Spring Framework 5 is Reactive Programming. Inside the course we build a web application using Thymeleaf, Spring MVC, Spring Data MongoDB, and MongoDB. We then take the MongoDB application we built and convert it to a Reactive application. You’ll see how you can leverage the new Reactive types inside the Spring Framework from the data tier to the web tier. You will get to see step by step how to convert a traditional Spring MVC application to an end to end reactive application using the WebFlux framework — which is brand new to Spring Framework 5. Coming soon to the course in early 2018: Spring Security Documenting your APIs with RestDoc Aspect Oriented Programming (AOP) Using Spring Events Scheduling Tasks Caching with eHcache Spring JDBC (JDBC Template) JMS Messaging AMQP with RabbitMQ Logging configuration for Logback and Log4J 2 And more real world Spring Framework apps. Build a web application using Spring Framework 4 and Spring Boot. If you’re new to the Spring Framework, this is the course you want to start with. This course covers the core of the Spring Framework, the foundation which all of the other Spring Framework projects are built from. In this course, you will learn about important key concepts, such as dependency injection and inversion of control, which are used throughout the Spring Framework. Within the Spring Framework, you have the option of using the traditional XML configuration, or the new Java based configuration. I’ll show you step by step how to configure Spring Beans using best practices in XML and Java. I’ll also show you how to use Spring to persist data into a database, and Spring MVC to show content from the database on a webpage. Throughout the course you will have access to the code examples being presented in the tutorials. This is code you can build and run on your computer. You will be able to study the working code examples. Whenever possible, I will go into real world use cases and examples from my years of experience as a Spring Source consultant. I’ve seen a lot of good code, and bad code over the years. 
Through my experience with Spring, I will show you good code and poor programming practices to avoid. By the time we reach the end of this course, you will be able to build a functioning Spring Web Application. In this course, you will learn about:
https://medium.com/quick-code/top-tutorials-to-learn-spring-framework-for-the-java-application-12db01d9c288
['Quick Code']
2020-12-25 16:47:09.235000+00:00
['Development', 'Spring Framework', 'Java', 'Coding', 'Application']
Machine Learning Classification Models
A brief guide to Model Evaluation Techniques: Machine Learning Machine Learning Classification Models Model Evaluation Techniques for Machine Learning Classification Models Image courtesy: Great Learning In machine learning, we often use classification models to get a predicted result for population data. Classification is one of the two sections of supervised learning, and it deals with data from different categories. The training data-set trains the model to predict the unknown labels of population data. There are multiple algorithms: logistic regression, K-nearest neighbors, decision tree, Naive Bayes, etc. All these algorithms have their own style of execution and different techniques of prediction. To find the most suitable algorithm for a particular business problem, there are a few model evaluation techniques. In this article, different model evaluation techniques will be discussed. Confusion Matrix It probably got its name from the state of confusion it deals with. If you remember hypothesis testing, you may recall the two errors we defined as type-I and type-II. As depicted in Fig.1, a type-I error occurs when the null hypothesis is rejected even though it is actually true. A type-II error occurs when the alternate hypothesis is true, but you fail to reject the null hypothesis. Fig.1: Type-I and Type-II errors Figure 1 depicts clearly that the choice of confidence interval affects the probabilities of these errors occurring, and that if you try to reduce either of these errors, the other one will increase. So, what is a confusion matrix? Fig.2: Confusion Matrix The confusion matrix is the image given above. It is a matrix representation of the results of any binary testing. For example, let us take the case of predicting a disease. You have done some medical testing, and with the help of the results of those tests, you are going to predict whether the person has a disease. So, you are actually going to validate whether the hypothesis of declaring a person as having the disease is acceptable or not. Say, among 100 people you predict 20 people to have the disease. In actuality, only 15 people have the disease, and among those 15 people you have diagnosed 12 correctly. So, if I put the result in a confusion matrix, it will look like the following — Fig.3: Confusion Matrix of predicting a disease So, if we compare fig.3 with fig.2 we will find — True Positive: 12 (You have predicted the positive cases correctly!) True Negative: 77 (You have predicted the negative cases correctly!) False Positive: 8 (You have predicted these people as having the disease, which they actually don't. But do not worry, this can be rectified in further medical analysis. So, this is a low-risk error. This is the type-II error in this case.) False Negative: 3 (Oh ho! You have predicted these three poor fellows as fit, but they actually have the disease. This is dangerous! Be careful! This is the type-I error in this case.) Now, if I ask what the accuracy is of the prediction model I followed to get these results, the answer should be the ratio of the accurately predicted number to the total number of people, which is (12+77)/100 = 0.89. If you study the confusion matrix thoroughly, you will find the following things: The top row depicts the total number of predictions you made as having the disease. Among these predictions, you have predicted 12 people correctly, who actually have the disease.
So, the ratio 12/(12+8) = 0.6 is the measure of the accuracy of your model in detecting that a person has the disease. This is called the Precision of the model. Now, take the first column. This column represents the total number of people who actually have the disease, and you have predicted correctly for 12 of them. So, the ratio 12/(12+3) = 0.8 is the measure of the accuracy of your model in detecting a person with the disease out of all the people who actually have the disease. This is termed Recall. Now, you may ask why we need to measure precision or recall to evaluate the model. The answer is that it is highly recommended when a particular result is very sensitive. For example, suppose you are going to build a model for a bank to predict fraudulent transactions. Fraudulent transactions are not very common; in 1000 transactions, there may be only 1 that is fraudulent. So, undoubtedly your model will predict a transaction as non-fraudulent very accurately. In this case, the overall accuracy does not matter, as it will always be very high irrespective of how well the fraudulent transactions are predicted, because they are a very low percentage of the whole population. But predicting a fraudulent transaction as non-fraudulent is not desirable. So, in this case the measurement of recall plays a vital role in evaluating the model: it helps you understand, out of all the actual fraudulent transactions, how many are being caught. If it is low, the model is not acceptable even if the overall accuracy is high. Receiver Operating Characteristics (ROC) Curve Measuring the area under the ROC curve is also a very useful method for evaluating a model. The ROC curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR) (see fig.2). In our disease detection example, TPR is the ratio between the number of accurate predictions of people having the disease and the total number of people who actually have the disease. FPR is the ratio between the number of people incorrectly predicted to have the disease and the total number of people who do not actually have the disease. So, if we plot the curve, it comes out like this — Fig.4: ROC curve (source: https://www.medcalc.org/manual/roc-curves.php) The blue line denotes how TPR changes with different FPR values for a model. The larger the ratio of the area under the curve to the total area (100 x 100 in this case), the more accurate the model. If it becomes 1, the model is likely overfit, and if it is at or below 0.5 (i.e. when the curve lies along the dotted diagonal line), the model is too inaccurate to use. For classification models, there are many other evaluation methods like Gain and Lift charts, the Gini coefficient, etc. But in-depth knowledge of the confusion matrix can help to evaluate any classification model very effectively. So, in this article I tried to demystify the confusion around the confusion matrix to help the readers. Example: Machine Learning Models Spotify uses to recommend music you'll like In the early 2000s, Songza implemented a manual music recommendation system for its listeners, where a team of music experts and curators would create playlists. But these recommendations were not objective, as they were dependent on the personal taste of the curators. It was an average experience for listeners, with a fair share of hits and misses, because it was impossible to make a playlist which catered to the varied tastes of a diverse set of people.
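As a quick aside, the numbers from the disease example above are easy to reproduce in code. Here is a minimal sketch (assuming scikit-learn and NumPy are available; the label arrays are synthetic, built only to match the 12/8/3/77 counts) of computing the confusion matrix, accuracy, precision, and recall:

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

# 12 true positives, 8 false positives, 3 false negatives, 77 true negatives
y_true = np.array([1] * 12 + [0] * 8 + [1] * 3 + [0] * 77)
y_pred = np.array([1] * 12 + [1] * 8 + [0] * 3 + [0] * 77)

print(confusion_matrix(y_true, y_pred))  # rows = actual class, columns = predicted class
print(accuracy_score(y_true, y_pred))    # (12 + 77) / 100 = 0.89
print(precision_score(y_true, y_pred))   # 12 / (12 + 8) = 0.6
print(recall_score(y_true, y_pred))      # 12 / (12 + 3) = 0.8

Running it prints the 2x2 matrix followed by 0.89, 0.6, and 0.8, matching the figures above. Now, back to how Spotify solved the personalization problem.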
The technology and the data did not exist back then to build a playlist that would be personalized to the taste of each individual listener. Along came Spotify a few years later, offering a highly personalized weekly playlist called Discover Weekly that quickly became one of their flagship offerings. Every Monday, millions of listeners receive a fresh playlist of new song recommendations, customized to their personal tastes based on their listening history and the songs they've engaged with. Spotify uses a combination of different data aggregation and sorting methods to create their unique and powerful recommendation model that's powered by machine learning. "One of our flagship features is called Discover Weekly. Every Monday, we give you a list of 50 tracks that you haven't heard before that we think you're going to like. The ML engine that's the main basis of it, and it's advanced some since, had actually been around at Spotify a bit before Discover Weekly was there, just powering our Discover page" — David Murgatroyd, Machine Learning Leader at Spotify. Spotify uses three forms of recommendation models to power Discover Weekly. 1. Collaborative Filtering Collaborative Filtering is a popular technique used by recommender systems to make automated predictions about the preferences of users, based on the preferences of other similar users. On Spotify, the collaborative filtering algorithm compares multiple user-created playlists containing the songs that users have listened to. The algorithm then combs those playlists to look at other songs that appear in the playlists and recommends those songs. This framework is executed using matrix math in Python libraries. The algorithm first creates a matrix of all the active users and songs. The Python library then runs a series of complex factorization formulas on the matrix. The end result is two separate vectors, where X is the user vector representing the taste of an individual user. Vector Y represents the profile of a single song. To find users with similar taste, collaborative filtering compares a given user vector with every other user vector and outputs the most similar ones. The same procedure is applied to the song vectors. Spotify does not rely only on collaborative filtering. The second recommendation model used is NLP. 2. Natural Language Processing NLP is the ability of an algorithm to understand speech and text in real time. Spotify's NLP constantly trawls the web to find articles, blog posts, or any other text about music, to come up with a profile for each song. With all this scraped data, the NLP algorithm can classify songs based on the kind of language used to describe them and can match them with other songs that are discussed in the same vein. Artists and songs are assigned classifying keywords based on the data, and each term has a certain weight assigned to it. Similar to collaborative filtering, a vector representation of the song is created, and that's used to suggest similar songs. 3. Convolutional Neural Networks Convolutional Neural Networks are used to hone the recommendation system and to increase accuracy, because less-popular songs might be neglected by the other models. The CNN model ensures that obscure and new songs are considered. The CNN model is most popularly used for facial recognition, and Spotify has configured the same model for audio files. Each song is converted into a raw audio file as a waveform.
These waveforms are processed by the CNN and assigned key parameters such as beats per minute, loudness, major/minor key, and so on. Spotify then tries to match similar songs that have the same parameters as the songs its listeners like listening to. With these key machine learning models, Spotify is able to tailor a unique playlist of music that surprises its listeners every week with songs they would never have found otherwise. A key problem in many machine learning models is the lack of access to clean, structured data that can be processed. Spotify has been able to circumvent that problem thanks to the massive amounts of data it collects from its users. It has been able to shine as a great example of effective use of machine learning models to give users an unrivaled personalized experience. Saikat Bhattacharya is a Senior Software Engineer at Freshworks, and is pursuing the PGP-Machine Learning program from Great Learning. This article originally appeared on Towards Data Science and has been syndicated with permission from the author. Happy modelling!
https://medium.com/my-great-learning/machine-learning-models-great-learning-7c258eeb68b6
['Great Learning']
2019-08-27 10:35:03.344000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Marketing', 'Business']
The Clathrate Gun Hypothesis or Why Methane “Burping” is Cause for Concern
The Clathrate Gun has already been fired. The video below shows a lead researcher breaking down while reading out the results of her study, which concludes that the Clathrate Gun has already been fired. So what is the hypothesis, specifically? The Clathrate Gun Hypothesis suggests that the release of methane from the earth due to warming could cause a massive increase in temperatures within a lifetime ('gun' because once fired the process can't be stopped; it is irreversible). Now let's break it down. Methane. There are a couple of things to know about methane. It's a colorless greenhouse gas 24 times more potent than carbon dioxide. That being said, methane can be a problem in the atmosphere even in small quantities. Alright, next: clathrates. For our situation, methane in 'ice' = clathrate. Now, clathrates are stable in cold temperatures or under high pressure. One cubic meter of clathrate could release 164 cubic meters of methane. It's quite a bit. To put that in perspective, that's 43,324.2 gallons. Simple enough. So methane clathrate is simply ice holding a lot of methane within its crystal structure. Next: where is this thing found? Methane clathrate is found in seabed permafrost. Essentially, it's mainly found on the ocean floor; however, there isn't a consensus on just how large these deposits are. We'll jump back in history to grasp the scale of this theory. Some scientists theorize that violent degassing may have affected the planet significantly in the past. They suggest it could have resulted in the Eocene hothouse period. Eventually, that period of great warmth gave way to a cooling climate. A large extinction occurred near the end of the Permian period, about 250 million years ago. The damage to marine life was great; more than 94 percent of all species abruptly disappeared as oxygen levels sank. It took at least 20 million years, and in special cases over 100 million years, for environmental diversity to recover. All of this happened from a temperature increase of less than 6.5 degrees Celsius. So how would methane clathrate figure into all of this? The release of methane from methane clathrate could increase global temperatures. An increase in global temperatures would release even more of the compound, increasing temperatures further. An abrupt release could drastically impact the environment. Even if a runaway effect is unlikely, as a few have postulated, it could still cause ocean acidification and alter the atmosphere. During a period of the Glacial Minimum, temperatures went up 6 degrees Celsius. Side note: the release of methane from the compound can be referred to as methane degassing or "burping." So, the trapped methane from the seabeds may have caused the End-Permian Extinction. Alright, that covers all the basics. We'll delve further into the issue. By measuring ebullition (bubbling), researchers could estimate the density of bubbles rising from the permafrost, and they found that 100–630 mg of methane per square meter is released daily from the East Siberian Shelf into the water column. This basically suggests that methane release is gradual, not abrupt. However, events such as Arctic cyclones could increase the rate at which methane is released. Another thing to consider is that clathrates do not exist only in seabed permafrost; they can also be found in water if the temperature is low enough. Also, the methane could be contained by a 'lid' of permafrost. Picture: Chris Butler/Science Source In light of this info, let's investigate Snowball Earth.
About 630 million years ago, it's believed the earth's surface was almost entirely frozen. The frozen ice sheets of the planet would've had quite a bit of methane trapped within. However, because these sheets were unstable, they would collapse after growing big enough, releasing methane into the atmosphere. Temperatures increased, melting more sheets, releasing more methane, increasing temperatures further and bringing the Snowball Earth to its end. It is thought that the last ice age was not brought to its end by methane-related warming. Many of the methane clathrate deposits are in sediments that are too deep to be released suddenly. Furthermore, in the overall scheme of warming, the effect of methane would not be drastic. This is because clathrates destabilize from the deepest part of their stability zone, which is usually 100 meters below the seabed. The methane will surface eventually, but not as rapidly as previously thought. Our Current Situation Photo by Annie Spratt on Unsplash Around 2008, there was research in the Siberian Arctic which claimed millions of tons of methane were escaping through breaches in the seabed permafrost. In certain areas, the concentrations of methane hit 100 times normal levels. There's a release of 0.5 metric tons of methane per year. Also, 50 gigatons of it are at risk of being released at any moment. What if that happened? The amount of methane in the atmosphere would increase by a factor of 12. Miscellaneous: There's also a trapped gas deposit off Canada in the Beaufort Sea. Considered to be the shallowest known deposit of methane, it lies 290 meters below sea level. Along the eastern continental slope of the United States, destabilizing methane hydrate can be found, about 2.5 gigatons worth. It's still unclear whether it would reach the atmosphere. Although there isn't exactly a consensus on the Clathrate Gun (it is, after all, a hypothesis), it's nonetheless good to be informed about the current state of our planet and how it's evolved in the past and may change in the future. Thanks for reading!
https://eashanreddykotha.medium.com/the-clathrate-gun-hypothesis-or-why-methane-burping-is-cause-for-concern-bf664bc1723f
['Eashan Reddy Kotha']
2020-07-24 17:33:33.968000+00:00
['Climate Change', 'Environment', 'Climate', 'Education', 'Science']
MLOps In Action: Training-serving skew
MLOps In Action MLOps In Action: Training-serving skew Training-serving skew is one of the most common problems when deploying ML models. This post explains what it is and how to prevent it. A typical Machine Learning workflow When training a Machine Learning model, we always follow the same series of steps: Get data (usually from a database) Clean it (e.g. fix/discard corrupted observations) Generate features Train model Evaluate model Once we clean the data (2), we apply transformations (3) to it to make the learning problem easier. Feature engineering is a particularly important task when working with tabular data and classic ML models (which is the most common setting in industry); the only exception is Deep Learning models, where there is little to no feature engineering. This post focuses on the former scenario. When deploying a model, the pipelines look very similar, except we make predictions using a previously trained model after computing the features. However, not all deployments are equal; the two most common settings are: Batch (e.g. make predictions for every user every week and upload predictions to a database) Online (e.g. expose a model as a REST API to make on-demand predictions) What is feature engineering? Feature engineering is the set of statistically independent transformations that operate on a single observation (or group of observations). In practical terms, it means that no information from the training set is part of the transformation. This contrasts with some pre-processing procedures such as feature scaling, where information from the training set (i.e. mean and standard deviation) is used as part of the transformation (subtract mean, divide by standard deviation). These pre-processing methods are not feature engineering, but part of the model itself: mean and standard deviation are “learned” from the training set and then applied to the validation/test set. In mathematical terms, we can express the feature engineering process as a function that transforms a raw input into another vector that is used to train the model. Training-serving skew Ideally, we should re-use the same feature engineering code to guarantee that a given raw input maps to the same feature vector at training and serving time. If this does not happen, we have training-serving skew. One common reason for this is a mismatch of computational resources at training and serving time. Imagine you are working on a new ML project and decide to write your pipeline using Spark. A few months later, you have the first version and are ready to deploy it as a microservice. It would be very inefficient to require your microservice to connect to a Spark cluster to make a new prediction; hence, you decide to re-implement all your feature engineering code using numpy/pandas to avoid any extra infrastructure. All of a sudden, you have two feature engineering codebases to maintain (Spark and numpy/pandas). Given an input, you must ensure they produce the same output to avoid training-serving skew. This is less of a problem with batch deployments, since you usually have the same resources available at training and serving time, but always keep this situation in mind. And whenever possible, use a training technology stack that can also be used at serving time. If, for any reason, you cannot re-use your feature engineering training code, you must test for training-serving skew before deploying a new model. To do this, pass your raw data through your feature engineering pipelines (training and serving), then compare the output. 
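A minimal sketch of such a comparison, assuming two hypothetical functions (features_training and features_serving) that stand in for the Spark and numpy/pandas codebases; neither name comes from the original post.

import numpy as np
import pandas as pd

def features_training(raw: pd.DataFrame) -> pd.DataFrame:
    # stand-in for the batch (e.g. Spark) implementation
    return pd.DataFrame({"ratio": raw["a"] / raw["b"], "log_a": np.log1p(raw["a"])})

def features_serving(raw: pd.DataFrame) -> pd.DataFrame:
    # stand-in for the numpy/pandas implementation used by the microservice
    return pd.DataFrame({"ratio": raw["a"] / raw["b"], "log_a": np.log1p(raw["a"])})

# run both pipelines on identical raw data and compare the outputs
raw = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [2.0, 4.0, 8.0]})
assert np.allclose(
    features_training(raw).to_numpy(),
    features_serving(raw).to_numpy(),
), "training-serving skew detected"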
All raw input vectors should map to the same output feature vector. Note that training-serving skew is not a universally defined term. For the purpose of this post, we limit the definition to a discrepancy between the training and serving feature engineering code. Re-using code at training and serving time If you are able to re-use feature engineering at training and serving time, you must ensure the code is modular so you can integrate it in both pipelines easily; the next sections present three ways of doing so. Solution 1: Abstract feature engineering in a function The simplest approach is to abstract your feature engineering code in a function and call it at training and serving time. Then, call your generate_features function in your microservice code; a sketch using Flask appears at the end of this post. Solution 2: Use a workflow manager While simple, the first solution above does not offer a great development experience. Developing features is a highly iterative process. The best way to accelerate this is via incremental builds, which keep track of source code changes and skip redundant work. Say, for example, you have 20 features that are independent of each other: if you modify one of them, you can skip the rest, since they'll produce results identical to the previous run. Workflow managers are frameworks that allow you to describe a graph of computations (such as feature engineering code). There are many options to choose from; unfortunately, only a few of them support incremental builds. Ploomber is one of them; other options are DVC Pipelines and drake for R. To enable incremental builds, workflow managers save results to disk, and load them if the source code hasn't changed. In production, you usually want to perform in-memory operations exclusively because disk access is slow. As far as I know, Ploomber is the only workflow manager that allows you to convert a batch-based pipeline to an in-memory one for deployment without any code changes. Solution 3: Use a feature store Another solution is to use a feature store, which is an external system that pre-computes features for you. You only need to fetch the ones you want for model training and serving. Feast is one example of an open-source feature store. A feature store is a great way to tackle training-serving skew. It also reduces development time, since you only need to code a feature once. This is a great solution for batch deployments and some online ones. If your online model expects user-submitted input data (e.g. a model that classifies images from a user's camera), a feature store is infeasible. The main caveat is that you need to invest in maintaining the feature store infrastructure, but if you are developing many models that can benefit from the same set of features, it is worth considering this option.
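Here is a minimal sketch of Solution 1, assuming a single shared generate_features function and a hypothetical /predict endpoint; DummyModel stands in for whatever previously trained model you would load at startup (e.g. with joblib), and none of these names come from the original post.

import pandas as pd
from flask import Flask, request, jsonify

def generate_features(raw: pd.DataFrame) -> pd.DataFrame:
    # statistically independent transformations only: nothing learned from the training set
    out = raw.copy()
    out["price_per_unit"] = out["price"] / out["quantity"]
    return out

class DummyModel:
    # stand-in for a model trained on the output of generate_features
    def predict(self, features: pd.DataFrame):
        return (features["price_per_unit"] > 10).astype(int).to_numpy()

app = Flask(__name__)
model = DummyModel()

@app.route("/predict", methods=["POST"])
def predict():
    raw = pd.DataFrame([request.get_json()])
    features = generate_features(raw)  # exact same code path as the training pipeline
    return jsonify({"prediction": model.predict(features).tolist()})

The training pipeline imports the same generate_features, so both code paths stay in sync by construction.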
https://towardsdatascience.com/training-serving-skew-77d947c4c100
['Eduardo Blancas']
2020-12-29 13:11:42.567000+00:00
['Mlops', 'Machine Learning', 'Python', 'AI', 'Data Science']
Likes, Comments, Shares Aren’t a Reliable Proxy for Success, Period.
On social platforms like Facebook, engagements — likes, comments, shares, and click-throughs long dominated conversation around the particular success of a post. Yet, as Facebook writes: “Online engagement metrics are a proxy for interest, but they are not a reliable indicator of the content’s persuasiveness. Persuasive content influences your audience in a way that helps move your business.” In reality, these engagements do not effectively correlate with business results for brand content. Facebook research has found that “content doesn’t need to be persuasive to elicit an engagement.” and inversely, “not all persuasive content elicits an engagement”. Campaigns that wish to drive brand awareness cannot therefore be measured by the level of engagement, as a potential customer can inherently notice and be influenced by that content without interacting with it. In fact, a 2012 Facebook & Datalogix ROI study found that: “more than 90% of offline sales come from people who don’t interact with ads during the campaign.” Engagement on Facebook — When it Matters Engagements Don’t Represent Your Audience One of the biggest issues with engagements is that they may be more indicative of a user’s behaviors (e.g. a “clicky” user) rather than the effectiveness of the content. In fact, analysis of content from major brands at BBDO increasingly shows that those engaging with content registered outside of the target audience the brand wished to impact. For example, we took real-life Brand X and looked at the demographics of people who engaged with its promoted content, which included all formats across Facebook and Instagram. While this brand’s target skewed relatively young, data showed that the engagement rate for those under 34 was at 3%, but the highest engagers were 64+ at 22%, with the engagement rate increasing dramatically with every new age block. However, when we examined Estimated Ad Recall Lift by age, we found all blocks consistently performed at or above benchmarks. Depending on who you are targeting, engagements may be more of a red herring than they are an indicator of positive responses from your target. Invest in Equitable Measures of Success As brands shift away from shiny engagement metrics on Facebook, it is essential to invest in studies that can more concretely and more thoroughly measure the impact of campaign efforts on consumers. Nielsen Brand Effects studies, for instance, have long been used to analyze the impact of Facebook ads in key brand metrics. Consumers see a piece of advertising and shortly thereafter answer a survey to help determine the impact of an ad in shifting awareness, attitudes, favorability, intent, or preference. Brands also have the ability to track the effect of their work in driving business objectives through studies such as Datalogix or marketing mix modeling. Datalogix studies can help marketers understand how their Facebook spends impact offline sales by matching purchasing data for 70 million American households via loyalty cards and programs. By pulling and anonymizing information associated with their Facebook accounts, marketers can start to see the difference in sales when someone is exposed to a Facebook ad. Marketing mix modeling studies can also pinpoint the value of social marketing in driving business objectives, but are not available in the short term. 
However, as short-term metrics continue to dominate marketing, brands have the opportunity to track metrics that demonstrate the largest probability of success — namely the 10-Second Retention Rate and Estimated Ad Recall Lift. 10-SECOND RETENTION RATE: Facebook Marketing Science commissioned Nielsen to analyze the value of Facebook video in driving three key brand metrics: lifts in ad recall, brand awareness, and purchase intent. Initial data analysis showed that from the moment a video was viewed, there were statistical lifts across each of the three metrics, even amongst those who did not watch the video but did see the impression. Further investigation then focused on how video duration potentially impacted the metrics outlined. The results revealed a noticeable lift in cumulative impact when viewers were retained to the 3-second mark. The most statistically significant results, however, came from users who were retained to the 10-second mark, with massive lifts seen across ad recall, brand awareness, and purchase intent. The longer a user is engaged with a piece of content, the larger the effect. Yet the strong correlation between 10-second retention and impact-led metrics demonstrates a huge opportunity for marketers to measure effectiveness and optimize their work on the platform in the short game. ESTIMATED AD RECALL LIFT: Marketers can also track a Facebook-calculated proxy metric known as “estimated ad recall lift” (EARL), which measures the impact of ads on driving ad recall by combining the reach of an ad with the relative time users spend looking at it. This is then weighted against historical data for ad recall taken from 300 previous campaigns. This metric offers a more effective proxy for real-time measurement of the lift in ad recall a brand can expect to gain from a campaign. Importantly, EARL normalizes for users’ scrolling habits, so the quick scroll a younger user might be used to and the slower scroll of an older user are taken into account, along with historical ad recall lift data. While an attractive measurement, brands should not rely on this metric alone, given that estimates employ a degree of probability. To gain a more robust look at the effectiveness of work in driving awareness and return on investment, further tracking studies should be employed alongside it. This article is part three of a five-part series highlighting BBDO Comms Planning’s latest report, About Face: A New Approach to Facebook for Big Brands. To download this white paper, click here.
https://medium.com/comms-planning/likes-comments-shares-arent-a-reliable-proxy-for-success-period-65426c2ea524
['James Mullally']
2016-09-28 14:17:01.124000+00:00
['Advertising', 'Measurement', 'Marketing', 'Facebook', 'Social Media']
TOP 5 — Must Read for every CEO.. IMPORTANT NOTE BEFORE I GET STARTED…
IMPORTANT NOTE BEFORE I GET STARTED: This is my first list of my own. I consider myself an avid reader and technologist, and I would much rather read at a bookstore or at my study table than on Kindle or Google Books. I work with an IT consulting firm — Cloud Certitude — and work typically in the SMB and mid-market segments. Hit Refresh — by Satya Nadella, CEO of Microsoft Though I’m not a big Microsoft fan, this book is all about individual change, the transformation that happened inside Microsoft, and the arrival of the most exciting and disruptive wave of technology humankind has experienced — including artificial intelligence, mixed reality and quantum computing. One of the lines I loved most was “Ideas excite me. Empathy grounds and centres me.” 2. METAHUMAN — Unleashing Your Infinite Potential by Dr. Deepak Chopra Well, I would say this is the most controversial book in the entire list. I met Dr. Deepak in San Francisco, United States, for a brief discussion and became a big fan. In this brilliant book, Dr. Deepak argues that consciousness is the sole creator of self, mind, brain, body and the universe as we know it. METAHUMAN is a brilliant vision of human potential and how we can move beyond the limitations, concepts, and stories created by the mind. 3. THE TECH WHISPERER — ON DIGITAL TRANSFORMATION AND THE TECHNOLOGIES THAT ENABLE IT by Jaspreet Bindra This book is all about digital transformation and the technologies that can enable it. Companies across the world are being buffeted by new technologies, disruptive business models and start-up innovation. Business leaders know they need to adopt new technologies, like blockchain, Artificial Intelligence (AI) and the Internet of Things (IoT), using them to keep pace with rapid customer and business environment changes. My favourite chapter of the book is Brahma and Business Models. 4. HOW TO CREATE A MIND — THE SECRET OF HUMAN THOUGHT REVEALED Ray Kurzweil presents a provocative exploration of the most important project in human-machine civilization: reverse-engineering the brain to understand precisely how it works and using that knowledge to create even more intelligent machines. Kurzweil also explains how the brain functions, how the mind emerges, brain-computer interfaces, and the implications of vastly increasing the power of our intelligence to address the world’s complex problems. I would say every page of this book is unique and inspiring. 5. Customer — Support — Focused — Driven — Obsessed — A whole-company approach to delivering exceptional customer experiences. GET AHEAD OF THE CUSTOMER EXPERIENCE CURVE This book is for companies that define success through business outcomes and understand that putting customers at the center of their business realizes sustainable, continuous growth. Customer experience is a key driver of technical innovation and business success — Customer Obsessed teaches organizations how to leverage it across all levels of the organization to sustain competitive advantage in the digital era.
https://medium.com/doctorsalesforce/top-5-must-read-for-every-ceo-32c758784987
['Sumit Mattey']
2020-02-28 18:40:52.300000+00:00
['CEO', 'Artificial Intelligence', 'AI', 'Microsoft', 'Mindfulness']
Super Simple React Native Redux Example
Inspired by http://blog.tylerbuchea.com/super-simple-react-redux-application-example/ In this article we explore the barest of solutions to get started with React Native + Redux. The only prerequisite is to have “create-react-native-app” installed (https://facebook.github.io/react-native/docs/getting-started.html) Setup create-react-native-app superSimple cd superSimple npm install --save redux react-redux redux.js App.js Notes
https://medium.com/david-vassallos-blog-posts/super-simple-react-native-redux-example-f0db89e7338
['David Vassallo']
2018-07-09 14:12:36.385000+00:00
['React', 'React Native', 'Reactjs', 'JavaScript', 'Development']
What's the Role of Fiction in Social Change?
We all love a good apocalyptic story. When I was a teenager, I used to love the comic book Tank Girl. Set in a post-apocalyptic world where water was scarce, Tank Girl fought together with mutant kangaroos against the Water & Power Corporation. In the "apocalyptic" genre, we have major pieces of work picturing the absurd structures and technologies we create. They show how high the stakes are if we don’t take care of our reality. Apart from big explosions and heroic scenes, sci-fi and the like bring about social criticism, delivering a digestible bite of reflection on the crazy world we have created. Published 70 years ago, 1984 is a great example of reflection coming from fiction. It's a piece that we, who were born before the '90s, got exposed to long before cable internet existed. Even today, it is impossible to talk about it without commenting on the impact of government surveillance on free will or drawing a parallel to what is happening in society now. On the other hand, take Blade Runner 2049. Besides the beautiful images and engaging plot, as most sci-fi goes, it also poses important social criticisms and a sad perspective of what the (not so far) future could hold. Implications of the technology we are building on the environment and society are something we can no longer ignore. Moral issues in genetic engineering, corporate power, consumer culture, they are all there. Yet, we barely talk about these things in conjunction with the film. These are only pieces that make the story exciting. Like nice furniture in a living room. We are now so bombarded with entertainment and embedded in a consumer culture that our capacity to analyze seems to have changed tremendously. All we see in Blade Runner 2049 is a great piece of entertainment. Together with the Netflix & chill phenomenon, it seems as if fiction has been losing its ability to make us question, review beliefs and set actions (that is, if we believe that fiction ever had such potential). If anything, fiction seems to be a coping mechanism, to relax after our workday. Dream, baby, dream It’s not only that we reduce great works of art to space-out-of-reality couch entertainment. My issue is how we have come to over-romanticize everything, and normalize the apocalypse. The fight is lost even before it has begun. Going to Mars to save our lives becomes a reachable dream, while stopping plastic consumption and changing our eating behaviors to save our planet becomes a big fat annoyance. Quite a few friends have told me that they are fascinated by the sky and the potential of SpaceX but that they are terrified to death of what is under our oceans and of seeing Blue Planet II. That is, after all, the point of romanticizing something: not caring about what we have and dreaming of what we might never have. Dreaming doesn’t require much energy anyway, whereas maintaining and improving our situation does. Tired of our 9–5 jobs and defeated by our dreams not coming true, we watch fiction, fantasizing about being the main character. In an apocalyptic scenario, sure, we could totally be Ryan Gosling in Blade Runner 2049. In this romantic view, we have a central role. We kill people, we have power. The thing is… of all the 7.4 billion people in this world, are you and I really the ones for the job? 
If this was war, chances are that you and I would be a mere casualty on the street, shot on the head when we were fighting for a few coins to get water while someone else (a lot stronger, more important, with better contacts and more money) looked for replicants and killed us instead. “You are not Denzel,” repeats Chris D’Elia on his show Man on Fire on Netflix over and over again. His constant cry is a reminder of the absurd situation we find ourselves. As I was growing up, films like The Matrix, Fight Club and Avatar triggered people around me, bringing up questions about the broken society we have built, and our distorted relationship with nature. Maybe we did not do anything serious about these questionings, anyway. When Black Mirror was launched, we were impressed by the harsh critics and what the future could entail, but we were sure we would never get to any of those social absurdities. Now officers have to fight with people who, instead of helping a teenager, film him drowning. Where do we go from here? We don’t create things from nowhere, we imagine them first and then we work on it. Fiction is a rich pot of concrete technological developments and images to take inspiration from. What happens when we can’t either write about a better world, nor stop to analyze the sublots of apocalyptic stories? We create a dreadful narrative that is going to be normalized. In a world with so much content and overwhelming busyness, who has time and willingness to think critically about the film they watched and make a parallel to their lives? If we are awake by the time the film finishes, we will probably just consume another piece of content. When chaos is normalized, everything is just entertainment to forget a hard day’s work, and Netflix sees our time sleeping as a competition, the only thing that is clear is that if there is ever going to be a revolution it is not going to be consumer-led. But not everything is shit. What brings me hope is the fact that most of us want to be the hero in the movie, not the villain. That points to the existence of something inside us that wants to be more than we are, that wants to act, do something for the common good. The question is, how do we wake it up, how does the voice go from a whisper to a shout? Yes, it is a big role, where no one is to blame and yet everyone is responsible. In the era of endless bread and circuses, I see that content creators could be more aware of their role in creating narratives. Create stories set in utopias where we would actually like to live, help society visualize a different future. Otherwise, we are doomed to live a life of bread and circuses, if we are not yet.
https://medium.com/literally-literary/whats-the-role-of-fiction-in-social-change-564497f91125
['Aline Müller']
2020-02-05 02:28:42.938000+00:00
['Essay', 'Science Fiction', 'Writing', 'Society', 'Fiction Writing']
How to Create Eye-Catching Maps With Python and Kepler.gl
How to Create Eye-Catching Maps With Python and Kepler.gl Use this intuitive tool to simplify mapping In this article, we’ll explore Kepler.gl, an open-source solution for geospatial data visualization and exploration. Kepler was developed by Uber to make it easier for users of all levels to design meaningful maps that also look good. The tool can handle large amounts of data and has a friendly, intuitive interface that allows users to build effective maps in an instant. Available for all to use since 2018, it’s about time we get a closer look at how the tool fits into the data visualization landscape. In this article, we’ll cover the basics of importing data to Kepler using Python’s Pandas and GeoPandas, how to design your visualization, and how to export the map to an HTML file. Vancouver Number of Graffitis by Block Getting Started The dataset for this example is NOAA’s Global Significant Earthquakes dataset. [Kaggle] Import statements. Pandas I’m interested in looking at the intensity of the earthquakes and whether they generated a tsunami, but those aren’t the only values we’ll need. We also need some geolocation. # read csv df = pd.read_csv('data/Worldwide-Earthquake-database.csv') In this dataset, our geolocation is stored in two fields, Latitude and Longitude. Those will be essential for Kepler to draw our data, so we need to make sure all those values are clean and usable. # lat and lon to numeric, errors converted to nan df['LONGITUDE'] = pd.to_numeric(df.LONGITUDE, errors='coerce') df['LATITUDE'] = pd.to_numeric(df.LATITUDE, errors='coerce') # drop rows with missing lat, lon, and intensity df.dropna(subset=['LONGITUDE', 'LATITUDE', 'INTENSITY'], inplace=True) # convert tsunami flag from string to int df['FLAG_TSUNAMI'] = [1 if i=='Yes' else 0 for i in df.FLAG_TSUNAMI.values] After loading the data to Pandas, we can use .to_numeric to make sure they’re numbers, then use .dropna to remove the empty values. You can also convert FLAG_TSUNAMI from yes and no to 1 and 0. Cleaning and preparation are up to your needs; you may have different requirements or use other tools for that, but once your data is in a Pandas data frame you can map it. Kepler.gl Kepler is straightforward. It gives you a world map and tools to build the visualization; it expects the data and the configuration of the map. Let’s start by defining a map. (I’m using Kepler for Jupyter) kepler_map = keplergl.KeplerGl(height=400) kepler_map Default map without data. Then we add the data frame to it. kepler_map.add_data(data=df, name="earthquakes") Map marking the earthquakes. And the map is updated. Quite easy! You can load your data to Kepler with Pandas and GeoPandas, which support a more comprehensive array of extensions, or directly from GeoJSON and CSV files. Design On the top left of the map, there’s an arrow that opens the settings menu. Settings Menu On the menu we have: Layers — Defines how the variables are encoded to the map Filters — For selecting smaller sets of data Interactions — Defines interactions such as tooltips, search boxes, and others Basemap — Defines the style of the world map and other elements like labels, roads, styles Layers You can select an existing layer or create a new one, then click the ellipsis beside Basic. That’ll open a selection of different encodings for your map; try selecting Hexbin for the next example.
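Putting the steps above together, a condensed sketch might look like the following; the input file name is the one used in the article, while the output file name is an assumption on my part. keplergl's save_to_html writes the map (with whatever styling you did in the UI, captured in kepler_map.config) to a standalone HTML file.

import pandas as pd
import keplergl

# load and clean, as above
df = pd.read_csv('data/Worldwide-Earthquake-database.csv')
df['LONGITUDE'] = pd.to_numeric(df.LONGITUDE, errors='coerce')
df['LATITUDE'] = pd.to_numeric(df.LATITUDE, errors='coerce')
df.dropna(subset=['LONGITUDE', 'LATITUDE', 'INTENSITY'], inplace=True)
df['FLAG_TSUNAMI'] = [1 if i == 'Yes' else 0 for i in df.FLAG_TSUNAMI.values]

# build the map and attach the data
kepler_map = keplergl.KeplerGl(height=400)
kepler_map.add_data(data=df, name="earthquakes")

# export the interactive map as a standalone HTML file
kepler_map.save_to_html(file_name="earthquakes_map.html")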
https://medium.com/nightingale/how-to-create-eye-catching-maps-with-python-and-kepler-gl-e7e897eff8ac
['Thiago Carvalho']
2020-06-26 15:03:18.577000+00:00
['Mapping', 'Python', 'Data Science', 'Programming', 'Data Visualization']
Self-Driving Cars Aren’t Just About Safety
Self-Driving Cars Aren’t Just About Safety Time, energy, human life quality: FSD would improve all of these. Photo by Bram Van Oost on Unsplash One thing that you hear a lot nowadays is how stressed out people are. There’s a good number of reasons for this. So good, in fact, that I probably don’t need to spell them out. You’re probably well aware of the stressors in your own life. But there is one major stressor that we never think about anymore: having to drive. Of course, not everyone has to (or gets to) drive. Some places have robust public transportation, or people live within walking distance of most places that they go. Some people use scooters or bikes. That said, about 227 million people in the US, or 69% of the population, have driver’s licenses. This doesn’t mean that all of these people drive every day; however, there are probably also a lot of people who drive without licenses. I’ll assume that these differences pretty much even out to this number of drivers. By the way, that’s 69% of the total US population, including children and people who otherwise can’t drive. So the percentage of drivers taken out of people who actually could drive is even higher. Driving is one of the most stressful things I do every day, and yet if someone asks me on a particularly stressful day what’s wrong, having to drive will be the last thing on my mind. And yet it is stressful. When you’re driving, there is the ever-present need to be alert at all times. If you mess up, maybe you die. Or maybe you plow into another vehicle. Maybe that vehicle has a family in it. Or maybe you run off the road and crash into a building. These are all real things that happen every day. For all I know, after I get up from writing this and drive to work, one of them will happen to me. And my articles are set to automatically post, so who knows if it did? I mean, probably there will be articles after this one if I lived. So imagine self-driving cars. If you actually have a car with a computer installed that can drive better than a human, all that stress is taken off of your plate. Yes, you’re handing the car over to a machine. But if the machine has been properly validated, then you’re doing the safer thing. Now think about all of the time you’ve gotten back if you don’t have to drive any more. My current commute is 20 minutes. That’s one of the shortest commutes that I’ve ever had. It’s still 40 minutes a day, and currently I commute to the office 4 days a week. So that’s 160 minutes a week, or 8,320 minutes a year, which is nearly 139 hours a year. That’s about 3.5 work weeks per year. It looks like the average US commute time is 26.9 minutes. Crunching the same numbers as before, you get up to 4.6 work weeks per year. And this is with just four commutes per week. So self-driving cars get people back time and energy. That’s not as talked-about as the lives that will be saved. But I think it’s an important part of the story. Here’s an example: It’s sometime in the future, maybe 10 years from now. Self-driving cars are now fairly common and have made themselves available to the mid-market. You are looking at car options and the self-driving computer option is available for an extra $2500 (because it’s the future, the price has gone down despite the value being basically the same as it is today). Do you buy it? Well, let’s crunch some more numbers. Let’s say that you do get the computer. Can you justify that financially? 
Let’s say you use the computer to spend your newly freed-up 4.6 work weeks working an extra job on Fiverr that pays $15/hr. After a year, you’ve made $2797.60. That pays for the computer. And then you have an extra 4.6 work weeks every year for as long as you have the car. Think of what you could do with that time. I’m optimistic about the potential that these cars have to improve not just safety, but human life quality. I believe that humans driving cars was very necessary for the 20th century- I am less sure about the 21st.
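As a quick sanity check on the arithmetic above (assuming a round-trip commute, 52 weeks a year, 40-hour work weeks, and the $15/hr side gig; the function name is just for illustration):

def yearly_commute_hours(minutes_one_way, days_per_week=4, weeks_per_year=52):
    # round-trip minutes per year, converted to hours
    return minutes_one_way * 2 * days_per_week * weeks_per_year / 60

mine = yearly_commute_hours(20)       # ~138.7 hours, about 3.5 forty-hour work weeks
average = yearly_commute_hours(26.9)  # ~186.5 hours, about 4.6-4.7 work weeks

print(round(mine / 40, 1), round(average / 40, 1))
print("side gig at $15/hr:", round(average * 15, 2))  # ~2797.6, matching the figure above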
https://medium.com/carre4/self-driving-cars-arent-just-about-safety-2328fec6a061
['Paul Cipparone']
2020-11-20 14:34:18.981000+00:00
['Technology', 'Cars', 'Artificial Intelligence', 'Robotics', 'AI']
Is Decluttering Good for Your Creativity?
Is Decluttering Good for Your Creativity? How cleaning up your act could be a boon to your brain, or lead to creative block. Organizing and decluttering is all the rage these days. For that, we can largely thank the present-day titan of tidiness, Marie Kondo, whose KonMari method has fueled several books, a Netflix series, and invigorated an entire industry devoted to helping people maintain control over their physical possessions. But as an architect who’s written a book about scientific research into the psychology of creative space, I have long wondered whether Ms Kondo’s prescriptions for self-dispossession were beneficial to creative types, many of whom work in home environments. So I started to look into it. My conclusion? Assuming you’re not a pathological hoarder, it depends. Granted, that sounds a tad wishy-washy. Why the hesitation? And how could having an abundance of stuff be anything but detrimental to one’s creativity? Let me explain. KonMari and the Question of Science A curious thing happens when you search for the word ‘science’ in Marie Kondo’s debut blockbuster manual, The Life-Changing Magic of Tidying Up: it doesn’t show up. Not once. Hmmm. How about the word ‘scientific’? Slightly more reassuring news: it appears a single time in a passage where the author acknowledges that she has no scientific basis for her theory that people accrue a variety of mental and physical health benefits from getting organized. In this regard, Ms Kondo is misinformed. In truth, there’s a considerable body of research indicating that putting one’s house in order does exactly that. What’s more, these findings make it clear that a dishevelled environment can indeed depress creative task performance, largely by diminishing our well-being. Home office. Austin, Texas. Architecture and interior design by Tim Cuppett Architects. Photography by Alec Hemer. Take, for example, a 2010 study published in the scientific journal Personality and Social Psychology Bulletin. It found that subjects who described their homes as cluttered exhibited greater depression and fatigue, diminished coping skills, and increased difficulty transitioning from work to home compared to people who viewed their place of residence more positively. What’s the connection between unkempt physical surroundings and a lack of mental well-being? Biology. According to the researchers, the group with messy environments registered elevated levels of the stress hormone cortisol, a substance released into the bloodstream by the adrenal glands. Normally, the body boosts the flow of cortisol when it perceives an external threat in order to sharpen our focus and analytic thinking skills, and by extension our ability to defend ourselves against potential harm. We then return to normal levels after the threat has passed. The problem with being stressed out by a messy environment is that the mess tends to remain in place, thereby leading to constant cortisol production and the kinds of disorders evident among subjects in the 2010 study. And those are only some of the maladies linked to an oversupply of the hormone. Others include headaches, irritability, intestinal problems, high blood pressure, low libido, poor sleep, heart disease, suppressed immunity to disease, and difficulty recovering from exercise. Home office. Scarborough, Maine. Architecture by Caleb Johnson Studio. Photography by Trent Bell. But wait — there’s still more, as in more weight. That’s right — another potential consequence of mess-induced stress is weight gain. 
According to one source, people with unkempt homes are an eye-popping 77 per cent more likely to be overweight than those who reside in well-tended surroundings. Unsurprisingly, kitchens overladen with goods are especially detrimental for maintaining narrow waistlines; a 2017 study from the journal Environment and Behavior found that subjects living in chaotic food environments significantly upped their consumption of scale-busting high-calorie snacks (aka junk food), the effects of which become all too plain for everyone to see. Other unhealthy consequences of clutter accrue indirectly. Air quality, for instance, often suffers in disorganized environments because the profusion of objects creates more surfaces to attract dust. The extra layers of dust not only increase the possibility of respiratory problems among occupants, but they can also reduce the amount of natural light inside a space by making those surfaces less reflective. Households with pets and in urban locations are particularly susceptible to the loss of light and dirtied air resulting from having too much stuff out and about. And if all this weren’t enough to instantly turn you into a neat freak, clutter can also hamper your ability to focus on task completion. This insight comes to us via a 2011 paper out of Princeton University, where researchers found that our sensory apparatus can be easily overwhelmed by having too many things to look at at one time, thereby making it harder to sort out only those objects relevant to the task at hand. More stuff also makes it more likely that people will be distracted from what they’re doing as something new catches their eye with each pass of the room. Library. Yonkers, New York. Architecture by Gary Brewer for Robert A.M. Stern Architects. Photography by Francis Dzikowski / Otto. More Problems: Incompleteness and Control The inability of people in disorganized settings to focus points to one of the reasons that clutter affects us as it does: a space in disarray imparts a sense of unfinished business. Sometimes that sense derives from projects or tasks that remain undone, the residue of which lingers in stacks of unfiled papers or the detritus of half-completed household chores. At other times it might stem from deferred decisions, such as whether to keep possession and if so, where to store it. Given the discomfort most people experience when confronted by a plethora of unresolved conditions, it’s hardly surprising that the unhappy subjects in the 2010 study I discussed at the beginning of this article repeatedly used the term ‘unfinished’ to describe their disorderly habitats. A second possible explanation for the negative impact of disorganization involves a psychological construct known as the locus of control. In a nutshell, the concept proposes that people fall into two main camps: those who believe that they are in control of their lives, and those who believe that external forces largely determine their fate. As you might expect, people who create tidy environments for themselves tend to fall into the former category, while those in less organized surroundings often feel that their belongings have gotten the better of them through no fault or desire of their own. As also might be expected, a landmark British study found that people with an ‘internal’ locus of control are generally more successful, healthier, better educated, and less anxious than those with an ‘external’ locus. Mark Twain, possibly at home in New York City. 1901. Photograph by Theodore C. Marceau. 
Library of Congress. On the Other Hand… So all this would strongly suggest that going full Kondo can only boost your creative performance by sparing you the downsides of disorganization. Why, then, did I qualify my judgment at the beginning of this essay by suggesting that there might be more than one side to the story? Answer: Because there’s evidence that a messy environment can materially stimulate idea generation. Exhibit A: a 2013 study that found that a group of subjects brainstorming ideas around a messy table evinced greater creativity than a second group performing the same task around a tidy table. The researchers who oversaw the study theorized that the neat table primed the subjects for conformity because neatness is a socially acceptable norm, whereas the unkempt work surface suggested a more devil-may-care attitude toward conventional expectations. I would also add that the findings are entirely consistent with the observation that creative thinking is by nature a ‘messy’ process — that is, non-linear and riddled with unanticipated surprises. As for Exhibit B: I offer you a cadre of historically eminent creatives who did some of their best work in messy spaces, Mark Twain being a revered avatar in this category. So where does all this leave you? Mess or no mess? As with nearly all techniques for enhancing creativity, ultimately it depends on your personal work habits, even when the scientific evidence suggests that your preferences might run counter to those of the general population. My suggestion, then, would be to test both conditions to determine which will be most beneficial to your work. My apologies for not being more definitive, but in this case, I wouldn’t be truthful if I said I could give you a neat and tidy answer.
https://medium.com/the-creative-mind/is-decluttering-good-for-your-creativity-4ca5e6f66030
['Donald M. Rattner']
2020-03-15 21:31:10.526000+00:00
['Self Improvement', 'Konmari', 'Creativity', 'Interior Design', 'Psychology']
Creating Little Nightmares
CREATOR INTERVIEW Creating Little Nightmares An exclusive interview with Dave Mervik of Tarsier Studios, creators of the creepy platformer/puzzler Little Nightmares Formed in 2004, Tarsier Studios in Sweden got its start working with Sony on the Little Big Planet series of games. That association with Media Molecule lead to them bringing a bigger and better version of Tearaway to the PlayStation 4 as Tearaway Unfolded. They headed next to the wilds of PSVR to make the puzzler Statik, and then in 2017 released Little Nightmares, a creepy and atmospheric platformer/puzzler, to wide acclaim. Their latest game is The Stretchers, a Nintendo Switch exclusive comedy puzzle game. Before the end of 2020 they will return to the existing IP well for the first time, as they are currently wrapping up development on Little Nightmares 2. We recently sat down (virtually) with the studio’s Head of Communication, Dave Mervik (Merv) to talk about that, the studio’s origins, and so much more! SUPERJUMP Thank you Merv for joining us on the interview, I wanted to start by congratulating you and everyone at the studio for the tremendous success of Little Nightmares, now having sold over 2 million copies. MERV Thanks man. SUPERJUMP Our readers really enjoy learning about the genesis of independent studios, so could you share a bit about the founding of the studio and how you came to be associated with Sony, making your first two titles exclusively for PS4? MERV The studio was originally a bunch of students named Team Tarsier, who made a prototype called The City of Metronome. As the well-told story goes, it was one of the darlings of E3, but was never asked to the dance, so now it remains a shadow looming over everything we do :) The happy outcome from that, though, was that it was the beginning of our relationship with Sony and Media Molecule. Malmo, Sweden. Source: Matador Network. SUPERJUMP The studio is located in Malmo, Sweden, which has become a massive center for game development studios in Europe. What do you think it is about that location in particular that is making studios congregate there? Is there any collaboration between studios with so many being in such close proximity? MERV I could wax lyrical about a culture of creativity, but that’d be just some bullshit. Of course those things exist here, but they exist everywhere, it’s just not always a story. I think it’s most likely a practical thing, being close to Copenhagen Airport makes it easier to attract talent from all over the world, for publishers to visit their developers, and for events like Nordic Game Conference to flourish. Or maybe it’s just be one of those things like when people accidentally form a queue. You see a bunch of game devs loitering around Malmö, so you start loitering too, and the next thing you know you’ve got to have a difficult conversation with a whole bunch of loitering developers. One thing I’ve learned is that Swedes will do anything to avoid a difficult conversation, so maybe it was just easier to turn Malmö into a massive centre for game development studios in Europe. In terms of collaboration, it really depends on the studio. I imagine large studios like Massive have all they need in-house and then some, whereas a place like Game Habitat is a lot more open to collaboration and sharing of resources and learnings. 
Personally I love that mentality, but you can’t escape the fact that all of these companies are in some kind of competition with each other, so I wouldn’t think there’d be any linking of arms or singing Kum-Bah-Ya round the campfire just yet. SUPERJUMP Your second game was a PSVR-exclusive title called Statik. What were some of the unique challenges of making a PSVR title in comparison to the non-VR games the studio has developed? Do you see yourselves going back to a VR title, or perhaps building a VR mode into a future game? MERV Speaking personally, the unique challenge/opportunity was the ways in which we could tell our story. The potential and restrictions of the VR format is something we didn’t try to ‘solve’, but rather worked with it and in some cases made it central to the experience. The game became about sitting in a chair with a device on your head, a device in your hands, and a slightly tragic person in your ears, solving puzzles for a reason you don’t fully understand. I loved what we tried to do with Statik, and am only sad that more people didn’t get to experience it. Messing with people in that way, playing with their expectations and sense of ‘self’ was something we could only do with VR, and I would only want to go back to VR if that opportunity presented itself again. I’m just not a fan of VR for the sake of it, it reinforces this notion that it’s novelty tech, when it could offer so much more. Statik. Source: vrnerds.de. SUPERJUMP The art style and character design for Little Nightmares is quite unique, and with the odd proportions of the “enemies” and the way they move, it’s all very frightening, nightmare-fuel type stuff. What were the inspirations when it came to designing the characters and the world they inhabit? MERV Our world and the characters that inhabit it. SUPERJUMP Reviews of Little Nightmares universally praised its style and atmosphere, but several thought that parts of the design and control mechanics were flawed or difficult to come to grips with. How have you incorporated the feedback and criticism like that into the design of the sequel? MERV We don’t really work that way. We know ourselves what worked, what didn’t, and what could have worked better; and it’s important that we maintain that focus, and refine the execution of our ambitions. Some of the measures we take may please some of the critics, but it’s important that we remain our most incisive critics, or we’ll forget how to do properly what we love the most. SUPERJUMP You’ve made four very different games since the studio was created, from a 3D platformer (Tearaway Unfolded) with a unique paper-based look, to a PSVR title (Statik), to a horror-esque action-puzzler, and then a comedic puzzle co-op title (Stretchers). Little Nightmares 2 will be your first time revisiting an IP, was it easier to develop the game not having to start from scratch, or is it more difficult creating something that lives up to the hype and expectations that fans now have based on the success of the original? MERV Probably a little of both :) We’re not trying to live up to anything though, as I said earlier, it’s important that we keep our own counsel and know what feels right in any of our games, rather than what people might expect. If you’ve seen Dumb & Dumber 2 or Anchorman 2, you’ll have an idea what can go wrong when you pander to audience expectations instead of listening to your own best instincts.
https://medium.com/super-jump/creating-little-nightmares-462d5d880d
['Bryan Finck']
2020-08-21 06:31:09.850000+00:00
['Gaming', 'Startup', 'Interview', 'Creativity', 'Videogames']
Top A/B Mobile Testing Services and Tools to Adopt in 2020–21
With more than 100,000 new Android apps released on the Google Play Store every month and users projected to spend 90% of their internet time in mobile apps, it is no surprise that the mobile industry is thriving at a fast pace. And this highlights the fact that companies from all industries can take their business to the next level by investing in mobile app development. Well, the trend of migrating business from physical stores to digital stores is not new, but the Covid-19 pandemic has increased the demand for mobile apps in all sectors. From teleshopping to telehealth, mobile applications are revolutionising things all around the globe. Now the eye-popping fact is that 40% of applications submitted to the app stores are rejected due to app completeness, design spam, incorrect metadata and more. Apple’s Director of Federal Government Affairs, Timothy Powderly, said: “The App Review team reviews more than 100,000 submissions per week and rejects approximately 36,000 of those submissions.” If we look at these figures, it won’t be wrong to say that roughly 2 out of 5 apps are rejected. According to the survey, 42% of apps are rejected due to completeness, 10% due to design spam, 8% due to incorrect metadata, and so on. So how to get your app published in the app store and how to make it run smoothly on different app stores are a few of the major concerns of developers. Don’t worry: this blog lists the 10 most amazing A/B testing tools and services that help you improve the performance of your mobile apps. But before jumping on the tools and services, it is important to understand what exactly A/B testing tools are and why you need them. What is A/B Testing and Why Do You Need It? It is true that a million-dollar app development idea can help you achieve success, but you can’t ignore the fact that the success of the app lies in the way you develop it. So A/B testing should be the initial step of your mobile app strategy, enabling you to evaluate each element of your app deeply. A/B testing runs tests on every aspect of the app to understand what’s working and what’s not. It consists of comparing variations of elements to learn which one drives more traffic, app installations and more (a bare-bones sketch of the statistics behind such a comparison appears at the end of this post). So basically, A/B testing is carried out in two ways: App Store A/B Testing: This helps you test the elements on the store listing and product pages, such as title and description, visuals, gallery, and more. App A/B Testing: In which you test the product itself. Benefit of A/B Mobile Testing Everybody wants a perfect app to generate better traffic and leads for their business, but that requires constant optimization and never-ending experiments to keep your app up to date with the next update. And A/B testing will give you the statistics to decide what change to make next in your mobile app, and when. So A/B testing tools and services play an integral part in enhancing mobile app performance. Now the central question is, what are the best A/B mobile testing tools and services you can look for in 2020–21? In this post, we have rounded up a list of the best testing tools and services that you can consider to improve the performance of your mobile app. Let’s get started with the list… Top 10 A/B Mobile Testing Tools and Services To Ensure Best Performance in 2020–21 1. 
Apptimize: Experiment Anywhere & Track Everywhere Founded in: 2013 Pricing to Use: Feature flags are free, but subscription plans are based on request. Major Users: Glassdoor, Hotels.com, Delivery Hero and more. Whether it’s about testing a native, web or hybrid mobile app, Apptimize is one of the leading A/B mobile testing tools, offering you a seamless way to optimize the performance of the app across all channels. It is ultimately a cross-platform A/B testing solution that allows you to test a variation on any platform and evaluate the change across all channels. With this app testing tool, you have complete control of feature releases, no matter which platform you are releasing to. With feature flags, your mobile app development team can easily manage and ramp up mobile, server-side, OTT and web changes without any risk. Apptimize allows you to launch new functionality in your mobile app with complete confidence. 2. Mixpanel: Build Better Products With Powerful Analytics Founded in: 2009 Pricing to Use: Free for starters, with paid plans starting from $24 per month. Major Users: Uber, Skyscanner, Expedia, Twitter and more. Mixpanel is one of the most powerful product analytics tools; it helps you build a better product and enables you to convert, engage and retain more users through the app. With Mixpanel A/B testing tools, you can get insights into the app, generate a simple report and make integrations that best fit your app. This tool is majorly used to analyse, measure and improve your customer experience. The main feature areas of this tool are Product Analytics, Product Metrics, and Product Foundations. Moreover, this tool is easy to access and lets you change any part of your application without having to deploy any code. 3. HubSpot & Kissmetrics’ A/B Testing Tool Kit Founded in: NA Pricing to Use: Free to use Major Users: Humana, Unbounce, Groove and more. Boost the performance of your mobile app with HubSpot, as it enables you to download a complete A/B testing kit for free. With this kit, you can access an easy-to-use significance calculator that helps you optimize variables of the app. Secondly, you get a template tracking feature that helps you improve your conversion rates over time. This is an ideal testing tool for business apps, as it helps you test everything from landing pages and emails to calls-to-action, all of which significantly affect the number of leads. 4. Optimizely: Deliver Better Software, Products, and Growth Founded in: 2010 Pricing To Use: $1,440 With Monthly Charges Major User: Microsoft, IBM, Zendesk and more. Optimizely is a standalone, powerful and fast A/B testing tool that allows you to experiment with various elements, including onboarding, feature discovery and other strategies, that overall help in improving engagement and retention. With the use of this tool, a developer can easily optimize the app experience across any platform, including websites, backend code, mobile and conversational apps. Being a super fast and powerful testing tool, it helps you update the app in real time without waiting for the review report of the App Store and Google Play Store. 5. VWO: Most Trusted A/B Testing Tool in the World Founded in: 2009 Pricing To Use: $1,440 With Monthly Charges Major User: Hilton, eBay, Disney, Target, PennState and more When it comes to choosing the best yet leading A/B mobile app testing tool to boost the performance of the app, VWO is the first choice of developers. 
Being used by the world’s best brands, including eBay, Target and Virgin Holidays, VWO has built its reputation as the best A/B testing tool that also helps in optimizing conversion rates. To simplify the testing process, VWO offers you a robust reporting dashboard, where you can leverage Bayesian statistics that enable you to run tests faster. Moreover, it will give you more control of your tests and help you reach accurate app test conclusions. This testing tool has been designed to suit A/B tests, split URL tests and multivariate tests with a drag-and-drop editor. 6. Omniconvert: Optimize Your Customer Journey With Data-Driven Results Founded in: 2013 Pricing To Use: Plans are based on client request Major User: NA Omniconvert is a well-known mobile app conversion rate platform that offers you A/B testing tools along with survey, personalisation, overlay and segmentation tools to help you get better results. Using their testing tools, you can quickly test apps running on different platforms: desktop, mobile and tablet. With Omniconvert, you can reach better testing conclusions. It blends its segmentation tool with its A/B testing tool to let you test approximately 40 segmentation parameters, including geolocation, traffic source, visitor behaviour and product features, and to verify the quality of content in engaging the visitor. Omniconvert can be an ideal A/B testing solution for medium-sized business apps. 7. Taplytics: A/B Testing and Experimentation Founded in: 2011 Pricing To Use: On Request Major User: Ticketmaster, Chick-Fil-A, CBS and more Taplytics is a unique and widely used testing tool that allows you to change anything you can see in iOS and Android apps. From buttons and images to colours, you can access the entire UI/UX with this tool and keep track of it. With the help of this tool, you can see the impact of recent changes in real time, which leads to a better user experience. Taplytics is built with an advanced analytics system that helps you get accurate data for your team and other third-party data systems. Moreover, you can also manage A/B tests of push notifications across all platforms. To leverage this testing tool, all you need to do is hire a mobile app development company with the skills and experience to use it. 8. Leanplum: Multi-Channel A/B Testing Platform Founded in: 2012 Pricing To Use: Plans are on request Major User: Tinder, Zynga, App Annie, NBC and more. Leanplum is a renowned, simple-to-access and flexible A/B testing platform that helps you optimise every aspect of the app, right from user engagement to the in-app experience. With this testing tool, you can set any number of goals for campaigns and draw highly accurate conclusions about customer impact and trends. Since Leanplum is a highly flexible testing tool, you can use it to understand both the negative and positive impacts of every campaign. For example, did your last push notification increase app conversions but also lead to additional app uninstallations? In this way, the tool helps you go beyond average app testing strategies and take the necessary steps for better results. 9. SplitForce: Drive Statistically Significant Results Founded in: 2013 Pricing To Use: Plans starting from $14 per month Major User: Marks and Spencer, FreeCharge, Burpple and more. 
Since its inception in 2013, SplitForce has been a widely used A/B testing platform that supports all the major existing and emerging platforms, providing libraries for native iOS, native Android and Unity projects. It also offers a feature set that lets you virtually segment your users based on different criteria, including mobile OS and region, and these settings help you pull in information stored on your backend through its targeting API. SplitForce is built on adaptive learning algorithms, so it can automate much of the A/B testing process for you. Overall, testing with this tool will save you time and gradually show better results.
10. Monetate: Complete App Optimization Tool
Founded in: 2008
Pricing to Use: Prices are based on request
Major Users: The North Face, National Geographic, True Religion and more.
Monetate is a leading A/B testing tool designed for marketing apps, with contextual data at the core of its decisions and recommendations. Its testing approach helps you bring together first-party data from sources like your CRM and POS, then combine it with real-time behavioural and contextual observations to build influential customer segments. The tool pairs an easy-to-use interface with a powerful backend testing and segmentation engine to help you create impressive customer experiences that increase conversions and revenue.
Conclusion
To end this post, it is worth mentioning that these leading A/B mobile app testing tools are mostly free to get started with, at least for their basic functionality and features. They give you a platform to analyse every aspect of your app and to understand where its performance can improve, backed by accurate statistics. You can also choose to hire a software development company to better leverage the features of these testing tools.
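None of the vendors above publish their internals, but the numbers their dashboards report rest on the same basic statistics. The snippet below is a minimal, illustrative sketch in Python (not any vendor's actual implementation): it buckets users into variants the way a simple feature flag or experiment splitter might, checks significance with a two-proportion z-test (the kind of calculation HubSpot's significance calculator performs), and estimates a Bayesian "probability to beat control" similar in spirit to what VWO's Bayesian reporting shows. The function names and the conversion counts are made up for illustration.

```python
import hashlib
import math
import random

def assign_variant(user_id: str, variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant, feature-flag style."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf, then a two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000):
    """Bayesian view: P(rate_B > rate_A) under uniform Beta(1, 1) priors."""
    wins = 0
    for _ in range(draws):
        sample_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        sample_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += sample_b > sample_a
    return wins / draws

if __name__ == "__main__":
    print(assign_variant("user-42"))  # stable assignment for the same user id
    # Hypothetical counts: 480/10,000 conversions on control vs. 550/10,000 on treatment.
    z, p = two_proportion_z_test(480, 10_000, 550, 10_000)
    print(f"z = {z:.2f}, two-sided p = {p:.4f}")
    print(f"P(treatment beats control) ~ {prob_b_beats_a(480, 10_000, 550, 10_000):.3f}")
```

In practice you would plug in your own conversion counts; the point is simply that once the raw counts are collected, "statistical significance" and "probability to beat control" are short computations, and the value these tools add lies in the targeting, instrumentation and reporting built around them.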
https://medium.com/quick-code/top-a-b-mobile-testing-services-and-tools-to-adopt-in-2020-21-665296e787db
['Sophia Martin']
2020-11-02 05:08:06.615000+00:00
['Technology', 'Mobile Apps', 'Mobile App Development', 'Business', 'Startup']
Syncfusion Essential Studio 2020 Volume 4 Is Here!
Syncfusion is glad to roll out the last major release of this year, Essential Studio 2020 Volume 4. You can now enjoy the enhancements available in this release. Here is the brief description of the major features we have implemented. WinUI (preview) We have included the following seven new controls in WinUI: — Ribbon — Calendar — Calendar Date Picker — Date Picker — Time Picker — Slider — Range Slider There is also support for many new chart types in the Charts control. This includes: — Polar charts — Radar charts — Pyramid charts — Funnel charts The Radial Gauge now supports adding labels to represent gauge ranges. The DataGrid control has been enhanced to support printing. The TreeView control has horizontal scrolling and a context menu. The Barcode control now lets you generate GS1Code128Barcode and Pdf417Barcode symbology. Flutter The all-new Sparkline Charts widget is included in this release. The Charts widget was enhanced to allow: — Defining the maximum width of the axis labels. — Using a customization template for the trackball. — Converting a logical pixel value to chart data points and vice versa. — Restricting maximum zoom level on pinch-zooming in the Cartesian chart. The Calendar widget was enhanced to support: — Navigation animation. — A custom widget builder for time regions and appointments. The DataGrid widget was enhanced to load more data and support infinite scrolling. The Maps widgets now lets you add: — Polylines — Arcs — Sublayers to shape layers The PDF Viewer widget now supports: Text search. Text selection and copying. Navigation using document link annotation. The PDF Library for Flutter now allows users to: — Encrypt and decrypt PDF documents. — Create PDFs in the following conformances: PDF/A-1B, PDF/A-2B, PDF/A-3B — Add attachments to PDF documents. The Excel Flutter library now lets you: — Add hyperlinks to text and images. — Insert and delete rows and columns. — Autofit rows and columns. — Create Excel documents with logical functions, string functions, and nested formulas. Xamarin Xamarin.Forms WPF platform support is now extended to the ComboBox component. In the Autocomplete component, place the drop-down either in the top or in the bottom, based on space availability. Auto tab width support has been added to render the Tabbed View control’s tabs based on the text size. The circular cropping feature in Image Editor allows users to crop images in a circle or ellipse shape. Subscript and superscript for the Rich Text Editor. Dark and light themes for the StepProgressBar. Blazor Syncfusion Blazor components are now compatible with .NET 5.0. You can now perform lazy loading on the Syncfusion Blazor assemblies in Blazor WebAssembly applications. A new Button Group component. The following components have been developed to industry standards and moved from preview to production-ready: Individual NuGet packages have been provided for our Syncfusion Blazor components. The DataGrid component now supports virtual scrolling in the virtual placeholder. Kanban now supports customizing workflow validation, a card UI, and a tooltip template. In Scheduler, WebAssembly performance was greatly improved for the following views: — Timeline day — Timeline week — Timeline workweek — Timeline month — Month Essential JS 2 You can now freeze columns on the right side of the DataGrid. Workflow validation is now possible in the Kanban control. You can now include charts in the Spreadsheet control. In Gantt Chart, you can now perform virtual scrolling and task splitting. 
The Scheduler control now supports resizing and the drag and drop of appointments in the timeline year view. Word Processor now supports inserting, accessing, and editing the footnotes and endnotes in a Word document. WPF Syncfusion WPF controls are now compatible with .NET 5.0. A new Badge control shows additional details of elements, like the online status and number of notifications. The following controls have been developed to meet industry standards and are marked as production ready: A new Office 2019 high contrast white theme has been provided for all controls. The Diagram control has a new ribbon to access its tools. Navigation Drawer adds: — Compact and extended display modes. — Built-in items. — Itemsource support. The Ribbon control can now be hosted inside a normal window or any part of an application. RichTextBox provides suggestions when typing words. WinForms Syncfusion WinForms controls are now compatible with .NET 5.0. The RibbonControlAdv is now available in a simplified layout mode, similar to the most recent Office product. File Formats .NET PDF Library The .NET PDF Library now supports: Drawing HTML-styled text in PDF pages and PDF grids. Rendering EAN-13 and EAN-8 barcodes in PDF pages and images. .NET Word Library With the .NET Word Library, you can now access the metadata properties of a Word document used in the SharePoint document library. .NET PowerPoint Library With the .NET PowerPoint Library, you can now access and modify the language property of the text in a PowerPoint presentation. Java Word Library The Java Word Library now supports: Creating, reading, and editing RTF documents. Encrypting and decrypting Word documents. Conclusion These are just some of the features added in our 2020 Volume 4 release. You can check out the list of all the features in our release notes and on the What’s New page. Try these features and share your feedback as comments in this blog. You can also reach us through our support forums, Direct-Trac, or feedback portal.
https://medium.com/syncfusion/syncfusion-essential-studio-2020-volume-4-is-here-f50ae4cb6110
['Rajeshwari Pandinagarajan']
2020-12-18 03:09:40.164000+00:00
['Dotnet', 'Mobile App Development', 'Software Development', 'Productivity', 'Web Development']
9 Stories Our Editors Can’t Stop Thinking About in 2020
In a year where time has stretched itself beyond meaning and the bigness of loss has burned us all out, I’ve found that a lot of the writing on grief hasn’t always resonated. This year’s grief is universal in many ways, because we’re all experiencing it, but stories of an entire nation in mourning lose the specificity of grief, and that specificity is what makes grief both painful and beautiful. All this to say, Elizabeth Hackett’s writing on her personal corner of grief stayed with me. If you want me to be technical and say why I liked it as a piece of writing, I can tell you it was sentimental without being cliché, it was well-paced, and creatively structured. If you want me to tell you why I liked it as a human being, I’ll say that it gave meaning and weight to a seemingly mundane moment, simply picking out an outfit, and it also gave space for deep, heartbreaking, specific grief in the loss of her mother. In a year that forced us into isolation, where our people were often far away, I am so grateful that Elizabeth shared a piece of herself with the world. — Sam Zabell, Audience Development Manager In October, Kim Kardashian West announced she was taking her “closest inner circle” on vacation. To a private island. For her 40th birthday. In 2020. Sounds normal, right? Yeah, pretty normal. I sort of caught onto the meme late (“Why is everyone posting jokes about going to a private island?” I thought as I scrolled through Twitter alone). When I did, I thought it was funny… then slightly horrifying. Also, where did they go? Also also, what were they thinking? I still don’t know what they were thinking (I can guess) but now I know where they went. Or at least I’m pretty sure, thanks to Vicky Mochama’s expert analysis of Kim and Kendall’s Twitter and Instagram feeds. Want to match two photos of the same island bar? Want to sift through Kendall Jenner’s selfies trying to pick out which resort bathroom she’s standing in? Track down exactly which private jet the Kardashians rented (sorry, “chartered”)? Have you lost your mind and would prefer to just rabbit-hole your way through a list of things rich people do during a pandemic? Reading Mochama’s investigative reporting was thrilling for me. Maybe you’ll enjoy it, too. — Harris Sockel, Deputy Editor of Human Parts Great writing grows from a great subject. But a great subject doesn’t need to be existential in scope or even inherently remarkable to very many people: It simply needs to matter a lot to you. The reader will follow. This is exactly why I loved Maya Kosoff’s story about a beloved (and, I’ll say it, fairly gross-looking) Jell-O salad recipe. Deep in the creamy, pear-stuffed gelatin you will find a story filled with heart and humanity — a story about family and acceptance and marching forward amid unfathomable darkness to construct a seafoam-tinted dessert that means everything, even though, at a glance, it would seem to mean nothing at all. — Damon Beres, EIC of OneZero 2020 has been a dumpster fire. Between the consistent themes like Covid-19, racism, and the worst president ever, you have one-off terrible moments like murder hornets, celebrity passings, and Chet Hanks cosplaying a Jamaican. It’s hard not to think that we’ve through the worst year in history. But Saamir Ansari argues differently; in his post “536 AD — the Worst Year in History,” he argues that 2020 isn’t even top 5 on the “all-bad era” rankings. I love this story for several reasons. 
One, I love how he approached it technically; it’s quick and gets straight to the point while avoiding leaving the reader unsatisfied. And two, anyone who knows me knows that I’m a sucker for a good history lesson. And in a time that feels like the worst time ever, I found it helpful to contextualize 2020 in relation to the eras that came before. I felt better about my year leaving this story than I felt coming in, and that’s really what it’s all about. — Shaq Cheris, Editor of Creators Hub My favorite thing about the platform is coming across writers who can perfectly articulate everything I might be thinking or feeling. This summer was a stressful one. As if the pandemic wasn’t already turning life asunder, the deaths of Ahmaud Arbery, Breonna Taylor, and George Floyd; the global protests; and the subsequent conversations about race, policing, and justice made it even harder to feel like I could function like a normal human being. Medium writer Shenequa Golding captured this feeling in her essay “Maintaining Professionalism in the Age of Black Death Is… a Lot.” She writes about the exhaustion inherent with navigating life as a Black person in America and how it’s almost impossible (and shouldn’t be asked of us!) to go about business as usual, especially as we may be actively mourning, biting back rage, or living in fear. Her words make it easy to feel seen and to know that if we’re feeling fired up or even burned out by racism in America, we’re not alone. — Jolie A. Doggett, Platform Editor ZORA If we talk long enough, eventually meditation will come up. For like 10 years it was my New Year’s resolution, and finally a couple years ago I started to sit more regularly because of anxiety. It doesn’t “solve” everything, but it helps. So I keep going. Over the months and years, I’ve noticed how I can get closer to what I feel, how I feel, and maybe even start to see why. There’s a healing happening. I found my creative work moves in a direction that explores all of this. This year it’s been especially helpful. A story I keep coming back to is “Invite Your Writing Demons in for Tea” by Gavin Lamb, PhD. Drawing inspiration from Tibetan Buddhism and Joli Jensen’s book Write No Matter What, Lamb offers an approach to overcome challenges in the writing process. When you’re stuck, the advice here is to pause, and notice what’s happening. Notice the beliefs coming up. “Don’t be judgemental of the beliefs you discover. Simply notice their presence.” Approach them with a nonjudgmental curiosity, and you will discover more about yourself and what you need. I love how this advice combines ideas from meditation and healing with the creative (and difficult) practice of writing. As we write, we discover more about ourselves and grow. — Kawandeep Virdee, Editor of Creators Hub My friend Brooke Hammerling is a natural born blogger. Six days a week Brooke’s a communications professional, but on that seventh day she publishes Pop Culture Mondays, her weekly missive to her “darling pop culture junkies” that rounds up of “all the news you’re too embarrassed to admit you don’t know… or too embarrassed to admit you DO.” To wit: on a recent Monday she went deep on the “hot priest” Carl Lentz, Elliot Page’s transition, “Bad Romance” lip syncing on TikTok, and actors wearing masks on Law & Order SVU. As a pop culture junkie, most weeks I’m embarrassed to admit I do know what she’s blogging about. But to be honest I don’t read Brooke for the news, I read it for her voice. 
Her posts are like having her on the phone, delivering piping hot takes on the absurd joy that is pop culture in 2020. Which made her post from late September, “My Heart: a short but true story” such a shocker. On Thursday, October 1st I am checking into the Ronald Reagan UCLA Medical Center to have open heart surgery. I am getting my aortic valve replaced with a cow’s valve and my aorta replaced with this thing called a Dacron Graft. It’s super sci-fi and cool and I will be part cow and part robot JUST LIKE I HAVE ALWAYS DREAMED OF. In the post she tells the story of how as a kid she was diagnosed with Severe Aortic Stenosis (which involved her literally chasing a boy), how she ignored it in her twenties and thirties, and then the recent shock of her cardiologist telling her she needed open heart surgery ASAP. “I drank a SHIT-ton of tequila that night,” she writes. (Can relate.) As a long-time blogger, I’ve always struggled with voice. What’s the right tone? How much of myself to let through? Where do I draw the line between the public persona and my private life? If those questions trouble Brooke, she never lets it show. Instead she just lets it fly — whether she’s writing about TikTok collabs, or the prospect of getting her chest cracked open on the operating table. And she does it with joy, humor, and love for her darling pop culture junkies. — Michael Sippey, VP Labs I love stories where writers act as a tour guide. They escort you down an almost comically narrow rabbit hole of information, answering a question that you never asked or knew you cared about. But once you’ve made it through to the end, that issue suddenly consumes your every thought. Sydney Urbanek perfectly captured that level of niche nuance earlier this year when she dissected the 2009 Lady Gaga/Beyoncé collab, “Telephone.” Urbanek brings a seemingly random pop culture artifact to life, pulling every relevant interview and article available to reconstruct the song and music video’s origin story and legacy. The result is an unexpected delight. Sign me up for more guided tours! — Amanda Sakuma, Senior Editor GEN The only thing writers complain about more than writing is not being able to write. How-to books, Twitter threads, and conference panels are so often filled with screeds against writer’s block that when you stumble upon a truly unique and insightful insight into such a well-worn topic, it can feel truly revelatory. That’s exactly how it felt to read Alexander Chee’s frank but empathic investigation into why writer’s block exists and what we can do to stop it. Chee argues that writer’s block doesn’t come from your creative well running dry, but instead arises from “the fear of humiliating yourself” — the nagging worry that you will write something so dumb and wrong that people stop loving you and start hating you. Shame and embarrassment are often overlooked amid discussions of ambition, imagination, and work ethic, but as Chee wisely notes, these deepest, simplest emotions are often at the root of our feeling stuck, and they can easily compound upon themselves. “It’s hard enough to have a problem without also being ashamed of the problem,” Chee writes, and as well as being useful writing advice, it’s a good reminder for anyone who’s lived through 2020. It’s often more liberating to first admit you are embarrassed by your problems — be they loneliness, aimlessness, or grief — than it is to simply hope you can push past them. — Jean-Luc Bouchard, Senior Platform Editor Marker
https://medium.com/creators-hub/9-stories-our-editors-cant-stop-thinking-about-in-2020-8d1034ef773c
['Medium Creators']
2020-12-17 22:01:34.782000+00:00
['Writing', 'Wrap Up', 'Writing Tips', 'Creativity', 'Creators']
This 20 Year Old Made 1 Million in 8 Minutes. How Can You Replicate That?
The Great 'One Million' Sale
Photo by Sharon McCutcheon on Unsplash
In the first quarter of 2020, MoonXCosmetics was not operating at full capacity. Short on staff and with minimal operations at its manufacturing hub, the company was dealing with orders that outran its production and shipping capabilities, and the entire online store was closed for a while, even as people were still eagerly looking for MoonXCosmetics skin products. In April 2020, Mariee decided to restock her site and make products available to her customers again. On April 30, 2020, as soon as the stock went live, thousands of people jumped onto the store. In a matter of 8 minutes she received more than 1,300 orders, worth over one million dollars in sales. The biggest reasons her store did so well were:
Unintended Scarcity That Led To Hype
MoonXCosmetics was a million-dollar brand even before the famous 'One Million Dollar Sale'. But Mariee always struggled to manage and operate her business with a small team. Orders were often delivered very late, and sometimes customers never received their products at all. There are many reviews and complaints online from customers who say they never got their products even months after ordering.
Screenshot 1 (Source: Twitter)
MoonXCosmetics Instagram post (Source: Instagram)
These negative reviews and opinions could have taken Mariee's business down a worse path. They didn't, because customers loved her products and kept coming back for them.
Screenshot 2 (Source: Twitter)
Mariee had been making natural skincare solutions since 2017. Over the years, some people reported that the products didn't work for them, so Mariee made multiple changes to her recipes and built a skincare line to suit every kind of skin. The best-selling product was 'Rose Galore'. People loved it and were mad about it, because it worked for them and they didn't want to lose such a valuable skincare solution. Because production and shipping were slow, the product was scarce and only available on the site for short periods. This created an unintended scarcity: customers who managed to get their skincare felt special, on top of benefiting from the product itself. As a founder, Mariee used this scarcity to build hype for the April restock, which led to the one-million-dollar sale.
Mariee's Personal Branding on Social Media
Screenshot of Mariee's Instagram account (Source: Instagram)
Mariee is the face of the MoonXCosmetics brand. Today we see a lot of entrepreneurs use their social media handles to talk about their brands and businesses. The most prominent example of personal branding on today's social media is Elon Musk. His Twitter account has over 39 million followers, and he shares everything about his businesses and their progress.
Screenshot of Elon Musk's tweets (Source: Twitter)
This is why most of us see Elon Musk in a news headline every other week: his words and actions on social media earn him that kind of coverage. Mariee Revere has replicated this concept of personal branding on a smaller scale through her own social media accounts:
YouTube (par moon) — 27K subscribers
Instagram (parmoonx) — 75K followers
Twitter (@parmoonx) — 30K followers
Customers believe in her products because of her authentic behaviour on social media.
Mariee lets customers know what they are buying and whom they are buying it from.
Building Transparency = Building a Community
These days, many brands rely on influencers and professional marketing strategies to make millions. MoonXCosmetics, as an online brand, mixes professional and authentic branding. The company did run some highly paid influencer campaigns in the past that brought in customers, but if you check its Instagram account today (moonxcosmeticsllc), around 20–30% of the content is user generated. The brand focuses on showing you real results and the products that are helping people get those results.
MoonXCosmetics Instagram feed (Source: Instagram)
It isn't only Instagram; this also comes back to how Mariee leverages personal branding through her social media. She shares everything. On December 25, 2019, Mariee posted a tweet that says,
I paid my grandma mortgage for all of 2020 and paid off my mom's credit card debt
Screenshot (Source: Tweet)
The tweet and its video have more than one million views, and people loved what they saw. As a MoonXCosmetics customer, I would feel like I knew Mariee and her brand more personally just by looking at what she shares on social media. (That's building transparency.) Moving on to the business side of trust: Mariee runs a vlogging channel on YouTube that takes you behind the scenes of the business and of her daily life. Watching her videos shows you how genuine Mariee is with her products and customers.
https://rahulthakursingh.medium.com/this-20-year-old-made-1-million-in-just-8-minutes-how-can-you-replicate-that-96d4c844afd9
['Thakur Rahul Singh']
2020-10-09 13:48:05.725000+00:00
['Business Strategy', 'Business', 'Startup', 'Marketing', 'Marketing Strategies']
The Seven Habits of Highly Creative People
Photo by Ricardo Rocha on Unsplash There are two myths that are often perpetuated about creative people. One — that they’re all “artsy”. Two — that they’re all unpredictable. We tend to associate creativity with the arts, and for good reason. Art is by definition born out of creation, be it in the form of a sculpture, a poem, a symphony or a photograph, and people who craft those are undoubtedly creative. It’s wrong, however, to think of creativity as restricted to the arts. Creativity refers to the ability to come up with new things or new ways to look at things, and that’s a desirable quality to have regardless of which industry one works in. So those of us who come up with new business ideas, new processes, new product formulae and new people management methods are undoubtedly creative — as much as the sculptors and writers among us. This brings me to the second myth, about unpredictability. We often think that creative people live life entirely as it comes — no schedules, no timings, just giving in to their creativity as and when it chooses to emerge. That’s wrong. While they certainly don’t live like automatons, they recognise the importance of gaining control over their creativity so that it can be channelised into the right pursuits, at the right time. Creativity has an inherent aversion towards too much control, but like any other human trait, it’s at its best when mastered. The most creative among us, therefore, display a number of healthy habits that help them use their creativity in the most efficient and enjoyable way, and which are listed as follows: · They have a routine — yes, it’s tempting to think of creativity as an excuse to ditch discipline. Too many rules can suffocate the spontaneity of creativity, no doubt. But the most creative people are so because they set aside time specifically to be creative. They know that they have other things to do all day — jobs, chores, family time and socialising — so they manage their time such that everything else gets done, and they also have a chunk of time to devote wholeheartedly to what they do best. · They know when to break the routine — when the next big idea hits, it’s often without warning and it’s likely to drift off without warning too. That’s why creative people don’t wait — when it’s truly essential to capture or expand on an idea, they put everything else aside and do so, even if it’s just a quick note or a sketch. · They write things down — creative minds know the power of the pen. Be it lists or mind maps, bullet journals or paragraphs, creative people always have paper handy to capture ideas and random thoughts when they float by. Digital aids help, of course, but any creative mind will tell you that they’d far rather grab a notebook and pen than an iPad when inspiration strikes during the day. · They get their sleep — those caffeine-fuelled 72-hour working marathons are few and far between, and for good reason. Sleep deprivation has multiple ill-effects, and one of them is causing the brain to become sluggish, which leads to low energy levels and reduced productivity in the short run as well as the long run. Creative people know when it’s time to call it a day and resume their work next morning — and the handy notes and diagrams they sketch will help preserve their ideas until it’s time to start again. · They are curious about everything — creative minds get their fuel from exploring the world around them. 
Inspiration lurks everywhere — a nature documentary could trigger an idea for an environment-friendly innovation, and a news photo could spark the next war novel. Creative people know this, which is why they are constantly reading, learning, watching, listening and asking. · They aren’t afraid of failure — not every great idea materialises into something lasting, and creative people are okay with that. Tying creativity to ultimate success will stifle the natural urge to try new things. What is important is to keep ideating and experimenting so that the perfect idea — the one that will translate into a finished creation — can come along. · They know how to be happily un-creative — too much of anything is bad, and creativity is no exception. Highly creative people know this, which is why they know the importance of switching off their “creative mode” and indulging in simple, fun activities like watching Netflix shows, going for walks, meeting friends and cooking favourite meals. The mind, like the body, needs the occasional vacation. It’s the best protection against burnout. If you’re a creative person who struggles to use that creativity optimally, try adopting the habits listed above, one by one. By bringing more method and organisation into your life, you’ll find that you have better control over how and when to engage in creative pursuits, while also allowing your creativity enough leeway to run free and play with ideas that may just develop into your next masterpiece.
https://medium.com/maice/the-seven-habits-of-highly-creative-people-eb18cd83cc5b
['Deya Bhattacharya']
2018-09-24 11:01:21.079000+00:00
['Productivity', 'Creativity', 'Maice', 'Time Management']
2018 Projects & Showreel
Every few years, our talented team of 2D / 3D designers & animators work towards putting together a video that showcases our latest work (it also gives them the opportunity to show off their skills). With 2018 now just 2 months away from drawing to a close, we felt it would be a good time to unveil our showreel containing some of the fantastic projects we were fortunate enough to work on this year. Starting off the video is The Body Shop, a globally renowned brand in the cosmetics, skin care and perfume space. For this project, we custom built a robust E-commerce web application that conformed to the global standards and guidelines set forth by the brand, while providing the local bodies enough flexibility to cater to the region’s specific needs. The site was built to be optimized for marketing campaigns, SEO and conversion and provided the administrators a back-end system where they can moderate and maintain every aspect of the product. We offer a fully custom solution built to your needs, covering everything from inventory management to payment processing. Next up is the MIT Innovation Ecosystems web application. MIT is a name that needs no further introduction. We had a fantastic experience working with them on this project, which entailed designing & developing a web application that allows MIT students and faculty members to build custom graphs & reports from 3 decades worth of data for 180 countries using 40 different metrics. These can be plotted in various ways, compared against each other, exported and downloaded to be opened in Excel, or as an image file to be used in presentations. MIT is using this tool in Masters and P.H.D. level economics classes as a tool for research & learning. Then we have iEvent, a SaaS product that allows event managers to dynamically create beautiful and customized applications (for iOS & Android) for their events. It even generates a microsite for the event as well, using all the data that is fed in through the self-serving back-end panel. Once the event manager uploads all the event information such as agenda, speakers, attendees, map, etc. (there are over 20 different categories available), the product automatically generates an iOS & Android app which is submitted to the App Store & Play Store. And of course, the colors & UI of the application & microsite are fully controllable from the back-end panel as well, ensuring that your event application is perfectly in sync with the brand guidelines. Our team of user-experience designers can turn a project brief into a visual prototype, collaborating with you every step of the way. Then we have DocLock — a fantastic new way to share documents in a secure manner. We were approached by a serial entrepreneur looking to disrupt the document sharing workspace with a fantastic and novel idea. We designed and developed an application where users can share documents with each other, and specify a geo-location where this document is accessible. Complete with enterprise grade security protocols and document access tracking, users of DocLock can share sensitive documents with the peace of mind that their documents will not leave their office premises (or other specified safe-zones). As we move forward with this product, we are embarking on the next step of bringing DocLock onto the Blockchain platform. We then got to embark on a fun new style of project, different from anything we had built previously. In this B2C space, we built PicTakToe for a group of entrepreneurs. 
This E-commerce product aims to take people’s memories out of their phone camera rolls and into their hands, or onto their walls. With PicTakToe, the customers can go to the web site either from any of their devices, and upload their pictures. They can then design their own stunning Photo Books, Canvases or select from available Frame styles. There are dozens of themes and templates to choose from and place your order, which is then delivered to your doorstep. In just its first 3 months, PicTakToe has taken the country by storm. Need to build an ecommerce app? We offer a fully custom solution built to your needs, covering everything from inventory management to payment processing. Contact Us For GMC Sierra, we built a Facebook Hub. We custom designed & animated a 3D model replica of their latest truck, brought it to life and put it on the web for users on social media to be able to get a real feel for the product. On this Facebook Hub, users are able to browse the different features, a gallery of interactive elements and find out what people are saying on the different social networks about the truck. Next up, we worked with some incredible minds over at John Hopkins University & Emory University. In order to spread awareness for getting the Flu Shot (as flu season is quickly upon us), the Moms Talk Shots web application provides a quick survey for expecting moms, or new moms to take. Based on the answers they provide, the survey adapts and asks further follow-up questions. Then, based on the final answers of the survey, the user is shown a series of videos which are most relevant to them. At the end, they are given a discount coupon for Walgreens where they can receive a flu shot, or any other medication they require. The system also sends follow-up surveys, custom email reminders, and much more. This entire system is controllable via a dynamic back-end through which the admins at John Hopkins University can build their own surveys, detail the rules based on which different videos are served, create follow-up email templates, setup rules based on which reminder emails and follow-ups are sent, and much much more. This project was an incredible exercise in building systems which can adapt to several different use cases, and scale accordingly. We replace old enterprise implementations with the latest technology, custom built for better scale, security, usability and value. Finally, we cap it all off with some VR fun! We designed and developed 3 VR games, VR Cricket, VR Food Truck & VR Basketball. This was a study in Virtual Reality for us and we learned an immense amount putting it all together. Our VR Cricket & Basketball apps were downloaded over 50,000 times and well received by users across the board. We were pleased to hear all the feedback, especially considering these were just fun experiments for us, as we embark on bigger enterprise grade VR Projects in which we are tackling user training scenarios for on-site workers. We are extremely excited about the direction and potential of VR and AR in the years ahead. Need to build an enterprise grade product? We replace old enterprise implementations with the latest technology, custom built for better scale, security, usability and value. Contact Us Wrapping Up These are just a few of the fantastic products we had the pleasure of working on in 2018 with some incredible clients and partners. 
Growth and learning are two core pillars of our ethos at Cygnis Media and we are extremely proud of the fine work our team of men & women have put forth in 2018. We can’t wait to see what we create in 2019! Stay tuned.
https://medium.com/cygnis-media/2018-projects-showreel-21bf6dab9949
['Cygnis Media']
2018-11-08 06:57:11.722000+00:00
['Mobile App Development', 'Entrepreneurship', 'Web Development', 'Virtual Reality']
Joint Friendly Fitness: Make Light Weights Feel Heavy
Perform The Big Movements Last I’ve spoken plenty about how your training should be centered around the big, compound lifts — press, squat, row, and deadlift variations are all staples in a sound strength training program. Traditionally, these movements are performed first in a workout, when we’re at our “freshest”, so that we can move the heaviest weight possible. After this main lift is done, we move onto the accessory movements like curls and side lateral raises; the more “bodybuilder” style exercises. This is sound logic, and it’s clearly the way to go if your one and only objective is to move as much weight as humanly possible on that given day. But a great way to make things a bit more “joint friendly” while still getting a productive workout is to occasionally flip the exercise order of your workout on its head. By performing those accessory movements first, you’re pre-exhausting the muscles, which means when it comes time to perform that compound exercise at the end of your workout, you’re not going to be able to move as much weight as you normally could. This is a good thing from a joint health standpoint, because you can still train hard and reap the muscle building benefits that these compound lifts provide, but with less weight. If you’re a competitive powerlifter training for a contest, this may not be ideal for you — but for pretty much anybody else, this is a great strategy to cycle into your training every few months to give your joints a break from all of the heavy lifting you’re otherwise performing throughout the year. I like performing my compound lifts in this fashion for about 6 weeks at a time, every 3 or 4 months.
https://medium.com/datadriveninvestor/joint-friendly-fitness-make-light-weights-feel-heavy-d8ee4caec553
['Zack Harris']
2020-12-27 16:05:12.401000+00:00
['Health', 'Wellness', 'Fitness', 'Life', 'Self Improvement']
How to Get Cloud Certified????
Over the past few years, the cloud computing industry has generated a lot of interest and investment, and cloud computing has become an integral part of the IT infrastructure of many companies worldwide. Industry analysts report that the industry has grown swiftly in recent years: according to Wikibon, Amazon Web Services (AWS) revenue will climb to $43 billion by 2022, with Microsoft Azure and Google Cloud close behind. As cloud computing becomes critical to IT and business in general, the demand for cloud skills will increase. Aspiring cloud professionals must prove that they have the skills and knowledge to compete favorably in the market, and a cloud certification is the best way to do that.
The 3 Most Valuable Cloud Certifications to Have on Your Résumé
▸ AWS Certified Solutions Architect — Professional
▸ Microsoft Certified: Azure Solutions Architect Expert
▸ Google Cloud Certified — Professional Cloud Architect
My Background: I am a cloud and big data enthusiast, and I am here because I love to talk about cloud. I am an 11x cloud certified expert: 4x AWS certified, 3x Oracle Cloud certified, 3x Azure certified and 1x Alibaba Cloud certified.
How to Get Started
▸ I started my journey into the cloud around June 2018. I was always interested in cloud technologies and found them fascinating, and I read a lot of articles and blogs about the cloud.
▸ I found that in order to learn AWS, I had to practice with the AWS services themselves, so I quickly signed up for AWS.
▸ I used to spend almost 3 to 4 hours daily practicing the various AWS services, and it helped me a lot in understanding the AWS architecture. To know more about my journey into the AWS cloud, refer to this article.
Getting Started with AWS Educate
▸ AWS has introduced a learning platform called AWS Educate, and it is the best place to start if you want to learn AWS.
▸ AWS Educate is Amazon's learning platform for Amazon Web Services. It is free for all students and educators who want to learn AWS services and explore everything AWS has to offer.
▸ If you're a student, you can benefit from no-cost, at-home learning opportunities through AWS Educate Cloud Career Pathways and specialty badges, plus online workshops and webinars to help you continue to build cloud skills.
▸ AWS credits for students (renews annually)
▸ AWS Educate Starter Account with $75 in AWS promotional credits if your school is an AWS Educate member institution
▸ AWS Educate Starter Account with $30 in AWS promotional credits if your school is not an AWS Educate member institution.
To know more, refer to this article.
Tips and tricks to get Microsoft Azure certified within days for free: for anyone wanting to start their journey into Microsoft Azure, now is the perfect time, as Microsoft is offering free Azure training and an AZ-900 exam voucher. What steps are required to prepare for the Microsoft Certified: Azure Administrator Associate exam? How do you prepare for and pass the AZ-300 and AZ-301 exams? To know all about getting Azure certified, refer to my article.
Tips and tricks to get Oracle Cloud Infrastructure (OCI) certified within days for free: Oracle Cloud Infrastructure certifications carry great value and can increase your salary significantly. Normally these cost around $230–$250. The list of certifications is as follows: – To know more, refer to my article.
Why Get Certified?
A certification in cloud computing shows that you have the skills to help your organization reduce the risks and costs of implementing workloads and projects on different cloud platforms. It opens up opportunities for cloud-related projects, and your clients will see you as a credible subject matter expert.
Why Hands-On Labs Are Important
Hands-on labs help you learn faster and better because they give you the ability to practice and experiment. You do not need your own account credentials to use them: when you start a hands-on lab, the system launches all the necessary resources and starts the timer.
I hope this guide helps you build your career in the cloud and get cloud certified. If you have any doubts or are unable to understand any concept, feel free to contact me on
LinkedIn: https://www.linkedin.com/in/adit-modi-2a4362191/
Instagram: https://www.instagram.com/adit_aweesome/
Twitter: https://twitter.com/adi_12_modi
GitHub: https://github.com/AditModi
You can view my badges at https://www.youracclaim.com/users/adit-modi/badges
I am also working with various AWS services and developing cloud, big data and DevOps projects. If you are interested in learning AWS services, follow me on GitHub. If you liked this content, do clap and share it. Thank you.
https://medium.com/analytics-vidhya/how-to-get-cloud-certified-4cf8888edc3f
['Adit Modi']
2020-12-18 16:19:17.521000+00:00
['Cloud Certification', 'Certification', 'AWS', 'Cloud Computing', 'Cloud']
Tackling Mitochondria-Based Diseases
by Jackie Swift Inside almost every cell in our bodies live little powerhouses known as mitochondria. These tiny organelles, with their own genome, primarily produce adenosine triphosphate (ATP), the fuel on which your cells depend in order to function. If something goes wrong in the chain of mitochondrial electron transport components that ultimately produce ATP, disease results. “ATP production is the only system in the body that is under dual genetic control,” says Joeva J. Barrow, Nutritional Sciences at Cornell University. “Your nuclear genes and your mitochondrial genes work together to make the system functional. Any defect in either genome leads to disease because if you can’t produce enough ATP, then you don’t have enough energy in your body, and your cells begin to die. Typically tissues that are very energetic and require a lot of ATP, like the brain, heart, and muscles, are most susceptible.” Mitochondrial Disorders, What Are They There is no cure for mitochondrial disorders, which are hard to diagnose and impossible to treat. They result in complex diseases that are hardly household names, such as mitochondrial encephalomyopathy, lactic acidosis, and stroke-like episodes (MELAS) and Leber’s hereditary optic neuropathy (LHON), yet they are more common than most people realize. One in 4,500 people suffer from a mitochondrial disease, and one in 200 show no symptoms but carry a mitochondrial mutation potentially able to trigger disease later in life or when passed on to the next generation. These asymptomatic carriers are all women, since mitochondria are maternally inherited. Photo Credit: Dave Burbank To understand the processes that cause mitochondrial disease, as well as potential treatments, the Barrow lab depends on unbiased, high-throughput screening mechanisms, such as small molecule chemical targeting and genome-wide CRISPR-Cas9 gene ablations. “Our goal is to identify any genes or proteins that may be linked to mitochondrial bioenergetics, then significantly leverage them to see if we can push them toward therapy,” Barrow says. Studying the Genetics and Biochemistry Underlying Mitochondrial Disorders The researchers use a combination of cell and mouse models, in addition to tissue from patients, to explore the genetics and biochemistry behind these diseases. “Our typical experiments start off with seeing how long we can keep cells with damaged mitochondria alive,” Barrow says. “We put them under certain nutrient conditions we know will kill them because they can’t make ATP. Then we try to promote survival by treating them with small molecules or by modifying certain genes.” Once they’ve established which compounds can rescue the cells, Barrow and her collaborators move on to the discovery phase of their research. “We have to figure exactly what the compound does,” Barrow says. “What is it binding? How is it targeting this function? How is it boosting ATP production? To maximize the potential for therapy, we need to answer questions like those. At the same time, we might discover other additional factors that show therapeutic potential along the way.” “My lab is looking at genetic and molecular components to discover if some people have a predisposition that makes them more or less obese.” Barrow is following up on her earlier work as a postdoctoral researcher at Harvard University, where she profiled 10,015 small molecules — naturally occurring and synthesized compounds that target various proteins in the body. 
She and her colleagues identified more than 100 promising chemical compounds. Now her lab is characterizing them to evaluate their ability to correct mitochondrial damage, specifically in muscle cells. So far, a significant subset has a positive effect, and the researchers are trying to pin down exactly how they work.
Obesity and Metabolic Diseases, Mitochondria-Related
Continuing her research connected to mitochondria, Barrow also explores metabolic disease in the context of obesity. Worldwide, 1.9 billion people, or roughly one in three, are overweight, and 41 million of them are children under the age of five. With obesity come associated metabolic diseases such as cancer, cardiovascular disease, and hypertension. "Every year we do the statistics on obesity, and no matter how much we counsel on diet and exercise, no matter how easy it should be to maintain an energetic balance, something is amiss," Barrow says. "So my lab is looking at genetic and molecular components to discover if some people have a predisposition that makes them more or less obese or to see if we can take advantage of the molecular system to increase energy expenditure. This could offer another form of therapy to fight against obesity in conjunction with diet and exercise."
Photo Credit: Dave Burbank
Thermogenic Fat
The researchers have turned their attention to thermogenic fat. This subset of fat cells, also called brown and beige fat, is prevalent in animals that go through hibernation, but scientists recently discovered it in humans as well. "Brown and beige fat don't only store fat molecules, like white fat does, they have a special ability to burn them to produce heat," Barrow explains. Thermogenic fat has a protein known as uncoupling protein 1 that pokes a hole in the membrane of mitochondria, allowing protons to leak out. These protons are part of a proton gradient that is integral to the production of ATP. Without them, mitochondria are no longer able to effectively make the chemical. "Your body's response is to start burning everything it can to try to maintain the proton gradient," Barrow says. "And as a result, your energy expenditure goes through the roof." Brown fat is prevalent in newborn humans, where it serves to keep infants from going into thermal shock as they exit from maternal body temperature to the much colder temperature outside the womb. Later, other mechanisms, such as shivering, serve to keep adults warm while maintaining their body weight. "But adults still have brown fat that we can activate to increase energy expenditure components," Barrow explains. Using proteomics, metabolomics, and genomics, Barrow and her colleagues seek to unveil factors that will activate brown and beige fat cells. "We have discovered a host of novel genes that are involved in turning on the thermogenic pathway that protects you against obesity," Barrow says. "Now it will be fascinating to discover how these genes work so that they can be targeted toward therapy." For Barrow, who has a doctorate in biochemistry and molecular biology, with clinical expertise as a registered dietitian, mitochondria are a perfect target for research. "The mitochondria are the metabolic hub of the cell," she says. "No matter what aspect of metabolism you study — lipids, carbohydrates, vitamins — they all feed back into whether or not you can effectively produce energy. Everything my lab works on centers around this very mighty, tiny organelle that's so important to life."
https://medium.com/cornell-university/tackling-mitochondria-based-diseases-4a28591f9a8d
['Cornell Research']
2019-12-16 20:01:01.495000+00:00
['Health', 'Cornell University', 'Nutrition', 'Science', 'Medicine']
How to Persuade People Without Being a Scam “Artist” — The Catalyst by Jonah Berger
Introduction
Berger starts with a story about a hostage negotiator who helped a SWAT team get a criminal to come out on his own, without incident. (Told from the perspective of Greg Vecchio, an FBI agent.) Crisis negotiation emerged after the 1972 Munich Olympics, when 11 Israeli athletes were killed. Before that, it was about force. Since then, people have learned to get the guy to "come out by himself." Everyone has something they want to change, but change is hard. Isaac Newton was the one who described the concept of inertia. Inertia means that people tend to do what they've always done. Some people think that if you just push people, give more information, more facts, more reasons and arguments, or more force, people will change. But people are not like marbles. They push back. In chemistry, catalysts are special substances that speed up chemical reactions. They do this not by increasing heat or pressure, but by providing an alternate route. In other words, faster change with less energy. Being the catalyst is equally powerful in the social world. It's not about trying to be a better persuader or more convincing. It's about changing minds by removing barriers. Push people and they will snap. Tell them what to do and they probably won't listen. Good hostage negotiators start by listening and building trust. They encourage people to talk about their fears and motivations and who's waiting for them at home, even pets. Great negotiators don't push harder or increase the heat. They identify the barrier and remove it, like a catalyst. Most people think changing minds is about presenting evidence and explaining reasons, but we forget about the person whose mind we're trying to change. Catalysts start with a basic question: "Why hasn't the person changed already? What's blocking them?" Sometimes change doesn't require more horsepower. Sometimes we just need to release the parking brake.
The 5 principles that address roadblocks — REDUCE:
Reactance: People push back when pushed. So catalysts encourage them to persuade themselves.
Endowment: People are stuck to what they're doing and don't want to switch. Catalysts highlight how inaction isn't costless.
Distance: People have an innate anti-persuasion system. New info must be within the zone of acceptance for them to listen.
Uncertainty: Uncertainty makes people pause. Catalysts reduce risk.
Corroborating evidence: One person's evidence is not enough. Catalysts find reinforcement.
The following chapters illustrate each principle, from changing the boss's mind and driving Britons to support Brexit to changing consumer behavior and getting a Grand Dragon to renounce the Ku Klux Klan.
Chapter 1 Reactance
Berger starts with the story of Chuck Wolfe, who was asked to get teens to stop smoking in Florida. The difficulty: warnings often become recommendations (think of the Tide Pods fad). A nursing home found that residents who had more control (over where to put their decorations, etc.) were more cheerful and active, and in the long term lived longer. People need freedom and autonomy. They like feeling they have control over their choices, actions, and behavior. When others threaten or restrict people's freedom, they get upset. Threatening to restrict something makes it more desirable. Restriction creates a psychological effect called reactance. And this happens even when you're asking people to do something rather than telling them not to do something. In the absence of persuasion, people think they're doing what THEY want.
Pushing, telling, even encouraging people to do something often backfires. When you try to convince people, you give them an alternate explanation for their interest, which threatens their perceived freedom. They then react against the persuasion and do the opposite. This happens even when people wanted to take that action in the first place. People need to see their behavior as freely driven or it'll backfire. People have an anti-persuasion radar, and they're constantly scanning for influence attempts. If they find one, they set up countermeasures, such as avoiding or ignoring the message. And when you make a claim, people don't take it at face value. They scrutinize and argue against it. They raise objections until the message falls apart. Catalysts allow for agency. They don't try to persuade; they get people to persuade themselves instead. Earlier anti-smoking campaigns didn't work because they always implied that they knew what was best for you and that you should listen to them. So Chuck took a different route: he showed teenagers how the tobacco companies were trying to manipulate them in order to sell cigarettes, and how the companies manipulated politics, sports, TV, etc., to make smoking seem cool. "Here is what the industry is doing; you tell us what you want to do about it." The campaign didn't demand anything from the teens or tell them what to do. It left the decision up to them, and it worked. The truth campaign was so powerful that in 2002 the tobacco companies tried to sue it.
4 ways to reduce reactance:
Provide a menu: a limited set of bounded/guided options (2–3, not 15–16).
Ask, don't tell: don't make statements. Ex: a GRE prep course asked students how much time they thought they'd need to master the material.
Highlight a gap: the Smoking Kid campaign sent children to ask smokers for a light. When refused, the kids would hand over a note saying "you care about my health, why not your own?" (cognitive dissonance).
Start with understanding: starting by trying to influence someone makes it about you. It's not about other people, their wants and motivations; it's about you and what you want.
Menus and questions shift the listener's role. Instead of thinking of counterarguments, they're trying to think of an answer to the question and how they feel about it (an opinion). Questions increase buy-in and commitment to the conclusion, and to behaving consistently with it. People may not follow others' lead, but they will follow their own ideas. Being too forceful can backfire; you can rephrase as a question. Before people will change, they have to be willing to listen. They have to trust the person they're communicating with. Seasoned negotiators don't start with what they want; they start with whom they want to change. Listening makes a person feel like a stakeholder in the relationship. Stay in their frame, make it about them, and that lays the groundwork for influence. You become their helper, their advocate, their means to get what they want. Use the right language: you, we. Mirror their words back to them. Instead of trying to persuade, start by understanding. When people feel understood and cared about, trust develops. To truly get rid of weeds, or change minds, find the root. No one likes feeling that someone is trying to influence them. After all, when's the last time you changed your mind because someone told you to? Berger ends with a case study about a KKK member who was won over by a kind Jewish rabbi and his wife, Michael and Julie Weisser.
They did it by leaving "love notes," kind phone messages, in response to Larry's hateful harassment. Larry was in the KKK because of his abusive father. In some strange way, emulating the thing that had hurt him the most gave Larry the strength he needed to go on. Until one day someone showed him another option. Michael told him: "Larry, you better think about all this hatred you're spreading, because one day you're going to have to answer to God for all this hatred, and it's not going to be easy." No one had tried to think about why Larry was the way he was in the first place. As Michael Weisser said, "love your neighbor" means loving neighbors different from you. Chapter 2 Endowment Berger starts with a story about not wanting to change to a new phone. (Loss aversion — people value what they already have) If potential gains barely outweigh potential losses, people don't change; advantages have to be at least 2x better (larger)*. *It's PERCEIVED gain that matters, what the person cares about. Understand a person's needs/values to know whether change will be PERCEIVED as gain or loss. Switching costs: psychological/financial/time/effort barriers to switching. But consider the cost of doing nothing. Paradox: recovery is often faster from severe injuries than from mild ones, because people will do their physical therapy for serious injuries. But lesser injuries tend not to marshal the same resources. Even when people have a plan, they often won't follow it. It's hard to get people to change when things are not terrible, just okay rather than great. Ex: A financial advisor convinced her client to invest by keeping track of how much potential money he was losing by not investing. Cost-benefit time gap: You pay up front for the product before you receive the benefit. This is another deterrent to change. People need to see how much time or money is lost: more motivating than seeing what is gained. Burn the ships: Cortes and Tariq ibn Ziyad, and an ancient Chinese saying. Burning bridges/ships takes inaction off the table and forces people to get off the old way. Catalyzing change isn't just about making people more comfortable with new things, it's about helping them let go of old ones. Case study on how Brexit passed: the Leave side used bus ads showing the 350 million pounds a week sent to the EU that could fund health care instead. Slogan? "Take BACK control." The "back" is important, because "take control" implies taking action/change, triggering thoughts of switching costs, whereas "back" triggers loss aversion. Same with Trump's "Make America Great AGAIN." (Reagan did a similar thing in the 1980s.) Chapter 3 Distance For people with less favorable attitudes toward X [Ex: a perceived link between vaccines and autism], learning the truth about X actually backfires and pushes them farther away. Region of rejection vs. zone of acceptance: the views people won't consider vs. the place where people agree the most and the range of views they could consider. If info falls outside that zone of acceptance, forget it. It will backfire. That's why "one person's truth is another's fake news." Whether something seems true depends on where you stand on the "field." Plus, don't forget confirmation bias. Strong feelings reduce the range of ideas you're willing to consider. Catalysts use a "more surgical approach" and target people with specific, relevant messages, looking for the "moveable middle" and "behavioral residue" that indicate conflicting ideas/willingness to change. Start by asking for less. Chunking change: Break big asks into smaller ones. And if someone's really stubborn, change the field.
Find a dimension where there is already agreement and use it as a pivot. This has a long-lasting effect. Deep canvassing: Encourage people to find a parallel situation from their own experience, when they felt similarly about something. (Not exactly the same, because people can't really imagine what it's like to be others) Look for an unsticking point where they agree. People appreciate when you help them be their best self. Highlight ways people already agree or are moving in the right direction. Ex: a book that says: Congratulations, whether you realize it or not, simply by picking up this book you have taken the first of what I hope will be many steps… To reclaiming your physical health, well-being, and happiness. (Greene, 2002) Berger ends with two stories of Republicans and Democrats switching sides. As with most big changes, things didn't happen right away. Someone had to shrink the distance. It took a number of small steps rather than one big leap. Multiple interactions over months or even years. A slow, gradual change… Chapter 4 Uncertainty Berger starts with the story of Shoesite.org, which became Zappos, and how it was hard to get started because of the Uncertainty Tax: devaluing things that are uncertain. And this is a bigger tax than you think. People hate uncertainty. It's worse than known negatives. The more ambiguity there is around a product, service, or idea, the less valuable that thing becomes. Uncertainty can stop decision-making completely. Uncertainty is good for maintaining the status quo, but terrible for changing minds. How to combat uncertainty? Trialability: How easy it is to experiment with something Freemium: Dropbox Reduce up-front cost: Zappos free shipping, a drowning simulator to show how important life jackets are Drive discovery: Zappos' "mental pic of bringing the shoe store to your home," test-driving cars (selling apartments by encouraging house parties, birthday party boxes) Make it reversible: Test drives, return policies The real barrier isn't money, but uncertainty. Make the magazine free at first; then people pay for the privilege of NOT LOSING it. We're all neophobic to some degree. People are risk-averse in the domain of gains, but risk-seeking in the domain of losses (gambling). Freemium takes advantage of switching costs. Dividing big things into smaller bits, like a monthly rather than yearly contract, helps. Ends with a story about an employee who wanted to convince his boss to treat customers with more personalization, which he did by enacting the plan on the company's own employees first: To write a few words, personally and accurately, was what generated the most emotion. Chapter 5 Corroborating Evidence Berger starts with the story of a drug addict who didn't change until his family staged an intervention and all got together to talk to him at once. If an opinion is important to you, it takes more evidence to change it. We discount info that we disagree with, so more proof is required for more certainty. But hearing the same words over and over is annoying. You are more likely to accept an opinion from "another you," someone who is like you in terms of likes/dislikes and concerns/values. Addicts need to change their entire ecosystem to change. Dr. Vernon E. Johnson, forefather of interventions: "rationalization and projection work together to block [addicts'] awareness of the disease." But if many people say the same thing at the same time, there can be a breakthrough. Who, when, and how are important though.
People are more likely to change if the people who are doing the thing are from separate, independent groups, because that provides additional info. Also, compressing an intervention into one shorter period of time works better than spreading it out. It's the difference between a sprinkler and a hose strategy: the former for a less opinionated person, the latter for a harder-to-change mind. The more proof needed, the more important multiple sources are. Case study: How the US got people to be willing to eat organ meat during WWII to save the meat for the soldiers: Kurt Lewin reduced the size of the ask (mix some organ meats into meat loaf), reduced uncertainty and obstacles (gave out free recipes), and used group discussions to make it feel more voluntary. Epilogue Another story: to help with the Israeli-Palestinian conflict, the US created a Seeds of Peace camp to bring youth together in Maine. This camp helped them improve relations, even after a long time. Kurt Lewin: "If you want to truly understand something, try to change it." The opposite is true too. Eureka moments are great in movies but not realistic. Big changes are more like the Grand Canyon. Appendix: Active Listening Listening is about asking the right questions and showing people you are listening. "Why" questions can put people on the defensive; open-ended questions are better. Also, use effective pauses. As does mirroring. Use emotional labeling: understand underlying emotions to identify the issues affecting people's behavior. Appendix: Applying Freemium If you offer a freemium product, you need to give users enough time to try it so they sense its value before paying. Appendix: Force Field Analysis Force field analysis: A framework for analyzing the factors in a situation to help make change possible. Identify restrainers (forces against change) and drivers (forces for change). Get your copy of Catalyst here
https://medium.com/be-a-brilliant-writer/how-to-persuade-people-without-being-a-scam-artist-the-catalyst-by-jonah-berger-ebcd4ecb9d14
['Sarah Cy']
2020-09-03 13:41:01.820000+00:00
['Inspiration', 'Creativity', 'Life Lessons', 'Love', 'Writing']
Why Companies Desperately Need Generalists to Innovate
Field outsiders see structural similarities better Shubin Dai, a specialist in bank data analysis, is perhaps one of the best models of the outsider. Passionate about his financial data work, he also spends his time responding to the various challenges posed by Kaggle, a community of data scientists. Quite surprisingly, he enjoys working on topics such as nature conservation and medicine, where he has gained considerable knowledge. In these fields, for example, he has managed to identify the human causes of deforestation in the Amazon and has become a leading expert in disease prediction. Through platforms like Kaggle, these kinds of outsiders are increasingly sought out by companies, as they respond perfectly to the challenge of open innovation. They are often contrasted with field insiders, who know their subject deeply but are suspected of being blinded by the limits of their own expertise. The outsiders' specific talent rests on what researchers have, by contrast, called the "outside view". It consists in relying not on familiar experience or near analogies but on more distant and deeper ones. By adopting an open perspective, outsiders can seek out structures similar to their project and judge it better by comparing it with examples from other horizons. InnoCentive is an organization that has tried to promote this outsider vision of innovation. It has bet on open-mindedness by setting up crowdsourcing with communities of varied specialists. Thus, biologists or chemistry specialists can work on issues as distant from their field as IT or network architecture. By providing diverse and open expertise, they find connections that the clients themselves had not found. This gives InnoCentive's clients a success rate of around 75% on their projects, compared with about 20% for corporate, internal research projects. The bottom line is that organizations need to rely more and more on collaborators who have remote expertise and are therefore more original and more accurate on their problems.
https://medium.com/swlh/why-companies-desperately-need-generalists-to-innovate-866b5c3bdf35
['Jean-Marc Buchert']
2020-11-27 15:32:33.096000+00:00
['Management', 'Innovation', 'Productivity', 'Creativity', 'Jobs']
Forecasting with Stochastic Models
We all want to know the future. Imagine the power we'd possess if we knew what was going to happen in the future: we could alter it to get more suitable results, bet on it for financial gain, or even budget better. Although we cannot outright determine what will happen in the future, we can build somewhat of an intuition of what it may be like. We often hear from self-improvement evangelists that we can understand how we have arrived at our present point in life by reflecting on our past actions. Thereby, to some degree, we can predict the trajectory of our lives if we continue on a particular path. This is the essence of time-series analysis. As stated in Adhikari, R. and Agrawal, R. (2013), An Introduction Study on Time Series Modelling and Forecasting, "The main aim of time-series modelling is to carefully collect and rigorously study the past observations of a time-series to develop an appropriate model which describes the inherent structure of the series. This model is then used to generate future values for the series, i.e. to make forecast. Time-series forecasting thus can be termed as the act of predicting the future by understanding the past." "The present moment is an accumulation of past decisions" — Unknown A popular and frequently used stochastic time-series model is the ARIMA model. It assumes that the time-series is linear and follows a particular known statistical distribution, such as the normal distribution, and it generalizes a family of simpler models, such as the Autoregressive (AR) model, the Moving Average (MA) model, and the Autoregressive Moving Average (ARMA) model, on which the ARIMA model is based. Before effectively applying the ARIMA model to a problem, there are some things that we should understand about our data, as you'll come to see by the end of this post. Things to know - The 4 main components of a time series: Trend → The propensity of a time series to increase, decrease or stagnate over a long period of time. Seasonality → Fluctuations within a year that are regular and predictable. Cyclical → Medium-term changes in the series that repeat in cycles. Irregularities → Unpredictable influences that are not regular and do not repeat in a particular pattern. Data To download the data, click this Link and follow the instructions. The data I will be using in this article is from the M5 Forecasting - Accuracy competition on Kaggle, which is currently still live (at the time of writing this article). The competition challenges competitors, who have been provided with hierarchical sales data (thanks to Walmart) from 3 different states (California, Texas and Wisconsin), to forecast sales 28 days into the future. The code generated in this article can be found in a Kaggle Notebook that I created, accessible here or via the link below. Here are the frameworks we must import to perform the task at hand. import numpy as np import pandas as pd import matplotlib.pyplot as plt import plotly.graph_objects as go from plotly.subplots import make_subplots from statsmodels.graphics.tsaplots import plot_acf from statsmodels.graphics.tsaplots import plot_pacf from statsmodels.tsa.arima_model import ARIMA from statsmodels.tsa.stattools import adfuller I have done some preprocessing on the data to make use of its hierarchical structure.
# store of the sales data columns d_cols = full_df.columns[full_df.columns.str.contains("d_")] # group columns by store_id df= full_df.groupby(full_df["store_id"]).sum()[d_cols].T df.head() Figure 2: Data grouped by the store_id The competition is evaluated on RMSSE (Root Mean Squared Scaled Error), which is derived from the MASE (Mean Absolute Scaled Error) that was designed to be scale-invariant and symmetric — you can learn more about forecast accuracy metrics here (the difference for this competition is that the A (Absolute) in MASE is replaced with S (Squared) for Mean Squared Scaled Error, and we take the root of this for RMSSE). Concept of Stationarity Understanding the concept of stationarity is important as it has a high impact on the type of model that we can fit to our data to forecast future values. We refer to a time-series as stationary when its properties do not depend on the time at which the series was observed. Some criteria for stationarity are as follows: Constant mean in the time-series Constant variance in the time-series No seasonality Simply put, a stationary time-series will have no predictable patterns in the long term. For the mathematicians, a random process is known to be stationary when the joint distribution remains the same over time. Let's look at some random items from our data to see whether they are stationary. The ARMA model is a combination of the Autoregressive and Moving Average models. This traditional approach requires the data to be stationary; however, things do not always work out as we'd expect in the real world. In fact, real-world data is much more likely to be non-stationary, hence the birth of ARIMA, which uses a clever technique called differencing to make non-stationary data stationary. Differencing Differencing computes the change between consecutive observations in the original series, which helps to stabilize the mean since it removes the changes in the level of a series — this has the effect of eliminating (or reducing) seasonality and trend. This technique is widely used for non-stationary data such as financial and economic data. The ARIMA model adopts the differencing technique to convert a non-stationary time-series into a stationary one. We can express the differenced series mathematically as shown in Figure 2, i.e. y'_t = y_t − y_{t-1}. Figure 2: First difference When the differenced data does not appear to be stationary, we can difference a second time — it's almost never necessary to go past the 2nd order in practice — which can be expressed mathematically as in Figure 3, i.e. y''_t = y'_t − y'_{t-1}. Figure 3: Formula for the first difference of the 2nd degree. We can also take the difference between an observation and another observation from the same season. This is known as seasonal differencing, i.e. y'_t = y_t − y_{t-m}, where m is the seasonal period. Figure 4: Formula for first-degree seasonal differencing Occasionally, we may be required to take both the ordinary differences (the differencing technique discussed in Figure 2, referred to as first differences, meaning differences at lag 1) and seasonal differences to make our data stationary. Figure 5: Formula for first differences and seasonal difference In Python, we can use visualization and/or a unit root test to determine whether differencing is required for our data — note that there are other methods to determine stationarity. There are many different unit root tests, which make different assumptions, but we will be using the (Augmented) Dickey-Fuller test.
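As a quick aside, the differencing operations described above map directly onto pandas. Here is a minimal, self-contained sketch; the values and the seasonal period of 4 are made up purely for illustration and are not taken from the M5 data:

import pandas as pd

# a tiny hypothetical series, just to show the mechanics
s = pd.Series([12, 15, 14, 18, 21, 19, 25, 24])

first_diff = s.diff()          # ordinary differences at lag 1 (Figure 2)
second_diff = s.diff().diff()  # second-order differences (Figure 3)
seasonal_diff = s.diff(4)      # seasonal differences at lag 4 (Figure 4)

Each call simply subtracts the observation at the given lag, which is why the first few values of each differenced series come back as NaN.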
Below I will visualize a store and try to determine whether you think it is stationary before you look at the results of the Dickey-Fuller test. Figure 6: Total sales per store — note that I have highlighted the store "CA_1". In the notebook you can click on whichever store you'd like to highlight, or visualize them all at the same time. # Dickey-Fuller statistical test def ad_fuller(timeseries: pd.DataFrame, significance_level= 0.05): non_stationary_cols= [] stationary_cols= [] for col in timeseries.columns: dftest= adfuller(timeseries[col], autolag="AIC") if dftest[1] < significance_level: stationary_cols.append( {col:{"Test Statistic": dftest[0], "p-value": dftest[1], "# Lags": dftest[2], "# Observations": dftest[3], "Critical Values": dftest[4], "Stationary": True}}) else: non_stationary_cols.append( {col:{"Test Statistic": dftest[0], "p-value": dftest[1], "# Lags": dftest[2], "# Observations": dftest[3], "Critical Values": dftest[4], "Stationary": False}}) return non_stationary_cols, stationary_cols non_stationary_cols, stationary_cols= ad_fuller(df[stores]) len(non_stationary_cols), len(stationary_cols) >>>> (10, 0) non_stationary_cols[0] Figure 7: Augmented Dickey-Fuller results for store CA_1. (Recall that the null hypothesis of the test is that a unit root is present, so a p-value below the significance level would indicate stationarity.) The p-value is greater than the significance level that we set (0.05), therefore we do not reject the null hypothesis that there is a unit root in our data. In other words, our data is non-stationary — it does not meet the criteria for stationarity that we described above, hence we must do some differencing for our data to become stationary. Pandas has a cool function, DataFrame.diff(), that does this for us — you can read more in the documentation here. # making the data stationary df["lag-1_CA_1"]= df["CA_1"].diff().fillna(df["CA_1"]) ACF and PACF plots The ARIMA model has hyperparameters p, d and q that must be defined. Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) plots make determining the orders p and q of the model easier. The ACF plot shows the autocorrelation of the time-series, meaning that we can measure the relationship between y_t and y_{t-k}. The simplest way to put this is as the coefficients of correlation between a time-series and lags of itself. Note: the "t" in y_t denotes a subscript. PACF plots show a measure of the relationship between y_t and y_{t-k} after the effects of the intermediate lags are removed. If we think of correlation, it's the interdependence of variables. "Partial" correlation speaks of the correlation between variables that is not explained by their mutual correlations with a specified set of other variables. When we adapt this to autocorrelation, we speak of the correlation between a time-series and a lag of itself that is not explained by correlations from lower-order lags. This is a great resource to learn more about ACF and PACF plots.
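Before plotting, it can also help to compute a few of these lag correlations directly; pandas exposes Series.autocorr for exactly this. A small sketch (the printed numbers should roughly mirror the bars in the ACF plot below):

# correlation of the differenced series with its own lags
for k in range(1, 6):
    print(f"lag {k}: {df['lag-1_CA_1'].autocorr(lag=k):.3f}")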
Let’s see some of our visualizations… _, ax= plt.subplots(1, 2, figsize= (10,8)) plot_acf(df["lag-1_CA_1"], lags=10, ax=ax[0]), plot_pacf(df["lag-1_CA_1"], lags=10, ax=ax[1]) plt.show() Figure 8: Autocorrelation Function and Partial Autocorrelation Function for lag-1_CA_1; As stated in Identifying the orders of AR and MA terms in an ARIMA model, by mere inspection of the PACF you can determine how many AR terms you need to use to explain the autocorrelation pattern in a time series: if the partial autocorrelation is significant at lag k and not significant at any higher order lags — i.e., if the PACF “cuts off” at lag k — then this suggests that you should try fitting an autoregressive model of order k; This suggest that we should try to fit an AR(8) model to our data, which I have done in the next section. Lag/Backshift Notation Lag/Backshift notation is an extremely useful notation device. Various sources use different notation to denote Lag, L or Backshift, B. Figure 9: Backshift operator notation The Autoregression, AR(p), model generates forecast by using a linear combination of past variables of the variable. We can think of autoregression as regression of the variable against itself. Figure 10: Autoregression model (without Lag/Backshift notation) Moving Average, MA(q), model on the other hand uses past forecast errors instead of past values, in a regression like model. Therefore, we can think of each forecast value to be a weighted moving average of the past few forecast errors. Figure 11: Moving average model (without backshift notation) The ARIMA model All roads lead to this point. If we combine differencing, our autoregression model and moving average model, we get ARIMA(p, d, q). Figure 12: Arima formulation. Source: Hyndman, R.J., & Athanasopoulos, G. (2018) Forecasting: principles and practice, 2nd edition, OTexts: Melbourne, Australia. OTexts.com/fpp2. Accessed on 09/06/2020 Note that it is often much easier to use lag notation to denote the ARIMA model. You can learn more about how to do this here. p = The order of the Autoregressive part of the model d= The degree of first differencing in our model q = The order of the Moving average part of the model Figure 13: Special cases of ARIMA. Source: Hyndman, R.J., & Athanasopoulos, G. (2018) Forecasting: principles and practice, 2nd edition, OTexts: Melbourne, Australia. OTexts.com/fpp2. Accessed on 09/06/2020 # fitting the model model= ARIMA(df["lag-1_CA_1"], order=(8,1,0)) results= model.fit(disp=-1) # visualizing the fitted values fig= go.Figure(data= [go.Scatter(x= df["date"], y= df["lag-1_CA_1"], name= "original", showlegend=True, marker=dict(color="blue"))]) fig.add_trace( go.Scatter(x= df["date"], y=results.fittedvalues, name= "fitted values", showlegend= True, marker=dict(color="red"))) fig.update_layout( title="Fitted values", xaxis_title="Dates", yaxis_title="Units Sold", font=dict( family="Arial, monospace", size=14, color="#7f7f7f" ) ) fig.show() Figure 14: Fitted values from ARIMA model. We can have a closer look… # a closer look _, ax= plt.subplots(figsize=(12,8)) results.plot_predict(1799, 1940, dynamic=False, ax=ax) plt.show() Figure 15: Closer look at Actual vs forecast To see how we done against actual predictions, we must first go back to the original scale of the data to compare. There is a useful cumsum() function we can use in pandas — Documentation. 
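As a quick illustration of why cumsum() is the right tool here, with made-up numbers rather than the competition data:

s = pd.Series([10, 12, 11, 15])
s.diff().fillna(s).cumsum()   # -> 10, 12, 11, 15: the original series recovered

Because "lag-1_CA_1" was built with exactly that diff().fillna() pattern, taking the cumulative sum of the fitted (differenced) values is what brings the predictions back towards the original scale.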
compare_df= pd.DataFrame({"actual": df["CA_1"], "predictions": pd.Series(results.fittedvalues.cumsum(), copy=True), "d": df["d"]}).set_index("d") compare_df.loc["d_1", "predictions"]= 0 Then we plot this… Figure 16: Actual vs Predictions of the model. I have joined this competition a little late, but there is still sufficient time to better this result (which I will be sharing with you all). Useful Resource: Workflow Guide A useful flow chart was provided by Rob Hyndman in the book Forecasting: Principles and Practice which is extremely useful. A link to the online book will be in the Other Resources section below. Figure 16: The ARIMA flow chart. Source — Hyndman, R.J., & Athanasopoulos, G. (2018) Forecasting: principles and practice, 2nd edition, OTexts: Melbourne, Australia. OTexts.com/fpp2. Accessed on 09/06/2020 Final Word Thank you for taking the time to read this article. I am a self-taught Data Scientist from London, England. I can be reached via LinkedIn via https://www.linkedin.com/in/kurtispykes/. Please do not hesitate to reach out to me, meeting new people is awesome. Other Resources: Forecasting: Principles and Practice, 2nd Edition An Introduction Study on Time Series Modelling and Forecasting Autoregressive Integrated Moving Average
https://towardsdatascience.com/forecasting-with-stochastic-models-abf2e85c9679
['Kurtis Pykes']
2020-06-12 14:25:38.410000+00:00
['Machine Learning', 'Data Science', 'AI', 'Towards Data Science', 'Artificial Intelligence']
How to Raise Your First Fund With Right Side Capital Management’s Dave Lambert
How do you make it to Fund II, III, and beyond? It’s challenging enough to identify and make investments that will create extraordinary returns. On top of that, GPs have to juggle fundraising, manage current investors, and handle the fund as a business itself — including employees, payroll, and HR and capital management and operating costs. Oftentimes, Dave added, you’re also starting with less funds and staff than expected. It takes a long time to raise your first fund and there’s only so much to show for it. This means GPs are operating a small company with less bandwidth on a lower salary. How do you keep all the plates spinning? No matter what, it won’t be easy to keep up with diligence, deal flow, LP management, fundraising, legalese, and all the challenges of a growing company. It goes back to your background, track record, and team. According to Dave, managers with operational experience can manage all the plates better. They’re already accustomed to, or at least have experience with, handling these matters. Secondly, what team have you built to face these challenges? For instance, although legal and accounting work will get outsourced, you still need a knowledgeable partner within the fund. Make sure to surround yourself with partners with complementary skill sets and a passion for the fund that matches yours. Where do you go with Fund II? Do you build Fund II as a larger fund, as a follow on fund, or as a continuation of your Fund I strategy? This is one of the biggest challenges. Oftentimes, successful early-stage firms choose option A: build a larger fund. However, “instead of doing what made them successful,” Dave conveyed, “the default is to write bigger checks.” Raising a larger fund and writing bigger checks moves the fund into a completely different investment stage with a different return profile and ecosystem. “Many funds go from successful to struggling, and it’s hard to avoid,” Dave added. Instead, cut the same size checks. However, with the larger fund, you’ll need to bring in more deal flow, perform more due diligence, and make more deals. Fund managers need to then bring on more people to keep up with the workload, but hesitate as it spreads the management fees even thinner. Funds that neglect to recognize that they need a larger labor force to get this done will fail. “In the VC model, labor is the scarce resource” It’s difficult to keep pitching your vision when you don’t have much evidence accrued from Fund I yet, but if you really believe in your thesis, stick with it. “There’s a lot more randomness and luck involved in success,” Dave admitted. Stick with what has brought you success and convinced your first LPs to sign on, and make sure you manage your bandwidth. Is now the right time to scale up for you and your fund? Pivot or persevere? What do you do if you’re missing out on opportunities? Is it wise to change and adapt if your thesis is too narrow, or should you persist with what you’ve planned? Most VC funds are under-diversified. If you don’t think your thesis is working, definitely change, but be reasonable with the changes. First-time fund managers tend to give themselves too small of a box to operate within. Dave advised, “You’re going to learn as you invest. The world is going to change as well, so you need some flexibility to adjust with it and make the best decisions for your fund and investors.” The key is to communicate any changes clearly with LPs before they happen. 
Take advantage of your resources — reach out to your LP committee, explain why, and get their advice and thoughts. When do you start fundraising for Fund II? You never stop raising. When you’re in Fund I, you’re already raising for Fund II. This is one of the challenges of managing a fund. “You should always be out there talking to potential LPs, making them aware of the fund’s work with quarterly updates,” Dave said. This will help them make a quicker decision when it comes to committing. It’s also a good time to circle back to anyone who was close to investing or wanted to invest in Fund I. Bring your metrics and returns from Fund I, but understand that there still may not be a lot to go off of. Dave found that investors are much less data-driven than they think they are. A lot of LPs believe data more easily when it aligns with a thesis they already believe in. There are a lot of ways to describe what you do and what your goals are. Craft the story that resonates well with your target investor profile, and keep bringing your passion and conviction. What do you do if investors have decided to not invest further? It’s a very strong signal for future funds if your original investors return. But if some of them don’t, it may be an issue. It depends on the profile of the investors, Dave shared. It can be a red flag if an institutional investor doesn’t stay on. They may have concerns about ROI, a member of the team, or the fund as a whole — take their concerns to heart and reevaluate if changes are needed. If it’s an individual or smaller family office — their financial situation has changed or they can’t afford to keep investing until there are greater returns — it’s not ideal if they can’t continue, but it doesn’t market badly. Do your best to retain existing LPs and use them to create FOMO and get new investors to commit. If there are investors stepping away, make sure you’re prepared to answer the question of why.
https://medium.com/swlh/how-to-raise-your-first-fund-with-right-side-capital-managements-dave-lambert-99d91382156e
['Theron Mccollough']
2020-11-25 16:45:09.161000+00:00
['Entrepreneurship', 'Business', 'Fund Management', 'Startup', 'Venture Capital']
Coronavirus: Experts Have Failed Us (Expensively)… and They Will Again
Let down by a system of experts that had promised competence. The novel Coronavirus has shown that our current system, that relies on experts at the top, lacks either the skill or the inclination to protect us — our jobs, our savings, our lives. Yet, we give this system approximately $4.9 trillion every year to do just this. For this money, it has missed obvious risks, over and over. And it continues to ignore future calamities. How much longer do we lend legitimacy to this system of experts that doesn’t work for us? What Color Are All These Swans? Should we really blame this system of experts for a crazy black swan event like the Coronavirus? Well, if it were truly a Black Swan, probably not. But a Black Swan is not merely an unforeseen event, it is an unforeseeable event. This particular swan is, to use Nassim Taleb’s categorization, grey. That is to say, it is a risk we absolutely knew was out there. We did not know when it would hit, but we knew it would eventually. In 2015, Bill Gates was giving TED talks about the threat of new viruses. And in October 2019 the Bill and Melinda Gates Foundation hosted Event 201 in NY, in conjunction with the Johns Hopkins Center for Health Security and the World Economic Forum, to discuss how best to respond to a pandemic . Gates is not exactly a fringe character. Other experts have warned of the dangers of new viruses. Taleb himself pointed out that our increased interconnectedness means diseases spread more rapidly. Not Prepared for Swans of Any Color Was our system standing by, for years, with additional medical equipment? Did we have resources in place, ready to jump into action? It appears, instead, we had failed virus test kits from the CDC and an FDA process that slowed down the adoption of new tests. And now, as these experts warn us that this crisis could overwhelm our medical systems (future readers will know whether they got that one right), one wonders, why weren’t there stockpiles of supplies, additional ventilators, emergency facilities? The system of experts has not given us readiness, but red tape, pharmaceuticals dependent on foreign ingredients, and brittle supply chains. Criticize Trump if you wish, but this lack of preparedness predates him. In 2008, the financial crisis struck. Was the “mortgage meltdown” unforeseeable? Of course not. The Federal Deposit Insurance Corporation had never had enough money to insure more than a few percent of depositors money. It was built to stop runs on a few banks. Fannie Mae owned literally trillions of dollars in mortgages, with very little capital. In fact, a company guaranteed by the government in turn guaranteeing mortgages pushes the system toward over-leverage and too much risk by its very nature. It was another brittle system. And this system of experts is ignoring future crises now. We’ve known for 40 years that the Social Security and Medicare systems are out of balance by trillions of dollars. State pension plans are massively underfunded, as well. Are our leaders devising a strategy to resolve this? Of course not. And when those particular chickens come home to roost, our system will look for black paint and tell us the chickens are swans. Mammoth Budgets Misplaced And the cost of all this incompetence? Well, ignoring state and local governments, and the costs of regulations pushed onto people and corporations, the Federal government alone spends $4,900,000,000,000 every year. That’s $14,893 for every man, woman, and child in the country. 
Imagine if we invested 0.1% of this budget on preparedness. That’s $4.9 billion per year. Information Systems are Not Enough We’ve all heard glowing accounts of how our new communication tools allow the flow of data around the world in an instant. And it’s all true. But data from too limited a set of sources is fragile. And data without the means to take action is useless. Let’s Find Another Way Imagine the year is 1998 and I just asked you to take a video in your home, add music, and post it on the Internet for the entire world to see. Ten years earlier, you would have said, “what’s an Internet?” But in 1998, you could have done what I asked if you owned the right camera, knew how to upload the content to your computer, had the right editing software, and had the knowledge to put the whole thing together. Now, you just make a TikTok. What required expertise and special tools is now easy. It is as though we have made people experts, to the point where that expertise is no longer special. And in place of a set of tools that an expert would use, we’ve embedded the expertise in the tools, enabling easy interaction with others. Let’s give people tools with the expertise embedded. Now, let’s do the same thing, but for the big stuff. Let’s give people tools for interaction, with the expertise embedded, to handle their financial lives and risks, to give them more options, to improve their health, to safeguard our economy. Once we’ve built these kinds of tools, people will come together to solve their problems without relying on experts. In the financial arena, they will seek return knowing the real risk they’ve taken. Businesses will protect themselves against the risks that can end their business. But this dynamic of exchange and action can occur outside finance. It can lead to improvements in life and health. In other words, we’ll give people the ability to generate their own ideas, find real solutions, and interact in ways that benefit them most.
https://medium.com/greyswandigital/coronavirus-experts-have-failed-us-expensively-and-they-will-again-5100bc6323c3
['Peter Harrigan']
2020-03-21 19:04:19.015000+00:00
['Economics', 'Coronavirus', 'Risk', 'Health']
Prefect Cloud has Launched! 🎉
For more than two years, Prefect has been making steady progress on our mission to eliminate negative engineering. Today, we’re excited to announce that Prefect Cloud is available to the public — including its free Scheduler tier! Learn more here. We worked with hundreds of early Cloud previewers and tens of Lighthouse Partners to reach this point. Since July 2019, when we onboarded Cloud’s very first customer, we have made enormous strides in our understanding of workflow systems and user requirements. The biggest lesson of all is that a system we built for a very specific set of customers — large financial institutions — has come to dominate our business model. Our Hybrid Model delivers cloud convenience with on-prem security, and is so innovative it has resulted in two separate patent filings. Users keep their code and data on their private infrastructure — whether that’s a personal laptop, an IoT device, a cloud-hosted cluster, a serverless function, or just bare metal — while Prefect Cloud’s managed orchestration service provides complete oversight and confidence. The hybrid model is a clear advantage that Prefect provides over any alternative system. Learn more about the hybrid model here. The public release of Prefect Cloud caps “Phase 1” of our company’s story. It represents everything we’ve learned about negative engineering, and is informed by thousands of user stories gathered from all industries and experience levels. Just as when we launched our open-source Prefect Core library with the extreme confidence that comes from iterating with a small group of early previewers, we’ve already seen Prefect Cloud deployed at institutions large and small. We know that it fulfills the objective we laid out in this blog a year and a half ago: Prefect is the codification of the patterns we observe in modern data engineering. At our core, we provide two things. One is our open-source framework [Core], which operates like a hardware store: stocked with all the necessary components for building great data applications. The other is our platform logic [Cloud], which we think of as the store manager: guiding users to the right tools and making sure their projects are successful. With these two things working together, we can offer a compelling solution for both positive and negative engineering problems. What will you build? — The Prefect Team
https://medium.com/the-prefect-blog/prefect-cloud-has-launched-ed4b1cc6a6e
['Jeremiah Lowin']
2020-03-03 14:40:05.415000+00:00
['Data Science', 'Prefect', 'Python', 'Data Engineering', 'Workflow']
Five Books That Made Me Laugh Out Loud in Quarantine (and Taught Me Amazing Lessons)
Five Books That Made Me Laugh Out Loud in Quarantine (and Taught Me Amazing Lessons) #3 is one of the most unorthodox, original books I’ve ever read Image by Christopher Ross from Pixabay I have a special place in my heart for writers who can make me laugh when I’m alone, especially now. We’re in a pandemic, people! That news outlet you love? It’s just going in circles. Why not step back, and read something that doesn’t put the weight of the world on your shoulders? All the books I’m including in this list manage to interweave fantastic life lessons with the comedy, so even if you’re a hyper-efficient self-help junkie, there are pearls of wisdom awaiting you between the humor. I’ll be sharing a quote from each book and a short summary of my takeaways. Enjoy!
https://medium.com/books-are-our-superpower/five-books-that-made-me-laugh-out-loud-in-quarantine-and-taught-me-amazing-lessons-11510f9fa136
['Aaron Nichols']
2020-12-02 05:24:13.772000+00:00
['Comedy', 'Books', 'Reading', 'Creativity', 'Self Improvement']
PI and Simulation Art in R
I spent the better part of an afternoon last week perusing a set of old flash drives I'd made years ago for my monthly notebook backups. One that especially caught my attention had a folder of R scripts, probably at least 15 years old — harking back to my earliest days with R. I could only smile at some of the inefficient scripts I wrote then, reflecting an early, awkward attempt to switch gears from SAS to R. The script I reviewed, in particular, had to do with Monte Carlo estimation of pi, as in pi*(r**2), for the area of a circle. Estimating pi via random sampling is quite straightforward and generally a first assignment in an intro numerical/statistical computation course. The old code actually worked fine but was far from the ideal R vectorized/functional programming metaphor — gnarled with procedurally oriented nested loops and lists. And the limit of 2,500,000 iterations reflected processor performance at that time. So, I decided to modernize the code a bit, adding a visualization that showed pi as derived from the ratio of an embedded circle to an enclosing square. The point of departure for this exercise is a circle of diameter 2 centered at coordinates (1,1), embedded within a square of side length 2. A uniform random sample of x's and y's <= 2 is generated, their distance from the circle center calculated, and a determination made of whether each (x,y) point is within or outside the circle. The ratio of "in" to total points estimates the ratio of the area of the circle to the area of the square. And since the area of the square is 4, the area of the circle is estimated by 4*(in/total). Moreover, the radius of the circle is 1, so 4*(in/total) estimates pi as well. Pretty nifty. What follows is the MC script code, partitioned into Jupyter Notebook cells. The technology used is Wintel 10 with 128 GB RAM, along with JupyterLab 1.2.4 and R 3.6.2. The R data.table, ggplot, and knitr packages are featured. Set options, import packages, and load a personal library. The private functions used are blanks, freqsdt, meta, mykab, and obj_sz. freqsdt is a general-purpose, multi-attribute frequencies function for data.tables. meta displays metadata and individual records from data.tables. mykab is a printing function that uses knitr kable, and obj_sz returns the size of R objects. In [1]: options(warn=-1) options(scipen = 10) options(datatable.print.topn=100) options(datatable.showProgress=FALSE) options(stringsAsFactors=TRUE) usualsuspects <- c( 'tidyverse', 'data.table', 'pryr', 'rvest', 'magrittr','lubridate', 'fst','feather', 'knitr', 'kableExtra', 'ggplot2','RColorBrewer' ) suppressMessages(invisible(lapply(usualsuspects, library, character.only = TRUE))) funcsdir <- "/steve/r/functions" funcsfile <- "rfunctions.r" setwd(funcsdir) source(funcsfile) lsf.str() blanks(2) allfreqs : function (dtn, catlim = 100) blanks : function (howmany) freqsdt : function (DTstr, xstr) freqsonly : function (DTstr, xstr) meta : function (df, data = FALSE, dict = TRUE) mykab : function (dt) obj_sz : function (obj) First up, an updated procedural code estimate of pi for 8 different simulations to assess how the estimates vary with sample size. A data.table with columns denoting the pi estimate and sample size is output.
In [2]: set.seed(531) HOWMANY <- c(500,2500,12500,62500,312500,1562500,7812500,39062500) pisim <- data.table(howmany=HOWMANY,piest=NULL) for (i in 1:length(HOWMANY)) { h <- HOWMANY[i] x<-runif(h,max=2) y<-runif(h,max=2) d<-((x-1)**2+(y-1)**2)**.5 inout<-factor(ifelse(d<=1,"in","out")) pisim[i,piest:= 4*sum(inout=='in')/length(inout)] } mykab(pisim) blanks(2) |howmany | piest | |: — — :|: — — :| | 500 |3.128000| | 2500 |3.158400| | 12500 |3.134400| | 62500 |3.152512| | 312500 |3.136166| |1562500 |3.140460| |7812500 |3.142583| |39062500|3.141737| The same calculation, this time using a more current functional approach. The results are, fortunately, identical. In [3]: set.seed(531) HOWMANY <- c(500,2500,12500,62500,312500,1562500,7812500,39062500) mksim <- function(h) { x<-runif(h,max=2) y<-runif(h,max=2) d<-((x-1)**2+(y-1)**2)**.5 inout<-factor(ifelse(d<=1,"in","out")) data.table(howmany=h,piest=4*sum(inout=='in')/length(inout)) } pisim <- rbindlist(lapply(HOWMANY,mksim)) mykab(pisim) blanks(2) |howmany | piest | |: — — :|: — — :| | 500 |3.128000| | 2500 |3.158400| | 12500 |3.134400| | 62500 |3.152512| | 312500 |3.136166| |1562500 |3.140460| |7812500 |3.142583| |39062500|3.141737| Graph the pi estimates above as a function of sample size. Note the convergence to the R constant pi. In [4]: options(repr.plot.width=10, repr.plot.height=10) bpal <- brewer.pal(9,"Blues") gpal <- brewer.pal(9,"Greens") g <- ggplot(pisim, aes(x=howmany,y=piest)) + geom_point(size=5) + geom_line() + theme(plot.background = element_rect(fill = bpal[2]), panel.background = element_rect(fill = bpal[2])) + geom_hline(aes(yintercept=pi), na.rm = FALSE, show.legend = NA,col="black",size=.3,linetype=2) + theme(axis.text = element_text(size=15)) + theme(legend.position="none") + ylim(3.1,3.2) + scale_x_log10(breaks=pisim$howmany) + theme(axis.text.x = element_text(angle=45)) + labs(title="Simulation Estimate of pi by Sample Size ", y="Estimate of pi ", x=" Sample Size (log scale) ") + theme(plot.title = element_text(size=25,face = "bold")) + theme(axis.text = element_text(size=12)) + annotate("text", x = 100, y = pi+.00100, label = paste("",round(pi,5),sep=""), size=5) + theme(text = element_text(size=rel(4))) print(g) blanks(2) In [ ]: Move on to a related process for showing pi visually. Create a data.table with two random uniform columns in the range of (0,2), an attribute that measures the distance between the two columns from circle center (1,1), and a factor that specifies whether each point is within or outside the circle of radius 1. A sample size of 1,000,000 is used for this simulation. In [5]: set.seed(345) howmany <- 1000000 simpoints <- data.table(x=runif(howmany,max=2),y=runif(howmany,max=2))[,distance1_1:=((x-1)**2+(y-1)**2)**.5][ ,inout:=factor(ifelse(distance1_1<=1,"in","out"))] meta(simpoints) blanks(2) | name | class | rows |columns| size | |: — — -:|: — — — — — — — — — :|: — –:|: — –:|: — –:| |simpoints|c(“data.table”, “data.frame”)|1000000| 4 |26.7 MB| Classes ‘data.table’ and ‘data.frame’: 1000000 obs. of 4 variables: $ x : num 0.433 0.55 0.78 1.311 0.872 … $ y : num 0.689 1.352 1.868 0.773 0.355 … $ distance1_1: num 0.647 0.572 0.896 0.385 0.658 … $ inout : Factor w/ 2 levels “in”,”out”: 1 1 1 1 1 1 1 2 1 1 … – attr(*, “.internal.selfref”)= NULL Ratio of points in the circle to points in the enclosing square — i.e. of the area of the circle to the area of the square. The area of the square is 4, which implies the area of the circle is arearatio*4. 
And with a radius of 1, the area of the circle and estimate of pi are identical. Look familiar? In [6]: f <- freqsdt("simpoints","inout") mykab(f) arearatio <- f[inout=='in',percent]/100 blanks(1) print(arearatio) pisim <- 4*arearatio blanks(1) print(pisim) blanks(2) |inout|frequency|percent| |: — :|: — — -:|: — –:| | in | 785094 |78.5094| | out | 214906 |21.4906| [1] 0.785094 [1] 3.140376 Visualize the above simulation/computation in ggplot — graphing 1,000,000 points. The result is rather artistic. In [7]: start <- proc.time() options(repr.plot.width=10, repr.plot.height=10) bpal <- brewer.pal(9,"Blues") gpal <- brewer.pal(9,"Greens") rpal <- brewer.pal(9,'Reds') myColors <- gpal[c(5,9)] names(myColors) <- levels(simpoints$inout) tit <- "Area of Square and Circle" subtit <- paste("Simulation pi: ", round(pisim,6)," Actual pi: ", round(pi,6),sep="") g <- ggplot(simpoints, aes(x=x,y=y,col=inout)) + geom_point(size=.5) + theme(plot.background = element_rect(fill = bpal[2]), panel.background = element_rect(fill = bpal[2])) + theme(legend.position="none") + ylim(-1,3) + xlim(-1,3) + labs(title=tit,subtitle=subtit, y="Height ", x=" Length") + theme(plot.title = element_text(size=22,face = "bold")) + theme(plot.subtitle = element_text(size=15,face = "bold")) + theme(axis.text = element_text(size=15)) + scale_color_manual(values = myColors) + theme(text = element_text(size=rel(4))) print(g) end <- proc.time() print(end-start) blanks(2) user system elapsed 10.19 20.08 30.27 That’s it for now. More R/Python-Pandas next time. In [ ]:
https://medium.com/swlh/pi-and-simulation-art-in-r-92098b7463b2
['Odsc - Open Data Science']
2020-03-18 16:17:02.172000+00:00
['Data Science', 'R', 'Artificial Intelligence', 'Jupyter Notebook', 'Mathematics']
How To Make Scalable APIs Using Flask and FaunaDB
What does Serverless have to do with this tutorial? The main reason serverless is being mentioned here is that FaunaDB is a NoSQL database built with serverless in mind. The pricing on this database is request-based, which is precisely what serverless apps need. Using a service like FaunaDB can help cut costs so much that hosting the app would be virtually free, excluding the development costs of course. Conversely, using a monthly-billed database for serverless apps kind of kills the point. A free stack example would be a combination of Netlify, Netlify Functions, and FaunaDB, though it would only be 'free' for a certain number of requests. Unless you are making an app that gets thousands of users on day zero of deployment, I don't think that would be much of a problem. In my opinion, using a monthly billed database for serverless apps kind of kills the point. Flask, on the other hand, is a microframework written in Python. It is a minimalistic framework with no database abstraction layers, form validation, or other features provided by larger frameworks. Flask is, by and large, serverless compatible. You can make a serverless Flask app using AWS Lambda. Here is an official guide to Flask serverless from serverless.com.
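To make the pairing concrete, here is a minimal sketch of a Flask endpoint backed by FaunaDB. The collection name ("todos"), the FAUNA_SECRET environment variable, and the route are assumptions made purely for illustration; they are not part of the guides mentioned above:

import os

from flask import Flask, jsonify
from faunadb import query as q
from faunadb.client import FaunaClient

app = Flask(__name__)
# FaunaDB is accessed over HTTP per request, so there is no connection pool to manage
client = FaunaClient(secret=os.environ["FAUNA_SECRET"])

@app.route("/todos/<todo_id>")
def get_todo(todo_id):
    # fetch a single document from the hypothetical "todos" collection by its reference id
    doc = client.query(q.get(q.ref(q.collection("todos"), todo_id)))
    return jsonify(doc["data"])

if __name__ == "__main__":
    app.run()

Wrapped with a Lambda adapter (for example, following the serverless.com guide above), an app like this can be deployed largely unchanged, and you pay per request on both the compute and the database side.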
https://towardsdatascience.com/how-to-make-scalable-apis-using-flask-and-faunadb-f6005d4a8065
['Agustinus Theodorus']
2020-10-27 23:50:13.217000+00:00
['Software Engineering', 'Programming', 'Software Development', 'Microservices', 'Serverless']
Practical Data Analysis with Pandas and Seaborn
Practical Data Analysis with Pandas and Seaborn Exploratory data analysis on a bank customer dataset Photo by Joshua Hoehne on Unsplash Whether we are creating a dashboard, doing predictive analytics, or working on any other machine learning task, we first need to explore the data at hand. We should obtain a thorough understanding of the data and the relationships among variables. There are many tools and packages that can be used to analyze data. What they all have in common is that the best way to learn them is through practice. In this practical article, we will explore a dataset that contains information about the customers of a bank. The ultimate task is to predict whether a customer will leave the credit card services of the bank. We will be using Pandas for data analysis and manipulation and Seaborn to create visualizations. The first step is to import the libraries. import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns sns.set(style='darkgrid') Let's create a dataframe by reading the provided csv file. churn = pd.read_csv("/content/BankChurners.csv", usecols=list(range(21))) I have excluded the first column and the last two columns by providing a list of the indices of the columns to be included in the dataframe. The usecols parameter is used to select only certain columns; we can pass either the names or the indices of the columns to be included. The first column is the client number, which does not add any value to the analysis. The last two columns were not relevant, as indicated by the dataset provider. The shape attribute returns the size of the dataframe in terms of the number of rows and columns. print(churn.shape) (10127, 20) (image by author) There are 20 columns. The screenshot above only includes 7 columns for demonstration purposes. We can view the entire list of columns by using the "columns" attribute. Before starting the analysis, we should check if there are any missing values in the columns. The isna function of Pandas returns True if a value is missing. We can apply the sum function to count the number of missing values in each column or in the entire dataframe. churn.isna().sum().sum() 0 There are no missing values in the dataset.
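From here, a natural next step is to glance at the target variable itself before exploring individual features. A small sketch, assuming the target column is named "Attrition_Flag" as in the Kaggle BankChurners dataset (adjust the name if your copy differs):

# distribution of existing vs. attrited customers
plt.figure(figsize=(8, 5))
sns.countplot(data=churn, x="Attrition_Flag")
plt.title("Existing vs. attrited customers")
plt.show()

Churned customers are typically the minority class in this dataset, which is worth keeping in mind for any modeling done later.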
https://towardsdatascience.com/practical-data-analysis-with-pandas-and-seaborn-8fec3cb9cd16
['Soner Yıldırım']
2020-12-22 17:58:15.541000+00:00
['Data Science', 'Python', 'Artificial Intelligence', 'Data Analysis', 'Pandas']
Upgrade to Latest Version ASAP — No Thanks
Background At the moment k8s 1.16, 1.17, 1.18 are officially supported; support for 1.15 has ended. But in AWS EKS, the latest version is still 1.16, and at the time of writing of the tweet above, even 1.16 in EKS hadn't been released yet. Do Not Upgrade To Latest Version ASAP Photo by Michael Dziedzic on Unsplash This may seem controversial, but while I do think we should move to at least 1.16, when we are talking about security and stability, I am with Mozilla: not "upgrading to the latest version ASAP". I think many people are like me. Take a very simple example: I doubt all software engineers have already upgraded their MacOS to 10.15.4 (I did). You might say that this is only because you are lazy and don't want to be interrupted by the download and restart, but there are actually very good reasons for not always upgrading to the latest version as soon as possible. The most important one is: Bugs Diminishing Model Photo by Markus Spiske on Unsplash The number of bugs discovered in a given version of a given piece of software diminishes over time. Thus, with a "delayed" upgrading strategy, you run into less trouble and fewer potential security issues in production, and you reduce your maintenance burden. Of course, like everything else, there are exceptions. For example, if there is a major security issue that can't be patched and instead requires a new release, you should definitely upgrade ASAP. But the norm would be to NOT upgrade ASAP, so that you expose fewer potential security, bug, and stability issues in the production environment. Example — K8s 1.15 Taking k8s 1.15 as an example: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.15.md If you have a detailed look at the release notes, you will find that the number of bugs/issues fixed in each version is much lower toward the end than at the beginning. This is universal for literally any piece of software. Example — Firefox ESR 52 Take another example from Firefox ESR: https://www.mozilla.org/en-US/security/known-vulnerabilities/firefox-esr/ In the first month of Firefox 52, there were 31 bugs (can you believe that: a major release by a major company, one bug per day). But only half a year later, the number dropped to 7 per month, and it has maintained that level ever since (some months even much lower). Apparently, Mozilla also thought about this "bug diminishing model" when they designed the enterprise edition of Firefox (extended support release, ESR, or Firefox for enterprise). The ESR major version is always 3 less than (or one year behind) the latest version. At the moment, the latest version of Firefox is 71, while ESR is only 68. Which one am I using? 68. I believe the bug diminishing model also works here, and I also believe the developers of the software know what they are doing when they are talking about enterprise security. Conclusion As of today, since EKS 1.16 has been released, I would upgrade to k8s 1.16 immediately (I already did, in both my work project and my personal project), since 1.15 support has ended. This makes perfect sense. In a not-so-perfect real world (like literally one month ago, when 1.16 on EKS hadn't been released), given the two choices of A) using 1.15 for a few weeks/months before upgrading, and B) using 1.18 immediately, I'd definitely choose the former for production and the latter for a development environment.
https://medium.com/devops-dudes/upgrade-to-latest-version-asap-no-thanks-a6cb99d739b3
['Tiexin Guo']
2020-05-31 16:21:27.359000+00:00
['Version Control', 'Software Engineering', 'Kubernetes', 'Software Development', 'Security']
NAIC Principles for the Use of Artificial Intelligence in the Insurance Industry
The NAIC (National Association of Insurance Commissioners) is the association of insurance regulators operating in the United States: on June 30 it approved five guiding principles to be applied in the use of Artificial Intelligence. Let's see what they are: the document recommends that all operators working in the insurance field, as well as third parties such as rating and consulting organizations, defined in the text as "AI actors", adhere to these fundamental principles, which are complementary to each other and serve a "trustworthy" Artificial Intelligence. The AI system should be:

FAIR AND ETHICAL: Operators will have to respect the rule of law, especially with regard to commercial practices, unfair discrimination, access to insurance, underwriting, privacy, consumer protection and eligibility practices, instalment standards, advertising decisions, claims practices and solvency; and they shall act proactively, subjecting the use of Artificial Intelligence to supervision, so that such systems are not designed to harm or deceive people and are implemented in a way that minimises negative outcomes for consumers, avoiding harmful or undesirable consequences.

RESPONSIBLE: Along the same lines, constant surveillance of the AI system will be required so that it does not harm or prejudice consumers, and even where there is no negligence in its creation, monitoring or implementation, the remedy for a possible error must be its correction. An appropriate methodology will have to be put in place so that a review or explanation of the process that led to a given decision can be requested, in clear and simple language, within the reach of consumers who are not "vertical" on technology.

COMPLIANT: Always keeping in mind all state and, in the United States, federal regulations, as well as all privacy regulations.

TRANSPARENT: AI actors must commit to ensuring transparency and responsible disclosure of AI systems to stakeholders, including consumers, while retaining the ability to protect the confidentiality of proprietary algorithms and adhering to the laws and regulations of individual states. Such proactive disclosures include disclosure of the type of data used, the purpose of the data in the AI system and the consequences for all stakeholders.

SECURE, RELIABLE AND STRONG: Actors should, based on their role, context and ability to act, apply a systematic approach to risk management at every stage of the AI system's life cycle, to address risks such as privacy, digital security and unfair discrimination.

These five proposed principles will be considered by the NAIC's Innovation and Technology Task Force on July 23rd, and although they will not have the force of law after publication, they will be a good guide for future initiatives and regulations in the insurance industry. The insurance industry has significant regulatory aspects involving privacy, risk management and customer assistance, including through chatbots; therefore the upcoming introduction of Artificial Intelligence, although aimed at improving market performance, will require a greater and useful involvement of legal professionals oriented towards an increasingly technological and digital market.

All Rights Reserved Raffaella Aghemo, Lawyer
https://medium.com/datadriveninvestor/naic-principles-for-the-use-of-artificial-intelligence-in-the-insurance-industry-fce6c940b6a4
['Raffaella Aghemo']
2020-07-21 16:14:15.754000+00:00
['Insurance', 'Artificial Intelligence', 'Chatbots', 'AI', 'Naic']