Six Keys to Better Jobs, Wider Prosperity
MIT Work of the Future report finds growing workplace inequities that can, and must, be addressed
By Peter Dizikes, MIT News Office
Decades of technological change have polarized the earnings of the American workforce, helping highly educated white-collar workers thrive, while hollowing out the middle class. Yet present-day advances like robots and artificial intelligence do not spell doom for middle-tier or lower-wage workers, since innovations create jobs as well. With better policies in place, more people could enjoy good careers even as new technology transforms workplaces.
That’s the conclusion of the final report from MIT’s Task Force on the Work of the Future, which summarizes over two years of research on technology and jobs. The report, “The Work of the Future: Building Better Jobs in an Age of Intelligent Machines,” was released today [November 17], and the task force is hosting an online conference on November 18, the “AI & the Future of Work Congress,” to explain the research.
At the core of the task force’s findings: A robot-driven jobs apocalypse is not on the immediate horizon. As technology takes jobs away, it provides new opportunities; about 63 percent of jobs performed in 2018 did not exist in 1940.
Rather than a robot revolution in the workplace, we are witnessing a gradual tech evolution. At issue is how to improve the quality of jobs, particularly for middle- and lower-wage workers, and ensure there is greater shared prosperity than the U.S. has seen in recent decades.
“The sky is not falling, but it is slowly lowering,” says David Autor, the Ford Professor of Economics at MIT, associate head of MIT’s Department of Economics, and a co-chair of the task force. “We need to respond. The world is gradually changing in very important ways, and if we just keep going in the direction we’re going, it is going to produce bad outcomes.”
That starts with a realistic understanding of technological change, say the task force leaders.
The task force aimed “to move past the hype about what [technologies] might be here, and now we’re looking at what can we feasibly do to move things forward for workers,” says Elisabeth Beck Reynolds, executive director of the task force as well as executive director of the MIT Industrial Performance Center. “We looked across a range of industries and examined the numerous factors — social, cognitive, organizational, economic — that shape how firms adopt technology.”
“We want to inject into the public discourse a more nuanced way of talking about technology and work,” adds David Mindell, task force co-chair, professor of aeronautics and astronautics, and the Dibner Professor of the History of Engineering and Manufacturing at MIT.
“It’s not that the robots are coming tomorrow and there’s nothing we can do about it. Technology is an aggregate of human choices.”
The report also addresses why Americans may be concerned about work and the future. It states: “Where innovation fails to drive opportunity, it generates a palpable fear of the future: the suspicion that technological progress will make the country wealthier while threatening the people’s livelihoods. This fear exacts a high price: political and regional divisions, distrust of institutions, and mistrust of innovation itself. The last four decades of economic history give credence to that fear.”
“Automation is transforming our work, our lives, our society,” says MIT President L. Rafael Reif, who initiated the formation of the task force in 2017. “Fortunately, the harsh societal consequences that concern us all are not inevitable. How we design tomorrow’s technologies, and the policies and practices we build around them, will profoundly shape their impact.”
Reif adds: “Getting this right is among the most important and inspiring challenges of our time — and it should be a priority for everyone who hopes to enjoy the benefits of a society that’s healthy and stable, because it offers opportunity for all.”
Six Takeaways
The task force, an Institute-wide group of scholars and researchers, spent over two years studying work and technology in depth. The final report presents six overarching conclusions and a set of policy recommendations. The conclusions:
1) Technological change is simultaneously replacing existing work and creating new work. It is not eliminating work altogether.
Over the last several decades, technology has significantly changed many workplaces, especially through digitization and automation, which have replaced clerical, administrative, and assembly-line workers across the country. But the overall percentage of adults in paid employment has largely risen for over a century. In theory, the report states, there is “no intrinsic conflict between technological change, full employment, and rising earnings.”
In practice, however, technology has polarized the economy. White-collar workers — in medicine, marketing, design, research, and more — have become more productive and richer, while middle-tier workers have lost out. Meanwhile, there has been growth in lower-paying service-industry jobs where digitization has little impact — such as food services, janitors, and drivers.
Since 1978, aggregate U.S. productivity has risen by 66 percent, while compensation for production and nonsupervisory workers has risen by only 10 percent. Wage gaps also exist by race and gender, and cities do not provide the “escalator” to the middle class they once did.
While innovations have replaced many receptionists, clerks, and assembly-line workers, they have simultaneously created new occupations. Since the middle of the 20th century, the U.S. has seen major growth in the computer industry, renewable energy, medical specialties, and many areas of design, engineering, marketing, and health care. These industries can support many middle-income jobs as well — while the services sector keeps growing.
As the task force leaders state in the report, “The dynamic interplay among task automation, innovation, and new work creation, while always disruptive, is a primary wellspring of rising productivity. Innovation improves the quantity, quality, and variety of work that a worker can accomplish in a given time. This rising productivity, in turn, enables improving living standards and the flourishing of human endeavors.”
However, a bit ruefully, the authors also note that “in what should be a virtuous cycle, rising productivity provides society with the resources to invest in those whose livelihoods are disrupted by the changing structure of work.”
But this has not come to pass, as the distribution of value from these jobs has been lopsided. In the U.S., lower-skill jobs pay only 79 percent as much as comparable jobs in Canada, 74 percent as much as in the U.K., and 57 percent as much as in Germany.
“People understand that automation can make the country richer and make them poorer, and that they’re not sharing in those gains,” Autor says. “We think that can be fixed.”
2) Momentous impacts of technological change are unfolding gradually.
Time and again, media coverage about technology and jobs focuses on dramatic scenarios in which robots usurp people, and we face a future without work.
But this picture elides a basic point: Technologies mimicking human actions are difficult to build, and expensive. It is generally cheaper to simply hire people for those tasks. On the other hand, technologies that augment human abilities — like tools that let doctors make diagnoses — help those workers become more productive.
Apart from clerical and assembly-line jobs, many technologies exist in concert with workers, not as a substitute for them.
Thus, workplace technology usually involves “augmentation tasks more than replacement tasks,” Mindell says. The task force report surveys technology adoption in industries including insurance, health care, manufacturing, and autonomous vehicles, finding growth in “narrow” AI systems that complement workers. Meanwhile, technologists are working on difficult problems like better robotic dexterity, which could lead to more direct replacement of workers, but such advances at a high level are further off in the future.
“That’s what technological adoption looks like,” Mindell says. “It’s uneven, it’s lumpy, it goes in fits and starts.” The key question is how innovators at MIT and elsewhere can shape new technology to broad social benefit.
3) Rising labor productivity has not translated into broad increases in income because societal institutions and labor market policies have fallen into disrepair.
While the U.S. has witnessed a lot of technological innovation in recent decades, it has not seen as much policy innovation, particularly on behalf of workers. The polarizing effects of technology on jobs would be lessened if middle- and lower-income workers had relatively better support in other ways. Instead, in terms of pay, working environment, termination notice time, paid vacation time, sick time, and family leave, “less-educated and low-paid U.S. workers fare worse than comparable workers in other wealthy industrialized nations,” the report notes. The adjusted gross hourly earnings of lower-skill workers in the U.S. in 2015 averaged $10.33, compared to $24.28 in Denmark, $18.18 in Germany, and $17.61 in Australia.
“It’s untenable that the labor market has this growing gulf without shared prosperity,” Autor says. “We need to restore the synergy between rising productivity and improvements in labor market opportunity.” He adds: “We’ve had real institutional failure, and it’s within our hands to change it. … That includes worker voice, minimum wages, portable benefits, and incentives that cause companies to invest in workers.”
Looking ahead, the report cautions, “If those technologies deploy into the labor institutions of today, which were designed for the last century, we will see similar effects to recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labor market.” The task force argues instead for institutional innovations that complement technological change.
4) Improving the quality of jobs requires innovation in labor market institutions.
The task force contends the U.S. needs to modernize labor policies on several fronts, including restoring the federal minimum wage to a reasonable percentage of the national median wage and, crucially, indexing it to inflation.
The report also suggests upgrading unemployment insurance in several ways, including: using very recent earnings to determine eligibility or linking eligibility to hours worked, not earnings; making it easier to receive partial benefits in case of events like loss of a second job; and dropping the requirement that people need to seek full-time work to receive benefits, since so many people hold part-time positions.
The report also observes that U.S. collective bargaining law and processes are antiquated. The authors argue that workers need better protection of their current collective bargaining rights; new forms of workplace representation beyond traditional unions; and legal protections allowing groups to organize that include home-care workers, farmworkers, and independent contractors.
5) Fostering opportunity and economic mobility necessitates cultivating and refreshing worker skills.
Technological advancement may often be incremental, but changes happen often enough that workers’ skills and career paths can become obsolete. The report emphasizes that U.S. workers need more opportunities to add new skills — whether through the community college system, online education, company-based retraining, or other means.
The report calls for making ongoing skills development accessible, engaging, and cost-effective. This requires buttressing what already works, while advancing new tools: blended online and in-person offerings, machine-supervised learning, and augmented and virtual reality learning environments.
The greatest needs are among workers without four-year college degrees. “We need to focus on those who are between high school and the four-year degree,” Reynolds says. “There should be pathways for those people to increase their skill set and make it meaningful to the labor market. We really need a shift that makes this a high priority.”
6) Investing in innovation will drive new job creation, speed growth, and meet rising competitive challenges.
The rate of new-job creation over the last century is heavily driven by technological innovation, the report notes, with a considerable portion of that stemming from federal investment in R&D, which has helped produce many forms of computing and medical advances, among other things. As of 2015, the U.S. invested 2.7 percent of its GDP in R&D, compared to 2.9 percent in Germany and 2.1 percent in China. But the public share of that R&D investment has fallen from 40 percent in 1985 to 25 percent in 2015. The task force calls for a recommitment to this federal support.
“Innovation has a key role in job creation and growth,” Autor says.
Given the significance of innovation to job and wealth creation, the report calls for increased overall federal research funding; targeted assistance that helps small- and medium-sized businesses adopt technology; policies creating a wider geographical spread of innovation in the U.S.; and policies that enhance investment in workers, not just capital, including the elimination of accelerated capital depreciation claims, and an employer training tax credit that functions like the R&D tax credit.
Global Issues, U.S. Suggestions
In addition to Reynolds, Autor, and Mindell, MIT’s Task Force on the Work of the Future consisted of a group of 18 MIT professors representing all five Institute schools and the MIT Schwarzman College of Computing; a 22-person advisory board drawn from the ranks of industry leaders, former government officials, and academia; a 14-person research board of scholars; and over 20 graduate students. The task force also consulted with business executives, labor leaders, and community college leaders, among others. The final document includes case studies from specific firms and sectors as well, and the Task Force is publishing nearly two dozen research briefs that go into the primary research in more detail.
The task force observed global patterns at play in the way technology is adopted and diffused through the workplace, although its recommendations are focused on U.S. policy issues.
“While our report is very geared toward the U.S. in policy terms, it clearly is speaking to a lot of trends and issues that exist globally,” Reynolds said. “The message is not just for the U.S. Many of the challenges we outline are found in other countries too, albeit to lesser degrees. As we wrote in the report, ‘the central challenge ahead, indeed the work of the future, is to advance labor market opportunity to meet, complement, and shape technological innovations.’”
The task force intends to circulate ideas from the report among policymakers and politicians, corporate leaders and other business managers, and researchers, as well as anyone with an interest in the condition of work in the 21st century.
“I hope people are receptive,” Reynolds adds. “We have made forceful recommendations that tie together different policy areas — skills, job quality, and innovation. These issues are critical, particularly as we think about recovery and rebuilding in the age of COVID-19. I hope our message will be picked up by both the public sector and private sector leaders, because both of those are essential to forge the path forward.”
Source: https://medium.com/mit-initiative-on-the-digital-economy/six-keys-to-better-jobs-wider-prosperity-fab2a7c6ed79 (MIT IDE, 2020-11-23). Tags: Automation, Robots, Future Of Work, Productivity, AI
Data Visualization with Python HoloViz: Interactive Plots and Widgets in Jupyter
The Goal of the Visualization
With data representing a ground “truth” of binary classification, and predicted values (floats ranging from 0 to 1.0), I’m going to put together a dashboard in order to:
1. Generate Hard Predictions
2. Show a confusion matrix
3. Evaluate the Classifier through the AUC curve and a Precision-Recall Curve
The “Data”
Basically, I created an artificial set of binary categories (85% / 15%), threw random data at each bin, and then set the values between [0,1]. This should look a lot like the result of a binary classifier from scikit-learn.
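A minimal sketch of how such a stand-in dataset might be generated with NumPy (the 85/15 split and the [0, 1] range come from the text; the distributions, seed, and sample size are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Ground "truth": roughly 85% zeros, 15% ones
truth = (rng.random(n) < 0.15).astype(int)

# Fake classifier scores: positives drawn around 0.7, negatives around 0.3,
# then clipped into [0, 1]
scores = np.clip(
    np.where(truth == 1,
             rng.normal(0.7, 0.15, n),
             rng.normal(0.3, 0.15, n)),
    0.0, 1.0)
```

The resulting scores array plays the role of predict_proba-style output from a real scikit-learn binary classifier.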
Generating Hard Predictions
This is the key part of the interactive portion of this visualization. At the top of the dashboard, there will be a handy slider whose values will represent cut-off values (above that value, assume category 1, below, assume 0).
By default, I initialize the cutoff value to maximize the F1-Score. In theory, if we could have perfect precision and recall, this quantity should be 1.
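One way to find that default cutoff, sketched in plain NumPy (the notebook's actual implementation may differ): scan a grid of candidate cutoffs and keep the one with the highest F1 score.

```python
import numpy as np

def best_f1_cutoff(truth, scores, grid=None):
    """Return (cutoff, f1) for the cutoff in (0, 1) that maximizes F1."""
    grid = np.linspace(0.01, 0.99, 99) if grid is None else grid
    best_cut, best_f1 = 0.5, -1.0
    for c in grid:
        pred = (scores >= c).astype(int)
        tp = int(np.sum((pred == 1) & (truth == 1)))
        fp = int(np.sum((pred == 1) & (truth == 0)))
        fn = int(np.sum((pred == 0) & (truth == 1)))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        if f1 > best_f1:
            best_cut, best_f1 = float(c), f1
    return best_cut, best_f1

# Tiny worked example: any cutoff between the two score clusters gives F1 = 1
truth = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.8, 0.9])
cut, f1 = best_f1_cutoff(truth, scores)
```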
As the slider moves through the various cutoff values, the rest of the visualization should convey changes in the various other metrics. One of the best ways to capture this relationship is through a confusion matrix, or a 2x2 table showing the results of the prediction against the actual values.
The Confusion Matrix
To achieve this visual element, I’ll be using the hv.HeatMap plot, along with some tricks to make it behave. Getting customized axes and tick marks proved to be rather difficult, so instead, I’ll also use hv.Labels to make it explicitly clear what the confusion matrix is showing:
A Confusion Matrix! Hopefully the diagonal has the big values.
The tricky part here was disabling the axes and positioning things correctly. The heat map is really a 2x2, with ranges (0,1) on both x and y. So in order to place something in the top left quadrant, you need to refer to it with a tuple corresponding to (x,y): (0,1,VAL) where VAL is the actual value of the heat map or the corresponding label. I created two lists and used a map to sort things in the right order.
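The bookkeeping behind those (x, y, VAL) tuples can be sketched without HoloViews itself; which quadrant holds which count is a layout choice (the one below is mine for illustration, not necessarily the article's):

```python
import numpy as np

def confusion_cells(truth, pred):
    """Counts of a 2x2 confusion matrix as (x, y, count) tuples, where
    (x, y) addresses a quadrant, e.g. (0, 1, VAL) lands top-left.
    Layout used here: top row = actual 0, bottom row = actual 1."""
    tn = int(np.sum((truth == 0) & (pred == 0)))
    fp = int(np.sum((truth == 0) & (pred == 1)))
    fn = int(np.sum((truth == 1) & (pred == 0)))
    tp = int(np.sum((truth == 1) & (pred == 1)))
    return [(0, 1, tn), (1, 1, fp), (0, 0, fn), (1, 0, tp)]

cells = confusion_cells(np.array([0, 0, 0, 1, 1]),
                        np.array([0, 1, 0, 1, 1]))
print(cells)  # [(0, 1, 2), (1, 1, 1), (0, 0, 0), (1, 0, 2)]
```

A list like this can then be passed to both hv.HeatMap(cells) and hv.Labels(cells) and overlaid with the * operator.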
AUC and Precision-Recall Curves
The code for generating these curves is pretty simple as I’ll continue leaning on HoloViews:
# for the AUC, we need only plot our FP vs TP
# (assumes HoloViews has been imported as: import holoviews as hv)
hv.Curve(data[:, [3, 1]]).opts(
    xrotation=45,
    xlabel='False Positive',
    ylabel='True Positive')
AUC Curve… My fake data is a little too easy to “classify”
# for the PR curve, we need only plot recall vs precision
hv.Curve(data[:, [6, 5]]).opts(
    xlim=(0, 1),
    ylim=(0, 1))
Very Precision, Much Recall.
A Layout and Putting it Together
The individual components are pretty easy to slap together, but now I’ll bring it all into a single view. The slider will now iterate through the various cut-off values while the rest of the plots update.
In this way, we can see different F1 Scores, changes to the confusion matrix, and where on the PR curve each cutoff will land.
For this section, I’ll be introducing two complications. The big one, wrapping everything in a class, is a practice I use to keep things organized. With the second, I’ll be using a layout instead of running each widget in a separate cell. The layouts that come with panel are fairly simple and do a good job of letting you track widgets as you add them. For this dashboard, I’ll often refer to a widget by its position in the layout rather than directly:
Hey would you look at that, an interactive dashboard!
The class code shouldn’t be too horrible. There are basically five components:
1. (During Initialization) Defining all of the data I’ll be using. I try to pre-calculate everything I’ll need beforehand (or optimize its calculation) to smooth out the user experience.
2. (During Initialization) Initialize the widgets with default values or some initial plot.
3. (During Initialization) Assign watchers. Because everything is wrapped in a class, the actual callbacks can come later.
4. Define callbacks. Remember, these are the functions that are triggered by interacting with some widget and subsequently, modify other widgets or layouts.
5. Create plotting functions. These functions basically create a plot when called. During a callback, plots will get created for each change induced by the watchers.
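Stripped of the Panel-specific calls, the five components above amount to an observer pattern. A stdlib-only skeleton (all names are illustrative, not the notebook's):

```python
class Dashboard:
    """Skeleton of the five components: (1) pre-computed data,
    (2) widget defaults, (3) watchers, (4) callbacks, (5) plotting
    functions. Panel widgets are replaced by plain attributes here."""

    def __init__(self, truth, scores):
        # 1) define (and ideally pre-calculate) all data up front
        self.truth, self.scores = truth, scores
        # 2) initialize "widgets" with default values
        self.cutoff = 0.5
        self.views = {}
        # 3) assign watchers; the callbacks themselves come later
        self._watchers = [self._update_views]

    # 4) callback: triggered by interaction, updates other widgets
    def set_cutoff(self, value):
        self.cutoff = value
        for watcher in self._watchers:
            watcher()

    def _update_views(self):
        self.views['confusion'] = self.plot_confusion()

    # 5) plotting function: builds a fresh view each time it is called
    def plot_confusion(self):
        pred = [int(s >= self.cutoff) for s in self.scores]
        tp = sum(1 for p, t in zip(pred, self.truth) if p == 1 and t == 1)
        return {'tp': tp, 'cutoff': self.cutoff}

d = Dashboard(truth=[0, 1, 1], scores=[0.2, 0.6, 0.9])
d.set_cutoff(0.5)
```

In the real dashboard, the callback would be registered with something like slider.param.watch(...), and the plotting function would return a HoloViews object instead of a dict.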
The actual plotting code should be fairly straightforward. Personally, the hardest part of plotting is getting the display options to look right. HoloViews does a pretty good job laying out the options either in the docstrings or the help function, invoked with: hv.help(hv.Curve) # or any hv plot
You may notice under the ### layouts ### section, I actually use several layouts. You can use a pn.gridspec to make one super layout, but I find it’s simplest to think in rows and columns. The pn.layout.row and pn.layout.column also do a great job at centering and dealing with margins, saving a lot of headache. Using these also makes referring to or updating widgets in those layouts a lot easier.
Lastly, I want to point out that if you intend to work primarily in a notebook, you do not need to use classes or layouts. Widgets in different cells will still update as long as the linking code (callback/watcher) is working properly.
Again, follow along with the notebook at: https://github.com/ernestk-git/holoviz_extended/blob/master/Panel_Interactive.ipynb
Source: https://towardsdatascience.com/data-visualization-with-python-holoviz-plotting-4848e905f2c0 (Ernest Kim, 2019-09-05). Tags: Data Science, Data Visualization, Python, Dashboard, Bokeh
Demystifying Uncertainty Principle
I used MATLAB software to plot the curves shown in figures 1, 2 and 3
When both slits are open (fig. 2), it looks like the electron can hit the screen by going through either slit 1 or slit 2. One would expect that opening the second slit simply increases the number of electrons, and hence the probability of electrons striking the screen. But when the probability of hitting the screen with both slits open is plotted against the distance along the screen, some regions show a higher number of electrons (a higher probability than in the single-slit case), while other regions show a lower number. In some regions, the probability of electrons striking the screen turns out to be zero. This is quite intimidating. In Fig. 2 shown above, the points where the plot touches the horizontal axis are the points of zero probability.
Feynman’s Water Wave Slit Experiment | Source: The Feynman Lectures on Physics Vol. 1
Feynman’s Explanation
In fact, Feynman wrote, “The double-slit experiment contains all the mysteries of quantum mechanics.” The pattern obtained when both slits are open is similar to the plot obtained with waves (shown above) instead of particles. This shows that electrons interfere and behave as waves as well, which gives rise to the regions of zero probability (destructive interference) when both slits are open. Electrons that would have hit the screen in the zero-probability region with only one slit open do not strike that region when both slits are open. But here another problem arises: the electron gun was firing one electron at a time, so how does an electron know how many slits are open? “It seems as if, somewhere on their journey from source to screen, the particles (electrons) acquire information about both slits,” Hawking wrote in ‘Grand Design’.
Many possible explanations were given by many physicists to explain this quantum behavior. Feynman said that the electrons take every possible path connecting those two points i.e., from source to screen, an electron can take a straight line path or a path to Mars and comes back on Earth to strike the screen. It looks like a science-fiction movie but it isn’t.
According to him, when both slits are open, the path in which the electron goes through slit 1 interferes with the path in which it goes through slit 2, which causes the interference. I have a different perspective for comprehending this phenomenon: when both slits are open, the electron divides into two halves which go through each slit and interfere with each other; when only one slit is open, it won’t split into two (and interfering with itself doesn’t produce any effect). Now, the same question arises: how does an electron know about the slits? The answer is simple but intriguing. It’s because of the uncertainty in the position of the electron, which can acquire information about the slits before it reaches them.
Consider that the electron is about to enter slit 1; due to the uncertainty in its position, it could be anywhere in the slit’s vicinity. For example, the electron may be behind the slit, it may be crossing the slit with a certain velocity, or it may be about to enter the slit. So, before the electron reaches slit 1, it can get information about the slit and act accordingly.
Figure 3: When electrons are observed
The Need for Uncertainty Principle
You might be thinking that we should make an apparatus that can mark the slit whenever an electron goes through it; then we could say that the electron goes through either slit 1 or slit 2. Feynman pointed out such an apparatus. We can put lights near the slits so that, when the sound from the buzzer is heard, we see a flash near either slit 1 or slit 2. But when Feynman set up such an apparatus, it changed the outcome, and he got a different plot (Fig. 3), which shows no interference pattern. It seemed as though observing the electron changed its behavior.
Actually, a photon emitted from the light source hits the electron and gets scattered. This photon gives the electron an impulse that changes its path, so the electron no longer strikes the region it would have reached with no light source, and hence no interference pattern appears. Now, to decrease the impulse given to the electron by the photon, the momentum of the photon should be decreased. The momentum of the photon (p) is inversely proportional to its wavelength (λ): p ∝ 1/λ.
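A quick numerical check of that proportionality, using p = h/λ (the wavelength values are arbitrary examples, not from the article):

```python
# Photon momentum p = h / wavelength: doubling the wavelength halves the
# kick that a scattered photon delivers to the electron.
h = 6.62607015e-34  # Planck constant, J*s

def photon_momentum(wavelength_m):
    return h / wavelength_m

p_green = photon_momentum(500e-9)    # visible light, ~1.3e-27 kg*m/s
p_longer = photon_momentum(1000e-9)  # twice the wavelength, half the momentum
print(p_green / p_longer)
```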
When the wavelength becomes larger than the separation of the slits, the impulse decreases, and the plot obtained looks similar to the plot (Fig. 2) obtained when no light was used. But by decreasing the momentum, i.e., increasing the wavelength, the flash produced when light scatters off the electron becomes a big fuzzy blur, and it’s hard to distinguish which slit the flash is coming from.
Therefore, this experiment shows that if we try to decrease Δp (or increase λ) without disturbing the outcome, then Δx (the uncertainty in position) increases, and vice-versa. It means that if we succeed in locating the position of a microscopic particle (like an electron), we can’t tell how fast or slow the particle is going (uncertainty in velocity or momentum, Δp); and if we successfully measure its velocity, we are unable to find its position in space (uncertainty in position, Δx). We can’t know both things simultaneously with great certainty. So it’s hard to build an apparatus that can locate the electron without disturbing the result.
Fortunately, Quantum Mechanics’ laws don’t apply to macroscopic objects (like a soccer ball); otherwise, we’d see the ball moving in a zig-zag path when we kick it. But these laws successfully explain phenomena that other theories failed to explain, like the photoelectric effect. To save the existence of Quantum Mechanics and explain this absurdity, Heisenberg suggested there should be some limitation to make the laws consistent, and gave the Uncertainty Principle. Since Quantum Mechanics is such a powerful theory, used in many upcoming technologies like Quantum Computing, we need the uncertainty principle to comprehend the quantum behavior of atomic and sub-atomic particles.
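A back-of-the-envelope illustration of why the principle only matters at atomic scales, using Δx ≥ ħ/(2Δp); the masses, speeds, and the assumed 1% momentum uncertainty are my own example numbers:

```python
# Minimum position uncertainty from Dx >= hbar / (2 * Dp), comparing an
# electron with a soccer ball (assuming ~1% momentum uncertainty for both).
hbar = 1.054571817e-34  # reduced Planck constant, J*s

def min_dx(mass_kg, speed_m_s, relative_dp=0.01):
    dp = relative_dp * mass_kg * speed_m_s
    return hbar / (2 * dp)

electron_dx = min_dx(9.109e-31, 1e6)  # ~5.8e-9 m: comparable to atomic scales
ball_dx = min_dx(0.43, 20)            # ~6e-34 m: utterly unobservable
print(electron_dx, ball_dx)
```

The electron's fuzziness spans thousands of atomic diameters, while the ball's is some 25 orders of magnitude smaller than an atom, which is why nobody sees quantum soccer.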
We Are Not Sub-Atomic Particles!
Heisenberg’s Uncertainty Principle applies to every particle, but it can’t be observed on a macroscopic level because the de Broglie wavelength λ of everyday objects is vanishingly small, and hence the uncertainty in position is immeasurably small. In the TV scripted series Genius, Albert Einstein and Niels Bohr were talking to each other about the proof of Quantum Mechanics while walking. There was a moment when they were crossing the road and Einstein intentionally threw himself toward a car, but Bohr pulled him back before the car hit him. When Bohr asked him to be more careful next time, Einstein smiled hysterically and said, “Why should I? Why should either of us? According to you, if that automobile was a particle but we didn’t see it, it wouldn’t have been there at all. We would be perfectly safe.” In his defense, Bohr replied, “That principle is applied only to sub-atomic particles and automobiles are not sub-atomic particles.” In conclusion, Heisenberg’s Uncertainty Principle might save the perilous existence of Quantum Mechanics in the future, but it won’t save you from a car, so be careful!
The Uncertainty Principle | Genius
Source: https://medium.com/swlh/the-soul-of-quantum-mechanics-2dc215b390da (Dishant Varshney, 2020-06-25). Tags: Physics, Knowledge, Science, Future, Quantum Mechanics
The Singularity of Knowledge
Synergy
Beyond imagination, the human mind is also gifted with the ability to decipher patterns — to make sense out of nonsense.
Especially, we seem to love common denominators.
So much so that the biggest breakthroughs in science have revolved around unification as much as they have around discovery. Routinely, we’ve made paradigm-shattering discoveries by simply tying loose ends together, and we continue to operate under this ambition (it can be said that our next target in line is dark matter).
The greatest minds in history have understood this need for unification to be the ultimate prerogative. Some, like Nikola Tesla, had subsequently failed in their connecting of certain dots while others, like James Clerk Maxwell, had become famous for it.
The problem is that it’s not easy. Far from it.
As clever as we are, we’ve compartmentalized our systems of knowledge into such distinct and divided segments of study that it’s near impossible for one student to embark upon two opposing streams of belief, something that had been the norm only a hundred years ago.
The noösphere promises us a rekindling of this comprehensive approach to understanding our world. With its synergetic potential and its touch-point responsiveness, it holds the ability to take all that we’ve chopped up and bring it back together, even if for a moment, just to see if anything blends together comfortably, anything that we hadn’t had, or couldn’t have had, previously considered.
Because, and this is the main point to digest, the noösphere is able to do something that we ourselves have a hard time doing. It can discern and catalogue, cross-boundaries and synthesize streams of information. It can employ numerous algorithms that would take us an absurdly long time to match in terms of efficacy.
Sounds like A.I. doesn’t it?
It doesn’t necessarily have to be, though artificial intelligence will certainly be an integral part of its picture, as it currently is.
The noösphere is the environ. We are the data points.
Twitter lets political discourse unfold in real time. Instagram lets people share their experiences with a taste of immediacy. TikTok, well, it may serve useful in some respect one day.
Quora, Reddit, Wikipedia. All far from perfect, but we’re getting there.
Once we’re able to communicate faster and better and once we’re able to contextualize and idealize more comprehensively than ever before, we’ll see the connecting of a new array of dots that we hadn’t previously thought possible.
Knowledge will come together, under a real singularity, and harmonize itself to a point whereby we’ll have as comprehensive of an outlook as we can imagine.
Whatever this really means (and it may mean many very different things), it will be the milestone of our civilization.
Technologically, socially, environmentally, astronomically, biologically — information will reach the apex of interconnectedness; in so doing, we’ll have the most informed understanding that there can possibly be (correlating to our rate of new discoveries) at any given time.
Our segregation of various fields of study will no longer be isolating; our subjective experiences and insights will no longer be so subjective; our vision will no longer be obstructed by division.
The singularity of knowledge — it’s already happening, but it’s about to speed up to rates we won’t even realize until we’re able to look back on it.
Our only obligation, it seems, is to nurture this process rather than stand back and watch it unfold on its own under the presumption of a far-and-away singularity that we don’t have enough time or imaginative power to consider.
In essence, we are the singularity. | https://medium.com/predict/the-singularity-of-knowledge-5b60b04892a6 | ['Michael Woronko'] | 2020-12-02 15:20:37.627000+00:00 | ['Philosophy', 'Technology', 'Future', 'Knowledge', 'Science'] |
FrankMocap — New SOTA for Fast 3D Pose Estimation | FrankMocap is a new state-of-the-art neural network for 3D body and hand movement recognition that was recently developed and published by researchers at Facebook Artificial Intelligence Research (FAIR).
Egocentric Hand Motion Capture. Source: FAIR GitHub
The model accepts video footage from a single RGB camera as input and outputs predicted body and hand poses. FrankMocap's main goal is to make 3D pose estimation techniques easier to access. At inference time, FrankMocap processes predictions at 9.5 frames per second. At the same time, the system outperforms comparable approaches in prediction accuracy.
Tools for using Kubernetes | Tools for using Kubernetes
Tools for a team of any level to realize a container architecture.
Kubernetes, the container orchestration tool originally developed by Google, has become a de facto standard for Agile and DevOps teams. With the advance of ML, Kubernetes has become even more important for organizations.
Here, we have summed up a list of tools that can be used to realize a container architecture across different phases and maturity levels in enterprise organizations.
Kubectl
The most important area for DevOps is the command line. Kubectl is the command-line tool for Kubernetes that controls the Kubernetes cluster manager. Under kubectl, there are several subcommands for more precise cluster-management control, such as converting files between different API versions or executing container commands. It is also the basis of many other tools in the ecosystem.
kuttle — kubectl wrapper for sshuttle
kubectl sudo — run kubectl commands with the security privileges of another user
mkubectx — run a single command across all your selected Kubernetes contexts
kubectl-debug — debug a pod via a new container with troubleshooting tools pre-installed
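For day-to-day work, a handful of core kubectl commands cover most needs. The sampler below is illustrative and assumes kubectl is installed and pointed at a running cluster; names like "my-app" and "my-cluster" are placeholders.

```shell
# Inspect workloads in the current namespace
kubectl get pods

# List and switch between cluster contexts
kubectl config get-contexts
kubectl config use-context my-cluster

# Stream logs and run a command inside a container
kubectl logs -f deployment/my-app
kubectl exec -it my-pod -- /bin/sh

# Apply a manifest and watch the rollout
kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app
```

These same verbs (get, apply, logs, exec) are what most of the wrapper tools above build on.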
Minikube
The next important area is development. Minikube is a great Kubernetes tool for development and testing, and teams use it to get started and build POCs with Kubernetes. It runs a single-node Kubernetes cluster locally. Plenty of Kubernetes features are supported on Minikube, including DNS, NodePorts, ConfigMaps and Secrets, dashboards, multiple container runtimes (Docker, rkt, and CRI-O), CNI plugins, and ingress. A step-by-step guide makes installation quick and easy.
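A minimal local workflow with Minikube might look like the following (illustrative; assumes Minikube is installed with a supported driver):

```shell
# Start a single-node local cluster
minikube start

# Enable a commonly used addon, e.g. ingress
minikube addons enable ingress

# Open the Kubernetes dashboard in a browser
minikube dashboard

# kubectl is automatically pointed at the Minikube cluster
kubectl get nodes
```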
KubeDirector
Once the team has built extensively, it will need to scale out its clusters. KubeDirector brings enterprise-level capabilities to Kubernetes. It uses standard Kubernetes facilities of custom resources and API extensions to implement stateful scale-out application clusters. This approach enables transparent integration with user/resource management and existing clients and tools.
Prometheus
Every team needs operational metrics to measure operational efficiency and ROI. Prometheus can be leveraged to provide alerting and monitoring infrastructure for Kubernetes-native applications. Prometheus, a Cloud Native Computing Foundation project, is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
Prometheus provides the infrastructure but for metric analytics, dashboards and monitoring graphs, Grafana is used on top of Prometheus.
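One common way to stand up Prometheus and Grafana together on a cluster is the community Helm chart. The sketch below assumes Helm is installed and that the chart repository and chart name are unchanged; the release name "monitoring" is a placeholder:

```shell
# Add the community chart repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus plus Grafana, Alertmanager and exporters in one stack
helm install monitoring prometheus-community/kube-prometheus-stack

# Port-forward to reach the Prometheus UI locally
# (the exact service name depends on the release)
kubectl port-forward svc/<prometheus-service> 9090
```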
Skaffold
Once the team has spent time building a repeatable process for containerization with metrics and alerting, having CI/CD becomes the next phase of development. Skaffold is a command line tool that facilitates continuous development for Kubernetes applications. It helps the team to iterate on the application source code locally then deploy to local or remote Kubernetes clusters. Skaffold handles the workflow for building, pushing and deploying your application. It also provides building blocks and describes customization for a CI/CD pipeline.
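In practice, the Skaffold loop is driven by a few commands run against a skaffold.yaml in the project root (illustrative):

```shell
# Generate a starting skaffold.yaml for the project
skaffold init

# Inner loop: rebuild and redeploy on every source change
skaffold dev

# One-shot build-and-deploy, e.g. from a CI/CD pipeline
skaffold run
```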
CI/CD will require test automation as well. The test-infra repository contains tools and configuration files for the testing and automation needs of the Kubernetes project.
KubeFlow
Once the products gather huge amounts of data, data pipelines and data products can be built for these applications. Kubeflow is a cloud-native platform for machine learning based on Google’s internal machine learning pipelines.
Stories on ILLUMINATION-Curated — All Volumes | Archives of Collections — Volumes
Stories on ILLUMINATION-Curated — All Volumes
Easy access to curated and outstanding stories
Photo by Syed Hussaini on Unsplash
ILLUMINATION-Curated is a unique collection consisting of edited, high-quality stories. Our unique publication hosts outstanding and curated stories from experienced and accomplished writers of Medium.
We compile and distribute stories submitted to ILLUMINATION-Curated daily. Our top writers make a great effort to create outstanding stories, and we help them develop visibility for their high-quality content.
The purpose of this story is to keep volumes in a single link for easy access. As a reference material, we also provide a link to all editorial resources of ILLUMINATION-Curated in this post.
Our readers appreciate the distribution lists covering stories submitted to ILLUMINATION-Curated daily. The daily volumes make it easy to access the articles and discover our writers. Some readers are closely following specific writers that they found in these circulated lists.
This archive version can be a useful resource for researchers and those who are studying specific genres. We cover over 100 topics.
This story allows our new writers to explore stories of our experienced writers and connect with them quickly and meaningfully. ILLUMINATION-Curated strives for cross-pollination. Writers learn from each other by collaborating. Our writers do not compete; instead, they enhance and extend each other’s messages.
Customised Image courtesy of Dew Langrial
07 December 2020
06 December 2020
05 December 2020
04 December 2020
03 December 2020
02 December 2020
01 December 2020
30 November 2020
29 November 2020
28 November 2020
27 November 2020
26 November 2020
25 November 2020
24 November 2020
23 November 2020
22 November 2020
21 November 2020
20 November 2020
19 November 2020
18 November 2020
17 November 2020
16 November 2020
15 November 2020
14 November 2020
13 November 2020
12 November 2020
11 November 2020
10 November 2020
09 November 2020
08 November 2020
07 November 2020
06 November 2020
05 November 2020
04 November 2020
03 November 2020
02 November 2020
01 November 2020
30 October 2020
29 October 2020
28 October 2020
27 October 2020
26 October 2020
25 October 2020
24 October 2020
23 October 2020
22 October 2020
21 October 2020
20 October 2020
19 October 2020
18 October 2020
17 October 2020
16 October 2020
15 October 2020
14 October 2020
13 October 2020
12 October 2020
11 October 2020
10 October 2020
09 October 2020
08 October 2020
07 October 2020
06 October 2020
05 October 2020
04 October 2020
03 October 2020
02 October 2020
01 October 2020
30 September 2020
29 September 2020
28 September 2020
27 September 2020
26 September 2020
25 September 2020
24 September 2020
23 September 2020
22 September 2020
21 September 2020
20 September 2020
19 September 2020
Editorial Resources About ILLUMINATION Curated | https://medium.com/illumination-curated/stories-on-illumination-curated-627b289571b4 | [] | 2020-12-08 18:04:45.367000+00:00 | ['Business', 'Technology', 'Self Improvement', 'Science', 'Writing'] |
7 Painless Writing Tips That Make a Powerful Impact | 1. Give me that title*
“What does an algorithm know about creating intriguing titles?” I snorted when I first discovered the CoSchedule headline analyser somewhere in the ether. As an experiment, I ran my most and least successful blog titles through the search bar and was horrified to discover CoSchedule judged them accurately.
It’s actually a useful tool. OK, so it won’t get your puns and it’s not infallible, but it does analyse your title for metrics you’re probably not aware of:
Sentiment: decides if you’re being optimistic or a killjoy; titles with a positive sentiment typically do best.
Keywords: measures your choices against words most commonly searched for by inquisitive Googlers.
Length analysis: tells you off for being too wordy or too terse; there’s both an optimal character- and word-count for clicks.
Word balance: takes a cold, hard look at the readability and attractiveness of your title.
*Meghan Trainor, ‘Title’
When 2. becomes one*
I had no idea how often I was writing “very embarrassed” when I meant “mortified” or “anxious and upset” when I meant “traumatised”. And while I’m on the subject, “I won’t” will always feel more natural than “I will not”. Two words are rarely better than one: be concise, make an impact.
*Spice Girls, ‘2 Become 1’
3. I’d do anything for love (But I won’t do that)*
I’ve said it once and I’ll say it again (and again until the end of time): go through your writing and remove every excess “that”. You’ll find an alarming number and you’ll thank me.
*Meat Loaf, ‘I’d Do Anything for Love’
4. I came in like a wrecking ball*
Begin with ferocity. It takes 2 minutes of rewriting or swapping sentences around to make sure your first line is a dagger to the heart. Edit with ferocity. “I stood there and watched while the building burned down to the ground” becomes “I watched the building burn”. Fire. Literally.
*Miley Cyrus, ‘Wrecking Ball’
5. Let’s twist Again*
I’ve read advice to “never use common phrases or cliches”. It’s a (partial) lie; you should absolutely use both, just remember to give them your own twist — however small. “through thick and extremely thin” will always be more interesting than “through thick and thin”.
*Chubby Checker, ‘Let’s Twist Again’
6. Why. do. we. crucify ourselves?*
You never need to apologise for what you’re saying. This isn’t because you’re always right (you’re not) but because qualifiers make your work less impactful and we should believe what we write. You don’t need to say “I believe, in my humble opinion, xyz”: we know it’s in your opinion because you’re writing it — and we’re under no illusions you’re humble.
*Tori Amos, ‘Crucify’
7. Stop! In the name of love*
This is my favourite all-time tip because it means doing absolutely nothing. After you finish writing, stop. Breathe. Step away from the laptop. Drink tea. Cuddle the cat.
It doesn’t matter how long you’ve been writing, no one can get away without editing after a break to clear their head. I should know: I once published an article that began “I once lived in a house in a house.” Sounds magical but is misleading.
*The Supremes, ‘Stop! In the Name of Love’
How to Cultivate Patience, the Ancient Virtue We All Need Right Now | How to Cultivate Patience, the Ancient Virtue We All Need Right Now
The way we live now discourages patience. It’s time to reprioritize this lost virtue.
Two days before the Associated Press declared him the winner of the 2020 presidential election, Joe Biden tried to settle his nation’s rattled nerves. “[Democracy] sometimes requires a little patience,” he remarked. “Stay calm . . . the process is working.”
For many, it wasn’t working fast enough. Every hour that passed seemed to turn up the tension and frustration of the U.S. electorate. Protests and counterprotests broke out. After just a few days of waiting, America seemed poised to lose its collective shit. Contrast this state of affairs with the 2000 contest between George W. Bush and Al Gore, which remained in limbo for five weeks following Election Day. If you can’t imagine today’s America putting up with that kind of delay, experts can’t either.
“Patience is a character strength that our society has definitely neglected,” says Sarah Schnitker, PhD, an associate professor of psychology at Baylor University. “Over the past 20 years in particular, as our technology has advanced at a very fast pace, I think it’s changed our expectations about when and how much we should have to wait as well as our general ideas about suffering.”
Much of Schnitker’s research has centered on patience. She says that many of history’s great philosophers, from Aristotle to Thomas Aquinas, regarded patience as one of humanity’s noblest attributes. Likewise, most of the major Eastern and Western religions — from Judaism and Christianity to Islam and Buddhism — describe patience as a fundamental virtue to be admired and cultivated.
“Patience is a character strength that our society has definitely neglected.”
But since the Industrial Revolution ushered in a new era of speed, production, and consumption, patience has lost its appeal, Schnitker says. “Our culture is all about quick wins and solving problems fast,” she adds. “If you’re patient, there’s this misconception that you’re kind of a doormat — that patience is not something we think of as winners having.”
There are economic, political, and environmental reasons to believe that America’s disdain for patience will eventually cost it (and the world) dearly. But setting aside those concerns, patience also seems to be really important when it comes to mental health and well-being, Schnitker says. “It’s positively associated with life satisfaction, with hope, with self-esteem, and with regulated behavior, and it’s negatively associated with loneliness, depression, and anxiety,” she says.
Patience can alleviate the pressure to advance and achieve that many of us feel so urgently, and patience may replace the shallow gratifications that many of us now demand — and often come to depend on — from the stuff we buy, watch, and otherwise consume. “I think that this year — both with the pandemic and with the political situation — has shown us that we need to develop more patience,” Schnitker says.
Fortunately, there are some evidence-backed ways to do that.
Understanding what patience looks like
Situations that demand patience tend to come in three types.
“There’s daily hassle patience,” Schnitker says. This type includes waiting in line at the store, waiting for a web browser to load, and other quotidian sources of delay or frustration. The next type she terms “hardship patience,” which refers to open-ended situations like living with an illness or enduring other sources of persistent concern or uncertainty. Finally, there’s “interpersonal patience,” which is the type a person requires when dealing with an obstreperous child, an obnoxious coworker, or some other difficult person.
Speaking with Elemental the day before Biden was announced as the winner, Schnitker said, “The current moment is interesting because the election really involves all three types of patience. It’s waiting for an outcome, and maybe it’s dealing with relatives who don’t agree with you, and it’s also dealing with thoughts about long-term polarization and the need to find more unity.”
She says that “patience” (like the word “patient”) is derived from the Latin word for suffering. And people who possess patience are those who are able to endure something unpleasant without letting it influence their emotions or behavior.
Spend some time thinking about that definition, and you begin to realize how central patience (or its opposite) is to anxiety, depression, anger, and other negative emotional states as well as to compulsive behavior. All of these ills are tightly bound up with an inability to tolerate a person or a situation. It could even be said that the current moment’s fixation with happiness — with finding more of it and making it last — is driven in part by impatience; we don’t want to have to wait long for our next moment of joy or pleasure or bliss.
Why are we all so impatient these days? Again, Schnitker says that many elements of contemporary life prioritize speed and ease over patience and endurance. “We are all about instant gratification, and I think the advertising and technology industries push us in this direction,” she says. Whatever it is that a person wants — food, entertainment, information, stuff, sex, money, enlightenment — the fastest route to each is continually pitched to us as the best route despite evidence to the contrary.
Haste and urgency, for example, are associated with stress and arousal. “When we speed everything up — when we have this feeling of go go go — that’s all sympathetic nervous activity,” says Peter Payne, a researcher at Dartmouth College who studies meditative movement and the health benefits of practices such as qigong and tai chi. While sympathetic nervous system activity is fine in moderation, chronic overactivity of this system is associated with anxiety, depression, headaches, poor sleep, and diseases of the heart, gut, and immune system. Rushing all the time seems to promote this kind of overactivity and its many detriments.
“It’s positively associated with life satisfaction, with hope, with self-esteem, and with regulated behavior, and it’s negatively associated with loneliness, depression, and anxiety.”
Impatience may also rob people of experiences that give life meaning. Researchers have found that effort seems to be an essential ingredient in satisfaction, contentment, and other positive emotions. “A lot of happiness lies in the doing, not in the having done,” says Barbara Fredrickson, a distinguished professor of psychology at the University of North Carolina. She says that the expenditure of effort can contribute to a sense of purpose, meaning, and interconnectedness — all of which are sources of self-esteem and other positive states.
The message here is not that everything fast or easy is bad. Rather, it’s that fast and easy are not always optimal. When people lose the ability to be patient, they may also be losing access to the things that make life most satisfying and enjoyable while also raising their risks for all the health problems associated with stress.
How to cultivate patience
The more people exercise their patience muscles, the stronger those muscles become. “There are a lot of ways to practice waiting in life, and doing this can really help us build up our patience,” Schnitker says.
For example, whenever you encounter a wait — whether it’s in line at the store or sitting in traffic — those are good opportunities to practice patience. “Not using that time to reach for our phones and check our social or news feeds — I think can really help,” she says. To her point, research from Temple University has found that frequent smartphone use is associated with both heightened impatience and impulsivity.
During periods of waiting or frustration, Schnitker says it can be helpful to practice a technique known as cognitive reappraisal or “reframing,” which basically means looking at something as an opportunity rather than as a hardship. “When people are able to reframe what could be considered a threat or a source of suffering as a useful challenge, we know that helps,” she says. “So if you tell yourself that patience is good for my mental health and I need to develop it, then you can reframe those periods of waiting as great opportunities to help yourself.”
She says that reframing is also helpful when dealing with people who get on your nerves or during situations that entail extended periods of waiting. “So with this election, I could tell myself that this waiting should restore some of my faith in the system because it’s showing me that we care about our democracy and making sure everyone’s vote counts,” she says. In interpersonal contexts, reframing could entail changing your thoughts from “this person is so annoying” to “being around this person is an opportunity for me to practice my patience.” It could also entail making an effort to see the situation from another person’s point of view.
Finally, Schnitker says that mindfulness training and similar forms of meditation are helpful because they pump up your awareness of your own thoughts and feelings. It’s this awareness that allows you to make helpful tweaks — to your habits and also to your appraisals of people and situations — that will bolster your patience.
“Right now, we don’t have a lot of cultural narratives that help us make sense of waiting or suffering,” she says. Rediscovering and reprioritizing patience may be one way to create more-helpful narratives — and to push back against so much that feels wrong with the world today. | https://elemental.medium.com/how-to-cultivate-patience-the-ancient-virtue-we-all-need-right-now-afd144abb507 | ['Markham Heid'] | 2020-11-12 06:32:23.877000+00:00 | ['The Nuance', 'Patience', 'Lifestyle', 'Mental Health', 'Health'] |
How to Survive Life in the NICU | Life in the NICU can be stressful for baby and family
When I was pregnant with my son and living on the antenatal unit (for moms with high-risk pregnancies), I was given a tour of the NICU. I went on this tour out of curiosity, knowing my son would not end up there as he was being born via c-section at 37 weeks. I thought the NICU (neonatal intensive care unit) was only for preemies. Not my child.
Boy was I wrong.
Shortly after my son was born he decided to stop breathing. He was quickly whisked away to the NICU to be assessed by a team of highly trained doctors and nurses.
Still recovering from my general anaesthetic, I woke up a few hours later to find out my son was hooked up to the CPAP — a Darth Vader type respiratory device, to help my son breathe. When he was six hours old, I was wheeled on a stretcher to the side of his incubator and was only able to see his toes. No touching allowed.
The next morning I began my 14 day vigil, being glued to the side of his incubator, carefully watching the monitors beeping away his vital signs.
While the early days were a complete daze, I eventually found my footing and figured out the routine of the NICU. It is this routine, and lessons learned, I want to share with you.
1. It’s okay to be overwhelmed. Everything about the NICU is intimidating. From the masked medical staff, locked doors, hushed voices and incubators holding the smallest babies you’ve ever seen, it is a lot to take in. Even on your last day, you can still feel as though you are in another world. And that’s because you are. The NICU is a unique area in the hospital, caring for the tiniest patients.
2. Ask questions. Repeatedly. You are sleep deprived, emotional and have just been through childbirth. There is no way you can possibly remember all the information being thrown at you. So don’t be afraid to ask a lot of questions and get the medical staff to write down important information for you. Some key questions to ask:
What time does the doctor/nurse practitioner do their daily rounds? Is it the same time every day or does it vary?
Are you able to talk to this person during their rounds and ask questions specific to the care of your baby?
What is the nurse/baby ratio for the nurse caring for your baby? The number of babies in the nurse’s care will depend on the medical conditions of the babies (very ill babies have a nurse fully dedicated to them).
When is shift change for the nurses? This is important to know so you can check in with the new nurse at the beginning of his/her shift and learn if there are any updates in your child’s care plan for the next 12 hours.
Where are the breast pump supplies kept? This includes the breast pump, bottles, sterilization equipment and other supplies.
Where can you pump? At baby’s bedside (which is ideal as seeing your baby will increase your milk supply)? A quiet room?
What should you do with your milk after it’s pumped? Usually there is a consistent place to put your milk, with stickers with your baby’s information.
3. Talk to other parents. I know you just want to curl up in a ball right now and be left alone. But trust me. Talking to other parents helps. A lot. I learned so much about life in the NICU by talking to other moms. This included where to store any food people brought me, where to get free coffee (some larger hospitals have stocked kitchens for parents), what questions to ask the nurse and, most importantly, someone to talk to who knew what I was going through. This is a huge benefit as your friends and family likely don’t understand why you won’t leave your baby’s side.
4. Get outside. Even if it’s just for 15 minutes. You need fresh air, a short walk, and a break from the NICU. This is so important for your mental health. These short breaks will also give you the energy to continue. I tried to take a break every four hours.
5. Sleep. Preferably in a real bed. This was a big mistake I made. I thought I needed to be with my son around the clock. It was exhausting. Even though we had a room at the Ronald McDonald House, I never slept more than four hours at a time, afraid if I was away from my son something would happen. Don’t be like me. Try to get at least six hours uninterrupted sleep each night. And yes, you can give the medical staff your cell phone number and tell them to call you if there is a problem.
6. Breathe. Just breathe. Take it day by day. Don’t start thinking too far in advance. Just be in the moment and know you are doing the best you can for your child.
To learn more about patient advocacy visit my website www.learnpatientadvocacy.com. | https://cynthialockrey.medium.com/how-to-survive-life-in-the-nicu-636fbecde267 | ['Cynthia Lockrey'] | 2018-07-17 17:59:11.028000+00:00 | ['Pregnancy', 'Nicu', 'Parenting', 'Health', 'Mental Health'] |
A Self-Editing Checklist From an Editor-in-Chief | In newsrooms, editors often talk about the text that writers file using a hygiene metaphor: “Clean” copy is grammatically correct, solidly written, and generally needs only light editing to be publishable. If you’re working with an editor, filing clean copy will make them love you — and want to work with you more. If you’re publishing directly, it’s even more important that your copy is spotless!
The best way to make sure you file (or self-publish) the crispest, cleanest copy possible is to create your own process of self-editing — catching errors, fact-checking, and smoothing the language.
My favorite way to self-edit is to examine my article with a series of different “lenses.” Think of the machine an optometrist uses to check your vision: She’ll swap in different lenses for you to look through, one by one. Similarly, you can look at your writing with “lens” after “lens.”
You might first read it with a data-accuracy lens, for example, and then reread it with a lens on how the quotes flow. If you know you have a tendency to overuse the passive voice, read it over with a passive voice lens, making sentences more active as you go through. (Personally, I always make sure to read with a wordiness lens — deleting needless adjectives and clauses to make every sentence simpler and more succinct.)
The Self-editing checklist
Think of these questions each as a “lens” to look at the story through. Not every lens will apply to every story. And make sure to create lenses that account for your own writing habits and tics.
Did I tell the right story?
What is my story focus/theory/angle?
Is it clearly and succinctly stated at the top of the story?
So what? Why should readers care about this story?
Have I told it the right way?
Is the story clear? Compelling? Engaging?
Is this the best “lede” for the story? Why? (Your lede, the first lines of a story, should essentially tell the story, either in anecdotal or straight form.)
Does the “nut graf” (the paragraph explaining what readers are in store for) clearly and directly lay out the story’s focus/theory/angle, tell the who/what/where/when/why/how, and show the reader why they should care about it?
Do the quotes help tell the story? Are they vivid and colorful, and do they express emotions as necessary? Do they tell dull information that would be better paraphrased? Are they presented well, with clear transitions and setups?
Does every scene, detail, and anecdote function to help the reader understand the story? (No matter how fascinating the scene is or how eloquent the quote, if the answer is no, cut it.)
Would more details or visual descriptions help bring the story to life?
Does the piece provide adequate context? Have you included history, previous news, supporting statistics, data, explanations?
Are expert voices included where necessary, and are their comments useful in telling the story?
Is the last line or “kicker” structured for maximum impact? Does it relate back to the lede, or the story focus, or does it look forward?
Is everything true, and are all the necessary perspectives included?
Develop your own system for “skeptical editing”: Double-check all names, facts, dates, spellings, quotes.
Are numbers, statistics, and data clear and accurate? Is additional data needed to substantiate the story?
Weed out assumptions and vague statements.
Make sure terms are explained, acronyms spelled out on first use.
Check the background of every source or person cited in the story and for each ask: Are they credible? What is their agenda? What biases do they bring?
Whose perspective is missing from the story? How might you include that missing perspective?
What are the factual holes in the story? Instead of “writing around” them, do the reporting or research to fill them.
Are the mechanics correct?
Check spelling, punctuation, and style.
Check that the story is the length it needs to be.
Check for passive voice, gerunds, wordiness, clichés, or whatever your grammatical crutches are.
Get rid of fancy words when simple ones will do.
Does the story do enough “hand-holding” for the reader? Is its logic easy to follow? Are the transitions clear and does the story flow sensibly?
Does it sound okay? (Or more to the point, does the story sound awesome?)
Read your copy aloud to see if the story flows. Listen to the language.
Make sure there’s a mix of shorter and longer sentences and that each sentence is clear and straightforward.
Is the story due now? | https://medium.com/creators-hub/a-self-editing-checklist-from-an-editor-in-cheif-e55abb475e61 | ['Indrani Sen'] | 2020-11-02 10:42:36.997000+00:00 | ['Writing', 'Creativity', 'Editing', 'Tips For Writers', 'Resources'] |
Writing is My Bridge | Writing is My Bridge
How I use writing to balance my mind.
Photo by @oplattner on Unsplash
When I was in high school, I wanted to be a writer.
I didn’t know why. I lost the battle of choosing college majors with my parents because I just couldn’t explain what I intuitively knew:
Writing was my salvation.
We often ask people we meet: “Are you a creative person?”, “Are you an analytical person?” We don’t realize that so many of us are both.
I grew up in the Asian culture of overachievement in science and mathematics. That means the analytical side of me flourished while the creative side was suppressed. Creativity is frowned upon by strict Asian parents as the gateway to disobedience.
It wasn’t until I quit my Wall Street technology job that I realized what was lacking in my life.
Up until then, my life had been so dedicated to analytical pursuits that I forgot to take care of my emotional and creative needs.
The cost of that was a couple of years of anxiety and depression. It took years of reevaluating myself, my connections and my life to really unleash the emotional and the creative side of myself again.
The motivation was the birth of my son. Following my son’s amazing development from infancy to toddlerhood allowed me to peek into my own childhood.
It reminded me of the humanity, the creativity and the sensitive self that existed in me from the beginning.
For once, to be a better mother to my son, I had to take a leap of faith. I had to come back completely to the essence of myself. I had to make my own life fulfilling by balancing out all my needs: analytical needs, emotional needs, and creative needs.
Making a career change is never easy. For me, the trigger was the deadening feeling of working on a piece of data analysis code and not loving it anymore. It was hard to accept that things were simply not enough. I felt guilty. I had worked very hard at what I “supposedly” did best. I loved it for many years. I was given great opportunities. But I just wasn’t excited about it all anymore.
I felt like the wife stuck in a dead marriage with the guy who all the neighborhood ladies wanted as a husband.
The one thing about motherhood is this: it’s fast, it’s furious, and it waits for no one. I had neither the time, the energy, nor the strength to fight with myself about the decisions I made.
I just did it all.
I changed loads upon loads of diapers. I reveled in my “free time” as my infant son stared up at me from his baby blanket. I laminated printouts for his activities. I read parenting books. I set up playdates. I learned to discipline him.
It felt like a huge tidal wave. I surfed it without having any knowledge of how to do it from the start.
Then, one night, the truth hit me like a ton of bricks.
What would my ideal job be now that I don’t have a career safety net?
I couldn’t answer the question. So, I started to write. I wrote about parenting issues. I journaled. I researched. Then, I wrote some more.
Pretty soon, I started a blog. Then, I learned all about SEO, WordPress, Pinterest, Instagram, Twitter, and Facebook. I learned about taking engaging photographs. I learned to create memes for my audience. I learned to skip Photoshop and go directly to Canva. I learned to check my grammar.
I’m still learning every day. It’s exhilarating to get years of materials out. Through the process, I slowly opened up my creative funnel.
The thing about the creative funnel is that once you turn it on, it’s hard to turn it off.
The other day, I came across a piece of data visualization while researching freelance writing jobs. It was mesmerizing to me. I wanted to critique the analysis and get my hands on that dataset.
There you go, my friends! For me, the only way back to being a balanced individual is to write my way back to my emotional, creative and analytical self.
If writing isn’t a bridge, I don’t know what is.
Writing ties together my left brain and my right brain. — Picture from Pexels.com
It’s a bridge that connects my left brain and my right brain. It’s a bridge that opens up the possibility of having a career that is not limited to one profession. It leads me to my new path of pursuing many different projects across a variety of fields.
Writing brings everything together. — original
Do you want to hear about my latest projects? Ask me after I get through analyzing my first dataset in three years. | https://medium.com/jun-wu-blog/writing-is-my-bridge-d37dbcf9cb1d | ['Jun Wu'] | 2019-11-28 00:14:28.123000+00:00 | ['Creativity', 'Writing Tips', 'Writing', 'Blogging', 'Writing On Medium'] |
3 Books to Improve Your Coding Skills | Code Complete by Steve McConnell
When I finished this book, I was surprised that nobody had explained such basic but crucial things to me before. You might be asking, “What are they?” Let me give you a few examples.
For instance, declare and initialize a variable only in the place where it is going to be used. There is no need to declare a variable early and only assign it somewhere later in the code. The variable should have the smallest visible scope possible. The benefit of this is that code readability improves a lot, and your teammates will be thankful for it.
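A quick sketch of that point (the function and values are mine, invented for illustration, not an example from the book):

```python
# Harder to follow: the variable is declared far from where it matters
def total_with_tax_bad(prices):
    tax_rate = 0.07                    # declared here...
    subtotal = sum(prices)
    # ...imagine dozens of unrelated lines in between...
    return subtotal * (1 + tax_rate)   # ...but only used here


# Easier to follow: declared right where it is used, smallest possible scope
def total_with_tax(prices):
    subtotal = sum(prices)
    tax_rate = 0.07
    return subtotal * (1 + tax_rate)
```

Both functions compute the same result; the second just keeps the declaration next to its single use.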
Another example is how to use if conditions efficiently. They are simple, but they can reduce code readability dramatically. Check the following example:
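The embedded code sample is missing from this copy of the article, but a hypothetical Python sketch of the kind of deeply nested conditions being criticized (the function and field names are mine, not from the book) might look like this:

```python
def process_order(order):
    # Each check pushes the real work one level further to the right
    if order is not None:
        if order.get("items"):
            if order.get("paid"):
                if order.get("address"):
                    return "shipped"
                else:
                    return "missing address"
            else:
                return "awaiting payment"
        else:
            return "empty order"
    else:
        return "no order"
```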
The example above has too many nested if conditions, making it hard to follow and test the logic. While learning programming, we focus on how the if condition works and when to use it. But nobody tells us how it could be misused. The book gives some advice for this case: Avoid too many nested blocks, consider splitting the code into functions, and check if the switch..case statement is suitable (if the language supports it).
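Following the book’s advice, deeply nested checks like the ones described above can be flattened with early returns (guard clauses). A hypothetical sketch — the names are mine, not the book’s:

```python
def process_order(order):
    # Guard clauses: reject the failure cases first, then do the real work
    if order is None:
        return "no order"
    if not order.get("items"):
        return "empty order"
    if not order.get("paid"):
        return "awaiting payment"
    if not order.get("address"):
        return "missing address"
    return "shipped"
```

Same behavior, but every condition sits at the same indentation level and can be read (and tested) on its own.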
Those and many other examples are covered in this book. | https://medium.com/better-programming/3-books-to-improve-your-coding-skills-afa67621192 | ['Dmytro Khmelenko'] | 2020-10-01 17:35:54.293000+00:00 | ['Professional Growth', 'Software Development', 'Books', 'Software Engineering', 'Programming'] |
What Miley Cyrus Did To Win Over a Booing Crowd | To be honest, I was just as offended as any Chris Cornell fan when I saw that Miley Cyrus was taking on what may be the most technically difficult song he sang — Say Hello To Heaven. Probably like most people in the audience, who even had a clue who Miley is, I had already made up my mind about her covering a classic Temple of the Dog song. It was going to suck.
Thinking that she’d butcher the song the same way she did Nirvana’s “Smells Like Teen Spirit” years earlier, I clicked on the YouTube video that captured her performance at a tribute show in honor of the late Chris Cornell.
Judging from the other related videos of performances that seem to take place at the same show, I can only assume that she was the only woman and the one that reduced the average age of the performers by at least 20 years. It’s safe to say that she stood out.
You know when you look at a list and it says “What doesn’t fit?”. I know I was the one that didn’t fit. — Miley Cyrus
Most of the remaining giants of the grunge era were there to pay their respect and perform with members of Temple of the Dog, Soundgarden, and Audioslave. I could not for the life of me figure out why the hell a country-pop star would attempt (or even want to) join that gang.
A few seconds into the video, I realized just how far out of her comfort zone she was when she entered the stage. Hearing her being called up felt like a bit of a joke, and that’s probably why she was met with confused chuckles, impulsive booing, and half-ass claps.
She walked up firmly while awkwardly mumbling something about blowing the surprise due to her mic going live before she entered the stage. Miley inquired “Shall we do this?” as if she wanted to get this over with as soon as possible. Maybe she just wanted to start singing asap to counteract the initial response she got. She knew that the audience wasn’t going to welcome her, and she had already accepted it in spite of the discomfort.
The musicians synced and started playing the song. I had a hard time recognizing her with an outfit that covered more than it revealed. Ironically, that, her hairstyle and her awkwardness gave her a grungy presence. She focused on the band and the music purposefully and looked away from the prejudiced looks of the audience.
Once she opened her mouth to sing, everybody just shut up. But jaws dropped. Her singing was impeccable in spite of not hitting those extra high notes that only Chris could pull off in the 90s.
A lot of my covers I kind of customize for myself, and that one was just in its original form and I really did all his little ad-libs and runs, and it was just a really intense experience as a performer. — Miley Cyrus
It was also apparent that she had broadened her lower vocal register, enabling her to effortlessly hit unusual depths, unlike many mainstream female vocalists. Compared to the other vocalists that night, she was technically the best and damn close to the singing abilities of Chris.
Not only did she look different, but she moved and sounded different, too. Gone was the crazy choreography, which had been replaced with intuitive movements to the music that would power her vocals. She had reinvented herself, and it couldn’t get more grungy than that.
The rawness and the freshness of her performance were far more persuasive and it made the imperfection perfect. She knew what the audience would value the most.
That was one of the moments where you realize it’s not about you. It’s about that audience, and when you’re in a room with what is unifying people is their love for Chris Cornell and his talent, it really changed the way that I was performing. — Miley Cyrus
In an interview where Howard Stern addresses this particular performance and asks Miley about the experience, she stated that the performance sounded nothing like it did at soundcheck because she was so moved by the amount of love that was there for Chris Cornell.
It really didn’t come from me, that performance. — Miley Cyrus
In the end, Miley struck an emotional chord with everyone who saw her that night. She showed that she could sing his pain in a way that would make the fans relive his past performances one last time.
And that’s what tributes are for. | https://medium.com/illumination/what-miley-cyrus-did-to-win-over-a-booing-crowd-28e2a702954b | ['Sara Kiani'] | 2020-12-28 14:39:37.294000+00:00 | ['Music', 'Leadership', 'Change', 'Marketing', 'Culture'] |
Freelance Writer? How to Know When It’s Time to Fire a Client | 1.) If a freelance writing client tries to tell you how to run your business, they might not be a good fit for your writing services. Some clients think you should adjust your rates based on their needs. This isn’t a smart way to run your writing business.
Your rates are what they are based upon your expertise, writing talent, and demand for your services. If a client doesn’t want to pay your rates or thinks you should adjust your business’ modus operandi to suit the state of their business, it might be time to cut them loose as a client.
2.) If one of your freelance writing clients thinks it’s okay to talk down to you or berate you in discussions about their content creation needs/requirements, this is a clear sign it’s time to fire them as a client. You’re running a business. How you run your business isn’t up for discussion or debate. If a client doesn’t respect you enough to speak to you as a fellow businessperson, they’re not worthy of having access to your writing services.
There are plenty of other business owners around the world who would be thrilled to have an experienced freelance writer at their disposal. Terminate your business relationship with a client who belittles you and focus your efforts on replacing them with more profitable writing clients.
3.) If a freelance writing client thinks you should drop all your other work just to attend to their last-minute request for content, and then wants you to offer a reduced rate because they’re a regular client, you might want to think twice about whether they’re adding to your business’ bottom line. You started your writing business to turn a profit, not to be held hostage to the whims of penny-pinching clients who think you’re at their beck and call.
Focus on connecting with clients who appreciate your talents, are willing to pay top rates to have access to your services, and who understand they need to pay extra if they want a last-minute piece of content. You might even want to consider offering access to your services on a retainer business model to ensure your regular clients have easy access to your services, yet still allowing you to turn a profit. | https://medium.com/publishous/freelance-writer-how-to-know-when-its-time-to-fire-a-client-738ad9d5ec88 | ['George J. Ziogas'] | 2020-08-31 09:54:12.940000+00:00 | ['Entrepreneurship', 'Business', 'Writing', 'Work', 'Freelancing'] |
State of Managed Kubernetes 2020 | EKS vs. AKS vs. GKE from a Developer’s Perspective
In February of 2019, just a few months after AWS announced the GA release of EKS to join Azure’s AKS and GCP’s GKE, I wrote up a comparison of these services as part of the first edition of the open-source Kubernetes book. Since then, Kubernetes adoption has exploded, and the managed Kubernetes offerings from the major cloud providers have become standardized. According to Cloud Native Computing Foundation (CNCF)’s most recent survey released in March 2020, Kubernetes usage in production jumped from 58% to 78%, with managed Kubernetes services from AWS and GCP leading the pack.
Container Management Usage from CNCF 2019 Survey
From my personal experience working with Kubernetes, the most notable difference from 2019 to now has been the feature parity across the clouds. The huge lead that GKE once enjoyed has been largely reduced, and in some cases, surpassed by other providers. Since there are plenty of resources comparing each service offerings and price differences (e.g. learnk8s.io, stackrox.com, parkmycloud.com), I’m going to focus on personal experiences using these services in development and production as a developer in this article.
Amazon EKS
Considering AWS’s dominance on the cloud, it’s not surprising to see huge usage numbers for EKS and kops. The obvious advantage for existing AWS customers is to move workloads from EC2 or ECS to EKS with minimal modification to other services. However, in terms of managed Kubernetes features, I generally found EKS to lag GKE and AKS. There is a public roadmap on Github for all AWS container services (ECS, ECR, Fargate, and EKS), but the general impression I get from AWS is a push for more serverless offerings (e.g. Lambda, Fargate) more so than container usage.
That isn’t to say that support from Amazon hasn’t been amazing, nor do I think EKS is not Amazon’s priority. In fact, EKS provides a financially backed SLA to encourage enterprise usage (Update Jun 15, 2020: as of 5/19/20, AKS also provides a financially backed SLA). With EKS making RBAC and Pod Security Policies mandatory, it beats out GKE and AKS in terms of base-level security. Finally, now that GKE is also charging $0.10/hour for master node management, the pricing differences between the two clouds are even more negligible with reserved instances and other enterprise agreements in place.
Like many other AWS services, EKS provides a large degree of flexibility in terms of configuring your cluster. On the other hand, this flexibility also means the management burden falls on the developer. For example, EKS provides support for Calico CNI for network policies but requires the users to install and upgrade them manually. Kubernetes logs can be exported to CloudWatch, but it’s off by default and leaves it up to the developer to deploy a logging agent to collect application logs. Finally, upgrades are also user-initiated with the responsibility of updating master service components (e.g. CoreDNS, kube-proxy, etc) falling on the developer as well.
Deploying Worker Nodes is a Separate Step than Provisioning a Cluster — Image from AWS
The most frustrating part with EKS was the difficulty in creating a cluster for experimentation. In production, most of the concerns above are solved with Terraform or CloudFormation. But when I wanted to simply create a small cluster to try out new things, using the CLI or the GUI often took a while to provision, only to realize that I missed a setting or IAM roles later in the process.
I found eksctl to be the most reliable method of creating a production-ready EKS cluster until we perfected the Terraform configs. The eksworkshop website also provides excellent guides for common cluster setup operations such as deploying the Kubernetes dashboard, standing up an EFK stack for logging, as well as integrating with other AWS services like X-Ray and AppMesh.
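To sketch what that looks like (the cluster name, region, and sizes below are hypothetical placeholders, not from the article), eksctl also accepts a declarative ClusterConfig file instead of CLI flags, which is easier to keep in version control:

```yaml
# cluster.yaml -- applied with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster   # hypothetical name
  region: us-west-2    # hypothetical region
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
```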
Overall, until EKS introduced managed node groups with Kubernetes 1.14 and above, I found the management burden on EKS fairly high, especially in the beginning. AWS is quickly catching up to the competition, but EKS is still not the best place to start for new users.
Azure AKS
Surprisingly, AKS has surpassed GKE in terms of support for newer Kubernetes versions (preview for 1.18 on AKS vs. 1.17 on GKE as of June 2020). Also, AKS remains the only service that does not charge for control plane usage. As with EKS, master node upgrades must be initiated by the developer, but AKS takes care of the underlying system upgrades.
Personally, I have not used AKS in production, so I can’t comment on technical or operational challenges. However, as of 5/19/2020, AKS not only provides a financially backed SLA (99.95% with Availability Zones) but also makes it an optional feature, which still allows for unlimited free clusters (Updated Jun 15, 2020).
Still, Azure’s continued investment in Kubernetes is apparent through contributions to Helm (Microsoft acquired Deis who created Helm) as it graduated from CNCF. As Azure continues to close the gap with AWS, I expect AKS usage to grow with increasing support to address community concerns.
Packaging Applications — CNCF Survey
Google Cloud GKE
While Google’s decision to begin charging for control plane usage for non-Anthos clusters stirred some frustrations among the developer community, GKE undoubtedly remains the king of managed Kubernetes in terms of features, support, and ease of use. For new users unfamiliar with Kubernetes, the GUI experience of creating a cluster and default logging and monitoring integration via Stackdriver makes it easy to get started.
GKE is also the only service to provide a completely automated master and node upgrade process. With the introduction of cluster maintenance windows, node upgrades can occur in a controlled environment with minimal overhead. Node auto-repair support also reduces management burdens on the developers.
Similar to many GCP products, GKE’s excellent managed environment does mean that customization may be difficult or sometimes impossible. For example, GKE installs kube-dns by default, and to use CoreDNS, you need to hack around the kube-dns settings. Likewise, if Stackdriver does not suit your needs for logging and monitoring, then you’ll have to uninstall those agents and manage other logging agents yourself.
Still, my experiences with GKE have been generally pleasant, and even considering the price increase, I recommend GKE over EKS and AKS. The more exciting part of GKE is the growing number of services built on top of it, such as managed Istio and Cloud Run. A managed service mesh and a serverless environment for containers will continue to lower the bar for migration to the cloud and to microservices architectures.
While GCP lags AWS and Azure in terms of overall cloud market share, it still holds its lead for Kubernetes in 2020.
Google Cloud Service Platform — GCP Blog
Resources | https://medium.com/swlh/state-of-managed-kubernetes-2020-4be006643360 | ['Yitaek Hwang'] | 2020-06-16 14:57:37.830000+00:00 | ['Kubernetes', 'Azure', 'Google Cloud Platform', 'AWS', 'Gke'] |
🌟Introducing Dash Cytoscape🌟 | Now you can create beautiful and powerful network mapping applications entirely in Python, no JavaScript required! Dash Cytoscape introduces the latest additions to our ever-growing family of Dash components. Built on top of Cytoscape.js, the Dash Cytoscape library brings Cytoscape’s capabilities into the Python ecosystem. Best of all, it’s open-sourced under an MIT license and available today on PyPI. Simply run pip install dash-cytoscape to get started! You can also find the complete source code in the Github repository, and view the documentation in the Dash Cytoscape User Guide.
In this post, we will:
Provide some background on the Cytoscape project.
Show you how Dash’s declarative layout, elements and styling help you build an intuitive and intelligent application.
Explain how Dash callbacks power your application’s interactivity.
Introduce you to the customizable styling available with Dash Cytoscape, including an online style editor that you can use as an interactive playground for your style and layout ideas.
Illustrate how to visualize large social networks using Dash Cytoscape.
Share our vision of integrating with other Python bioinformatics and computational biology tools.
Standing on the shoulders of giants
This project would not be possible without the amazing work done by the Cytoscape Consortium, an alliance of universities and industry experts working to make network visualization accessible for everyone. The Cytoscape project has been available for some time both as Java software and as a JavaScript API; they are maintained in their Github organization. The library can also be used in React through the recently released react-cytoscapejs library, which allows the creation of Cytoscape components that can be easily integrated in your React projects. Dash Cytoscape extends the latter by offering a Pythonic, callbacks-ready, declarative interface that is ready to be integrated in your existing Dash projects, or used as a standalone component to interactively display your graphs.
A familiar and declarative interface
Powerful built-in layouts
Picking the right layout for your graph is essential to helping viewers understand your data. This highly customizable feature is now fully available in Dash, and can be easily specified using a dictionary.
The original Cytoscape.js includes many great layouts for displaying your graph in the way it should be viewed. You can choose to display your nodes in a grid, in a circle, as a tree, or using physics simulations. In fact, you can even choose the exact number of rows or columns for your grid, the radius of your circle, or the temperature and cooling factor of your simulation. For example, to display your graph with a fixed grid of 25 rows, you can simply declare:
dash_cytoscape.Cytoscape(
id='cytoscape',
elements=[...],
layout={
'name': 'grid',
'rows': 25
}
)
Find the full example here.
Intuitive and clear element declaration
Creating nodes with Dash Cytoscape is straightforward: You make a dictionary in which you specify the data associated with the node (i.e., the ID and the display label of your node), and, optionally, the default position. To add an edge between two nodes, you give the ID of the source node and the target node, and specify how you want to label the edge. Group all elements (nodes and edges) inside a list, and you are ready to go! In a nutshell, here’s how you would create a basic graph with two nodes:
dash_cytoscape.Cytoscape(
id='cytoscape',
layout={'name': 'preset'},
elements=[
{'data': {'id': 'one', 'label': 'Node 1'},
'position': {'x': 50, 'y': 50}},
{'data': {'id': 'two', 'label': 'Node 2'},
'position': {'x': 200, 'y': 200}},
{'data': {'source': 'one',
'target': 'two',
'label': 'Node 1 to 2'}}
]
)
If you already have an adjacency list, you can easily format the data to be accepted by Cytoscape, and display in your browser with about 70 lines of code:
Displaying over 8000 edges and their associated nodes with a concentric layout. This uses the Stanford Google+ Dataset.
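The article’s roughly 70-line script isn’t reproduced here, but the core conversion step can be sketched in a few lines of plain Python (the function name and labels are my own, not the author’s):

```python
def edges_to_elements(edge_list):
    """Convert an edge list like [("a", "b"), ...] into Cytoscape elements."""
    nodes = set()
    for source, target in edge_list:
        nodes.add(source)
        nodes.add(target)

    # One dict per node, then one dict per edge -- the same format shown above
    elements = [{"data": {"id": n, "label": n}} for n in sorted(nodes)]
    elements += [{"data": {"source": s, "target": t}} for s, t in edge_list]
    return elements

elements = edges_to_elements([("one", "two"), ("two", "three")])
```

The resulting list can be passed straight to the `elements` property of the Cytoscape component.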
Beautiful and customizable styling
Cytoscape provides a range of styling options through a familiar CSS-like interface. You get to specify the exact color, pixel size and opacity of your elements. You can choose the shape of your nodes from over 20 options, including circular, triangular, and rectangular, as well as non-traditional content for your nodes (e.g. displaying an image by adding a URL, or adding a pie chart inside a circular node). The edges can be curved or straight, and it is even possible to add arrows at the middle or end-point. To add a style to your stylesheet, you simply need to specify which group of elements you want to modify with a selector, and input the properties you want to modify as keys. For example, if you want nodes 15 pixels wide by 15 pixels high, styled opaque with a custom gray color, you add the following dictionary to your stylesheet:
{
'selector': 'node',
'style': {
'opacity': 0.9,
'height': 15,
'width': 15,
'background-color': '#222222'
}
}
The selector can be a type of element (i.e., a node or edge) or be a certain class (which you can specify). It can also have a certain ID or match certain conditions (e.g., node height is over a certain threshold).
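For example (the class name and data field below are invented for illustration; the selector syntax itself comes from Cytoscape.js), a stylesheet can target a class or a data condition like this:

```python
stylesheet = [
    # Every node tagged with the (hypothetical) class "hub"
    {"selector": ".hub", "style": {"shape": "rectangle"}},
    # Only nodes whose data field "weight" is above a threshold
    {"selector": "node[weight > 50]", "style": {"width": 30, "height": 30}},
]
```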
Using the online style editor
In order to help the community get acquainted with the library, we created an online style editor application that lets you interactively modify the stylesheet and layout of sample graphs. This tool will help you learn how to use the style properties and quickly prototype new designs. Best of all, it displays the stylesheet in a JSON format so that you can simply copy and paste it into your Cytoscape app! Try it out here.
Create your on style and save the JSON. The source code is in usage-advanced.py.
Familiar Dash callbacks
Dash Callbacks are used to make your Dash apps interactive. They are fired whenever the input you define is modified, such as when the user clicks a button or drags a slider inside your UI. The callback functions are computed on the server side, which enables the use of optimized and heavy-duty libraries such as Scipy and Numpy.
Use Dash callbacks with dash-cytoscape to update the underlying elements, the layout, or the stylesheet of the graph. For more, see our documentation chapter on callbacks.
Additionally, you can use a collection of user-interaction events as an input to your callbacks. They are triggered whenever the user interacts with the graph itself; in other words, when they hover, tap, or select an element or a group of elements. You can choose to input the entire JSON description of the element object (including its connected edges, its parents, its children, and the complete stylesheet), or only the data contained within the object. To see what is being output, you can assign the following simple callback to your graph:
from dash.dependencies import Input, Output
import json

@app.callback(Output('html-div-output', 'children'),
              [Input('cytoscape', 'tapNodeData')])
def displayTapNodeData(data):
    return json.dumps(data, indent=2)
This will output the formatted JSON sent by your Cytoscape component into an html.Div field. To read more about event callbacks and how to use them for user interaction, check out our user guide chapter on Cytoscape events.
Try out the demo here. You can find the source code in usage-events.py.
Visualizing large social networks
One way you might want to use Dash Cytoscape is to visualize large social networks. Visualizing large network graphs with thousands or millions of nodes can quickly become overwhelming. In this example, we use Dash Cytoscape with Dash callbacks to interactively explore a network by clicking on nodes of interest.
This graph displays the Google+ social network from the Stanford Large Network Dataset collection.
Dynamically expand your graphs
Start with a single node (representing a Google+ user) and explore all of its outgoing (i.e. all of the users they are following) or incoming edges (i.e. all of their followers).
Try out the demo here. You can find the source code in usage-elements.py.
Fast and reactive styling
When mapping large networks, strategic styling can help enhance understanding of the data. Leveraging the rendering speed and scalability of Cytoscape.js, we can easily create callbacks that update the stylesheet of large graphs using Dash components such as dropdown menus and input fields, or that update upon clicking a certain node.
In this example, we display 750 edges of Google+ users and focus on a particular user by clicking on a specific node. The callback updates the stylesheet by appending a selector that colors the selected ID in purple, and the parents and children in different colors that you specify. Our user guide chapter on styling covers the basics to get you started.
Try out the demo here. You can find the source code in usage-stylesheet.py.
Integrating with other libraries
The release of Dash Cytoscape brings the capabilities of Cytoscape.js into the Python world, opening up the possibility of integrating with a wide range of excellent graph and network libraries in Python.
For example, Biopython is a collection of open-source bioinformatics Python tools. In around 100 lines of code, we wrote a parser capable of generating cytoscape elements from Biopython’s Phylo objects. The parser is generic enough that it can be directly integrated in your bioinformatics workflow, and enables you to quickly create interactive phylogeny apps, all in a familiar and pythonic environment. View the phylogeny demo in the docs.
Interactively explore your phylogeny trees. The elements are automatically generated from a biopython’s Phylo object, which can be initiated from a wide range of data format.
Dash Cytoscape is the first step to provide deeper Dash integration with Biopython and well known graph libraries such as NetworkX and Neo4j.
To wrap up
Today, Plotly.py is widely used for exploratory analysis and Dash is a powerful analytics solution in the scientific community. Recently, researchers published a paper on CRISPR in Nature and built their machine learning platform using Dash. There is an obvious need for powerful and user-friendly visualizations tools in Python, and network visualization is not an exception. We are planning to fully leverage the resources available in Python to make Cytoscape useful for more network scientists and computational biologists, as well as the broader scientific community.
Dash Cytoscape is a work in progress, and we encourage you to help us improve it and make it accessible to more people. Contribute to documentation, improve its compatibility with other libraries, or add functionalities that make it easier to use. Head over to our GitHub repository to get started!
We are currently working on multiple improvements, including support for NetworkX, integration with Biopython, and object-oriented declaration for elements, styles and layouts. Check out those issues to keep track of the progress, or to support us through your contributions!
If you wish to use this library in a commercial setting, please see our on-premise offerings, which not only guarantee technical support, but also support our open-source initiatives, including Dash Cytoscape itself. | https://medium.com/plotly/introducing-dash-cytoscape-ce96cac824e4 | [] | 2019-02-05 22:24:00.717000+00:00 | ['Python', 'Plotly', 'Data Science', 'Data Visualization', 'Dash'] |
The Danger of Humanizing Algorithms | The Danger of Humanizing Algorithms
Misleading terminology can be dangerous. Machines are actually not learning
Photo by Michael Dziedzic on Unsplash.
To many, 2016 marked the year when artificial intelligence (AI) came of age. AlphaGo triumphed against the world’s best human Go players, demonstrating the almost inexhaustible potential of artificial intelligence. Programs playing board games with superhuman skills like AlphaGo or AlphaZero have created unparalleled hype surrounding AI, and this has only been fueled by big data availability.
In this context, it is not surprising that public, business, and scientific interest in machine learning is unchecked. These programs can go further than beating a human player, going so far as to invent new and ingenious gameplay. They learn from data, identify patterns, and make decisions based on these patterns. Depending on the application, decision-making occurs with little or no human intervention. Since data production is a continuous process, machine learning solutions adapt autonomously, learning from new information and previous operations. In 2016, AlphaGo used a total of 300,000 games as training data to achieve its excellent results.
Every guide out there about how to implement machine learning applications will tell you that you need a clear vision of the problem it has to solve.
In many cases, machine learning applications are faster, more accurate, and time-saving, therefore — among other benefits — shortening time-to-market. However, such an application will only address the specific problem it was built for, with the data it was given.
But does this learning correspond to the way humans learn? No, it does not. Not even remotely. | https://medium.com/better-programming/the-danger-of-humanizing-algorithms-a9a0e1a5c8e6 | ['The Unlikely Techie'] | 2020-08-19 14:11:04.343000+00:00 | ['Machine Learning', 'Programming', 'Data Science', 'AI', 'Artificial Intelligence']
My Internships at Optimizely | I’ve been very lucky to be an intern at Optimizely for two summers now, as part of their talented Business Systems Engineering team. This team’s mission is to enable Optimizely to take action on its own internal data. Considering Optimizely’s mission is to let our customers take action on their data, it only makes sense that we have methods to make data-driven decisions ourselves. The Business System team builds rock-solid performant data pipelines that take messy, raw data streams and transform them into a neat homogeneous OLAP schema in the Data Warehouse.
You might think, why does Optimizely need a Data Warehouse or a Business Systems team? As a startup, it makes sense that we purchase or subscribe to a product like Zendesk, a helpdesk system, rather than hire a team of engineers to build our own. Optimizely uses a myriad of these products, but this creates a problem: useful data about our customers is siloed inside these products. There is no easy way to gain insights about our customers across all the systems we use. Furthermore, most external systems we subscribe to do not have the functionality to write complicated queries that an analyst would need to perform. A Data Warehouse allows us to elegantly solve both of these problems. Being data-driven is so important to us, that there are TVs around the office with charts which are powered by the Data Warehouse.
Optimizely Engineering is insistent that interns become part of the team as another engineer, not simply an “intern.” This is unlike most other places, where an intern is assigned work to do in isolation. Intern work at Optimizely is code reviewed just like every other engineer’s, and is (hopefully) eventually pushed into production. Interns follow the same engineering processes at Optimizely that full-time engineers follow. I worked on several exciting and high-impact projects during my tenure here.
One of the most impactful projects I completed was overhauling the Zendesk Data Pipeline. Historically, this pipeline caused a great deal of grief to the team with frequent failures affecting the ability to monitor Success service stats in real time. I re-wrote it using a clean object-oriented structure, new API endpoints, and extending the functionality to track SLAs from our Success staff. Tracking these over time is critical to Optimizely’s Customer Success team, especially as the company prepares to roll out an exciting new initiative in the near future.
Another project I worked on was implementing a RESTful API called SpeedFeed that is being used to interview full-time candidates, in a take-home assignment. The SpeedFeed API assignment more closely represents the daily work of a data engineer compared to a traditional phone screen. This project enabled our hiring team to evaluate the interview candidates in a brand new way!
I also worked on building several new data pipelines. Two of these included Google Cloud and Amazon Web Service costing, that allow Optimizely to track hosting costs at a granular level. Another one was for Docebo, our e-course management system, that allows analysts to answer a plethora of important questions about customer engagement with our education platform.
Optimizely also enables engineer creativity to be showcased during Hackathons. I worked on two small projects during a special intern hackday. The first project came from an insight I gained using the Data Warehouse. I determined, by using the Levenshtein string distance function, that many customers likely misspelled their email addresses when signing up for Optimizely. To solve this, we integrated with Mailcheck.js, which offers suggestions for email misspellings. A second project involved increasing the security of our product, by integrating with Castle.io, which detects suspicious login activity. We know a security incident can end the whole company, which is why we try to be as proactive as possible, for example, by adding 2-Step Verification.
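The email-misspelling insight lends itself to a short sketch. The following is a hypothetical reconstruction, not Optimizely's actual code; the edit-distance threshold and the list of common domains are illustrative assumptions:

```python
# Flag likely-misspelled email domains by Levenshtein distance to common domains.
# Illustrative sketch only; threshold and domain list are assumptions.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

COMMON_DOMAINS = ["gmail.com", "yahoo.com", "hotmail.com", "outlook.com"]

def suggest_domain(email: str, max_distance: int = 2):
    """Return a likely intended domain, or None if the domain looks fine."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in COMMON_DOMAINS:
        return None
    best = min(COMMON_DOMAINS, key=lambda d: levenshtein(domain, d))
    return best if levenshtein(domain, best) <= max_distance else None

print(suggest_domain("jane@gmial.com"))    # gmail.com
print(suggest_domain("jane@example.org"))  # None
```

Mailcheck.js applies the same idea on the client side, suggesting a correction before the form is ever submitted.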
Overall, I had an excellent two summers interning at Optimizely. There are plenty of fun activities available in the Bay Area. Summer intern trips included an SF Giants Game, miniature golf, escape the room, group volunteering activities and weekend hiking excursions. I strongly recommend any great, aspiring software engineer to intern here at Optimizely. Optinauts have a bright future ahead, with a world-class engineering team coming up with brilliant solutions that delight our customers. If this sounds interesting to you, check out our careers page.
Optimizely Interns at a San Francisco Giants Game | https://medium.com/engineers-optimizely/my-internships-at-optimizely-417aad8572f4 | ['Ryan Smith'] | 2016-08-17 22:29:50.242000+00:00 | ['Software Engineering', 'Optimizely', 'San Francisco', 'Data Engineering', 'Internships'] |
It’s Not Microservices, It’s You | It’s Not Microservices, It’s You
Microservices are just one example of a technology trend that has been hyped up to be the answer to everybody’s problems
Photo by You X Ventures on Unsplash
The hype creates a dynamic of inflated expectations and excitement among business representatives and software engineers alike. In companies and teams where decision pushing is commonplace, this often leads to rushed decisions, which likely end in frustration and disappointment.
Business representatives, software engineers and other technical specialists should be freely exchanging ideas, discussing risks and doubts as well as challenging each other with strong mutual respect. Creating this culture takes effort, a sense of personal responsibility and proactiveness from all involved. And I can guarantee you, no architecture or new technology will create long term success if the culture within the company is dominated by a small group of individuals who don’t understand that being a leader is to listen.
Teams that rush into microservices can get burned in many different ways. When re-architecting an application into smaller, loosely-coupled pieces of software that communicate with each other over a network, teams suddenly have to deal with the fallacies of distributed computing and decentralized data management. There’s a multitude of articles that explore these complexities in greater detail, so I won’t replicate that effort. I can say that underestimating these complexities often results in fragile architectures, scaling issues and substantial rework. Mastering them takes preparation, planning and experience.
We cannot overcome the fact that there will be a learning curve, as there really is no substitute for experience. That being said, the chances of success can be greatly increased through the right preparation and planning. A key aspect in that regard is estimation.
In this blog I want to share an estimation technique that I’ve found to be helpful to channel the excitement around technology trends and break through unrealistic expectations by providing clarity on effort as well as associated complexity, risks and unknowns. This enables the right conversations between business representatives, software engineers and other technical specialists about trends like microservices before committing prematurely.
How to do work estimation right
No matter how hard you try, estimation will never be perfect. We cannot predict the future and foresee what we don’t know. The fact that we cannot be perfect when estimating, does not mean it doesn’t have value.
Complexity, uncertainty, and risk are all factors that influence confidence and therefore also influence the estimated effort. But most estimation techniques, whether hours, ideal days, story points, or t-shirt sizing, only focus on the effort and don’t provide a means to also express confidence.
That is a shame, since confidence is a big part of the value achieved through estimation: work that the team doesn’t feel confident about is much more likely to cause problems. One way of making confidence transparent is having the team that will be delivering the work perform range estimation. A range estimation technique I’ve had good results with is the 50/90 estimation technique. When using 50/90 estimation, every piece of work is estimated twice:
The first estimate represents an “aggressive, but possible” (ABP) estimate, where there’s 50% certainty of completing the work within that time.
The second estimate represents a “highly probable” (HP) estimate where there’s 90% certainty of completing the work within that time.
A narrow range, with the ABP and HP estimates being fairly close together, means the team is confident in the work. A wide range, where the ABP and HP estimates are far apart, means the team is not confident in the work based on current information and knowledge.
When dealing with a wide range, the team should discuss the complexities, risks, or unknowns they foresee and whether these can be reduced or mitigated. Examples include lead time, dependencies on other teams, known bottlenecks, and the development complexity of unfamiliar technologies. Even when these things are outside the team’s span of control, they’re still relevant. That doesn’t mean it becomes the team’s responsibility to fix them; it is their responsibility to make them transparent so they can either be acted upon or explicitly accepted.
There are three additional rules that ensure you get the most value out of the estimation process and the decision making process:
Make sure that the estimation is done by the engineers that are performing the work. Involving others with prior expertise is certainly valuable, but they should assume a role where they use their experiences in a coaching capacity with the goal to eliminate blind spots in the estimations.
Avoid estimations performed by a single person, to reduce cognitive bias, and in a group setting make sure everyone involved provides input, which can be challenging in groups with very vocal or overpowering individuals if not addressed. A simple trick often applied in story point estimation is having everybody present their estimates at the exact same time to eliminate the possibility of people being influenced. This is crucial to reduce the cognitive bias of individuals and extract the best out of the team by sparking the right conversations.
Communicate estimations in range form, together with the identified risks and possible follow-up actions to reduce or mitigate them. The 50/90 estimation technique offers a formula for compounding all the range estimates back into a single number.
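One common formulation of such a compounding formula, sketched here under the assumption that tasks are statistically independent (it is not necessarily the exact formula the technique prescribes), sums the 50% estimates and adds the root-sum-square of the per-task spreads:

```python
import math

def compound(estimates):
    """Compound a list of (abp_50, hp_90) pairs, all in the same time unit."""
    total_abp = sum(abp for abp, _ in estimates)
    spread = math.sqrt(sum((hp - abp) ** 2 for abp, hp in estimates))
    return total_abp, total_abp + spread  # (aggressive total, ~90%-confidence total)

tasks = [(3, 5), (8, 14), (2, 3)]  # per-task (ABP, HP) estimates in days
print(compound(tasks))  # (13, 13 + sqrt(41)) ≈ (13, 19.4)
```

Note that the compounded high estimate (about 19.4 days here) is well below the naive sum of all HP estimates (22 days), because it is unlikely that every task hits its worst case at once.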
With the estimation results available, it’s time to decide what to do with the identified risks. Often it is possible to reduce or mitigate them. A common example is building a proof of concept to better understand a particular problem or challenge. The improved understanding or reduced blind spot should improve estimate accuracy. Investing additional time and resources in reducing or mitigating risks might not always be possible or worth the effort. This is fine, as long as residual risks are explicitly accepted.
Preparation is half the victory
Instead of telling you whether microservices are the right choice, I shared a technique that enables your team to come to their own conclusion, which is how it should be, as nobody understands your circumstances better.
A key takeaway is that estimation is just a tool. The real goal is using that valuable information to bring everyone together and have honest conversations about the value and the realistic cost of applying/using microservices. Doing so gives a much better starting point than someone who watched a few tech talks on how Netflix migrated to microservices and feels that success can be replicated overnight.
It’s very possible that some teams come to the conclusion that they don’t need microservices, or that they don’t feel confident enough given the complexities and learning curve when viewed in the context of time constraints. Companies and teams often have multiple improvements going on in parallel, so priorities have to be set.
In that case, consider starting with a monolith that is designed to be modular. This enables the application to be broken up into microservices later if their situation changes and the additional complexity of microservices can be justified. This upholds the KISS and YAGNI principles and avoids over-engineering, while also being mindful that the situation might change in the future.
Those who have completely disregarded monoliths would do well to remember that Netflix started as a monolith before transitioning to microservices at a later stage. And while we all have the belief that we are building the next unicorn with massive scale requirements just around the corner, it’s likely that some of us are wrong. | https://medium.com/swlh/its-not-microservices-it-s-you-8f2431dc50ff | ['Oskar Uit De Bos'] | 2020-08-21 12:27:54.857000+00:00 | ['Software Development', 'Software Engineering', 'Microservices'] |
Pandora Boxchain: Monthly Digest, September | Starting this month and going forward we plan to publish a monthly digest with the most interesting updates for the Pandora Boxchain project.
Research and Development:
The main focus of our research and development activities in September was on the Prometheus consensus and a layer 1 network based on it, which will become the hosting layer for our high-load computing protocol at layer 2. We call this 1st-layer network “BOX”, and the second “PAN”; together they form Pandora Boxchain.
The overall directions of research on the Prometheus consensus and BOX network development were related to:
Formal verification of the parts of the consensus algorithm utilising special insertion modelling methodology, developed by Prof. Litichevsky and Prof. Hilbert;
designing the improvements to the consensus algorithms;
development of consensus prototypes in Python;
improvements to the existing Rust implementation of the network node.
The results of these activities were the following:
Implemented a prototype of the Prometheus consensus in Python. We’ve started working on verification mechanisms which will protect the network from accepting faulty or malicious blocks and transactions. This includes blocks and transactions that are wrongly signed, sent repeatedly, sent at the wrong time or in the wrong round, or that carry wrong links to previous blocks or transactions.
Started working on formal verification of the gossip mechanism for the Prometheus consensus. This mechanism allows validators to communicate and reach consensus on when a block was skipped, either maliciously or due to network problems. If one of the validators doesn’t see a block from the previous validator in the validation queue, he sends negative gossip, and other validators respond either with the block, if they know about it, or with negative gossip confirming the block’s absence in the DAG.
Implemented a so-called “timestepper mechanism” that allows us to perform tricky tests by manipulating simulated time and seeing how nodes react to complex scenarios.
Switched to Elliptic Curve cryptography in the Prometheus consensus. It is much more space-efficient than RSA and allows us to calculate close-to-real-world memory and performance overhead.
Performed analysis of the Dfinity, Casper, and Tezos consensuses and investigated BLS and Schnorr signatures.
Created new ordering and merging algorithms for the Prometheus consensus.
Worked on the gossip system: initialization of sending negative and positive state events, the gossip transaction, and checking that the full gossip system works.
Made mempool updates: improvements to the mempool functionality for working with gossip transactions, adding new transactions to the block, and implementation of the penalty case, which arises if, within the mempool, both a negative and a positive state from one author are discovered. In this case, the validator immediately writes out the penalty (even to himself) and it immediately goes into the block.
Created and tested tx_by_hash() storage — a storage system for all transactions within the {tx_hash: tx} structure, i.e. {transaction hash: transaction}. This mechanism is based on the DAG and includes methods for adding and searching transactions in this structure.
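A minimal sketch of what such a {tx_hash: tx} store could look like is below. The hashing scheme and the transaction shape are illustrative assumptions, not Pandora's actual implementation:

```python
import hashlib
import json

# Minimal {tx_hash: tx} store with add and lookup methods.
# The hashing scheme and transaction shape are illustrative assumptions.

class TxByHashStore:
    def __init__(self):
        self._txs = {}  # tx_hash (hex str) -> transaction dict

    @staticmethod
    def tx_hash(tx: dict) -> str:
        """Deterministic hash of a transaction's canonical JSON encoding."""
        canonical = json.dumps(tx, sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()

    def add(self, tx: dict) -> str:
        h = self.tx_hash(tx)
        self._txs.setdefault(h, tx)  # adding the same tx twice is a no-op
        return h

    def get(self, tx_hash: str):
        return self._txs.get(tx_hash)  # None if the hash is unknown

store = TxByHashStore()
h = store.add({"from": "alice", "to": "bob", "amount": 5})
assert store.get(h)["amount"] == 5
```

A production store would of course hash the transaction's serialized wire format rather than JSON, but the add/lookup shape is the same.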
In addition to R&D activities on Prometheus and the BOX network, we also performed further development of our Proof of Computing Work protocol and the existing PAN testnet:
Developed a prototype of the Electron.js application with a built-in box proxy. Based on this prototype we will develop the Pandora Market desktop application, together with a description of the use cases.
Studied the Proof of Computing Work algorithm from the point of view of Markov chains. There are three types of nodes in Proof of Cognitive Work: worker (performs computations), validator (validates computations), and arbiter (resolves conflicts through arbitration). Through the accumulation of reputation, or by not following the protocol (i.e. Byzantine behaviour), the nodes in the Pandora network migrate between these states. This makes a Markov chain an interesting framework to study behaviour in Pandora’s ecosystem. The questions of this study are: what are the steady states in this ecosystem given different parameters of the model, and under which conditions does the ecosystem exist in a steady state such that it functions as designed.
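The Markov-chain framing can be sketched as follows. The three states are the node roles from the article; the transition probabilities are purely hypothetical placeholders, not measured parameters of the Pandora network:

```python
# Node-state dynamics (worker / validator / arbiter) as a Markov chain.
# The transition probabilities below are purely hypothetical.

def steady_state(P, iterations=10_000):
    """Power-iterate a row-stochastic matrix from a uniform start."""
    n = len(P)
    dist = [1.0 / n] * n
    for _ in range(iterations):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

# Rows/cols: worker, validator, arbiter (hypothetical reputation dynamics)
P = [
    [0.90, 0.09, 0.01],  # worker:    mostly stays, sometimes earns validator status
    [0.10, 0.85, 0.05],  # validator: can drop back or rise to arbiter
    [0.05, 0.10, 0.85],  # arbiter:   occasionally demoted
]

print([round(p, 3) for p in steady_state(P)])
```

Power iteration converges to the stationary distribution, i.e. the long-run share of nodes in each role; sweeping the (hypothetical) transition parameters is one way to ask under which conditions the ecosystem settles into a steady state at all.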
Events:
September, 5th ➡️ Pandora Boxchain Meetup in Berlin
At our second Pandora Boxchain meetup in Berlin, blockchain and AI enthusiasts got together after the Zero Knowledge Summit at Mindspace. Andrey Sobol presented our research on ‘Randomness in PoS’. Afterwards, Sergey Korostelyov shed some light on how decentralised distributed network technology can help build decentralized AI systems. His presentation is available here.
September, 7th — 9th➡️#ETHBerlin Hackathon
Our team created a reference implementation of ERC1329 (Inalienable Reputation Token) during the #ETHBerlin Hackathon. This ERC proposes a standard for creating inalienable reputation tokens. Take a look at the first version of ERC1329 and join the discussion on the #Ethereum GitHub.
September, 9th ➡️ Blockchain Startup Pitch
We took part in Blockchain Startup Pitch that took place during Berlin Blockchain Week. The team presented the project to blockchain savvy audience and had a great discussion with the community.
September, 7th-11th ➡️ Blockchain Cruise
Maxim Orlovsky, the founder of Pandora Boxchain, took part in the Blockchain Cruise on the Mediterranean Sea together with Charlie Lee, Bobby Lee, Jimmy Song, Brock Pierce, Tone Vays and other influential changemakers in the blockchain space. He presented the results of joint academic research and technical engineering, revealing a new type of PoS consensus, Prometheus, which supersedes PoW in all main aspects. His presentation, “PoS Consensus: can it be as much censorship-resistant and secure as PoW?”, is available for all on slideshare.net.
September 22–23 ➡️ Baltic Honeybadger 2018 Conference
At the end of the month our Team attended the Baltic Honeybadger 2018 Conference, where we had a lot of discussions regarding Pandora Boxchain technologies with Bitcoin community & developers, including Adam Back, Peter Todd, Giacomo Zucco, Matt Corallo, Eric Voskuil, Alex Petrov and others.
We post updates on research and development achievements and upcoming events on our social channels. Our communities in social networks are strong and active, and they grew considerably during September. In only one month we gained over 5,000 users on Facebook, and more than 1,200 followers followed us on Twitter. Join our communities and be a part of Pandora Boxchain. | https://medium.com/pandoraboxchain/pandora-boxchain-monthly-digest-with-the-most-interesting-updates-and-news-in-september-502520a07a87 | ['Olha Rymar'] | 2018-10-25 12:48:11.334000+00:00 | ['Decentralized', 'Blockchain', 'AI', 'Artificial Intelligence']
5 Steps to Being Traditionally Published | You hear pretty often that publishing is dead. Or that it’s impossible to actually sell a first book anymore. Or that you can’t do it, without a huge platform (or some other equally improbable necessity.)
The truth is, though, that it’s always been hard to have a book traditionally published. And it’s no more impossible today than it ever has been.
It’s really just numbers. There are way more people who want to be published than there are opportunities to be published. High demand equals high difficulty. So, even though the writer hires the agent, and sells to the editor, and the publisher — the agent, editor, publisher have so many opportunities to do their work that they don’t have to really hustle for clients.
People are scrambling, begging them to take most of the proceeds of each book sale. That’s just the way of the world.
That being said, though, if traditional publishing is your goal (and let’s be honest, it’s a goal for a lot of writers), it’s not impossible.
Here’s how you do it:
Write a really good book, all the way through to the end.
If you’re writing fiction, this is a must. No agent will look at you if you don’t have a complete novel, so no traditional publisher will have the chance to consider your work.
With non-fiction you can write a proposal instead of the whole book. If you’re a first-time writer, though, it wouldn’t hurt to write your whole book even for non-fiction. Regardless, the work you turn in to agents (proposal or finished draft) needs to shine.
While you’re writing, start building an email list.
I wish that someone had given me this advice when I sold my first novel to a traditional publisher.
I had eighteen months between when my book sold and when it was released. My best use of that time would have been building an email list, but I didn’t know that. No one told me.
So, I’m telling you.
You’ll have an easier time appealing to an agent (and a publisher) if you have a solid foundation of readers already. It goes without saying, you’ll have an easier time selling your book if you have people waiting to buy it.
If you can get to 10,000 on your email list, it will make a difference.
If you can get to 100,000 — suddenly you’re not 1 in a million anymore. You’re more rare, which means that agents and publishers will be more anxious to compete for your business.
The truth is, you probably don’t even need an agent or publisher if you build your list to that level. You certainly won’t have trouble attracting one or the other or both if that’s what you want.
Look for an agent.
This requires you to learn how to write a solid query letter. A query letter is a one-page sales letter that tries to entice the agent to request your manuscript or proposal. Most large traditional publishers require you to have an agent before they’ll look at your work.
Write the next book. And so on.
I guess the real truth is that it’s much harder to get your first finished book published than your second, and harder for the second than the third, etc. Writing is a skill that takes practice to master.
If you keep going, you increase your chances exponentially.
And if you quit, you build a block wall that your chances can’t overcome.
You aren’t competing against every single writer trying to get a book published.
I have a friend who used to read an agent’s slush pile. He says that a full 90 percent of the queries that come in are instant rejects — because the writing isn’t there.
You’re only competing against the books that are publish-ready. You need to make sure your book is publish-ready. Do what it takes to get there. That means a lot of practice, a lot of reading, a lot of writing. Maybe hiring an editor. Maybe taking some classes.
It means behaving like a professional writer.
If you stick with it and keep improving, you’ll get there. By time you do, though, you might find that you don’t really want to be there anymore. The world of books is changing way faster than the world of publishing has kept up with. | https://shauntagrimes.medium.com/5-steps-to-being-traditionally-published-415a4996cf38 | ['Shaunta Grimes'] | 2019-05-31 11:41:28.385000+00:00 | ['Work', 'Publishing', 'Writing', 'Self', 'Creativity'] |
How Boredom Can Help You Be More Productive and Creative | How Boredom Can Help You Be More Productive and Creative
We need to understand the importance of boredom
Photo by Anastasia Shuraeva on Pexels
In a study, participants were told to sit in a room doing nothing for 15 minutes. In that room, there was also a button that would give them an electric shock. They could choose to press that button at will. 67% of men and 25% of women pressed that button. It goes to show how much we hate boredom. We hate it so much that we will choose pain over boredom.
So, we try to avoid it as much as we can. Avoiding boredom has never been easier because we’ve got so many things to do. We try to keep doing something in our free time, whether it is scrolling through social media, watching a badass movie, or binge-watching shows on Netflix. That makes sense because boredom is boring. | https://medium.com/curious/how-boredom-can-help-you-be-more-productive-and-creative-8a923db650af | ['Binit Acharya'] | 2020-10-16 05:40:30.707000+00:00 | ['Personal Development', 'Self Improvement', 'Life', 'Creativity', 'Productivity'] |
Weekly Prompt: 28-31.12 | Ahoy!!!
2020 is coming to an end…Oh no!!! Let’s all panic over the fact that we feel like we didn’t achieve much this year. Let’s publish content about how terrible these 12 months have been…Let’s complain and be negative, because that’s what we’re expected to do around this time, right?
Well, not us, folks, not us. We will use these last few days of 2020 we have left to reflect on our internal world. To hell with everything that happens externally…There’s only so much we can do about it, there’s only so much we can control! I am more concerned about what goes on within. What we’re focusing on. What we’re repeating in our heads. You know, the good stuff.
I know I’ve been living in my head a lot lately and so I want to ground myself through the following prompts. I hope you’re onboard with the idea of some more self reflection! It will be rewarding in the end (and you know it!):
Monday: Tell me about a “wow” or “oh yeah” moment in your life. When you came to a realization and something made complete sense. What happened? Where were you? What did you realize/learn?
Tuesday: What do you wish you were bold or brave enough to do?
Wednesday: What does having purpose mean to you?
Thursday (poetry challenge): In detail, write a poem about how you would like people to feel after interacting with you.
That’s it for now, dear friends! Hope you have enjoyed the ride KTHT took you on in 2020 and are excited for new projects and exciting challenges in 2021 :)
Thank you for your time, as always. A big, fat NAMAS’CRAY! | https://medium.com/know-thyself-heal-thyself/weekly-prompt-28-31-12-5476823302d9 | ['𝘋𝘪𝘢𝘯𝘢 𝘊.'] | 2020-12-28 16:36:29.277000+00:00 | ['Short Story', 'Writing', 'Energy', 'Newsletterandprompts', 'Creativity'] |
Why You Should Care About Joe Rogan’s Bowel Habits | Unless you’re a protein guzzling gym monkey, the thought of your entire diet consisting of this one food group - which national guidelines tell us should only make up ten to thirty-five percent of our diet - probably seems like a recipe made for gastric disaster. And if Joe Rogan’s experience of the diet so far is anything to go by, then you wouldn’t be far wrong. On a recent Instagram post Rogan gave us all the sordid details in typically comedic fashion:
“Carnivore diet update; the good and the bad. Let’s start with the bad. There’s really only one “bad” thing, and that thing is diarrhoea.
I’m not sure diarrhoea is an accurate word for it, like I don’t think a shark is technically a fish. It’s a different thing, and with regular diarrhoea I would compare it to a fire you see coming a block or two away and you have the time to make an escape, whereas this carnivore diet is like out of nowhere the fire is coming through the cracks, your doorknob is red hot, and all hope is lost. I haven’t shit my pants yet, but I’ve come to accept that if I keep going with this diet it’s just a matter of time before we lose a battle, and I fill my undies like a rainforest mudslide overtaking a mountain road.
It’s that bad. It seems to be getting a little better every day, so there’s that to look forward to, but as of today I trust my butthole about as much as I trust a shifty neighbour with a heavy Russian accent that asks a lot of personal questions.”
As funny as this post might seem (does toilet humour ever get old?), if we look past the punchlines we can see that Rogan is describing some relatively serious bowel related side effects to the carnivore diet. Side effects that anyone who is considering trying out this latest exclusionary diet should take note of if they want to avoid a prolonged period of solitary confinement to their lavatory.
Not all of us have a cushy enough lifestyle whereby frequent toilet trips would only be a minor grievance. I imagine that running your own podcast, as Rogan does, in your own studio, on your own timetable, gives you a certain amount of freedom when it comes to your toilet trips. The same couldn’t be said for your average nine-to-five office worker, or a worker in any number of relatively run-of-the-mill jobs.
I worked part-time as a shop assistant in a high street clothing store to get a bit of extra money while I was at university and I can distinctly remember the painful squirming of having to hold in a number one for longer than was comfortable, due to my boss not wanting the staff to have more than one toilet trip while they were on the cash register. I can’t begin to imagine what kind of pain I would have gone through if I’d been doing the carnivore diet whilst I worked there and I started getting some Rogan-esque bowel trouble whilst I was confined to that shop floor. | https://antonypinol.medium.com/why-you-should-care-about-joe-rogans-bowel-habits-6a8460fbc2c2 | ['Antony Pinol'] | 2020-01-16 09:11:47.521000+00:00 | ['Diet', 'Wellness', 'Health', 'Self Improvement', 'Self'] |
A Great Feeling | And today, we will start with my dear cousin.
Call her Jubilate Mashauri.
I call her cousin. Simply because we have a lineage relationship that brings us closer.
I met her when I was 11 years old and she was 12 years old. Just a one year difference. (But she still wants a ‘shkamoo’ from me!) She was in class six, I was in class five.
A beautiful, charming and open creature. I was playing football when she just popped in grandma’s gate with her younger sister tightly held on her left hand.
Seeing her for the first time left me heading for a quick shower. Hah! Silly me. She seemed interested in my character, and I got interested to know her.
Soon enough, I learned how we were related. Over the course of her stay at grandma’s place, we became close and loving relatives — playing, having fun and growing closer to one another.
We spent much time learning of our characters and personalities in a nutshell, kiddish way I can say. Most especially, the ‘what’ and ‘why’ we wanted to become in the near future.
I don’t quite remember if she mentioned any of her plans to me, but I know what I always mentioned to her!
“I want to be a Lawyer! Yes, a lawyer.”
I cannot forget that. It was deep in my bloodline.
Now, I bet you know what happens when ka-boy child and a girl child get so close ha? Words start to develop. Fashionable silly words and jokes of love arise! (Chuckles) Spare me Jubilate, ha ha!
Let’s forget about that, anyways. [ It was foolish age. Not that important. ]
So, the holidays passed, I got back home, she kept schooling and life went on. But one thing had attached us in common.
The dream to attend Agape Lutheran Junior Seminary.
Y’all know how these ambitions develop with kids.
Passion.
A connected passion emboldened in character. | https://medium.com/the-chat-post/a-great-feeling-3aaff6dc4195 | ['Mark Malekela'] | 2020-01-09 01:46:27.741000+00:00 | ['Cousins', 'Life Lessons', 'People', 'Candid Chat', 'Storytelling'] |
Why Startups Need Affordable Help | Why Startups Need Affordable Help
Teaching Startup is trying to reach both near and far away places
It’s the worst kept secret in business, but entrepreneurs generally don’t have a lot of disposable cash.
This is not a blanket personal assessment, just a business rule. It doesn’t matter if the founder is a kid with a few hundred dollars of saved-up birthday cash or a multi-millionaire who can fund a new enterprise with pocket money. When a business starts, it starts with capital, and that capital is finite and has to pay for everything.
Sometimes including rent and meals for the founder.
The vast majority of new business founders don’t come into the business with a blank check. Most founders don’t struggle over how much equity they’re willing to part with for that $500K Shark Tank investment.
So it kills me when I see founders spend money they don’t have on things they don’t need — things that aren’t going to be immediately helpful in furthering their progress.
I founded Teaching Startup — a newsletter and app with answers for entrepreneurs — to provide help to every founder. But I know every founder isn’t made of money, so I made it affordable, $10 a month. And I know it takes time for that help to be realized, so I threw in a free trial. I also know it’s not for everyone, so I made it a cancel anytime, no commitment deal.
Some of my favorite feedback is the feedback I get from founders in Africa, India, South America, and other far-away (to me) corners of the world where the money doesn’t flow like it does in Silicon Valley or New York.
What they’re spending is worth so much more than $10 a month, so if they find value in the product, I know the product is valuable here in the US. Or in the UK, or Australia, or all those other places Teaching Startup members come from.
So if you’re in one of those far away places, let me offer to cut the cost for you. If you’re in one of those places where the startup money doesn’t flow, here, there, or anywhere, use invite code FARAWAY before the end of 2020 and we’ll lower the price of Teaching Startup to $6.99 a month for all of 2021.
Even if you aren’t in one of those places, use invite code NEAR and I’ll give you your first month for $5.
If Teaching Startup winds up not being right for you, no worries. In both cases, you get up to 30 days free to figure that out.
And if that still doesn’t fit your budget, talk to us, and we’ll see what we can do.
Good advice and the right answers don’t have to cost $300 an hour. We’re here to prove that. | https://jproco.medium.com/why-startups-need-affordable-help-c6f13ea06025 | ['Joe Procopio'] | 2020-12-09 13:45:42.778000+00:00 | ['Careers', 'Entrepreneurship', 'Business', 'Startup', 'Education'] |
How I Got Out Of My Head And Into The World | The truth will set you free
I found this the hardest. I didn’t lie, instead, I actively avoided any situation where I might have to come clean about losing my dad.
I didn’t want anyone to pity me, so I alienated myself from everyone.
It took me far too long to realise that, most importantly, being honest with others about your situation will help you come to terms with it.
The first time I told someone how I felt, it cascaded from my lips as a jumbled string of letters. It didn’t sound right.
They had existed as disconnected and painful words tumbling around in my mind like dirty laundry for so long.
I repeated it until it became a simple sentence and my mind began to feel clearer as a result.
Take a deep breath
Getting into meditation took me a few attempts, using apps like Headspace. At first, I used them to help me sleep when my insomnia was bad.
The first few times I tried, I got so frustrated with the narrator telling me to focus on my breath when I couldn’t do it, that I’d end up in a fit of tears and give up. I was desperate to sleep.
I came back to it again and again. I gave up every time.
Then one day, something clicked.
I made it through a 5-minute session and ended up feeling enveloped in a warm, fuzzy feeling.
I was relaxed and for the first time in months, my mind was quiet.
The best things are wild and free
I used to stay inside for fear that I might cry if someone asked how I was; I can’t bear to be emotional in front of people, let alone in public. It’s so easy to fall into the trap of thinking that you should hide indoors when you’re feeling low.
Take yourself for a walk, find a beautiful park, forest or trail. Get in between the trees and walk through fields. The fresh air and the open space will make you feel at ease and the movement will help clear the fuzzy cloud in your head. You’ll go home feeling brighter than you did before.
Progress not perfection
When I first started going to the gym, I had no idea what in the world I was trying to achieve.
I used whatever machines I felt like using and picked up any weight. I would work out until I could barely walk and left feeling high on life. I had dabbled in strength training, Muay Thai and dance but never actually stuck to one particular thing long enough to see any improvement.
When I was depressed I had no concept of time.
I could sink hours focusing on negative thoughts of the past or the future but was never firmly in the present.
When I trained for a specific deadlifting goal or some other target, I was immersed in the present moment while trying to hit every rep and set.
Strength training brought me firmly back to the present moment, no matter where my head was at the time.
I had a sense of focus.
I use the concepts of strength training which centre around progress not perfection and I apply them to my daily life. I now accept that things take time and that’s okay.
A small achievement every day. That’s all it takes. | https://medium.com/age-of-awareness/how-i-went-from-negative-to-positive-in-a-year-5e59320fecb5 | ['Tiffany Kee'] | 2020-11-21 10:44:21.140000+00:00 | ['Mental Health', 'Happiness', 'Grief', 'Health', 'Self'] |
How to create better Email Signatures | Most people who send emails don’t spend time on their email signatures, which is a real missed opportunity.
Your email signature is an opportunity to make clear who you are and to stand out. Make sure people can reach you, and not only by email. The signature can carry more information about you personally, but also about your business.
So, if you’re putting your name and a point or two of contact information in your signature, you’re not taking full advantage of the opportunity to connect and engage with the people you’re emailing.
So what should you put into your signature? It depends on what you want to achieve and on your personal preference. Here are some suggestions as you create your own:
1. First and Last Name
2. Affiliation Info
3. Secondary Contact Information
4. Social Media Icons
5. Call to Action
6. Disclaimer or Legal Requirements
7. Photo or Logo
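As an aside, these elements can be stitched together programmatically. The sketch below builds a bare-bones HTML signature; every name, link, and layout choice in it is a placeholder for illustration, not something taken from this article:

```python
def build_signature(name, title, company, phone, website, social_links, cta):
    """Assemble the checklist above into a minimal HTML signature."""
    socials = " | ".join(
        f'<a href="{url}">{label}</a>' for label, url in social_links.items()
    )
    return (
        f"<p><strong>{name}</strong><br>"   # 1. first and last name
        f"{title}, {company}<br>"           # 2. affiliation info
        f"Phone: {phone} | {website}<br>"   # 3. secondary contact info
        f"{socials}<br>"                    # 4. social media links
        f"<em>{cta}</em></p>"               # 5. call to action
    )

# All values below are placeholders
signature = build_signature(
    name="Jane Doe",
    title="Marketing Lead",
    company="Example Co.",
    phone="+1 555 0100",
    website="https://example.com",
    social_links={"LinkedIn": "https://www.linkedin.com/in/janedoe"},
    cta="Book a free 15-minute intro call",
)
print(signature)
```

A disclaimer (6) and a photo or logo (7) would be appended the same way; they are left out here only to keep the sketch short.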
Let’s individually look at each point.
1. First and Last name
I don’t think this point needs any explanation. You should always put your full name in the email signature.
2. Affiliation Info
Closely following your name should be your affiliation information. Your affiliations could mean your job title, your company or organization, and even your department.
Providing this information gives more context about the conversation and your role in it. And if yours is a recognizable organization, it helps you get the attention of your readers, so they take your message seriously.
3. Secondary Contact Information
Secondary contact information is essential, too, so that the recipient knows how else to contact you.
This might include a secondary Email, a phone number, or even a Fax if that’s still used. This might also be an opportunity for you to promote your website.
4. Social Media Icons
Your social media platforms are the primary way of representing yourself in the modern era. Much of your brand’s exposure comes through these profiles, and they deserve to be followed.
You can tell a lot about a person by what they post and how they portray themselves.
That’s why it’s a great idea to include links to your social media pages in your email signature. It not only reinforces your brand, but it also helps people find new ways to contact and follow you.
This can even drive traffic to your content if you post links on your profiles. So if you do include social icons in your signature, make sure you’re keeping your social profiles up to date.
Even if you have a presence on many social media sites, though, try to cap the number of icons to five or six. Focus on the accounts that matter most to growing your business or building your brand.
5. Call to Action
One of the most important things to include in your email signature is a call to action (CTA).
The best email signature CTAs are simple, up-to-date, non-pushy, and in line with your email style, making them appear more like a post-script and less like a sales pitch.
Links to videos can be especially noticeable because, in some email clients like Gmail, a video’s thumbnail will show up underneath your signature.
6. Industry Disclaimer or Legal Requirements
Some industries, such as legal, financial, and insurance, have specific email usage guidelines and etiquette to protect private information from being transmitted.
7. Photo or Logo
An image is a great choice to spice up your email signature. If you want a personal touch so that recipients you’ve never met can associate your name with your face, consider using a professional photo in your signature. | https://medium.com/build-back-business/how-to-create-better-email-signatures-7786410aef2a | ['Bryan Dijkhuizen'] | 2020-12-05 10:34:53.359000+00:00 | ['Email', 'Work', 'Entrepreneurship', 'Business', 'Marketing'] |
Why Your Startup Isn’t Cashflow Positive Until You Make A Living Wage | You work really hard for years building your company. And you’re burning through your savings as you build your company.
Picture: Depositphotos
Finally, finally you get your company to cash flow positive.
“Thank goodness,” you say to yourself. “We’re finally free of needing more money.
“Now, the business is self-sustaining. We can just invest the profits of the business back into the company.”
So it’s a rude shock when you realize that your company isn’t truly profitable even though your company is cash flow positive.
How can this be? Cash flow positive means you don’t need money any more, right?
Let me tell you about my friend Mark.
I met Mark a couple of years ago. He has a really cool company that he and his business partner started. They received some angel funding that helped them, but they are truly bootstrapping.
I love their business and their business model. Their product is unique. And, slowly but surely, Mark’s company has gained traction.
I tell Mark the same thing every time I see him. “When are you and your partner going to start taking a salary?”
Mark’s answer is the same each time, “When we’re profitable.”
I wish I could get Mark to change his mind, but I haven’t been successful, yet.
You’re only truly profitable when you and your company are cash flow positive.
Congratulations. Your company is profitable, but you’re still draining your bank account.
Guess what? You haven’t achieved true profitability yet.
You’ve achieved true profitability when you are no longer draining your personal bank account of money.
Somehow, there is this misconception that your investors expect you to starve while you build your company. Nothing could be further from the truth.
Experienced investors know that it’s important for you to make a living wage. In other words, your investors want you to have a big enough salary, so you don’t have to worry about paying the bills each month.
Now, I’m not saying that you should pay yourself a huge salary. That doesn’t make sense.
However, I am saying that you should, as soon possible, pay yourself enough money, so you can pay your bills.
Why you should pay yourself a living wage.
I recommend to every entrepreneur I work with that they pay themselves something as soon as possible. Just pay yourself something. Even $100 per month is okay if that’s all you feel comfortable with.
The benefits of paying yourself a small salary go beyond the small amount of money you will make. Let me explain why, except this time I will use a negative example.
There was another entrepreneur I worked with named “James”.
James didn’t pay himself a salary.
We were going through our bi-weekly review of his company. The revenue was growing, and the company should have been cash flow positive.
In fact, cash from operations was growing, so it didn’t make any sense why James’ net cash position was dropping.
Then James gave me the answer. “My wife wants me to repay the second mortgage on our home.”
“But there’s no loan on the books,” I said. “You can’t just take money out of the company. You have shareholders.”
“You don’t understand. We have to pay off that mortgage.”
“I understand what you want to do, but you can’t do it that way. You’re embezzling money from your company.”
You don’t want to put yourself in the position where you will be tempted to do the wrong thing.
I instantly knew I would have to stop working with James because James was embezzling money from his company.
I was bummed.
I had been working with James for a while. And James’ company had gotten to a nice amount of revenue and was cash flow positive.
James was going to blow it big time if he didn’t change his thought process. James wouldn’t change his mind, so I told James our business relationship was over.
I’m not saying that not paying yourself a salary will result in you doing what James did. Mark is proof of that. But why put yourself in a bad financial position?
Instead start the discipline of paying yourself a small salary.
You can start with as small an amount as you want, then…
Start paying yourself more as the health of your business improves, then…
Keep increasing the amount you are paying yourself until you are paying yourself a living wage.
Disciplined cash management leads to better results for your company.
I am a big believer in being what I call “Appropriately Frugal” when you are the CEO.
Simply put, being Appropriately Frugal means that you spend money on the important stuff for your business, and save money on everything you can save money on.
However, you and your employees are not the area to save money on.
You do want to attract the best employees. Then you will need to pay them appropriately.
Again, I’m not saying that you should pay your employees crazy salaries. But I am saying that you should pay your employees market rate. Pay your employees as much as you can if you can’t afford market rate:
Your employees will feel appreciated, and…
Your employees will not have to worry about their finances.
The idea that your employees will accept less than market rate because you are just starting only makes sense if they can afford it. Otherwise…
You will not retain your employees, or…
You will not hire the best employees, or…
You will not hire any employees at all.
So why aren’t you willing to pay yourself if you are willing to pay your employees?
Why are you any different than your employees?
You are the most important asset your company has. You will not be at your best if you are constantly worrying about how you are going to pay your bills each month.
Just remember that true profitability comes when your company AND you are cash flow positive.
And Mark, I know you’re reading this post. I hope you have decided to pay yourself something. If you haven’t decided to pay yourself something, then I hope this post helps to change your mind.
For more, read: www.brettjfox.com/are-you-being-appropriately-frugal-and-why-its-so-important/ | https://brett-j-fox.medium.com/why-your-startup-isnt-cashflow-positive-until-you-make-a-living-wage-50468d573227 | ['Brett Fox'] | 2019-10-31 00:19:57.187000+00:00 | ['Entrepreneurship', 'Business', 'Startup', 'Venture Capital', 'Technology'] |
Leo Orenstein, Senior Controls Engineer: “If you want to do something challenging, go for autonomous trucking.” | This month’s employee spotlight is on our Senior Controls Engineer, Leo Orenstein, who is designing control code for Starsky Robotics trucks. As the vehicle under control is a safety-critical multibody system weighing over 40 tons and measuring over 20 meters long that is supposed to operate autonomously on a public highway, there is no doubt it’s a hard problem to solve. Leo says he is enjoying every single part of it and is looking for more people who are not afraid of challenges to join the team.
Leo, let’s talk first about your role at Starsky. What are you and your team working on?
I’m on the Planning and Controls team at Starsky. What we are doing is taking very high-level context behaviors such as “keep driving”, “change lanes” or “pull over because something has happened incorrectly” and turning them into specific commands that a self-driving truck can actually follow. The output will be “turn the steering wheel 13.48 degrees right” or “press the throttle 18.9 degrees”.
In other words, we take these pretty abstract ideas and translate them into a language that our computer hardware system can understand and follow. It’s a two-step process. It starts with Planning to identify these abstract things and break them down into tasks that are more detailed but still not quite reconcilable. Then Controls helps turn them into real commands.
I’ve been doing both Planning and Controls, bouncing between them, depending on what’s more critical at the time. Right now, I’ve been working more on the path planning side, and I find it incredibly interesting. It’s a relatively new field as opposed to Controls which is pretty well-established as it has existed for about 70 years now. Path planning has more liberty and is more open for experimentation.
How big is your team now?
There are six people on the Planning and Controls team at the moment, and we are hoping to recruit another team member by the end of the year.
You have hands-on experience of working in many different industries, including Oil and Gas, Transportation, Mining, and Aviation. What was the most interesting job you had before Starsky?
I was working at General Electric’s research center and that was a really interesting job because it was very diverse, and that was where I gained experience in so many different fields. There was this thing that we used to say to each other back then: “If you don’t like what you’re working on, don’t worry. It’s going to change soon.”
It did change a lot. For example, in the same month, I went to an offshore oil rig, a sugar cane plant, and an iron ore mining facility, because I was working on all these different projects. It was intense, but I enjoyed that variety. The work itself was interesting enough, but I especially liked working on different subjects, going from one to the other and quickly switching between them. Each project was unique. Industries, companies and their problems were completely different, and every time I managed to find the right solutions for them, it felt great.
As a person who has worked in both large corporations and small start-ups, can you compare these two experiences?
I’m definitely a start-up person. I have little question about this now. I like the agility of a start-up. I know this is a cliché, but it’s true. I believe in the idea of trying things. If you have an idea, try it. If it doesn’t work out, find something else and then try again.
At large corporations, you have cycles. Let’s say, we start working on a project. Three months in, we know it won’t work. However, it has funding for the whole year and it’s in scope. So, even though we know it won’t work, we keep trying because that’s the plan. I find this dreadful.
Of course, start-ups have their own issues too. For instance, whatever was working when there were 10 people in a company is not going to work when there are 20. It’s not going to work again when there are 50, and if a company doesn’t realize that, the issue becomes quite pronounced.
Besides that, it’s not a secret that big companies have more well-established processes. Sometimes it’s enough to just click on a button and have magic happen. Not a lot of magic happens in a start-up. If something is being done, you probably either know who’s doing it or going to be doing it yourself. I like working on lots of different things as this is the only way to actually get to know your product and understand how the magic is made.
“If you have an idea, try it. If it doesn’t work out, find something else and then try again.”
How has Starsky helped you in your professional development, and what advice would you give to prospective Starsky candidates?
Before I joined Starsky, I thought I was a decent coder. Then I figured out I was wrong. From a technical perspective, Starsky is a really great place to learn. The company has a very open, collaborative environment and the best culture for learning. It basically says, “if you don’t know things, that’s okay, let’s find out together.” It’s part of Starsky’s DNA. So, if you are joining the autonomous driving field from another industry, go for Starsky. We understand that no one knows all the answers, and we are willing to work with new people to ramp up our collective knowledge.
That being said, trucks are the hardest control problem I ever faced. It’s a very complex system. Even for human drivers, it’s a difficult thing to operate. There are many external factors affecting it and a lot of things can go wrong, so you need to be very precise. For instance, we can all of a sudden get a gust of crosswind. It’s almost impossible to predict it and quite hard to measure it, and just as sudden as it appeared, it may go away. However the truck cannot allow this to push it to the side. So, you need to figure out a way to overcome all these changes and make sure that the truck still responds well.
What’s great is that this is not a research project. We often say to each other: “Is there a simpler way of getting it done?” That’s because we are building an actual product rather than just trying to find a theoretical solution. So, we are looking for people who care a lot about turning things into reality. If you do care, if you are ready to push the boundaries, and if you want to do something challenging, then go for autonomous trucking.
“We are building an actual product rather than just trying to find a theoretical solution.”
What do you find the most challenging in developing autonomous driving systems?
Safety is the most challenging part. In general, the more well-defined a problem is, the more feasible and easier it is to solve. With a safety-critical system like an autonomous truck operating on a public highway, it’s like trying to solve a problem where absolutely anything can go wrong. So, you have to take a very disciplined safety-engineering approach and make sure you are covering all your bases. You need to find out all the failure cases, document them and implement safety mechanisms for all these scenarios. Even if your algorithm works 99.99 percent of the time, it will still be failing once a day. So, you need to make sure that the whole system is really bulletproof.
Can you share a few interesting facts about yourself to let people know you better?
I like to cook a lot, and I actually went to cooking classes for about a year at the time I was doing my master’s. I was studying, working and doing cooking classes. That was pretty intense. The breaking point was when someone asked me to open a restaurant with them. The guy had a restaurant space and asked me to open a brewery in it. I did the math and decided that it would be too much risk for me, so I passed on that opportunity.
That’s pretty much when I left cooking, as I figured out that I love it as a hobby. My wife tells me that the only thing that can really get me mad is getting something wrong when I’m cooking. I’m a very chill guy, but if I get the recipe wrong, I get crazy mad for the whole day.
Also, on a personal note, I’m having a baby soon. And I really appreciate how supportive of that Starsky has been. Not only do we have parental leave, but people truly understand the importance of that. I know that some companies don’t really care — even though you’re having a baby, you have to deliver a product in the first place. It’s more like taking parental leave but being on Slack while doing it. At Starsky, you are not simply getting the leave, but you are actually encouraged to enjoy it and bond with your family.
***
If you want to join the Starsky team and help us get unmanned trucks on the road, please apply here. | https://medium.com/starsky-robotics-blog/leo-orenstein-if-you-want-to-do-something-challenging-go-for-autonomous-trucking-944897d999ba | ['Starsky Team'] | 2019-10-22 17:48:26.542000+00:00 | ['Autonomous Cars', 'Careers', 'Startup', 'Self Driving Cars', 'Engineering'] |
Finally, an intuitive explanation of why ReLU works | One may be inclined to point out that ReLUs cannot extrapolate; that is, a series of ReLUs fitted to resemble a sine wave from -4 < x < 4 will not be able to continue the sine wave for values of x outside of those bounds. It’s important to remember, however, that it’s not the goal of a neural network to extrapolate, the goal is to generalize. Consider, for instance, a model fitted to predict house price based on number of bathrooms and number of bedrooms. It doesn’t matter if the model struggles to carry the pattern to negative values of number of bathrooms or values of number of bedrooms exceeding five hundred, because it’s not the objective of the model. (You can read more about generalization vs extrapolation here.)
The strength of the ReLU function lies not in itself, but in an entire army of ReLUs. This is why using a few ReLUs in a neural network does not yield satisfactory results; instead, there must be an abundance of ReLU activations to allow the network to construct an entire map of points. In multi-dimensional space, rectified linear units combine to form complex polyhedra along the class boundaries.
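The sine example above can be made concrete with a small army of ReLUs. The sketch below is not a trained network; the weights are read off directly from the slopes of a piecewise-linear fit, but a dense layer of ReLU units could learn an equivalent set:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Knots where the piecewise-linear approximation is allowed to "bend"
knots = np.linspace(-4.0, 4.0, 41)        # 40 linear segments across [-4, 4]
targets = np.sin(knots)

# Slope of each segment, and the *change* in slope at each knot:
# one ReLU unit per knot contributes exactly that change.
slopes = np.diff(targets) / np.diff(knots)
weights = np.diff(slopes, prepend=0.0)    # first unit sets the initial slope

def approx_sin(x):
    out = targets[0]                      # value at the left edge, x = -4
    for w, k in zip(weights, knots[:-1]):
        out = out + w * relu(np.asarray(x, dtype=float) - k)
    return out

xs = np.linspace(-4.0, 4.0, 201)
max_err = np.max(np.abs(approx_sin(xs) - np.sin(xs)))
print(f"max abs error on [-4, 4]: {max_err:.4f}")   # around 0.005

# Outside the fitted range the sum of ReLUs continues as a straight line,
# which is exactly the "cannot extrapolate" behaviour described above.
print(f"value at x = 8: {float(approx_sin(8.0)):.2f} vs sin(8) = {np.sin(8.0):.2f}")
```

With only 40 units the fit is already within about half a percent of the true curve on the fitted interval, while failing completely outside it.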
Here lies the reason why ReLU works so well: when there are enough of them, they can approximate any function just as well as curved activation functions like sigmoid or tanh, much like stacking hundreds of Legos, without the downsides. There are several issues with smooth-curve functions that do not occur with ReLU. One is that computing the derivative, the rate of change that drives gradient descent, is much cheaper for ReLU than for any smooth-curve function.
Another is that sigmoid and other curves suffer from the vanishing gradient problem: because the derivative of the sigmoid function gradually flattens out for larger absolute values of x, and because the distributions of inputs may shift heavily away from 0 early in training, the derivative can become so small that no useful information is backpropagated to update the weights. This is often a major problem in neural network training.
Graphed in Desmos.
On the other hand, the derivative of the ReLU function is simple: it’s the slope of whichever line the input falls on. It will reliably return a useful gradient, and while the fact that the output is a constant 0 for x < 0 may sometimes lead to a ‘dead neuron’ problem, ReLU has still shown to be, in general, more powerful than not only curved functions (sigmoid, tanh) but also ReLU variants attempting to solve the dead neuron problem, like Leaky ReLU.
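The contrast is easy to see numerically. In this small sketch (the sample points are arbitrary), the sigmoid gradient collapses toward zero away from the origin while the ReLU gradient stays at a full 1 for any positive input:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)               # peaks at 0.25, decays to 0 in both tails

def d_relu(x):
    # Slope of whichever line the input is on (0 is used at x = 0 by convention)
    return (np.asarray(x) > 0).astype(float)

for x in [0.0, 2.0, 5.0, 10.0]:
    print(f"x = {x:5.1f}   sigmoid' = {float(d_sigmoid(x)):.6f}   relu' = {float(d_relu(x)):.0f}")

# Stacked layers multiply these factors, so ten saturating sigmoid layers
# can shrink a gradient by up to 0.25 ** 10, on the order of 1e-6.
print(f"0.25 ** 10 = {0.25 ** 10:.1e}")
```

Since backpropagation multiplies one such factor per layer, those sub-0.25 sigmoid derivatives compound into vanishingly small updates in deep networks, while chains of active ReLUs pass the gradient through undiminished.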
ReLU is designed to work in abundance; with heavy volume it approximates well, and with good approximation it performs just as well as any other activation function, without the downsides. | https://medium.com/analytics-vidhya/if-rectified-linear-units-are-linear-how-do-they-add-nonlinearity-40247d3e4792 | ['Andre Ye'] | 2020-09-02 17:36:24.981000+00:00 | ['Machine Learning', 'AI', 'Artificial Intelligence', 'Data Science', 'Towards Data Science'] |
Google’s New Accessibility Projects | Google has recently unveiled 3 separate efforts to bring technology to those with disabilities to help make their daily lives easier and more accessible. The three projects are Project Euphonia, which aims to help those with speech impairments; Live Relay, which assists anyone who is hard of hearing; and Project Diva, which aims to give autonomy and independence to people with the help of Google Assist.
More than 15% of people in the United States live with a disability, and that number is only expected to grow in the years ahead as we grow older and start living longer. There has never been a better time to try to harness the power of our technology to help make the lives of the disabled more comfortable and fulfilling.
Project Euphonia
Project Euphonia aims to help those with speech difficulties caused by cerebral palsy, autism, and other developmental disorders, as well as neurologic conditions like ALS (amyotrophic lateral sclerosis), stroke, MS (multiple sclerosis), Parkinson’s Disease, or traumatic brain injuries. Google’s aim with Project Euphonia is to use the power of AI to help computers understand speech that is impaired with improved accuracy, and then, in turn, use those computers to make sure everyone using the service can be understood.
Google has partnered with the ALS Residence Initiative and the ALS Therapy Development Institute to record voices of men and women with ALS, and have worked on optimizing algorithms that can help to transcribe and recognize their words more reliably.
Live Relay
Live Relay was set up with the goal of bringing voice calls to those who are deaf or hard of hearing. By using a phone’s own speech recognition and text-to-speech software, users will be able to let the phone listen and speak on their behalf, making it possible for someone who is deaf or hard of hearing to hold a phone conversation.
Google also plans to integrate real-time translation into their Live Relay software, allowing anyone in the world to speak to one another regardless of any language barrier.
Project Diva
Project Diva helps those who are nonverbal or suffer from limited mobility to give Google Assistant commands without needing to use their voice, but instead by using an external switch device.
The device is a small box into which an assistive button is plugged. The signal coming from the button is then converted by the box into a command sent to the Google Assistant.
For now, Project Diva is limited to single-purpose buttons, but they are currently devising a system that makes use of RFID tags which they can then associate with certain specific commands.
This article was originally published on RussEwell.co | https://russewell.medium.com/googles-new-accessibility-projects-bb5968546c1b | ['Russ Ewell'] | 2019-11-05 14:40:18.294000+00:00 | ['Russ Ewell', 'Disability', 'Artificial Intelligence', 'Google', 'Technology'] |
Using AI to detect Cat and Dog pictures, with Tensorflow & Keras. (3) | Pre-Trained convnet:
The number one reason our model doesn’t reach greater accuracy is the lack of data we have to train our system with. If Deep Learning is the new electricity, then data is its fuel.

Thus, to help us in our endeavor, we will break our system into two parts: the convolutional block and the classifier block. The convolutional block will contain all our neural network components before the “Flatten” portion of our code.
We will be using a pre-trained convolutional base called the InceptionV3 architecture. The model was trained on 1.4 million images and thus has no shortage of the proverbial fuel.
Process of switching classifiers.
Analyzing the model:
Create a new block of code anywhere in our previous notebook, within the block write:
from tensorflow.keras.applications.inception_v3 import InceptionV3  # import InceptionV3

conv_base = InceptionV3(weights='imagenet', include_top=False, input_shape=(64, 64, 3))

for layer in conv_base.layers:
    layer.trainable = False
We first import the InceptionV3 convolutional base and assign it to conv_base, configured for our input shape of (64, 64, 3). We also freeze the conv_base’s layers, as we want to keep the information already stored within it and train only the classifier portion.
Next, under the above code, type:
print(conv_base.summary())
to get a view of the convolutional base. You should get the following output:
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 64, 64, 3)] 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 64, 64, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 64, 64, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 32, 32, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 32, 32, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 32, 32, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 16, 16, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 16, 16, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 16, 16, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 16, 16, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 8, 8, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 8, 8, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 8, 8, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 8, 8, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 4, 4, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 2, 2, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 0
Non-trainable params: 14,714,688
_________________________________________________________________
None
As you can see, the architecture is made up of Conv2D and MaxPooling2D blocks, which is no different from our own code in part 2. The main difference is that the convolutional base was trained on far more data and thus required more layers.
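As a quick sanity check, the parameter counts in the summary above can be reproduced by hand. A Conv2D layer with a k×k kernel has (k·k·in_channels + 1)·out_channels parameters (the “+1” is the per-filter bias), and pooling layers have none. Here is a small sketch; the layer list below is read directly from the summary, which uses 3×3 kernels throughout:

```python
# Parameter count of a Conv2D layer: (kernel_h * kernel_w * in_channels + 1) * out_channels.
# The "+ 1" is the per-filter bias term. MaxPooling layers have no parameters.
def conv2d_params(kernel, in_ch, out_ch):
    return (kernel * kernel * in_ch + 1) * out_ch

# The (in_channels, out_channels) pairs of the 3x3 conv layers shown in the summary above.
conv_layers = [
    (3, 64), (64, 64),                    # block 1
    (64, 128), (128, 128),                # block 2
    (128, 256), (256, 256), (256, 256),   # block 3
    (256, 512), (512, 512), (512, 512),   # block 4
    (512, 512), (512, 512), (512, 512),   # block 5
]

total = sum(conv2d_params(3, i, o) for i, o in conv_layers)
print(total)  # 14714688 -- matches "Total params: 14,714,688"
```

This is a handy way to convince yourself where a model’s capacity lives: almost all of the parameters sit in the later, wide convolutional layers.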
Developing our model:
We will now change our previous model architecture to:
network = models.Sequential()
network.add(conv_base)
network.add(layers.Flatten())
network.add(layers.Dense(256, kernel_regularizer=regularizers.l2(0.001)))
network.add(layers.LeakyReLU())
network.add(layers.Dense(1, activation='sigmoid'))
The rest of the model block will stay the same. Notice, we added our conv_base block just like any other layer.
Now, before we run the block, I must warn you that it will take a substantial amount of time due to the large size of the conv_base.
Now, if you’re willing to wait, go ahead and run the model!
Graphical analysis of our model:
Finally, we can run our image block from part two to see the accuracy we achieved with this method:
A sad fail
Unfortunately, it seems using pre-trained models doesn’t help in our case. This is most likely due to the lack of data used to optimize our classifier. This can be fixed by using more images, as well as larger ones.
Interestingly, removing the following piece of code:
for layer in conv_base.layers:
    layer.trainable = False
grants an increase of accuracy to 96%.
96% Accuracy rate
This implies that the data used to train InceptionV3 does not quite “coincide” with our data. Furthermore, running it for 500 epochs (which would take about 3 hours) increases accuracy to 98%.
Next Time:
Next time, we will use our model to create a visual-based password cracker!
GitHub Code:
https://github.com/MoKillem/CatDogVanillaNeuralNetwork/blob/main/CNN_CATS_96%25.ipynb
Design Patterns: Factory

It’s a factory
The factory is a creational pattern that can be used to retrieve instances of objects, without having to new them up directly in calling code.
At my job, I find I’m using creational patterns constantly; and most of the time it’s a factory.
In class-based programming, the factory method pattern is a creational pattern that uses factory methods to deal with the problem of creating objects without having to specify the exact class of the object that will be created. This is done by creating objects by calling a factory method — either specified in an interface and implemented by child classes, or implemented in a base class and optionally overridden by derived classes — rather than by calling a constructor.
From Wikipedia
This pattern is closely related to the strategy pattern — at least as far as I’m concerned. In the previous post on the strategy pattern we learned that you can use multiple implementations of a single interface as differing “strategies”. In that post, we decided which strategy to use based on some pretend run time situation:
ILogger logger = null;
if (args[0] == "file")
logger = new FileLogger(); // create a file logger because the consumer specified it in some way.
else
logger = new ConsoleLogger(); // create a console logger as the fall back strategy.
The above could be an example of the application choosing a strategy based on some run time input (the value in args[0] ).
Why is the snippet a problem? It probably isn’t, at least not the first time it happens, or while your codebase is very simple. As your codebase evolves, however, and you get more places where you would want to instantiate an ILogger, and more ILoggers get added, you need to update more and more code. What do I mean by that? Well, imagine you added this “if/else” logger logic to 50 additional files. That if/else logic now exists in 50 files!
Every time a “branch” occurs in code, the code becomes harder to understand. This may be only one simple 4-line set of instructions with an easy-to-follow branch, but what if this same sort of situation occurred throughout your codebase, applying to more than just an ILogger ?
What if, even worse, you add a MsSqlLogger , and a MongoLogger to your possibilities of loggers, now you have an if/else branch to update in a hypothetical 50 files; that's no good!
How can we avoid some of this hassle? The factory method to the rescue!
Implementation
We’ll be using the same ILogger strategy and implementation from the previous post as a base line. The few additions are:
public enum LoggerType
{
Console,
File
}
public interface ILoggerFactory
{
ILogger GetLogger(LoggerType loggerType);
}
That’s it for the “abstraction” part of our factory. Now the implementation:
public class LoggerFactory : ILoggerFactory
{
public ILogger GetLogger(LoggerType loggerType)
{
switch (loggerType)
{
case LoggerType.Console:
return new ConsoleLogger();
case LoggerType.File:
return new FileLogger();
default:
throw new ArgumentException($"{nameof(loggerType)} was invalid.");
}
}
}
and a (bad) example of how to use it (bad because, for this example, we aren’t using dependency injection like we should in the real world):
static void Main(string[] args)
{
ILoggerFactory loggerFactory = new LoggerFactory();
ILogger logger = null;
logger = loggerFactory.GetLogger(LoggerType.Console);
logger.Log($"Doot doot, this should be a {nameof(ConsoleLogger)}. {logger.GetType()}");
logger = loggerFactory.GetLogger(LoggerType.File);
logger.Log($"Doot doot, this should be a {nameof(FileLogger)}. {logger.GetType()}");
}
Reasons to use this pattern
How does the previous section actually help us? If you recall, in our hypothetical scenario our original “if/else” branching logic occurred in 50 files. We needed to then add two additional strategies, meaning we needed to update 50 files. How did the factory help us? Well now, that branching logic is completely contained within the factory implementation itself. We simply add our MsSql and Mongo values to our enum, and add two new case statements to our factory implementation - a total of 2 files updated, rather than 50.
This not only saves us a ton of time, it helps ensure that we don’t miss making updates in any of our 50 files. One additional thought: the factory itself is very testable. It’s easy to test all the “logic” involved in choosing the correct strategy, because all of that logic is completely contained within the factory itself, rather than spread across 50 files!
OMG. Buffer Lost HALF Its Social Media Traffic This Year. What Does It Mean?

Social media marketing software team admits they’re failing at social media marketing.
Buffer, a company I’ve considered one of the leaders in social media with a massive presence (think top 1%, unicorn status) made a shocking announcement this week.
In a post on their blog, Buffer author Kevan Lee plainly states, “We as a Buffer marketing team — working on a product that helps people succeed on social media — have yet to figure out how to get things working on Facebook (especially), Twitter, Pinterest, and more.”
Somehow, some way, Buffer has lost nearly half its social referral traffic over the last year.
The bottom seems to be falling out across Facebook, Twitter, LinkedIn and Google+:
Now, the figures are shocking, but Buffer’s openness about them is par for the course. They’ve long been trailblazers in corporate transparency, even publishing all of their salaries to the web.
The Buffer team is running some experiments to try to determine the cause of this huge loss in social referral traffic, but I have a few ideas of my own on it:
1. It Could Be Instrumentation Error
Facebook Mobile (which accounts for essentially 80% of Facebook’s traffic) apparently doesn’t add UTM parameters. This means that social traffic could potentially be mischaracterized as direct.
Google Analytics certainly doesn’t have a huge incentive to make Facebook, Twitter or other social networks look great, so they have no incentive to straighten this out.
2. The 72% Drop in Google+ Traffic Seems Reasonable Without Having Done Anything “Wrong”
The biggest drop Buffer has seen (by far) was in their Google+ traffic, which is down 72% over the last year. We all know Google+ has had one foot in the grave so long I wouldn’t even include it in a calculation of average traffic losses.
I checked WordStream’s own analytics and discovered that our Google+ referral numbers are actually similar to Buffer’s, despite the fact I’ve personally maintained an active presence on Google+ both personally and for the company.
I’d be willing to bet that other companies are seeing similar results on Google+. It’s just not as active as it once was.
3. We’re Drowning in Crap Content
Organic social is so ridiculously competitive now, with an ever-increasing volume of content going after the same finite number of people’s attention. Even when you’re exceptional, the pool of other exceptional content creators is growing.
As Rand Fishkin said, “Buffer’s content in 2013/14 was revolutionary and unique. It’s stayed good, but competition has figured out some of what made them special.”
Perhaps readers are tiring of “super transparency” as a content marketing style.
It’s actually a bit humbling that even companies like Buffer, whom so many of us look to for strategy on creating and promoting remarkable content, are also struggling with this.
4. Facebook/Twitter Ads Are Super Important
WordStream’s own Facebook traffic grows every month at a really good clip, but yes, we’re spending money on Facebook Ads.
Sure, it’s a bummer that all of social isn’t free, but what the heck — sometimes it’s nice to be able to fix a problem by throwing money at it (it’s a pretty easy solution, actually).
Organic Facebook reach is just really pathetic now. If your only plan for getting people from Facebook to your website is to post things on your Page, you’re going to fail. It doesn’t matter how awesome your content is… Facebook just doesn’t want to show it organically anymore. The Newsfeed is too busy.
The good news is that if you’re posting quality content and focusing on engagement, your Facebook ads can be super cheap.
5. Organic Social is a Bit of a Hamster Wheel
With declining organic reach, there’s less of a “snowball effect” like what you typically see in SEO, where a steady amount of effort produces increasing returns every month.
You have to work really, really hard on a continuous basis at organic social to move the needle even a little. You pretty much have to double your efforts to double results, which is pretty hard to do when you’re already as big as Buffer.
In short, I don’t think Buffer’s plummeting organic social traffic is the result of any lack of creativity or effort on their part. I reject Kevan Lee’s conclusions to that effect, as they’re obviously brilliant people and didn’t get where they are by sucking at social.
Personally, I think it has more to do with external factors and their need to adapt to them. In fact, I first thought, “What?! They don’t have a social media manager?” But then almost immediately afterward I said to myself, “Don’t hire one now… put that money into your social ads budget instead.”
Best of luck to Buffer as they try to figure out their internal numbers, and kudos to them for sharing them in such an honest and forthright way. The whole industry will learn from their experience.
What do you think of Buffer’s traffic loss and their potential reasons for it? Share your thoughts in the comments.
Image credit: Business Insider
About The Author
Larry Kim is the CEO of Mobile Monkey and founder of WordStream. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.
7 Food-Related Hacks That You Must Implement to Lose Weight Easily
These hacks are based on psychology, biology and hormonal science.
Photo by Brooke Lark on Unsplash
Food is one of the most powerful natural stimulants out there — merely seeing, smelling, or even thinking about it is enough to activate a craving and a feeling of hunger in the human body.
It has been shown that any time a person sees food, particularly unhealthy, calorie-rich junk food, the reward centres in his or her brain light up, making them feel hungrier.
This hunger is always accompanied by an increase in your heart rate, and sometimes it even makes you drool a little. This usually happens because your body is priming you to eat, whether you are planning to do so or not.
In a 2006 study, jars of Hershey’s kisses were placed on the desks of some secretaries in an organisation. Half of the secretaries received them in a transparent jar while the other half in an opaque one. The ones who could see the chocolates staring them in the face all day ate 71% more than those who couldn’t see them.
The phenomenon associated with this study is called food cue reactivity: if you see food, your body will crave it.
This phenomenon doesn’t apply only to food; it also applies to anything that we have learned to associate with food in the past. Therefore, certain smells, sights, images and locations can trigger food cravings as well, depending on our learned associations with them.
Think about how this already happens in your own life on an almost daily basis: for example, you decided to eat clean and healthy today, but your coworker or friend brought burgers, or you were casually watching TV and a Pizza Hut commercial came on.
You would immediately crave the food, your willpower would go down, and your mouth would be like: damn, I haven’t had pizza in ages, and the next thing you know, you’ve already ordered it.
Unless we are aware of how food cues influence our decisions, we can end up consuming calories on a daily basis that we had no intention of eating, without even paying much attention to them.
A Light at the End of the Covid Tunnel

Hang in there, Covid-19 vaccines are coming soon. (Image credit: Tama66 via pixabay)
We’ve learned a lot since I last wrote about Covid-19 in May.
Wear a mask. They work. There is no excuse, and there is no debate. Dozens of studies on the subject all support the same conclusion: higher rates of mask wearing lead to lower rates of new infections and fewer deaths. It’s quite simple: the virus is primarily spread by respiratory droplets in the air. Wearing a mask protects other people by blocking your aerosol particles, and protects you by filtering out other people’s. While any mask is far better than nothing, the data shows multi-layer cloth masks are more effective filters than thin single-layer ones (like bandanas or neck gaiters).
Lots of new research suggests that vitamin D may be helpful in protecting you against Covid-19. Comparisons between mild and severe cases show that vitamin D deficiency is a common underlying factor in critically ill hospitalized patients. Correlation doesn’t necessarily imply causation, but vitamin D is also known to be important for our immune system. Your body can make its own by getting some sun, but not so much during winter quarantines. Given the likely upside in reducing Covid severity, and lack of any downsides, I think everyone should start taking vitamin D supplements as a precaution. Don’t go crazy, 1000–2000 IUs per day is fine, taken with meals to help absorption.
We now have at least two extremely promising vaccine candidates, and more still coming down the pipeline! The Pfizer/BioNTech study enrolled 43,000 participants, half of which got a real vaccine and half of which received a placebo. After a first-stage analysis, there were 170 confirmed cases of Covid-19. Of all the people who got sick, 95% of them (162/170) were in the placebo group. It’s a similar story for the Moderna trial: 30,000 participants, 95 confirmed cases, and 90 out of the 95 people who contracted Covid were in the unvaccinated group. These are amazingly good results. The data shows that getting these vaccines will significantly reduce your chances of contracting COVID-19, and if you do get it, will significantly reduce your odds of dying from it.
I know that some people will be understandably cautious about getting such a new vaccine. The politicization of everything in our lives has unfortunately undermined the public’s confidence in this vaccine development process. But let me make this as clear as possible — you can trust the scientific method. These clinical studies have strictly designed trial protocols, approved by the FDA/NIH, with predefined rules about how the statistical analysis will be conducted. The results are in, and the math doesn’t lie.
Within the next few weeks, you’ll be hearing about these vaccines getting “FDA emergency use authorization.” In short, this means that after rigorous safety and efficacy testing in the clinical trials, the data demonstrates such a compelling potential benefit to public health that it outweighs any potential unknown risks. I know that some people will be understandably concerned about the possibility of these vaccines having dangerous side effects. We have good news on that front as well: from the tens of thousands of volunteers who have received vaccines since July, there have been no adverse reactions beyond the standard fatigue and aches you might get from a flu shot. (By the way, get your flu shot too.)
Some will argue that we have no way of knowing whether rare, unintended, side effects might emerge over time. While it’s true that these sorts of long-term studies are still ongoing, we have to weigh this uncertainty against the data we already do have, which is how dangerous and deadly Covid-19 is. Even if you are fortunate to have a mild case, an alarming number of “Covid long-haulers” report suffering long-term health complications even after recovering from the viral infection. These lingering symptoms include chronic fatigue, lung damage, heart inflammation, and mental fog. The bottom line is that vaccination will give you the opportunity to prepare your body to defend itself against these worst outcomes. If I could get either of these vaccines tomorrow, I would do so without any hesitation.
As a scientist, I can assure you that the light at the end of the tunnel is now visible! Millions of doses of these vaccines have already been manufactured, in anticipation of achieving the promising clinical data we now have. The speed at which we have achieved these vaccines should be viewed not with suspicion or with fear, but with pride at witnessing one of the greatest scientific achievements of our lives. The most at-risk groups, like frontline health care workers, will start getting vaccinated by the end of this year. For the rest of us, vaccines should be available to the general public through the Spring of 2021. I would bet on a return to near-normal by next summer!
In the weeks following Thanksgiving in the US, more than 2,000 Americans are dying every single day. Case numbers, hospitalizations, and deaths are on the rise almost everywhere. It’s never been more important for everyone to remain vigilant. I know we’re all tired of this pandemic. It has caused incalculable societal disruption, economic devastation, isolation, suffering, and loss. Many of us have endured the sadness of missing our friends and families, and putting our lives on hold for almost an entire year. But if you’ve been careful enough to avoid getting coronavirus so far — don’t let all of these sacrifices be for nothing. Hang on just a little longer. Wear a mask, take your vitamins, avoid indoor gatherings, and get a vaccine as soon as you can.
What is Deep Learning and How Deep Learning Works?

As technology develops, we’re entering a new era full of human-like robots. It wasn’t possible to even imagine this coming 100 years ago. Since technological growth is exponential, advanced robots will arrive in a much shorter period.
What is Artificial Intelligence?
It’s better to understand the concept of AI (Artificial Intelligence) before jumping into deep learning. We can define AI as a system that analyzes its environment and takes actions to maximize its chance of success without being explicitly programmed. Now that we have a brief idea of AI, let’s roughly inspect some of its subfields.
Natural Language Processing
Machine Learning
Neural Networks
Robotics
But what about deep learning? Where is it? Well, deep learning is not directly a subfield of AI; it is a subfield of some of these subfields, since it draws on multiple fields in its application. For the sake of visualizing what we’ve talked about, look at the following Venn diagram:
So, What Is Deep Learning?
Deep learning is a method of artificial intelligence that mimics the functioning of the human brain while interpreting data and generating patterns for use in making decisions.
Deep Learning is a subset of machine learning, as shown in the Venn Diagram above. Deep Learning has networks capable of learning from data that is mostly unsupervised. Deep Learning also is known as a deep neural network or deep neural learning.
Deep Learning also uses a subfield of AI: neural networks. A perceptron is a single-layer neural network; a multi-layer perceptron is called a neural network.
What the heck is The Perceptron?
Neuron vs. Perceptron — Image from IF Inteligencia Futura
A perceptron is nothing but an artificial neuron. A neuron takes input signals, processes them, and then outputs a signal; a perceptron takes inputs, processes them, and then outputs processed data. The picture above may seem a bit complicated and rough, so let’s inspect it in more detail.
How the Perceptron works
The diagram above visualizes how they work quite clearly. In fact, the formula on the right explains everything. A perceptron takes its inputs and multiplies each by its corresponding weight (an application of the dot product), then sums all of these products. After that, an activation function is applied to the resulting summation. In our world, most things are not linear, so the activation function we use is usually non-linear.
In fact, what I’ve explained above is not entirely complete: usually, we also add a bias term before applying the activation function. The addition of a bias term allows us to shift our activation function. Notice that, while programming this algorithm, we either use an activation function coded by another programmer or code our own activation function from scratch.
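To make this concrete, here is a minimal pure-Python sketch of a single perceptron with a bias term and a sigmoid activation. The particular weights, bias, and activation below are illustrative choices, not values from the article:

```python
import math

def sigmoid(z):
    # A common non-linear activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs (the dot product), shifted by the bias...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...then passed through the activation function.
    return sigmoid(z)

output = perceptron([1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
# z = 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1, so output = sigmoid(0.1) ≈ 0.525
```

Changing the bias shifts where the sigmoid “turns on”, which is exactly the shifting effect described above.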
Why Is Non-Linearity Important?
As we can clearly see from the picture above, if we use a linear function to separate the triangles from the circles, it can’t be said that we’re doing a good job. But if we use a non-linear function for that task, we can easily separate them without sacrificing anything.
Let’s say that we have a classification algorithm that determines whether a picture we provide is of a dog or a cat. Assume the triangles in the graph above represent the cats and the circles represent the dogs. If we use a linear activation function, our algorithm will misclassify 6 pictures in total, since 3 triangles and 3 circles are not located where they are supposed to be. But if we use a non-linear activation function, that failure disappears! We’re able to separate them properly.
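There is also a deeper reason the activation must be non-linear: stacking layers with linear activations adds no expressive power, because the composition of linear functions is itself linear. A tiny numeric sketch (with made-up weights) demonstrates the collapse:

```python
def linear(x, w, b):
    # A "linear" (affine) layer in one dimension: scale, then shift.
    return w * x + b

# Two stacked linear "layers" with arbitrary weights...
w1, b1 = 2.0, 1.0
w2, b2 = -3.0, 0.5

# ...collapse into a single equivalent linear layer:
# w2*(w1*x + b1) + b2 == (w2*w1)*x + (w2*b1 + b2)
w_eq = w2 * w1        # -6.0
b_eq = w2 * b1 + b2   # -2.5

for x in [-2.0, 0.0, 1.5, 10.0]:
    stacked = linear(linear(x, w1, b1), w2, b2)
    single = linear(x, w_eq, b_eq)
    assert stacked == single  # no extra expressive power gained
```

So no matter how many linear layers we stack, we can only ever draw a straight line, which is exactly why the 6 misplaced points above cannot be classified correctly without a non-linearity.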
How does the Learning Process start?
The machine needs to know what’s right and what’s wrong to learn correctly. In other words, we need to feed it with some data. Let’s get back to our dog and cat classification example.
So, we feed our deep learning algorithm data that includes cat and dog pictures. While feeding it this data, we also provide the correct answers to the machine. The machine changes its weight matrices during this learning process, so we don’t have constant weight matrices all the time.
Then, we provide random pictures for it to classify as cat or dog. How accurate the results are mainly depends on the amount of data we provide to the machine. The activation function and bias we use also affect the accuracy of our deep learning algorithm.
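The same process can be shown in miniature. The toy sketch below (an illustrative example, not the cat/dog classifier itself) trains a single perceptron on the logical AND function using the classic perceptron learning rule; the weights start at zero and end up changed, at which point the predictions match the correct answers:

```python
# Toy training loop: a perceptron learns the AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if z > 0 else 0  # step activation

for epoch in range(20):  # feed the labelled data repeatedly
    for inputs, target in data:
        error = target - predict(inputs)  # the "correct answer" drives the update
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(weights, bias)  # the weights are no longer their initial values
print([predict(x) for x, _ in data])  # [0, 0, 0, 1] -- the AND function, learned
```

This is the essence of the learning process described above: each wrong answer nudges the weight values, and with enough labelled examples the nudges settle on weights that classify the data correctly.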
To understand more about programming, AI, and technology, follow the upcoming posts.
If you have any comments or questions, write them in the comments!
How Green Technologies Of The Future Are Being Built In Singapore

Also known as The Lion City, The Garden City and the Little Red Dot, Singapore is famous for its significant achievements in innovation, favorable tax system, recognized universities and great life quality. However, Singapore also ranks at the very top of the world’s most densely populated independent territories, right after Macau and Monaco. And with an insignificant amount of natural resources of its own, the country needs a strong, forward-thinking take on energy and innovation in order to power its fast-growing community.
A few weeks ago, I was fortunate to visit Singapore and have a chat with Nilesh Y. Jadhav, the Senior Scientist and Program Director at the Energy Research Institute at NTU Singapore. We discussed Singapore’s energy situation and how his work at NTU involves running EcoCampus, the soon-to-be world’s greenest campus where they develop and test green technologies of the future.
First off, on February 20th the Government of Singapore introduced a carbon tax on large direct emitters, which received approval from stakeholders and observers. Many believe the tax was well-timed and could boost Singapore’s economy.
#1–What do you think of the new tax and how would you describe rest of the country’s energy policy?
Nilesh Y. Jadhav
The new tax was a great move to level the playing field for renewables. However, looking at the wider picture, I would say that Singapore has been quite prudent in its take on energy.
Unlike many other countries including the U.S., Singapore doesn’t believe in subsidising energy. Regardless of scarcity of land and dense population, the country aims to be energy resilient and sustainable.
During the past two decades, Singapore moved from oil fired to gas fired electricity generation which drastically improved the country’s carbon footprint. 95% of Singapore’s electricity today comes from natural gas. Now, for the resilience of gas infrastructure and to build solid electricity security, the country is implementing LNG storage capabilities.
#2 — What about renewable energy?
In Singapore, we have quite limited renewable energy options. Our wind speed and tidal currents aren’t sufficient. And geothermal energy is not viable as well. Hence, so far the only option for renewable energy has been solar. Thanks to our location close to equator, we get quite good solar irradiation in Singapore (an average of 1,580 kWh/m2/year) which is about 30% more than Germany. However, the tropical climate does make our weather a bit unpredictable with clouds and the rain. This leads to variability in solar generation.
I have read that you are testing a large floating solar farm in a water reservoir. Has it been successful?
It was indeed successfully installed in one of the reservoirs and they are collecting project data to see if this approach is economically and technologically feasible. We did an analysis on the solar potential and discovered that even if we were to cover all the rooftops and water reservoirs in Singapore with solar panels, we would only meet 10% of all the country’s energy needs with solar.
Another way that the country is supporting renewable energy is through financing models and economic development by anchoring the clean energy ecosystem in Singapore.
Notably, a major solar module manufacturer, Renewable Energy Corporation (REC), chose Singapore to set up its largest fully integrated solar module manufacturing facility, which is expected to produce 190,000 solar modules per month. The Singapore Government has committed to over 350 MW of solar installations by 2020.
#3 — As an avid solar energy enthusiast, do you think that in the future solar energy could be the world’s leading source of renewable energy?
I do believe that. Many things are happening in the right direction: each year, the price has been falling rapidly (in 2016 alone, solar PV module prices dropped by more than 20%). Then again, it is linked to geographic and climatic conditions; in Denmark, for example, solar will probably be the second- or third-best option because there is remarkably more wind. For countries such as China and India, I think solar is the most efficient option. However, at the current price of solar and energy storage, it won’t be able to supply the base load, but rather the second tier.
The Hive Building at NTU, Singapore
#4 — When did you join NTU and what initiated the birth of EcoCampus?
A fun fact: I used to work for Shell, which some people call the “dark side” of energy. Then, about five years ago, I decided to shift my career to clean energy and sustainability, and that led me to a position at NTU in the Energy Research Institute (ERI@N).
I started off assisting in developing technology roadmaps for the Singapore Government for solar, smart grids and energy-efficient buildings. Two years later we started the EcoCampus initiative, a living laboratory for innovative clean technology solutions.
The most important characteristic of the EcoCampus initiative is that each technology needs to be demonstrated on the campus, beyond the R&D work. This really adds the biggest value for the companies, as oftentimes it is difficult to find a first adopter for cutting-edge technologies. As a small country we are resource-scarce, but by being open to collaborations and to the market, we can give companies access to high-quality research, testing, and the Asian market.
Among many others, Siemens needed a place to testbed their technology that they wanted to bring from the US to Singapore. So they landed at
#5 — Tell us a little bit more about the projects that have been built at EcoCampus.
Today, six of the projects are successfully completed. One of them, developed together with Engie, is an app for energy conservation through user behaviour. We tested it with the students and the whole campus staff. They interacted with the facility managers in order to save energy via the app. We involved professors from the Sociology and Economics Departments in order to add some great gamification elements and make people want to use it.
Thanks to this solution, we would be able to save about 5% of energy on the campus through behaviour change. Right now, we are working on the second version called PowerZee, which will be used in other universities all over Singapore and the world. Find out more at “App Allows Students To Reduce Uni’s Power Bills”.
#6 — Could you share some numbers on how much money the green approach will help you save at NTU?
Yes, certainly.
Our goal is to reduce energy, water and waste intensity by 35% by 2020. This should leave us with around 5–10 million Singapore dollars of savings per year.
This goal is also in line with Singapore’s commitment under the Paris Agreement, which states a 36% reduction in carbon intensity. Due to our unusual energy mix, the savings in energy are directly linked to carbon savings.
#7 — One of your key research fields is Energy Information and Analytics. Could you talk about some of these projects?
Most certainly. With more than 200 buildings on the campus, we are able to collect a lot of data. We use smart meters and a BMS (building management system) to collect all of it, and we track everything from energy efficiency to consumption patterns. For example, during the holidays the energy consumption on the campus decreases significantly. Thanks to this data, we can negotiate our energy bill. Along with analytics, we also do data simulation and modelling of the energy use of different buildings.
#8 — What kind of data do you use for the weather predictions?
In fact, we can’t really rely on external data, so we are using the two weather stations that we have on campus, plus another one at a nearby campus with great solar surveillance cameras.
#9 — And what about the data visualisation side? Do you do it in-house?
As researchers we are fans of using open source software, so we do most of the data modelling and visualisations ourselves, but we also work with companies such as IES to develop the virtual campus data platform with advanced simulation capabilities. For example, there is an on-going project for making our professors accountable for the energy consumption of their departments. The idea is that the professors have a fixed energy budget and if they manage to save energy they can keep the rest of the budget for research and if they overspend, they need to explain their actions.
Gardens By The Bay, Singapore
#10 — That is an interesting approach. What other developments are taking place in Singapore?
I would say that the most exciting developments are happening in energy efficiency and smart buildings. One of the sustainability goals for Singapore is that by 2020, 80% of Singapore’s buildings need to be green-certified. At the moment it’s a little bit over 20%. Interestingly, it’s not only buildings or campuses, but entire city districts that are becoming green. There are research and policy efforts in Singapore to push further towards zero-energy and even positive-energy buildings.
There is one great research project that we call the Smart Multi Energy System (SMES). It combines thermal, electrical and gas energy sources, which are optimised based on the availability of each energy source at any point in time. It enables you to play with the grid in real time, offering enhanced demand-response opportunities.
Once this project finishes, it can be deployed at any industrial site that has different energy sources, and it will help to save up to 20% of all costs.
What was supposed to be a 20-minute interview lasted over an hour.
Thank you, Nilesh, and thank you all for reading. We also discussed autonomous vehicles, wireless charging and food waste so I certainly encourage you to ask additional questions should you have any. Just write me at aljash@planetos.com or add them to the comments. | https://medium.com/planet-os/how-the-green-technologies-of-the-future-are-being-built-in-singapore-8baeb64546f1 | ['Annika Ljaš'] | 2017-03-16 12:37:36.504000+00:00 | ['Renewable Energy', 'Singapore', 'Climate Change', 'Energy', 'Environment'] |
The 7 best business books of 2020 (that I read)
Of the 46 books I read in 2020, these are the best for entrepreneurs and managers.
Photo by CHUTTERSNAP on Unsplash
It was a tough year for most of us. As an entrepreneur in the hospitality sector, I saw my business on the verge of the abyss. One thing that helped us survive was the set of ideas from a book I read some time ago.
As much as it is good to keep hope and some optimism, we should be cautious about the prospects: many of the problems faced this year will not magically disappear at midnight on 31/12/2020. Therefore, I am listing the 7 best books for entrepreneurs, managers, and business-related professionals. One of the few positive points of this year was that, with my business closed during part of the year, I could read more.
After 46 books (you can see my reviews for all of them on my Goodreads page; feel free to add me there too), these are the 7 I recommend for you to read in 2021. Put them on your bucket list; you will not regret it.
The 1-Page Marketing Plan: Get New Customers, Make More Money, And Stand out From The Crowd
During my years at business school, we had plenty of marketing classes. Most of them used material from superstar authors like Philip Kotler and Michael Porter. I will not deny that it was useful and even inspiring, but they focus more on the big-corporation game than on new and smaller businesses.
This book, written by Allan Dib, resolved many knowledge gaps I had about marketing for startups and small businesses. If you are an entrepreneur with a new project in mind, it should be on your must-read list.
The pages are the perfect combination of entertaining and informative. Allan Dib’s writing style makes the content easier to assimilate by using trivial, everyday examples. The paragraphs about building a mailing list felt like an anchor on me, to the point that I questioned why I didn’t build my company’s mailing list three years ago.
If you are a big-corp marketing manager, maybe this book will not be the most useful for you. But possibly, after reading it, you will end up wanting to start your own business.
Unlimited Memory: How to Use Advanced Learning Strategies to Learn Faster, Remember More and be More Productive
It would not be an exaggeration to say that this is one of the most practical non-fiction books that I have ever read.
Kevin Horsley has very good credentials: a 2nd place in a world memory contest, for example. This book is all about mnemonics, tools that help us remember certain facts or large amounts of information. One may think that in the age of Google, improving our memory performance is a waste of time. Nothing could be further from the truth: even to use Google, you need to know what you want to know.
Besides, remembering all the names and tastes of your co-workers and clients can impress everyone and bring competitive advantages.
In 136 pages, Kevin Horsley delivers methods like The Journey, The Car, The Body, The Peg, and Mind-Mapping: down-to-earth, workable techniques that, once you learn them, may make it look like you are cheating. The first two methods (The Journey and The Body) sounded to me almost like magic, since in a few minutes you can store a considerable amount of information in perfect sequence.
If you think that your struggle with explaining the financial numbers to your partners is because of poor memory, I bet you are wrong. You were just never shown how powerful your memory really can be.
High Output Management
Another masterpiece from an author with respectable credentials. Andrew Grove was the third employee of Intel (after both founders), later its CEO, and turned out to be one of the most legendary Silicon Valley executives.
High Output Management was originally published in 1983, so it could be considered old by current standards, but it is not. In fact, this is probably what makes the book a must-read.
Andrew Grove does not mince words, nor is he afraid of bruising sensibilities. He wrote, straightforwardly, what he really applied over his brilliant career. In another article, I listed 9 management lessons I took from his writings. Teachings like:
How to understand why your team is not achieving good results.
For every indicator, have a counter indicator. This one is especially helpful when defining your business goals, as explained here.
How monitoring should be done.
Answer correctly to “Do you have a minute?” and do not lose your talents.
It is a book with great, priceless lessons for anyone managing a team, whether a single summer intern or a multinational with thousands of employees.
Never Split the Difference
If you are a budget hotel or hostel entrepreneur in Eastern Europe, stop reading this article right now.
I don’t want a potential competitor to have such a competitive edge as the one provided by Never Split the Difference. And I am serious, because this book even helped me cut my company’s expenses during the COVID-19 crisis!
The author, Chris Voss, served as one of the main FBI negotiators in dozens of crises, not only in the USA but also abroad. In the Philippines, for example, he negotiated with members of Abu Sayyaf, an ISIS-affiliated terrorist organization.
With all his career expertise, he distils into 274 pages some brilliant insights, useful in many sorts of negotiations. There are important lessons on avoiding the fight-or-flight mindset that eventually makes both parties lose.
When I read a good book, I take notes, but only on the most important points. From Never Split the Difference I took almost 5 pages of notes on A4 paper. This should put into perspective how many remarkable points there are.
So Good They Can’t Ignore You: Why Skills Trump Passion in the Quest for Work You Love
If you are familiar with the concept of deep work, likely you are also acquainted with the name of Cal Newport.
This is another world-class publication from this young MIT Ph.D. It is filled with great career examples, used as the basis for the careful development of his conclusion: the common belief that passion should be the driver of a career change or choice is a bad idea.
Do not think this is the only takeaway from the almost 300 pages. Another interesting concept, the theory of career capital, may sound simple once you understand it, but often we neglect it in an era of lifestyle-design experts giving poor advice.
The foundation of this theory is that, instead of agonizing over discovering a true calling, one should master rare and valuable skills. You then use these skills to build career capital. Later, you invest this career capital to gain control over what you do and how you do it. Only then do you identify and act on a life-changing mission.
As the author settles:
This philosophy is less sexy than the fantasy of dropping everything to go live among the monks in the mountains, but it’s also a philosophy that has been shown time and again to actually work.
Another excellent point made is against multitasking, as I explained in another article.
Starting Your Own Business Far From Home: What (Not) to Do When Opening a Company in Another State, Country, or Galaxy
Disclaimer: the author of this book is also the author of this article, but you can still check dozens of reviews from verified readers on the Amazon page.
Four years ago, I dropped a promising career just after a promotion to follow the dream of opening my own business. It was not easy, especially because it was a tourist hospitality business, and in 2020 we faced one of the worst crises in the sector’s history.
As an additional obstacle, I opened this company in a country with a culture totally different from mine, and with a language that, at the beginning, I barely spoke.
But both I and my business survived. The lessons I learned, the mistakes I made, and the solutions I found are all in this book.
And there is no better year to launch an entrepreneurial venture than 2021: plenty of cash-strapped but promising businesses are for sale, and if you look carefully, there will be excellent opportunities waiting for a risk-taking entrepreneur.
Moral Letters to Lucilius — Volume 1
It may surprise you that I am listing a book that is almost two thousand years old among the best business readings of 2020. But Moral Letters to Lucilius is one of the most brilliant survival manuals that I have ever seen.
The first time I heard about Seneca was through a reference in one of Nassim Taleb’s books.
After reading it, he became my favorite Roman philosopher, and for a reason: all his letters and manuscripts have timeless advice about human nature, negotiation, and even physical exercise. Who would imagine that an ancient Roman philosopher did burpees in the morning?
Contrary to common (and often justified) preconceptions, it is an ancient book that is outstanding and also a pleasure to read. Or should I say “a joy”, since the term pleasure is not very welcome among Stoics?
Bottom line: one of the best books I read, ever. | https://medium.com/datadriveninvestor/the-7-best-business-books-of-2020-that-i-read-637422f177ff | ['Levi Borba'] | 2020-12-29 17:38:38.621000+00:00 | ['Management', 'Entrepreneurship', 'Business', 'Startup', 'Money'] |
How we built the CyberSift Attack Map
Recently we launched a small site called the “CyberSift Attack Map”, hosted at http://attack-map.cybersift.io. Anyone involved in the InfoSec industry will be instantly familiar with the site:
It’s basically a map of attacks which either trip some rule in a signature-based IPS such as SNORT, or land in a honeypot. In this article we’ll list some of the libraries and techniques we used to build the site, for any devs out there who are interested.
Backend
We used the Python Flask microframework, work … Read more at David’s Blog | https://medium.com/david-vassallos-blog-posts/how-we-built-the-cybersift-attack-map-5c05fb2a5b9d | ['David Vassallo'] | 2018-07-09 14:12:33.729000+00:00 | ['JavaScript', 'Web Apps', 'Python'] |
The Scientific Guide To Not Freezing Your Ass off This Winter | Two Septembers ago, a South Dakota snowstorm caught me off guard. I packed light — too light — for a trip to the Black Hills, to participate in the Buffalo Roundup and Arts Festival at Custer State Park. Huddled in the bed of a pickup truck in the middle of a thundering herd of buffalo, wearing every article of clothing I had and still cold all the way down in my bones, I swore I’d never be unprepared for the conditions again.
This winter, as the ongoing pandemic makes it unsafe to gather indoors, you may find yourself spending more time outside if you want to do any socializing, braving low temperatures and less-than-ideal weather in many parts of the country. You may not have plans to race across the frigid prairie chasing buffalo, but even if you’re just having some backyard beers with your friends, the same concepts apply: Preparation is key, clothing choice is all-important, and understanding the science of warmth can help you hang onto it.
Your body is constantly producing — and losing — heat
“For us to have our metabolism, our cells being alive, [that] takes energy,” says Christopher Minson, PhD, a thermal physiologist at the University of Oregon. “The byproduct of metabolism is heat, and that’s why we have a body temperature.”
But as your body constantly generates heat, it also needs to get rid of it to not overheat, and there are three primary ways that happens: conduction, convection, and evaporation.
Conduction happens through contact with surfaces. If your body temperature is higher than the things around you, you’ll lose heat when you touch those things. “Different materials have different conductivity,” Minson says. “Metal, for instance, really conducts a lot. You’ll lose a lot of heat to a metal surface, vs. plastic or something else. Even wood is much better at not conducting heat.”
Here’s the first piece of advice: If you’re planning an outdoor event, consider the furniture. If you skip metal folding chairs in favor of seats made of wood, fabric, or plastic, conduction will decrease and you and your guests will automatically stay warmer.
As for convection, “If you’re standing in a current — whether it’s water or wind, that’s convection,” says Minson. As air moves around you, it pulls heat away from your body. The calmer the current, the warmer you’ll stay, but this one is a bit more complicated in the context of safe socializing, because when it comes to reducing Covid-19 transmission, airflow is your friend.
So, rather than erecting a tent (which isn’t guaranteed to lower risk of infection), find other ways to keep the wind from whipping away your warmth. You can try an umbrella — it can act almost like your own personal enclosure — to reduce heat loss through convection, or other accessories, like a tight-knit balaclava or wind shell jacket, that cut the wind.
And don’t forget your feet: “Running shoes are well-ventilated so your foot doesn’t overheat,” Minson says, which is great while you’re exercising, but less helpful when you’re just hanging out. “Wear something that covers your shoes to keep air from getting in, or choose shoes that keep wind out. Leather is a good choice.”
When thinking about what to wear, keep in mind the third, and possibly most important, method of heat loss: evaporation.
What really determines how comfortable you’ll be isn’t the layers you wear, but what they’re made of, and what’s between them.
Your body is constantly producing moisture, and the evaporation of that moisture is the foundation of the human thermoregulation system. Sweat evaporating off your skin cools you down in summer. In winter, the cooling effect is a lot less desirable, but you still need to get rid of the dampness. While a totally impermeable outer layer might keep the wind out, it could also lock your moisture in, and that’s not necessarily a good thing.
“The idea is you need a balance, especially if you’re moving around and generating some heat,” Minson says. “You need some ability to lose the water vapor from your skin. If you wear a plastic bag, there’s no ability for humidity to escape from your body.” If you’re moving a lot and generating a lot of warmth, you could start to overheat. And when you stop moving, all that moisture will eventually cool down, making you much colder. All this to say: ventilation is vital.
Trapping warm air
When you’re preparing to endure low temperatures, it can be tempting to don layer after layer, imagining that the more clothes you wear, the warmer you’ll be. But (as I can attest, after shivering through a day in South Dakota despite wearing everything from my suitcase), that’s not always true. Technically, what really determines how comfortable you’ll be isn’t the layers you wear, but what they’re made of, and what’s between them.
“Fundamentally, what keeps you warm is air,” explains Michael Cattanach, global product director for Polartec, a Massachusetts-based company that makes synthetic thermal fabric for outdoor apparel. “It’s about keeping pockets of air next to your body and using fabrics that trap air and keep layers of air together.”
“Fundamentally, what keeps you warm is air.”
Remember, your body is constantly giving off heat. When you wear clothes that trap still air (but not moisture) against your skin, the air absorbs that heat and you stay warmer. Leggings with a thermal grid pattern, for example, leave more room for air than something like skin-tight spandex, and will therefore keep you warmer. And heated air will remain around your body much longer if you insulate. Just like the insulation we use in our houses prevents heat loss, your clothing creates a barrier that keeps heat from escaping.
The art of layering is about quality, not quantity
Cattanach’s formula for a foolproof clothing system includes three layers: “something next to the skin to manage sweat and moisture, a second layer that’s insulating, then something with weather protection on the surface.”
The base layer is arguably the most important of the three, and should be fitted but not constricting. If you think you may break a sweat, or plan to be sitting by a fire that may eventually make you overly warm, for instance, go for a fabric that’s moisture-wicking; synthetics and synthetic/natural blends are a good choice.
“[Wool] is the original smart fiber, and can absorb and release moisture. Since basically the beginning of time, it’s existed to keep a mammal warm, cool, safe and comfortable.”
In terms of all-natural fibers, cotton is comfortable as long as your skin stays dry, but won’t do you any favors once you sweat and create moisture. Wool, on the other hand, can keep you warm even if you sweat a bit, and release that moisture to prevent overheating.
“It’s the original smart fiber, and can absorb and release moisture. Since basically the beginning of time, it’s existed to keep a mammal warm, cool, safe, and comfortable,” says Clara Parkes, the New York Times bestselling author of Knitlandia and Vanishing Fleece. She points out that we’re mammals, too, so wool’s a natural choice to keep us warm.
“Wool is great for insulation because each fiber can have 18 or 20 curvatures per inch,” Parkes explains. “They’re like coiled springs, always pushing away from one another and creating space. The thicker the coils, the more still air is trapped in the fabric, and the higher its insulation abilities will be.”
Thick, rough sweaters are especially warm because the fibers are “jumbled and chaotic,” holding lots of air, says Parkes. For a less prickly layer against your skin, Merino works well because it’s a high-curvature fiber, trapping a disproportionate amount of air, despite its thinness.
The potential drawbacks to wool — and the reasons many lean toward newer, synthetic fleeces — are that it’s heavy and daunting to wash. But the latter shouldn’t stop you from buying that impossibly warm sweater, says Parkes.
“If you’re at all nervous about it, just do a hand wash, and treat it like you would your hair,” she says. “It’s chemically identical. A sink full of warm water, a quick dip, and an air dry is all it takes.”
What you do matters almost as much as what you wear
I called this a guide for staying warm while enjoying backyard beers, but actually, when it comes to staying warm, an alcoholic beverage can work against you.
“One of the most profound systems we have for heat loss or conservation is the simple dilation and constriction of our skin,” says Minson. When you’re cold, the skin constricts, sending blood flow back toward the core. When you’re hot, blood vessels just under the skin dilate, releasing heat. Unfortunately, alcohol is a vasodilator. When you first start to drink, you may feel warmer thanks to the blood rising to the surface. But it won’t last long — all that escaping heat through conduction and convection will cool you off quick.
That’s not to say your choices are to abstain or freeze — but if you’re going to be drinking, try to make up for that heat loss by raising your metabolic rate.
“Move a little more,” says Minson. “If you start feeling cold, just get up and do some squats. You may look ridiculous, but you’ll stay warmer.” The other thing you can do to hack your metabolic rate is with what you eat and drink, Minson adds. “More protein and fats will raise your metabolic rate.” In other words, don’t skimp on the charcuterie.
You can train your brain to tolerate the cold
Thermoregulation is a physical science, but there’s a major psychological component to staying warm, too. Humans are super adaptable creatures, and as the winter wears on, we really do grow accustomed to being cold.
“In November when it’s 42 degrees outside, you’re going to feel chilly because you’re not used to it,” Minson says. “When March rolls around, you’re used to it. It might be the same temperature, but your brain adapts.” Hence my misery during that South Dakota September snow: My brain wasn’t in winter mode yet.
You can make that adaptation happen sooner, he adds, through what basically amounts to exposure therapy. Bundle up a little less than you think you really need to, and force yourself to power through the discomfort. A caveat: No one’s suggesting you go out and get frostbite in the name of brain hacking. If you start to shiver, your core temperature is actually dropping and it’s time to add another layer. If you can’t get back up to feeling comfortable fairly quickly, it’s probably time to dissolve the hangout and call it a night.
But within reason, Minson says, it’s okay to embrace the cold. “It’s about being in a cold environment and being like, ‘Okay, I’m aware of the cold but I don’t feel cold.’ It’s losing the fear and realizing you can handle it. We really can hack our brains and feel more comfortable in the cold.” | https://elemental.medium.com/the-scientific-guide-to-not-freezing-your-ass-off-this-winter-27620cb5b47 | ['Kate Morgan'] | 2020-12-14 06:32:49.746000+00:00 | ['Winter', 'Outdoors', 'Coronavirus', 'Pandemic', 'Weather'] |
Predicting Heart Disease With a Neural Network | Predicting Heart Disease With a Neural Network
Predict the probability of getting heart disease with a Python neural network
Photo by Kendal on Unsplash
In these times of coronavirus, many hospitals are short-staffed and in dire straits. A lack of staff causes many problems. Not all patients can be treated, doctors are excessively tired and risk not taking appropriate precautions. Once a doctor gets sick, staff reductions accelerate, and so on.
This leads us to consider the importance of technology in the medical field. One of the most promising branches of technology today is artificial intelligence (AI). Today, we’re going to talk about implementing artificial neural networks in the field of medicine. More specifically, we will create a neural network that predicts the probability of having heart disease.
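To make the idea concrete: a network that outputs a disease probability ultimately squashes a weighted combination of patient features through a sigmoid. The sketch below is purely illustrative; it trains a single sigmoid neuron on synthetic stand-in data with plain NumPy, and is not the dataset, architecture, or code used in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for patient features (age, cholesterol, ...) and labels.
# Every number here is made up purely for illustration.
X = rng.normal(size=(200, 4))
w_true = np.array([1.5, -2.0, 0.7, 0.0])
y = (X @ w_true + rng.normal(scale=0.1, size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid "neuron": the simplest network that outputs a probability.
w = np.zeros(4)
b = 0.0
learning_rate = 0.5
for _ in range(500):
    p = sigmoid(X @ w + b)           # predicted probability of disease
    grad_w = X.T @ (p - y) / len(y)  # gradient of the mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

probs = sigmoid(X @ w + b)           # one probability in [0, 1] per patient
accuracy = np.mean((probs > 0.5) == y)
```

A real heart-disease model would use actual clinical data, a validation split, and a proper framework, but the principle is the same: the network maps features to a probability between 0 and 1.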
Disclaimer: If you look at my previous work, you will see that this is not the first time I have written about AI for medical purposes. I want to be clear that this is not a scientifically rigorous study; it’s just a way of implementing AI to solve real-world problems.
Having said that, let’s start! | https://medium.com/better-programming/predicting-heart-disease-with-a-neural-network-a48d2ce59bc5 | ['Tommaso De Ponti'] | 2020-04-24 16:17:59.096000+00:00 | ['Programming', 'AI', 'Neural Networks', 'Python', 'Machine Learning'] |
Code Samples from TFCO — TensorFlow Constrained Optimization | Includes Code Samples from TFCO — TensorFlow Constrained Optimization
The above article models business functions, which is equivalent to modelling the conceptual structure of the system. It is always good to model the business process (for example with BPMN), because that is the standardised way of modelling a system. Business functions model the categories of operations in the system’s routine.
In order to work with deep learning libraries, I have created an article that showcases TensorFlow Constrained Optimization (TFCO), which works similarly to the boxing and unboxing technique explained above in the article.
In this example, I have provided a class that assigns responsibilities to the TensorFlow operations defined in the example. The example uses a recall constraint together with a hinge loss. Recall is a metric equivalent to the TPR (True Positive Rate); recalling a data object means assessing how correctly the object’s presence is detected. The constrained optimization problem is defined within a class, in an object-oriented fashion. Each constraint of the class is defined in a method as a tensor, in a style reminiscent of the Object Constraint Language (OCL); that is, each method returns a one-element tensor for a single constraint. The TFCO process takes in one input data point, similar to the two-data-point structure used by a DEA model. The Decision Making Units (DMUs) of DEA are analogous to the weights accepted by TFCO in this model, but here there is a characteristic loss function, as explained below.
Google Research’s TensorFlow Constrained Optimization (TFCO) is a Python library for performing machine-learning optimizations under constraints. In this article, I have taken an example with a recall constraint, which characterises features in the data and minimizes the rejection of objects represented in the data.
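As a quick, framework-free refresher on the recall metric itself (the labels and predictions below are made up for illustration):

```python
import numpy as np

labels = np.array([1, 1, 1, 0, 0, 1])       # ground truth: four positives
predictions = np.array([1, 0, 1, 0, 1, 1])  # model decisions

# Recall (true positive rate) = TP / (TP + FN): the share of actual
# positives that the model manages to "recall".
true_positives = int(np.sum((labels == 1) & (predictions == 1)))
recall = true_positives / int(np.sum(labels == 1))
```

Here the model recovers 3 of the 4 positives, so recall is 0.75. A recall constraint simply asks the optimizer to keep this quantity above a chosen lower bound.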
Hinge Loss
Hinge loss is represented as max(0, 1 − y·f(x)). This implies that, in the loss calculation, labels that are predicted correctly with a sufficient margin contribute nothing, whereas misclassified objects contribute a positive loss. A minimization algorithm is then applied to reduce the false positives.
The problem is a rate minimization problem in which both the constraints and the hinge loss are defined.
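To make the loss concrete, here is a minimal pure-Python sketch of the hinge loss (my own illustration, not part of the TFCO example; it assumes raw classifier scores and labels encoded as ±1):

```python
# Hinge loss: L(y, s) = max(0, 1 - y * s) for a label y in {-1, +1}
# and a raw score s. Correct predictions with margin >= 1 contribute
# zero loss; misclassified or low-margin points contribute a positive loss.
def hinge_loss(y_true, scores):
    return [max(0.0, 1.0 - y * s) for y, s in zip(y_true, scores)]

labels = [+1, +1, -1, -1]
scores = [2.0, 0.3, -1.5, 0.4]  # raw classifier outputs

print(hinge_loss(labels, scores))  # [0.0, 0.7, 0.0, 1.4]
```

Minimizing this quantity pushes the misclassified examples (the last one above) toward the correct side of the decision boundary; tf.compat.v1.losses.hinge_loss performs an analogous computation, taking labels in {0, 1}.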
Defining the Objective
# we use hinge loss because we need to capture those that are not classified correctly and minimize that loss
def objective(self):
    predictions = self._predictions
    if callable(predictions):
        predictions = predictions()
    return tf.compat.v1.losses.hinge_loss(labels=self._labels,
                                          logits=predictions)
The objective here is the hinge loss, with the labels identifying the true positives and false positives.
Defining the Constraints
The constraints are defined such that the recall value is at least the lower bound mentioned in the problem. In this convex optimization setting, each constraint must be expressed as a quantity that is ≤ 0, so recall ≥ lower_bound is rewritten as lower_bound − recall ≤ 0.
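As a quick illustration of that rearrangement (hypothetical numbers, not from the article's dataset):

```python
# Recall = TP / P. The constraint recall >= b is rewritten as
# b - recall <= 0, the "tensor <= 0" form the minimization problem expects.
labels = [1, 1, 1, 0, 0, 1]                      # binary ground truth
predictions = [0.5, -0.2, 1.3, 0.7, -0.1, 0.9]   # raw scores; > 0 means "positive"

true_positive_count = sum(1 for y, p in zip(labels, predictions) if y == 1 and p > 0)
positive_count = sum(labels)
recall = true_positive_count / positive_count    # 3 / 4 = 0.75

recall_lower_bound = 0.9
constraint = recall_lower_bound - recall
print(recall, constraint > 0)  # 0.75 True -> the bound is violated here
```

A positive constraint value means the recall bound is not yet satisfied, and the optimizer will trade off the objective to push it toward non-positive territory.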
def constraints(self):
    # In eager mode, the predictions must be a nullary function returning a
    # Tensor. In graph mode, they could be either such a function, or a Tensor
    # itself.
    predictions = self._predictions
    if callable(predictions):
        predictions = predictions()
    # Recall that the labels are binary (0 or 1).
    true_positives = self._labels * tf.cast(predictions > 0, dtype=tf.float32)
    true_positive_count = tf.reduce_sum(true_positives)
    recall = true_positive_count / self._positive_count
    # The constraint is (recall >= self._recall_lower_bound), which we convert
    # to (self._recall_lower_bound - recall <= 0) because
    # ConstrainedMinimizationProblems must always provide their constraints in
    # the form (tensor <= 0).
    #
    # The result of this function should be a tensor, with each element being
    # a quantity that is constrained to be non-positive. We only have one
    # constraint, so we return a one-element tensor.
    return self._recall_lower_bound - recall
def proxy_constraints(self):
    # In eager mode, the predictions must be a nullary function returning a
    # Tensor. In graph mode, they could be either such a function, or a Tensor
    # itself.
    predictions = self._predictions
    if callable(predictions):
        predictions = predictions()
    # Use 1 - hinge since we're SUBTRACTING recall in the constraint function,
    # and we want the proxy constraint function to be convex. Recall that the
    # labels are binary (0 or 1).
    true_positives = self._labels * tf.minimum(1.0, predictions)
    true_positive_count = tf.reduce_sum(true_positives)
    recall = true_positive_count / self._positive_count
    # Please see the corresponding comment in the constraints property.
    return self._recall_lower_bound - recall
The Full Example Problem of Recall Constraint
class ExampleProblem(tfco.ConstrainedMinimizationProblem):
    def __init__(self, labels, predictions, recall_lower_bound):
        self._labels = labels
        self._predictions = predictions
        self._recall_lower_bound = recall_lower_bound
        # The number of positively-labeled examples.
        self._positive_count = tf.reduce_sum(self._labels)

    @property
    def num_constraints(self):
        return 1

    # we use hinge loss because we need to capture those that are not
    # classified correctly and minimize that loss
    def objective(self):
        pass  # implementation as defined above

    def constraints(self):
        pass  # implementation as defined above

    def proxy_constraints(self):
        pass  # implementation as defined above


problem = ExampleProblem(
    labels=constant_labels,
    predictions=predictions,
    recall_lower_bound=recall_lower_bound,
)
Visualization of Constant Input Data for which the Recall is calculated
*Please note: in this case, the problem originates from the data
Recall Calculated using Hinge Loss for the Provided Input Data Distribution
Constrained average hinge loss = 1.185147
Constrained recall = 0.845000
In the article shown above, we do not have ever-changing data; using the existing data, we calculate the input data weights in order to predict the samples that produce the lowest recall. The predictions from one constrained optimization model are sent to the next model, which runs on a different loss. This way we can model how those two objects communicate with each other.
I’ll leave it up to you to decide whether Azure ML Studio or AWS DeepRacer can be used to build machine learning models using these ideas.
References | https://medium.com/nerd-for-tech/code-samples-from-tfco-tensorflow-constrained-optimization-17acdf4913e | ['Aswin Vijayakumar'] | 2020-11-04 16:03:26.433000+00:00 | ['Artificial Intelligence', 'Python', 'TensorFlow', 'Constrained Optimization', 'Machine Learning'] |
Entrepreneurs: If You’re Looking for Podcasts in 2020, Pick These | Stuff You Should Know
For random knowledge, SYSK is the place to go. This award-winning podcast comes from the writers over at HowStuffWorks and is consistently ranked in the top charts. Every Tuesday, Thursday, and Saturday, Josh Clark and Charles W. “Chuck” Bryant educate listeners on different topics.
No matter the topic, they always cross-connect with pop culture. Want to learn how going to the moon works? How yawning works? What prison food is like? After lots of time listening, you’ll end up feeling like you’ve completed a degree in Out-Of-Left-Field Things.
Business Wars
There are fascinating stories behind many of the household-name companies and products that we all know. Business Wars host David Brown takes you through the audible journeys that brought many of these companies and products to what they are today.
Grasp the details of how Evan Spiegel grew Snapchat to go head-to-head with Facebook, or listen to the battle in the chocolate market between Hershey and Mars. The use of great sound effects and creative narration by this Wondery podcast makes the listening experience comparable to watching a documentary.
Reply All
For tales that keep you listening, tune in to Reply All. Focused on how people shape the internet and how the internet shapes people, hosts PJ Vogt and Alex Goldman have lively discussions about random yet intriguing situations and dig deep.
One episode, The Snapchat Thief, is about how the identity of a Snapchat account hacker was investigated and (spoiler alert) eventually found. Another episode, called Adam Pisces and the $2 Coke, is about the occurrence of a flood of strange Domino’s Pizza orders. Each segment is about 30 to 45 minutes long, a good length for the average commute.
How I Built This with Guy Raz
Chances are, you’ve at least heard about HIBT. Produced by NPR, this is a podcast about the stories behind the movements built by entrepreneurs, innovators, and idealists. Each weekly episode is 30 to 60 minutes of conversation between host Guy Raz and a notable guest.
You can hear about the origins of Atari (and Chuck E. Cheese) from Nolan Bushnell himself, and about how Sara Blakely founded Spanx. You can listen to Drybar’s Alli Webb, or to Haim Saban’s story about Power Rangers. If you want to learn about the in-depth process and interesting hurdles that go hand-in-hand with groundbreaking success, you’ll enjoy this.
Every Little Thing
Similar to Stuff You Should Know, ELT is a goldmine for random facts. As the host, Flora Lichtman takes you through some of the most pressing questions out there. How are new stamp designs created? What are dogs saying when they bark, and why do auctioneers talk so fast? How do you make that pumpkin spice flavor we all know?
This podcast also has a wide variety of invited guest speakers. In one segment you can hear from an airline pilot, and another you can learn from a microbiologist. If you’re someone who likes to learn something new every day, these segments have you covered.
Syntax.fm
If you happen to be a hardcore tech geek or want to get accustomed to tech lingo, you’ll love Syntax.fm. The hosts, Scott Tolinski and Wes Bos, teach web development for a living, so they have a wide range of tech fluency, from JavaScript to CSS to React to WordPress. Although niche, these are topics that influence the work of many.
They have unique segments like the Spooky Stories episodes, through which you can hear about moderately-disastrous tech-related incidents. They also discuss more general topics, like design foundations for developers and how to get better at solving problems. Episodes are light-hearted and full of awesome info.
The Pitch
If you’re a fan of Shark Tank, you will enjoy tuning in to The Pitch. The show, hosted by Josh Muccio, features entrepreneurs who are in need of venture funding and pitch investors, live. The goal is to give listeners an authentic look into what it’s really like to get involved with venture capital.
You’ll hear from one entrepreneur per episode, so you’ll get into the details. You’ll hear stories about new businesses, post-pitch pivoting, and will even get to follow folks through their journey months after their pitch. | https://medium.com/swlh/entrepreneurs-if-youre-looking-for-podcasts-in-2020-pick-these-15e4b613006b | ['Ben Scheer'] | 2020-01-05 10:33:13.551000+00:00 | ['Business', 'Startup', 'Podcast', 'Technology', 'Productivity'] |
My Life Without Bread | The changes have been so profound, I feel like a completely different person. It makes me wonder if most of the modern problems that people suffer from aren’t caused by our diets. Mine certainly was.
For three years now, I’ve eaten like this and here’s what’s happened:
I’ve lost weight.
In total, I’ve lost about 40 lbs. I could probably lose a little more, but even if I don’t, I’m still way better off than I was.
I’ve maintained it.
It’s been maintainable because, after a while, my cravings for these foods disappeared.
It’s the norm for me now. It’s not something I’m “sticking to,” it’s just how I eat.
Photo by Author: You can see how my face was bloated and puffy.
My joints no longer hurt.
The joint of my right middle finger used to be enlarged. I thought it was the onset of arthritis. At night my right hand would stiffen into a painful claw that I’d have to work to loosen every morning. I couldn’t wear my wedding ring, not because my finger was too fat, but because it wouldn’t go over my knuckle.
I also had pain in my shoulders that made taking a sweater off over my head difficult and my knees ached, just walking up the stairs.
I took Advil daily, to combat the pain.
All of that pain and inflammation has disappeared and only returns when I eat sugar.
I can run up the stairs.
Now I can easily pop up and down the stairs instead of lumbering, huffing, and puffing. Which is great considering that I make my living running after toddlers.
My mood swings have disappeared.
I used to get quite irritated over small things.
Now my moods are stable. I’m more easygoing. I am calmer and more approachable. I’m sure everyone is thankful for that.
I look healthier and younger and I’m starting to like the way I look for the first time in my life.
In the last three years, since I’ve become genuinely healthier, I’ve finally begun to like the way I look. I’m not perfect, but when I look in the mirror, I like what I see.
I feel like I’m 35 years old.
I definitely don’t feel “my age.”
When I think about how old I am, from the inside out, I feel about the same as I did when I was 35. Possibly better, because I had an undiagnosed heart condition and I was always fatigued back then.
I have mental energy.
I have the mental energy to get everything done in my day. I can concentrate better, remember things easier and I don’t need a nap every afternoon. | https://medium.com/illumination/my-life-without-bread-f791f18cc2a9 | ['Erin King'] | 2020-08-15 18:32:55.329000+00:00 | ['Diet', 'Food', 'Health', 'Self', 'Books'] |
Python Dash Data Visualization Dashboard Web App Template | In this tutorial, I will share a sample template for the data visualization web app dashboard using Python Dash which will look like below.
This is a sample template that can be used or extended to create dashboards quickly using Python Dash by connecting the appropriate data sources. Prior experience with Python and Dash will help in following the article.
I will run through the code in the article and share the link to the GitHub code for anyone to use.
import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output, State
import plotly.express as px
Import the relevant libraries, and pip install any that are missing.
The template uses Dash bootstrap, Dash HTML and Dash core components.
‘dbc’ refers to Dash Bootstrap components, ‘dcc’ to Dash core components, and ‘html’ to Dash HTML components
The layout consists of the sidebar and main content page
The app is initialized as:
app = dash.Dash(external_stylesheets=[dbc.themes.BOOTSTRAP])
app.layout = html.Div([sidebar, content])
The sidebar consists of a ‘Parameters’ header and the controls.
sidebar = html.Div(
    [
        html.H2('Parameters', style=TEXT_STYLE),
        html.Hr(),
        controls
    ],
    style=SIDEBAR_STYLE,
)
Below are all the controls of the sidebar, which consist of a dropdown, a range slider, a checklist, and radio buttons. One can extend these to add their own.
controls = dbc.FormGroup(
    [
        html.P('Dropdown', style={
            'textAlign': 'center'
        }),
        dcc.Dropdown(
            id='dropdown',
            options=[{
                'label': 'Value One',
                'value': 'value1'
            }, {
                'label': 'Value Two',
                'value': 'value2'
            },
                {
                    'label': 'Value Three',
                    'value': 'value3'
                }
            ],
            value=['value1'],  # default value
            multi=True
        ),
        html.Br(),
        html.P('Range Slider', style={
            'textAlign': 'center'
        }),
        dcc.RangeSlider(
            id='range_slider',
            min=0,
            max=20,
            step=0.5,
            value=[5, 15]
        ),
        html.P('Check Box', style={
            'textAlign': 'center'
        }),
        dbc.Card([dbc.Checklist(
            id='check_list',
            options=[{
                'label': 'Value One',
                'value': 'value1'
            },
                {
                    'label': 'Value Two',
                    'value': 'value2'
                },
                {
                    'label': 'Value Three',
                    'value': 'value3'
                }
            ],
            value=['value1', 'value2'],
            inline=True
        )]),
        html.Br(),
        html.P('Radio Items', style={
            'textAlign': 'center'
        }),
        dbc.Card([dbc.RadioItems(
            id='radio_items',
            options=[{
                'label': 'Value One',
                'value': 'value1'
            },
                {
                    'label': 'Value Two',
                    'value': 'value2'
                },
                {
                    'label': 'Value Three',
                    'value': 'value3'
                }
            ],
            value='value1',
            style={
                'margin': 'auto'
            }
        )]),
        html.Br(),
        dbc.Button(
            id='submit_button',
            n_clicks=0,
            children='Submit',
            color='primary',
            block=True
        ),
    ]
)
I am using the Dash Bootstrap layout for the main content page
https://dash-bootstrap-components.opensource.faculty.ai/docs/components/layout/
The main content page has a header and is then divided into 4 rows.
The first row has 4 cards, the second row has 3 figures, the third row has one figure and the fourth row has 2 figures.
content = html.Div(
    [
        html.H2('Analytics Dashboard Template', style=TEXT_STYLE),
        html.Hr(),
        content_first_row,
        content_second_row,
        content_third_row,
        content_fourth_row
    ],
    style=CONTENT_STYLE
)
Following is the first row containing 4 cards.
content_first_row = dbc.Row([
    dbc.Col(
        dbc.Card(
            [
                dbc.CardBody(
                    [
                        html.H4(id='card_title_1', children=['Card Title 1'], className='card-title',
                                style=CARD_TEXT_STYLE),
                        html.P(id='card_text_1', children=['Sample text.'], style=CARD_TEXT_STYLE),
                    ]
                )
            ]
        ),
        md=3
    ),
    dbc.Col(
        dbc.Card(
            [
                dbc.CardBody(
                    [
                        html.H4('Card Title 2', className='card-title', style=CARD_TEXT_STYLE),
                        html.P('Sample text.', style=CARD_TEXT_STYLE),
                    ]
                ),
            ]
        ),
        md=3
    ),
    dbc.Col(
        dbc.Card(
            [
                dbc.CardBody(
                    [
                        html.H4('Card Title 3', className='card-title', style=CARD_TEXT_STYLE),
                        html.P('Sample text.', style=CARD_TEXT_STYLE),
                    ]
                ),
            ]
        ),
        md=3
    ),
    dbc.Col(
        dbc.Card(
            [
                dbc.CardBody(
                    [
                        html.H4('Card Title 4', className='card-title', style=CARD_TEXT_STYLE),
                        html.P('Sample text.', style=CARD_TEXT_STYLE),
                    ]
                ),
            ]
        ),
        md=3
    )
])
More references on Dash cards can be found here
The following is the second row, with three columns containing figures.
content_second_row = dbc.Row(
    [
        dbc.Col(
            dcc.Graph(id='graph_1'), md=4
        ),
        dbc.Col(
            dcc.Graph(id='graph_2'), md=4
        ),
        dbc.Col(
            dcc.Graph(id='graph_3'), md=4
        )
    ]
)
Following is the third row, with one column containing a figure.
content_third_row = dbc.Row(
    [
        dbc.Col(
            dcc.Graph(id='graph_4'), md=12,
        )
    ]
)
The following is the final row, with two columns containing figures.
content_fourth_row = dbc.Row(
    [
        dbc.Col(
            dcc.Graph(id='graph_5'), md=6
        ),
        dbc.Col(
            dcc.Graph(id='graph_6'), md=6
        )
    ]
)
An example of a callback for a graph. This can be extended to use data sources and figures of your choice.
@app.callback(
    Output('graph_1', 'figure'),
    [Input('submit_button', 'n_clicks')],
    [State('dropdown', 'value'), State('range_slider', 'value'), State('check_list', 'value'),
     State('radio_items', 'value')
     ])
def update_graph_1(n_clicks, dropdown_value, range_slider_value, check_list_value, radio_items_value):
    print(n_clicks)
    print(dropdown_value)
    print(range_slider_value)
    print(check_list_value)
    print(radio_items_value)
    fig = {
        'data': [{
            'x': [1, 2, 3],
            'y': [3, 4, 5]
        }]
    }
    return fig
An example of a callback for a card. This can be extended to display dynamic text on the cards.
@app.callback(
    Output('card_title_1', 'children'),
    [Input('submit_button', 'n_clicks')],
    [State('dropdown', 'value'), State('range_slider', 'value'), State('check_list', 'value'),
     State('radio_items', 'value')
     ])
def update_card_title_1(n_clicks, dropdown_value, range_slider_value, check_list_value, radio_items_value):
    print(n_clicks)
    print(dropdown_value)
    print(range_slider_value)
    print(check_list_value)
    print(radio_items_value)
    # Sample data and figure
    return 'Card Title 1 changed by callback'
@app.callback(
    Output('card_text_1', 'children'),
    [Input('submit_button', 'n_clicks')],
    [State('dropdown', 'value'), State('range_slider', 'value'), State('check_list', 'value'),
     State('radio_items', 'value')
     ])
def update_card_text_1(n_clicks, dropdown_value, range_slider_value, check_list_value, radio_items_value):
    print(n_clicks)
    print(dropdown_value)
    print(range_slider_value)
    print(check_list_value)
    print(radio_items_value)
    # Sample data and figure
    return 'Card text changed by callback'
CSS for the components. The sidebar is position: fixed (scrolling the page does not move the sidebar). Width, margin-right, and margin-left are specified as percentages so that the webpage resizes dynamically with the window size.
# the style arguments for the sidebar.
SIDEBAR_STYLE = {
    'position': 'fixed',
    'top': 0,
    'left': 0,
    'bottom': 0,
    'width': '20%',
    'padding': '20px 10px',
    'background-color': '#f8f9fa'
}

# the style arguments for the main content page.
CONTENT_STYLE = {
    'margin-left': '25%',
    'margin-right': '5%',
    'top': 0,
    'padding': '20px 10px'
}

TEXT_STYLE = {
    'textAlign': 'center',
    'color': '#191970'
}

CARD_TEXT_STYLE = {
    'textAlign': 'center',
    'color': '#0074D9'
}
GitHub repository for the template source code.
You can find the dash_template.py file in the ‘src’ folder. Run it and check the web app at http://127.0.0.1:8085/
A few reference links
https://plotly.com/python/plotly-express/ | https://medium.com/analytics-vidhya/python-dash-data-visualization-dashboard-template-6a5bff3c2b76 | ['Ishan Mehta'] | 2020-06-11 16:04:20.493000+00:00 | ['Plotly', 'Python', 'Data Science', 'Dashboard Design', 'Data Visualization'] |
Failed Predictions for 2020 We Wish Came True | It’s always interesting to wonder how much our ancestors, predecessors, and younger selves knew where they were going. But equally fascinating, in my opinion, are those bold predictions from the past that hit completely wide of the mark. Not only is it a neat insight into the way the minds of the past considered their place in human history, but it serves as a reminder that no matter our achievements, we can never gauge our societal momentum with any real exactness.
2020 has been an eventful and chaotic year, so I figured I would turn my ear back to the voices of the past and delve into a little alternate history. It seems that those voices had a lot of ideas about what 2020 in particular might look like. Prophecies aren’t interesting, so I made sure to stick to considered, thoughtful predictions made by futurologists, writers, engineers, scientists, and other trend forecasters.
These are the unmet expectations, forgotten dreams, and unrequited wishes for the 2020 that never was.
A 26-Hour Work Week
Photo by You X Ventures on Unsplash
This one is probably the most disappointing. In 1968, physicist Herman Kahn and futurist Anthony J. Weiner predicted that by 2020 the average American would be working 26 hours per week- about 1370 per year. It was a pretty bold prediction considering the average American worked approximately 37 hours per week in 1968. And it speaks to the optimism of the Post-War period that envisioned a future of linear progress and continuous economic growth. As it stands, the average American now works roughly 35 hours per week according to the Bureau of Labor Statistics- and that figure varies according to factors such as gender, age and marital status (the average for men is 41 hours for instance). It also doesn’t include “side hustles” which many modern Americans increasingly feel the need to support themselves with. The U.S also has a relatively high figure of 11% of its employees that work over 50 hours per week (according to the OECD).
Sadly, the idea of a 26-hour work week seems less realistic now than it did in the 1960s- and not just for Americans. But if any country is going to get close to making it less of a fantasy, it will be a progressive nation like Denmark, Norway, or The Netherlands.
Humans Will Land On Mars
Photo by Nicolas Lobos on Unsplash
Although this prediction is, ultimately, wrong- it’s not far off. The idea that we would send human beings to Mars by 2020 is something I remember growing up with, in fact. Humans setting foot on Mars by the early 21st century was a recurring promise in the books and documentaries I consumed as a kid. In a 1997 issue of WIRED, Peter Leyden and Peter Schwartz gave 2020 as the year we would finally succeed in sending a manned spacecraft to the Red Planet. We’re on our way, having successfully landed several robotic craft (such as probes, rovers, and landers), but current estimates for a manned mission put it a good decade hence.
What’s most interesting about Leyden and Schwartz’s prediction however, is not that we would reach Mars by 2020, but that we would do so as part of a “joint effort supported by virtually all nations on the planet”. They describe four astronauts from a multinational team beaming images of the Martian landscape back to 11 billion people- which is also interesting, as the most recent United Nations estimates for the world population (as of September 2020) sit at 7.8 billion, with 10 billion not expected until 2057.
The beaming of those images are an important part of the prediction though, and tell us that this was as much a prediction about sociology as it was scientific discovery. The images that never were beamed to us this year have an emotional weight to them. Leyden and Schwartz envisioned the 2020 Mars landing as being a turning point in history, a triumph of global cooperation that would put an end to an Earth divided by nations and give rise to a more collective mindset.
“The images from Mars drive home another point: We’re one global society, one human race. The divisions we impose on ourselves look ludicrous from afar. The concept of a planet of warring nations, a state of affairs that defined the previous century, makes no sense.”
It’s poignant to think that this, rather than our technical capabilities, has proven to be the most unrealistic aspect of their prediction. It makes me think of classic science fiction from the Cold War era (think Gene Roddenberry’s Star Trek or Poul Anderson’s Tau Zero) in which a future spacefaring Earth always had a single identity. Nation-states were gone, but cultural identities were never lost. Ethnic and religious conflicts were seen as archaic. Although it may seem far away right now, there is hope in the idea that through technology we can achieve social progress.
The Death of Nationalism
Photo by Jørgen Håland on Unsplash
This one ties in quite nicely to the previous prediction. If you think about it, they’re essentially the same: through advances in technology, we can overcome national and ethnic divides, and come together as one. In 1968, political science professor Ithiel de Sola Pool confidently proclaimed that “By the year 2018 nationalism should be a waning force in the world,” due to our enhanced capabilities for translation and communication. While it’s true that the internet has facilitated a more interconnected world, our technical innovations haven’t brought about the greater empathy de Sola Pool hoped for. Quite the opposite, in fact. Trump, Brexit, Bolsonaro, Erdoğan, Orbán, the Front National, and the Alternative für Deutschland were and are driven by a viciously-xenophobic, fervently anti-intellectual brand of populist nationalism.
The question that remains is whether de Sola Pool’s prediction was wrong entirely or whether it was simply premature. If we are to think of human history in terms of Hegelian Dialectics, then the process of nationalism’s erasure could very well be underway. It’s just not a smooth and linear process. Rather, it’s a messy, generational progression of “two steps forward, one step back”. The French Revolution deposed a tyrannical monarchy but led to a little something known as The Terror, and from that chaos emerged a new tyrant in the form of Napoleon- a political opportunist who derailed the very liberty he professed to love. It was a good half-century before the fruits of the Revolution came to bear insofar as individual liberty was concerned. By that same token, the rise of Trump, Brexiteers, and those like them could be the last fightback of populist nationalism as the world moves inexorably to a more interconnected and interdependent future. The more they swing in one direction, the likelier it is that the next generation of policymakers will move to compensate. My point being, we won’t know for definite that de Sola Pool was off the mark until many years hence.
Hyper-Intelligent Apes Will Be Our Personal Slaves
Photo by Margaux Ansel on Unsplash
No, I’m not kidding. During my research for this article, this was the prediction for 2020 that seemed to crop up the most in my internet searches. Probably because people can’t quite believe that this was a serious prediction for the world in which we now live. In 1967 The Futurist published an article that stated “By the year 2020, it may be possible to breed intelligent species of animals, such as apes, that will be capable of performing manual labor.”
According to the writer this included everything from vacuuming the house to pruning the rosebushes, and even driving our cars. These apes, which would be specially-bred and trained as chauffeurs, would supposedly reduce the amount of car crashes. Now I’ve never seen a chimp drive a car outside of a circus, so I can’t attest as to whether or not they would be more adept at spotting potential hazards on the road than we are. But these aren’t just any old apes- the article implies they’re a kind of super-ape, bred for specific purposes in the same manner as dogs. Alas these apes don’t exist, but the basic idea that by 2020 we will use our enhanced technology to find new uses for animals is not incorrect. Scientists and mechanical engineers at Singapore’s Nanyang Technological University have recently experimented with the creation of “cyborg insects”, successfully implanting electrodes into the leg-muscles of beetles in order to control how they move. These remote-control bugs- far cheaper than robots of the same size- can theoretically be put to a number of uses- from espionage to search-and-rescue. It’s not as impressive as a baboon trying to scrub dried oatmeal from a breakfast bowl, but it’s in the spirit of things.
Telepathy & Teleportation
Photo by David Clode on Unsplash
Perhaps the most surprising aspect of this prediction is not so much that it exists, but that it was made as recently as 2014. Michael J. O’Farrell, founder of The Mobile Institute and veteran of the tech industry, proclaimed in the 2014 book Shift 2020 that both telepathy and teleportation would be possible by the current year. This breakthrough was supposed to have been achieved through a process known as “nanomobility”.
O’Farrell writes that “By 2020, I predict people will become incubators for personally controlled and protected Embodied Application Platforms and Body Area Networks, with a primary source-code Physical State and hyper-interactive genetically reproduced Virtual States. All states would host a mass of molecular-sized web-servers; IP domains and AP transport protocols capable of self-sustaining replication, atomically powered quantum computing and persona-patented commerce. I have coined the phrase nanomobility to capture and describe this new uncharted state.”
So what’s the modern reality of telepathy and teleportation?
Well the truth is that they simply don’t exist- at least, not in the way we typically imagine these concepts. The closest we’ve gotten to telepathy is electro-encephalography (EEG), in which a device not dissimilar in shape to a swimming cap is outfitted with large electrodes and placed upon the scalp of the subject. These electrodes record electrical activity which is then interpreted by a computer. Scientists have used this interface to both send signals from the brain and receive electrical pulses in turn. Volunteers have been able to transmit brain activity to each other, to computer software, and even to animals- with one volunteer able to stimulate the motor area of a sedated rat’s brain in order to get it to move its tail.
The closest scientists have come to something resembling teleportation is a process known as quantum teleportation, which is less an act of transportation than of communication. Quantum information has been shown to be transmissible from one place to another. In 2014, researchers at Delft University of Technology reported having teleported quantum information between entangled quantum bits (qubits) three meters apart. These breakthroughs may not have impacted our everyday lives in the way the futurists hoped, but they are nonetheless extraordinary accomplishments that we can only hope will serve as part of a greater journey of discovery.
Minds In Their Prime | I fall in love with minds that are better than mine
The kind of minds that in this world are hard to find
And when you find them you better take the time
To revel in their imagery the bridges they build, so hard to define
And were those minds to all fall into line
And cogitate at their utmost prime
Imagine the world we could create, the states sublime
Immaculate wonder, the universe refined | https://medium.com/poets-unlimited/minds-in-their-prime-ace425c8f71c | ['Aarish Shah'] | 2017-09-16 03:21:57.878000+00:00 | ['Inspiration', 'Writing', 'Poetry', 'Creativity', 'Photography'] |
Reporters: Big Tech is Slowly Killing Journalism | A movement is growing to try to save the news business by reining in the power of tech giants Google and Facebook, which together control 60% of digital advertising. Watchdog groups accuse the companies of profiting off the work of journalists while undercutting the ad revenue that pays their salaries.
The Senate Judiciary Committee recently held a hearing on the subject of big data and privacy. Laura Bassett, a freelance journalist formerly with the Huffington Post, testified at that hearing. She said Google and Facebook should be broken up — or at least, heavily regulated.
“They’re basically a country, they’re that powerful. Not only do they have the power to tip elections and control what kind of news they’re putting at the top of their feeds, but they’re also killing journalists, financially,” Bassett said. “So, it’s just creating a real problem when one or two companies has the power to cripple the free press as we know it.”
More than 2,500 reporters have been laid off so far this year. A study from the University of North Carolina at Chapel Hill last year found that about 1,800 local newspapers have gone out of business since 2004, roughly 20% of the industry. The decline began many years ago, when sites like Craigslist reduced newspaper revenues by about 40% by rendering classified ads obsolete.
Freelance reporter John Stanton, formerly of Buzzfeed, also submitted testimony at the hearing. He said he sees the widespread layoffs of reporters as a threat to communities and democracy — leaving “news deserts” with little-to-no reporting on government corruption and a host of local issues, positive and negative.
He urged Facebook and Google to be better corporate citizens and devise a way to ensure content providers get paid.
“While they’re not governmental entities, they do have a responsibility — given that they now kind-of control the way that people consume news — to not put profits above the ability to have a vibrant, thriving news culture,” Stanton said.
Brian O’Kelley, a tech entrepreneur who invented the system that underpins digital advertising, also testified at the hearing. He said big news sites should band together and stop allowing digital firms to handle their ad sales — thus forcing advertisers off Facebook and Google, and back to the news sites themselves.
“They can just click the box and turn it off and stop working with all these programmatic advertising companies,” O’Kelley said. “And because it is funding some of their business right now, turning it off and switching to something else feels scary — even if it is the right decision in the medium term.”
O’Kelley said part of the solution may be a federal law patterned after one in California, giving consumers the power to limit the ways websites collect their personal data and browser history.

Source: https://medium.com/save-journalism/reporters-big-tech-is-slowly-killing-journalism-29d2b56fa097 | Save Journalism | 2019-05-28 | Tags: Local News, Advertising, Journalism, Google, Big Tech
Artificial Intelligence on Cyber Security and the Pandemic in 2020

As we come out of a long and trying 2020 shaped by the pandemic, one thing has become clear: with lockdowns in place, technology became fundamentally more important to us, and so did the development of artificial intelligence.
Artificial intelligence (AI) is evolving — literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.
Photo by Michael Dziedzic on Unsplash
“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”
Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks — for instance, spotting road signs — and researchers can spend months working out how to connect them so they work together seamlessly.
In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.
So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
The program discovers algorithms using a loose approximation of evolution. It starts by creating a population of 100 candidate algorithms by randomly combining mathematical operations. It then tests them on a simple task, such as an image recognition problem where it has to decide whether a picture shows a cat or a truck.
In each cycle, the program compares the algorithms’ performance against hand-designed algorithms. Copies of the top performers are “mutated” by randomly replacing, editing, or deleting some of its code to create slight variations of the best algorithms. These “children” get added to the population, while older programs get culled. The cycle repeats.
The system creates thousands of these populations at once, which lets it churn through tens of thousands of algorithms a second until it finds a good solution. The program also uses tricks to speed up the search, like occasionally exchanging algorithms between populations to prevent any evolutionary dead ends, and automatically weeding out duplicate algorithms.
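The loop described above can be illustrated with a toy version. The sketch below is my own illustration, not the actual AutoML-Zero code: each candidate "algorithm" is just a coefficient pair for a linear rule, the task is fitting a known line rather than image recognition, and the population, culling, and mutation steps mirror the description.

```python
import random

random.seed(0)

# Toy "survival of the fittest" search in the spirit of the loop described
# above (NOT the real AutoML-Zero). Each candidate "algorithm" is a pair
# (a, b) predicting y = a*x + b; the hidden target rule is y = 2x + 3.
DATA = [(x, 2 * x + 3) for x in range(-5, 6)]

def fitness(cand):
    a, b = cand
    return -sum((a * x + b - y) ** 2 for x, y in DATA)  # higher is better

def mutate(cand):
    a, b = cand
    delta = random.uniform(-0.5, 0.5)
    return (a + delta, b) if random.randrange(2) == 0 else (a, b + delta)

# 1) start with a population of 100 randomly generated candidates
population = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100)]

for generation in range(200):
    # 2) rank by fitness and keep the top performers
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    # 3) "children" are mutated copies of the best; older programs are culled
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

best = max(population, key=fitness)
print(best)  # should land near (2.0, 3.0)
```

Raising the population size or mutation range trades search breadth against speed, which is the same tension the real system manages with its cross-population exchanges and duplicate weeding.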
Artificial Intelligence on Cyber Security
There is currently a big debate raging about whether Artificial Intelligence (AI) is a good or bad thing in terms of its impact on human life. With more and more enterprises using AI for their needs, it’s time to analyze the possible impacts of the implementation of AI in the cybersecurity field.
The positive uses of AI for cybersecurity
Biometric logins are increasingly being used to create secure logins by scanning fingerprints, retinas, or palm prints. This can be used alone or in conjunction with a password and is already being used in most new smartphones. Large companies have been the victims of security breaches that compromised email addresses, personal information, and passwords.
Cybersecurity experts have reiterated on multiple occasions that passwords are extremely vulnerable to cyber attacks, compromising personal information, credit card information, and social security numbers. These are all reasons why biometric logins are a positive AI contribution to cybersecurity.
AI can also be used to detect threats and other potentially malicious activities. Conventional systems simply cannot keep up with the sheer volume of malware created every month, so this is a potential area for AI to step in and address the problem. Cybersecurity companies are teaching AI systems to detect viruses and malware by using complex algorithms so AI can then run pattern recognition in software. AI systems can be trained to identify even the smallest behaviors of ransomware and malware attacks before they enter the system, and then isolate them from that system. They can also use predictive functions that surpass the speed of traditional approaches.
Systems that run on AI unlock potential for natural language processing which collects information automatically by combing through articles, news, and studies on cyber threats. This information can give insight into anomalies, cyber attacks, and prevention strategies. This allows cybersecurity firms to stay updated on the latest risks and time frames and build responsive strategies to keep organizations protected.
AI systems can also be used in situations of multi-factor authentication to provide access to their users. Different users of a company have different levels of authentication privileges which also depend on the location from which they’re accessing the data. When AI is used, the authentication framework can be a lot more dynamic and real-time and it can modify access privileges based on the network and location of the user. Multi-factor authentication collects user information to understand the behavior of this person and decide about the user’s access privileges.
To use AI to its fullest capabilities, it must be implemented by the right cybersecurity firms that are familiar with its functioning. Whereas in the past, malware attacks could occur without leaving any indication on which weakness it exploited, AI can step in to protect the cybersecurity firms and their clients from attacks even when multiple skilled attacks are occurring.
Drawbacks and limitations of using AI for cybersecurity
The benefits outlined above are just a fraction of the potential of AI in helping cybersecurity, but some limitations are preventing AI from becoming a mainstream tool used in the field. To build and maintain an AI system, companies would require an immense amount of resources including memory, data, and computing power.
Additionally, because AI systems are trained through learning data sets, cybersecurity firms need to get their hands on many different data sets of malware codes, non-malicious codes, and anomalies. Obtaining all of these accurate data sets can take a really long time and resources which some companies cannot afford.
Another drawback is that hackers can also use AI themselves to test their malware and improve and enhance it to potentially become AI-proof. In fact, AI-proof malware can be extremely destructive as they can learn from existing AI tools and develop more advanced attacks to be able to penetrate traditional cybersecurity programs or even AI-boosted systems.
Solutions to AI limitations
Knowing these limitations and drawbacks, it’s obvious that AI is a long way from becoming the only cybersecurity solution. The best approach in the meantime would be to combine traditional techniques with AI tools, so organizations should keep these solutions in mind when developing their cybersecurity strategy:
Employ a cybersecurity firm with professionals who have experience and skills in many different facets of cybersecurity.
Have your cybersecurity team test your systems and networks for any potential gaps and fix them immediately.
Use filters for URLs to block malicious links that potentially have a virus or malware.
Install firewalls and other malware scanners to protect your systems and have these constantly updated to match redesigned malware.
As the potential of AI is being explored to boost the cybersecurity profile of a corporation, it is also being developed by hackers. Since it is still being developed and its full potential is far from being reached, we cannot yet know whether it will one day be helpful or detrimental for cybersecurity. In the meantime, organizations must do as much as they can with a mix of traditional methods and AI to stay on top of their cybersecurity strategy.
Artificial Intelligence in COVID- 19 Pandemic
During the COVID-19 pandemic, health care professionals and researchers have been confined mostly to using local and national datasets to study the impact of comorbidities, pre-existing medication use, demographics, and various interventions on disease course.
Multiple organizations are running an initiative to accelerate global collaborative research on COVID-19 through access to high-quality, real-time multi-center patient datasets. The National Science Foundation has provided funding to develop the Records Evaluation for COVID-19 Emergency Research (RECovER) initiative.
They are using the technology to find trends and data connections to help better understand and treat COVID-19, with a special emphasis on the impact existing medications have on COVID-19.
This approach allows a health care professional or researcher to identify patterns in patient responses to drugs, select or rank the predictions from our platform for drug repurposing, and evaluate their responses over time. This will help with COVID-19 and other potential pandemics.
Artificial Intelligence can inform public health decision-making amid the pandemic.
A new model for predicting COVID-19’s impact using artificial intelligence (AI) dramatically outperforms other models, so much so that it has attracted the interest of public health officials across the country.
While existing models to predict the spread of a disease already exist, few, if any, incorporate AI, which allows a model to make predictions based on observations of what is actually happening — for example, increasing cases among specific populations — as opposed to what the model’s designers think will happen. With the use of AI, it is possible to discover patterns hidden in data that humans alone might not recognize.
“AI is a powerful tool, so it only makes sense to apply it to one of the most urgent problems the world faces,” says Yaser Abu-Mostafa (Ph.D. ’83), professor of electrical engineering and computer science, who led the development of the new CS156 model (so-named for the Caltech computer science class where it got its start).
The researchers evaluate the accuracy of the model by comparing it to the predictions of an ensemble model built by the Centers for Disease Control and Prevention from 45 major models from universities and institutes across the country. Using 1,500 predictions as points of comparison with the CDC ensemble, the researchers found that the CS156 model was more accurate than the ensemble model 58 percent of the time as of November 25.
Abu-Mostafa is currently expanding the CS156 model based on feedback from public health officials in the hope that it can be a lifesaving tool to guide policy decisions.
This model is being modified to allow public health officials to predict how various interventions — like mask mandates and safer-at-home orders — affect control of the spread of the disease. Armed with those predictions, public health officials would be better able to evaluate which interventions are most likely to help.
At the end of it all, it is an undeniable fact that AI is at the center of a new enterprise to build computational models of intelligence. The main assumption is that intelligence (human or otherwise) can be represented in terms of symbol structures and symbolic operations which can be programmed in a digital computer.
There is much debate as to whether such an appropriately programmed computer would be a mind or would merely simulate one, but AI researchers need not wait for the conclusion of that debate, nor for the hypothetical computer that could model all of human intelligence. However, we cannot deny AI’s contribution to cybersecurity and public health.

Source: https://medium.com/change-becomes-you/artificial-intelligence-on-cyber-security-and-pandemic-in-2020-2a03f01f9756 | Antoine Blodgett | 2020-12-08 | Tags: Covid 19, AI, Cybersecurity, Artificial Intelligence, Tech
Microservices and AWS App Mesh

The application is similar to the one described in Part 1. I just added a few lines of extra code to get the response from the microservice bookingapp-movie. Refer to my GitHub repo for bookingapp-home for the Python code I used.
From Part 1, I have my ECS cluster ready with 3 tasks, bookingapp-home, bookingapp-movie and bookingapp-redis; all 3 tasks have service discovery configured and resolve to their endpoints properly. Let’s assume that our application is working fine and we want to roll out new code changes only to the bookingapp-movie microservice. We could roll out the changes using a rolling update strategy, but if we face any issue in the new code, all the traffic will get impacted. To roll out new changes safely, we can use a canary model, i.e. route 75% of traffic to the old bookingapp-movie service and 25% to the new bookingapp-moviev2 service; if we don’t observe any issues, send 50% to the new bookingapp-moviev2 service, and eventually send all traffic to the new service. This way, by changing a simple weight parameter, we can safely roll out new code changes without any impact.
Create new service in AWS ECS
I have cloned my GitHub repo bookingapp-moviev2, created a new Docker image, and pushed it to Docker Hub.
I am going to create a new task called bookingapp-moviev2 using the new docker image and bring up a service moviev2 and add it to the ALB.
Add the container bookingapp-moviev2:latest and create the task definition.
Now create a service moviev2 for the task.
Add the new service to the ALB, with service discovery enabled as moviev2.internal-bookingapp.com.
I have the autoscaling configured for this service as well.
Finally, review and save the service. You will now have 4 services in place.
home → from the bookingapp-home task
movie → from the bookingapp-movie task
moviev2 → from the bookingapp-moviev2 task (running modified code of bookingapp-movie)
redis → from the bookingapp-redis task
ECS services
The sample application has a ALB in front of it, and the ALB listens on port 80 and the backend is configured based on URL paths.
/home → bookingapp-home-tg → home service → bookingapp-home task
/movie → bookingapp-movie-tg → movie service → bookingapp-movie task
/moviev2 → bookingapp-moviev2-tg → moviev2 service → bookingapp-moviev2 task
/redis → bookingapp-redis-tg → redis service → bookingapp-redis task
When you see the canary deployment architecture diagram shown above, you will see that the home service contacts movie service (endpoint movie.internal-bookingapp.com). I successfully made some code changes and created a new service for movie called moviev2 (endpoint moviev2.internal-bookingapp.com). Now moviev2 service is in place but requests are not going there. Let’s see how we can replace movie service with moviev2 using canary deployment model with the help of AWS App Mesh.
AWS App Mesh
The good part about App Mesh is that you don’t have to change anything in your application code to use it. Let’s create the necessary resources in AWS App Mesh.
Create a mesh for our application — bookingapp.
Create mesh
Create a virtual node for each of our services. Start with the home service, with a listener on port 5000, as we exposed that port from the container. Leave the backend empty for now; we will update it later once we create the virtual services.
Now repeat the same for other services also, bookingapp-movie and bookingapp-moviev2.
Create virtual services for all of our services. Make sure each service name is the same as the one you created in ECS service discovery.
Create the same for other services as well. We have a total of 4 virtual services now.
After we create services, we need to add backend to the home-virtual-node because home service has to contact movie service.
Virtual Nodes → home-virtual-node → Edit
Create a virtual router only for the movie service. As mentioned above, the virtual router will route traffic based on the routes we define.
In the route section, specify the route type as http, the targets as the virtual nodes movie-virtual-node and moviev2-virtual-node with whatever weights you wish, and the match as /movie. That is the path we use to access the service in the container. Create the virtual route.
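The same route can also be defined programmatically rather than in the console. The snippet below builds the route spec as a plain dictionary in the shape the App Mesh API expects; the mesh and virtual node names come from this walkthrough, while the route and router names are my own illustrative choices, so treat this as a hedged sketch and check the boto3 App Mesh docs before relying on it.

```python
# Hedged sketch: the canary route from the console steps above, expressed as
# the spec dictionary the App Mesh API accepts. Adjust the two weights to
# shift traffic (e.g. 50/50 -> 20/80 -> 0/100).
route_spec = {
    "httpRoute": {
        "match": {"prefix": "/movie"},
        "action": {
            "weightedTargets": [
                {"virtualNode": "movie-virtual-node", "weight": 50},
                {"virtualNode": "moviev2-virtual-node", "weight": 50},
            ]
        },
    }
}

# To apply it (requires AWS credentials; route/router names are illustrative):
# import boto3
# boto3.client("appmesh").create_route(
#     meshName="bookingapp",
#     virtualRouterName="movie-virtual-router",
#     routeName="movie-route",
#     spec=route_spec,
# )

targets = route_spec["httpRoute"]["action"]["weightedTargets"]
print(sum(t["weight"] for t in targets))  # weights are relative; 100 here
```

Shifting the canary forward is then just a matter of editing the two weight values and re-applying the spec.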
Now add the virtual router to the service movie.internal-bookingapp.com.
Let’s pause and understand the flow here: when traffic comes to the service movie.internal-bookingapp.com, it reaches the Envoy proxy. The service has a provider, the movie virtual router, so the traffic is routed there. The virtual router has 2 routes with 50% weight each, so requests are split between them; each virtual route points to a virtual node (movie-virtual-node or moviev2-virtual-node), which maps to an AWS Cloud Map service. The Cloud Map service resolves to an IP and forwards the request. This is how the overall traffic flow happens with AWS App Mesh.
Now update the task definitions to use App Mesh. On the ECS cluster go to task definitions of bookingapp-home and create a new revision.
Enable App Mesh and provide all the necessary details.
Click Apply and the proxy configuration will be auto populated.
After you apply you will now see the envoy container added to the container section.
Click create to create the new task definition version. Repeat the same for other task definition also bookingapp-movie and bookingapp-moviev2.
Now, update services to use the latest task definition. In the ECS cluster go to service tab, select movie service and update.
Make sure you select the Force new deployment check box and deploy the service. Repeat the same for the moviev2 and home services as well. Wait for Fargate to pull the latest container images and bring them up. Once the instances are up, make sure they are added to their respective ALB target groups.
A simple curl request to the ALB /home path shows the load equally distributed between the two services (movie.internal-bookingapp.com and moviev2.internal-bookingapp.com).
Now I confirmed that my moviev2 service works fine with 50% of the traffic. We can now increase the traffic from 50% to 80% and see the traffic distribution.
Traffic is now mostly routed to the moviev2 service, with roughly 10–20% still routed to the movie service per the weights. Now we can simply assign 100% weight to the moviev2 service, and eventually stop the old Fargate instances and delete the ALB target group.
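As a quick sanity check on what an 80/20 weighting should produce, the distribution can be simulated. This is not App Mesh itself, just weighted random draws standing in for repeated curl requests:

```python
import random
from collections import Counter

random.seed(42)

# Simulated stand-in for hammering the ALB with curl: each request is routed
# to a backend with probability proportional to its App Mesh route weight.
weights = {"movie": 20, "moviev2": 80}

requests = random.choices(list(weights), weights=list(weights.values()), k=1000)
counts = Counter(requests)
print(counts)  # roughly 200 vs 800, with sampling noise
```

The observed 10–20% share for the old service is exactly this sampling noise around the configured 20% weight.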
Closing Notes
Using AWS App Mesh, we can easily integrate our existing services without any code changes to our application stack, and we can roll out code changes on the fly just by adjusting a simple weight parameter in the App Mesh route rules. It is also very easy to revert a deployment to the old code by switching the weight parameter back to 100% for the old service.

Source: https://deepanmurugan.medium.com/microservices-and-aws-app-mesh-f4c7cab9ddca | 2020-12-30 | Tags: Microservices, App Mesh, AWS, Aws Ecs, Docker
The World’s Happiest Countries
Vast differences in well-being exist between the happiest and least happy nations.
Finland maintained its status as the world’s happiest country, while the United States slipped a notch to № 19, according to the latest annual World Happiness Report, released March 20, 2019. Here’s how some of the 156 countries placed, based on Gallup Polls, as analyzed by the United Nations Sustainable Development Solutions Network:
The report — which should be taken with at least a few grains of salt given that it relies on somewhat unreliable self-reporting, and that it reflects averages that don’t speak to any specific individuals’ well-being — revealed several trends. One that jumped out at researchers who analyzed the data:
Happiness in the United States, among both adults and adolescents, has generally declined and is lower now than at the turn of the millennium, the researchers said. Smartphones and other digital technology may be playing a role, but are not the sole cause.
“The compulsive pursuit of substance abuse and addictive behaviors is causing severe unhappiness.”
“This year’s report provides sobering evidence of how addictions are causing considerable unhappiness and depression in the US,” said Jeffrey Sachs, director of the Sustainable Development Solutions Network. “Addictions come in many forms, from substance abuse to gambling to digital media. The compulsive pursuit of substance abuse and addictive behaviors is causing severe unhappiness. Government, business, and communities should use these indicators to set new policies aimed at overcoming these sources of unhappiness.”
The report indicates that the main factors separating the happiest countries from the least happy are income per capita, social support, healthy life expectancy, freedom, perception of corruption, and …
Generosity. The report finds support for other research suggesting that volunteering time and donating money to help others brings happiness to the giver.
“The world is a rapidly changing place,” said John Helliwell, a professor emeritus in economics at the University of British Columbia and co-editor of the report. “How communities interact with each other whether in schools, workplaces, neighborhoods or on social media has profound effects on world happiness.”
Other broad trends revealed in the report, which is based on a three-year average of the survey data (the most recent period being 2016–2018):
Among the 20 countries where happiness grew the most between 2005 and today, “10 are in Central and Eastern Europe, five are in sub-Saharan Africa, and three in Latin America.”
The 10 countries with the biggest declines in happiness “typically suffered some combination of economic, political, and social stresses,” the report states. The five largest drops since 2005: Yemen, India, Syria, Botswana and Venezuela.
Average overall world happiness has fallen in recent years, driven by the sustained downward trend in India and the growing population there.
Researchers see “a widespread recent upward trend in negative affect, comprising worry, sadness and anger, especially marked in Asia and Africa, and more recently elsewhere.”
Image: Unsplash/Anthony Ginsbrook
My own ongoing Happiness Survey (you can take it here — full results to be reported later this year) has yielded some preliminary, non-scientific results related to individual happiness. So far, those who report being the happiest also most strongly agree with these statements, on average:
I’m physically healthy.
I’m mentally healthy.
I have a great relationship with a significant other.
I’m close with my family.
I enjoy my work/career.
I laugh a lot.
However, I suggest interpreting both sets of results with caution, if for no other reason than this simple fact: defining happiness is a challenge in itself.

Source: https://medium.com/luminate/the-worlds-happiest-countries-f31e88cba993 | Robert Roy Britt | 2019-03-21 | Tags: Happiness, Health, Life, Wellbeing, Science
How I “Sanity Check” Financials For a B2C Business Idea

Hypothetically, let’s consider a B2C software app that grows primarily through paid acquisition (advertising) as an example.
In this scenario, it has a “freemium” model wherein the basic functionality is free but heavier usage customers have to pay via subscription.
It’s not a marketplace or a service that gets better with more users, so the only “real” value a customer provides to the business is revenue.
There are three immediate KPIs that are important to contemplate:
Customer Acquisition Cost: How much it costs to acquire customers.
Conversion Rate: What % of customers convert into paying customers.
Paying Customer Value: How much value paying customers generate in revenue.
This can seem a little abstract to consider without any data, but, it’s possible to inject a sense of reality by using a framework.
By researching the “closest” competitors to the idea, or components of the business model in question, and getting a feel for their KPIs, we get an initial baseline to work with.
At this point, I can whirl up a basic spreadsheet and start to populate it.
Here’s a link to it.
In the top left, in green, I have the major KPI variables. Changing these affects the rest of the spreadsheet.
My “research” returned these figures:
CAC: $10
CR: 10%
PCV (6M)*: $150
*I drop off revenue in months 7–12 to roughly account for churn.
So, once I input these figures, it populates the other cells with data.
In the example above, I started off by spending $1,000 in advertising in Month 1 as a one-off injection of capital, and reinvested the returns through to Month 12.
This generated “gross profit” for the year of $1,528.91, and $17,677.71 over two years, which can be used to fund operating expenditure:
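For readers who prefer code to spreadsheets, here is a simplified sketch of the compounding logic. The assumptions are mine, not the original sheet's (each cohort's six-month value arrives in equal monthly installments, and every dollar of revenue is reinvested into ads the following month), so the number it prints will not match $1,528.91 exactly:

```python
# Simplified sketch of the reinvestment model. Assumptions (mine, not the
# original spreadsheet's): each cohort's 6-month value arrives in six equal
# monthly installments, and all revenue received in a month is reinvested
# into ads the following month.
CAC = 10.0       # customer acquisition cost ($)
CR = 0.10        # free -> paid conversion rate
PCV_6M = 150.0   # revenue per paying customer over 6 months ($)
MONTHS = 12

ad_spend = [0.0] * (MONTHS + 1)   # ad_spend[m] = spend in month m
revenue = [0.0] * (MONTHS + 7)    # extra slots for revenue past month 12
ad_spend[1] = 1000.0              # one-off injection in Month 1

for m in range(1, MONTHS + 1):
    paying = (ad_spend[m] / CAC) * CR
    for k in range(6):  # this cohort's revenue spread over 6 months
        revenue[m + k] += paying * PCV_6M / 6
    if m + 1 <= MONTHS:
        ad_spend[m + 1] = revenue[m]  # reinvest this month's revenue

gross_profit = sum(revenue[1:MONTHS + 1]) - sum(ad_spend)
print(round(gross_profit, 2))  # gross profit over 12 months under these assumptions
```

Even in this cruder model, the single $1,000 injection compounds into a positive gross profit over the year, which is the behaviour the spreadsheet captures in more detail.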
But, all we have here is a baseline. Now, it’s time to probe further by “playing around” with the KPIs using a logical basis.
This is unique in each circumstance, so I try to calibrate them realistically and with merit. Otherwise, the process falls apart.
For example, if I’m able to offer my product in a market that is untapped by competitors, I could make the assumption CAC will be lower at varying degrees and explore how that changes the financials.
If my product delivers more value to the customer than the competition, I could make the assumption PCV (6M) will be higher in varying degrees and explore how that changes the financials. This is where a unique value proposition can really shine through.
The reverse is also true — catering for the “unknown” and factors that are “overlooked” or “underestimated”.
What happens if CAC is doubled — $20?
What happens if the Conversion Rate is halved — 5%?
What happens if PCV (6M) is 50% less — $100?
Doing this helps me build “quick narratives” around the business model.
Not just by changing numbers randomly, but by deliberately adjusting them depending upon what I believe to be the strengths and weaknesses of the business in that unique context.
It’s possible to “get a feel” for what KPIs are most likely to deviate based upon the research, and to roughly what degree. This can be used to map out a minimum and maximum threshold for each KPI.
You can also use a sensitivity analysis chart to consume this data better visually, where Conversion Rate is on the x-axis and PCV is on the y-axis:
Image supplied by author.
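A rough version of that grid can be generated numerically. This sketch uses a deliberately crude break-even rule of my own (a cell "wins" when each ad dollar returns more than a dollar of revenue, i.e. CR × PCV > CAC) rather than the spreadsheet's full model, but it is enough to show the shape:

```python
# Rough sensitivity grid in the spirit of the chart above: conversion rate on
# the x-axis, 6-month paying-customer value on the y-axis. "G" (green) means
# CR * PCV > CAC, i.e. each ad dollar returns more than a dollar of revenue.
CAC = 10.0

cr_values = [0.025, 0.05, 0.075, 0.10, 0.125]
pcv_values = [200, 150, 100, 50]

for pcv in pcv_values:
    row = "".join("G" if cr * pcv > CAC else "R" for cr in cr_values)
    print(f"PCV ${pcv:>3}: {row}")
```

At the baseline KPIs (CR 10%, PCV $150) the cell is comfortably green, and the grid makes the danger zone visible: low PCV rows stay red no matter how good the conversion rate gets.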
In the above example, the KPI range where a business can hit and “win” (green) is larger than the red area (where it “loses”).

Source: https://medium.com/founders-hustle/how-i-sanity-check-financials-for-a-b2c-business-idea-37877cdf0dc9 | Martin Delaney | 2020-12-16 | Tags: Leadership, Entrepreneurship, Business, Startup, Founders
The Dark Side of Attending an Elite College

Whenever I meet up with friends from Penn or high school classmates who attended other top schools, the conversation always turns to a familiar topic:
What would life be like if I hadn’t attended a top college?
As someone who was obsessed with college admissions in high school, I fully bought into the myth that higher education at an elite school would make life easier. Better job opportunities, amazing connections and alumni network, and a sense of confidence that we would carry for the rest of our lives.
And to be honest — all of this was largely true. I had multiple six figure job offers in consulting and finance upon graduation. I was able to peek behind the curtain and examine the lives of the true global elite. And regardless of my work experience, college is still a major talking point in most interviews.
But all of this obscured the true opportunity cost of attending an elite school. And based on conversations with hundreds of similar grads, psychologists, and even professors teaching at elite colleges, there seems to be a general consensus that there are enormous hidden costs associated with a top school.
Setting aside the obvious (and very real) risk of accumulating hundreds of thousands of dollars in student debt, graduating from a top school can prove detrimental in the following ways:
Career Options
We are consistently told that elite schools open doors to elite jobs. This is true but it glosses over the fact that these jobs are unappealing in nature to most people, whether they could get them or not. It also neglects to mention that these “elite jobs” will become your only options if you want to “maintain your upward trajectory.”
You pretty much have a few options when you graduate from an elite school:
Become an investment banking analyst
Become a management consultant
Work for an established tech company in Silicon Valley (an option that has become more common over the last 5 years)
If you are unsure of what to do, you might look at academia or traditional higher prestige graduate school. You can become a doctor, lawyer or perhaps work for a fledgling startup, if money is not an issue.
At most schools, people pursuing law and medicine usually have long desired to work in these fields. I can’t count how many friends became doctors (especially surgeons) because they were decent at math and science, didn’t want to work in engineering, and were too risk-averse to look into other options.
Similarly, law school has become the go to place for well to do, intelligent (yet aimless) grads from top schools. As one acquaintance told me at a recent party, “It’s three whole years of substantial studying but a pretty decent break from having to get a real job. Plus it will get my parents off of my back.”
These are the same people who are bored out of their minds when I talk about anything remotely related to the legal field. Clearly, they do not want to be lawyers. But if you’re a humanities major who isn’t sure what you want to do with your life, this becomes an attractive option. Many of these students have family that will gladly pay for any graduate degree. Even the ones drowning in debt from their undergraduate education might pick this path. After all, you’re already in too deep — both financially and mentally.
There is also the killing of positive career ambitions. I value entrepreneurship, but I rarely see people willing to take the risk to do something different, despite having great ideas with potentially monumental impact. Mind you — I went to school with a number of friends in Wharton. You would think that if any school were going to produce business-savvy entrepreneurs, it would be Wharton. But this is rarely the case. Why? Because while many of these students were once adventurous risk takers, they became heavily risk-averse, to the point where they would rather work a job they hate, in a city they can barely afford, and hang out with people they can hardly stand — just so that they don’t have to feel the humiliation of removing Goldman Sachs from their LinkedIn title.
Stress/Increased Sensitivity to What Other People Think
This one is perhaps the most insidious and it starts early on in college. You begin to believe that taking on enormous amounts of stress and even doing things that are unethical (or even illegal) are all just “part of the game.” As long as you can pull it all together by your 8 a.m. class or 9 a.m. interview, it doesn’t matter that you haven’t slept in three days or that your partner on a group project has figured out a way to sabotage another group’s project, to give you guys an edge when graded on a curve.
Not only do many of these people need a general wake-up call — telling them they need to look at the bigger picture and focus on living a healthier lifestyle — many are in immediate need of help.
I learned this the hard way when I got an email over winter break at Penn informing me that a student in my Spanish class had jumped to her death from a building in Philadelphia. The death of my classmate gained national news attention because she was pretty, smart, an athlete, and seemingly had everything going for her.
In all these cases, a false sense of inadequacy seemed to be at the root of the problem. Could there have been other mental health issues unrelated to attending an elite school? Of course. But based on my personal knowledge of these people, I find it hard to believe that their hypercritical environment played no role.
Freedom
While a healthy work-life balance and mental health are crucial issues, perhaps the worst of the unspoken dark sides of attending a top college is the loss of freedom many people experience.
If you really dig into why most people wanted to go to a top school, the answer is pretty much the same — they wanted some sort of freedom. This could be financial freedom for someone who wants a better life, personal freedom for those who never felt accepted, or even the freedom to impact change and make a positive difference in the world.
But unless you can move past your degree, make choices based on what you need and not what you think others expect, and ultimately reject the perceptions of your family, friends, and coworkers (no easy task), your elite college degree will only serve as an ever-tightening noose and will ultimately hinder you from finding happiness.
Microplastic Pollution In Our Soil
Microplastic pollution isn’t just a marine pollution problem. Here‘s what we need to know.
Photo by Noah Buscher on Unsplash
When we think about microplastic pollution, we often think about the ocean. After all, we usually truck our plastic trash off for recycling or the landfill. That should keep it there, right?
Sadly, recent research studies have found that microplastic pollution is a growing concern in farm soil.
Thanks to these scientists, we’re now aware that microplastic can enter plants and impede their growth. This means that animals that eat those plants consume the plastic too.
Obviously, that includes us. I know it isn’t exactly news; we’re already breathing, drinking, and eating microplastic through seafood. Still, it’s now confirmed that plastic is everywhere, even in fruits and vegetables!
In the United States and Europe, we deposit 107,000 to 730,000 tonnes of microplastic on agricultural lands annually, which could be more than two times the amount that enters the ocean (93,000 to 236,000 tons).
Where did all these microplastics come from?
Sources and causes
Sewage sludge
A year ago, I wrote about microfiber pollution and how it’s affecting our marine environment. In my research, I came across how plastic microfibers from our clothes shed when we wash and dry them.
I said clothes but it really includes any type of fabric made with synthetic fibers. Anything polyester, acrylic, nylon, or spandex is plastic in the form of textiles. They’re commonly used to make sweaters, fleece jackets, sheets, quilts, soft toys, rugs, upholstery, and more.
Every time we wash and dry these things, tiny plastic particles break off and go into the drain. At the water treatment plant, filters catch the bigger microplastics while the rest enters the waterways.
Microplastics are plastic particles smaller than 5 mm (0.2 inches). In contrast, microfibers are less than 10 micrometers (0.01 mm, or about 0.0004″) across.
Water treatment plants are mostly unable to catch microfibers, but they can catch microplastic. These microplastics end up in sewage sludge which is commonly used as fertilizer at farms.
In Europe and the US, we apply 50% of sewage sludge as fertilizers on agricultural lands, essentially dumping tons of microplastic onto farmlands year after year. In the US, the annual tonnage dumped is approximately 21,249 metric tons.
Slow-release fertilizers, coated seed, and plastic mulches
Besides sewage sludge, we’ve also introduced microplastic directly into farm soils in the form of plastic-encapsulated slow-release fertilizers and plastic-coated seeds. The plastic coatings were meant to protect seeds from bacteria and diseases.
These are significant sources of plastic pollution. A 2019 European Chemicals Agency report placed the annual plastic released onto agricultural lands at 10,000 metric tons for slow-release fertilizers, and 500 metric tons for coated seeds.
In addition, some farmers use plastic mulches in place of organic mulch to keep moisture and warmth in the soil and to suppress weeds. Since the 1950s, farmers have also started using plastic in place of glass for their greenhouses.
These plastics are difficult to recycle and dispose of. They’re often burnt or piled in a corner of their farms where they slowly break down into smaller bits of microplastic.
Naturally, all these sources of plastic break down into microplastics that contaminate whatever grows from the soil.
Rain
It’s raining plastic!
Microplastic has been detected in high concentrations in air and rain samples in major cities like London and Paris, but studies have found it in the Arctic and remote areas all over Europe and the US too.
To find out the extent of plastic pollution over protected areas in the US, Janice Brahney, an assistant professor at Utah State University, conducted a study. She collected atmospheric dust samples and rainwater from 11 National Parks and Wilderness areas in the western US.
It’s raining plastic over at Bryce Canyon, Utah.
Photo by Mark Boss on Unsplash
They found microplastic in 98% of the samples and estimated the number of plastic particles deposited over the area to be the equivalent of 123 to 300 million water bottles.
The biggest source of this microplastic pollution came from synthetic textiles from clothing, carpet, tents and climbing ropes, etc.
Microbeads accounted for 30% of the observed plastic, but they aren’t the microbeads in personal products. The scientists think they might be broken off from paint and coatings.
Consequences of microplastic pollution in soil
What does plastic pollution in the soil mean to us?
Mary Beth Kirkham, a plant physiologist and professor at Kansas State University, conducted an interesting experiment.
She grew wheat plants in soil contaminated with microplastics, with cadmium, and with both microplastics and cadmium. Cadmium is a very toxic cancer-causing metal commonly released into the environment through car batteries and tires.
She then compared the growth of these plants to plants grown without these contaminants.
More than two weeks later, the plants grown with microplastic turned yellow and wilted. Plants grown only with cadmium-contaminated soil did better, so the plant growth problem was due to microplastics.
Worse, plants grown with soil contaminated with both cadmium and microplastic contained a higher level of cadmium. This is an indication that microplastics act as a vector for cadmium to enter the plant.
Similar effects have been observed by scientists all over the world.
Alters soil characteristics
In a study conducted in Germany, researchers added different types of microplastic to the soil in different concentrations. Then they studied the microplastics’ effect on soil structure and function, water holding capacity, and microbial activity.
They used 4 different types of microplastic commonly found in the environment (polyacrylic fibers, polyamide beads, polyester fibers, and polyethylene fragments) and added them in different amounts, up to just 2% in concentration. (Plastic has been detected in soil in concentrations up to 7%.)
Though the full impact of microplastic soil contamination still needs to be studied, the results from this study show that microplastic affects fundamental soil characteristics.
In the words of the scientists, “microplastics are relevant long-term anthropogenic stressors and drivers of global change in terrestrial ecosystems.”
Contaminates vegetables and fruits
A group of Italian researchers has detected the presence of microplastic in a variety of supermarket produce like apples, carrots, and lettuce. Apples were the most contaminated, while lettuce was the least contaminated.
The scientists think that the perennial nature of fruit trees allows more plastic to accumulate.
Another study done by Chinese researchers found that plants contaminated with nanoplastics don’t grow as well and have lower chlorophyll content.
They found evidence of nanoplastics bioaccumulating in plants and concluded that microplastic pollution can affect agricultural sustainability and food safety.
Bioaccumulates up the food chain — plastic and toxins
The natural question at this point is, what happens to animals (and humans) that consume these plants?
In studies conducted on rats, scientists learned that microplastics can accumulate in the gut, liver, and kidneys, disrupt the metabolism of energy and fat, and cause oxidative stress.
The smaller the microplastic, the quicker and easier it passed into the rat’s tissues and organs. The horror!
Consider Professor Kirkham’s experiment which demonstrated that microplastic can increase the chemical contamination of plants, and the problem becomes worse.
Owing to plastic’s surface characteristics, microorganisms and pollutants (like lead and pesticides) bind to it easily.
While we don’t understand the full effects of these contaminated particles on the human body yet, both microplastic and its contaminants can bioaccumulate as we go up the food chain.
For instance, microplastic enters the plants, and cows eat those plants in copious amounts. Over time, the microplastics and toxins that entered the plants bioaccumulate. By the time we consume the beef, the plastic and toxin content would be elevated.
Now what?
The more I read about plastic pollution, the more evident it is that what I know is just the tip of this nasty iceberg. I’m grateful for hardworking scientists studying climate change and plastic pollution.
The solution to microplastic pollution, if there’s even one, has to be a collective effort. No single country, individual, or profession can solve this problem. Absolutely everyone has to chip in.
As a consumer, there are limits to what we can do, but as usual, I’ll suggest the following:
Vote for leaders who know about and propose comprehensive climate change solutions (a comprehensive solution will include plastic pollution too)
Listen to and learn from the scientists
Talk about the plastic and climate issues to everyone who’s willing to listen
Make lifestyle changes to reduce plastic use
A note about synthetic fibers
Previously, I was in two minds about synthetic fibers. Surely recycled polyester clothes are good? Plastic down-recycled into stuffings and rugs seems to be a good use of plastic too, but now I’m thinking twice about it.
After all, microplastic from synthetic textiles (including stuffings and rugs) is a very significant global source of pollution in the environment — land, water, air… it’s everywhere.
However, suggesting a wardrobe change is extremely irresponsible if we don’t address our overconsumption of clothes. People may start buying too many natural-fiber clothes and that would tax natural resources.
A better way is to buy secondhand natural-fiber clothes, reduce our polyester clothes use, and go for a small but high-quality wardrobe rather than a tonne of plasticky clothes. | https://medium.com/thoughts-economics-politics-sustainability/microplastic-pollution-in-our-soil-9772d639d96f | ['Julie X'] | 2020-09-08 20:35:29.830000+00:00 | ['Sustainability', 'Microplastic Pollution', 'Climate Change', 'Environment', 'Plastic Pollution'] |
Solving “Container Killed by Yarn For Exceeding Memory Limits” Exception in Apache Spark

Introduction
Apache Spark is an open-source framework for distributed big-data processing. Originally written in Scala, it also has native bindings for Java, Python, and R programming languages. It also supports SQL, Streaming Data, Machine Learning, and Graph Processing.
All in all, Apache Spark is often termed a unified analytics engine for large-scale data processing.
If you have been using Apache Spark for some time, you would have faced an exception which looks something like this:
Container killed by YARN for exceeding memory limits, 5 GB of 5 GB used
The reason can lie either on the driver node or on the executor node. In simple words, the exception says that, while processing, Spark had to hold more data in memory than the executor/driver actually has.
There can be a few reasons for this which can be resolved in the following ways:
Your data is skewed, which means you have not partitioned the data properly during processing which resulted in more data to process for a particular task. In this case, you can examine your data and try a custom partitioner that uniformly partitions the dataset.
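To make the custom-partitioner fix concrete, here is a pure-Python sketch of the “key salting” idea. Everything here (the function names and the toy dataset) is illustrative, not Spark API; in Spark itself you would implement a custom Partitioner or salt the keys before the shuffle.

```python
import zlib
from collections import Counter

NUM_PARTITIONS = 8

def default_partition(key):
    # Hash partitioning (roughly what a HashPartitioner does):
    # the same key always lands on the same partition.
    return zlib.crc32(repr(key).encode()) % NUM_PARTITIONS

def salted_partition(key, salt):
    # A custom partitioner: adding a per-record salt spreads a hot
    # key over several partitions instead of piling it onto one.
    return (default_partition(key) + salt) % NUM_PARTITIONS

records = ["hot"] * 900 + ["cold"] * 100  # heavily skewed toward "hot"

plain = Counter(default_partition(k) for k in records)
salted = Counter(salted_partition(k, i % 4) for i, k in enumerate(records))

print("largest partition, default:", max(plain.values()))
print("largest partition, salted:", max(salted.values()))
```

With plain hash partitioning, every “hot” record lands on one task; after salting, the hot key is spread over four partitions, so no single task has to hold the whole skewed key in memory.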
Your Spark job might be shuffling a lot of data over the network. Out of the memory available to an executor, only some part is allotted for the shuffle cycle. Try using efficient Spark APIs like reduceByKey over groupByKey, if not already done. Sometimes, shuffle can be unavoidable though. In that case, we need to increase memory configurations, which we will discuss in further points.
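Here is a toy, pure-Python simulation of why reduceByKey shuffles far less data than groupByKey. The numbers and variable names are made up for illustration; the point is that map-side combining ships at most one record per key per map task across the network.

```python
from collections import Counter

# Three simulated map tasks, each emitting many (word, 1) pairs.
map_outputs = [
    [("spark", 1)] * 1000 + [("yarn", 1)] * 500,
    [("spark", 1)] * 800,
    [("yarn", 1)] * 700,
]

# groupByKey: every single pair crosses the network to the reducers.
shuffled_group = sum(len(out) for out in map_outputs)

# reduceByKey: each map task pre-aggregates locally (map-side combine),
# so at most one record per key per task is shuffled.
combined = [Counter(k for k, _ in out) for out in map_outputs]
shuffled_reduce = sum(len(c) for c in combined)

print(shuffled_group, "records shuffled with a groupByKey-style shuffle")
print(shuffled_reduce, "records shuffled with a reduceByKey-style shuffle")
```

In this sketch the shuffle shrinks from 3,000 records to 4, which is why pre-aggregating before the shuffle eases memory pressure on the receiving executors.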
If the above two points are not applicable, try the following in order until the error is resolved. Revert any changes you might have made to spark conf files before moving ahead.
Increase Memory Overhead
Memory Overhead is the amount of off-heap memory allocated to each executor. By default, memory overhead is set to the higher value between 10% of the executor memory or 384 MB. Memory overhead is used for Java NIO direct buffers, thread stacks, shared native libraries, or memory-mapped files.
The above exception can occur on either driver or executor node. Wherever the error is, try increasing the overhead memory gradually for that container only (driver or executor) and re-run the job. Maximum recommended memoryOverhead is 25% of the executor memory
Caution: Make sure that the sum of the driver or executor memory plus the driver or executor memory overhead is always less than the value of yarn.nodemanager.resource.memory-mb i.e. spark.driver/executor.memory + spark.driver/executor.memoryOverhead < yarn.nodemanager.resource.memory-mb
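As a quick sanity check, the default-overhead rule and the YARN constraint above can be written out in a few lines of plain Python. The helper names are my own, not Spark configuration APIs:

```python
def default_memory_overhead_mb(executor_memory_mb):
    # Spark's default: the larger of 10% of the executor memory or 384 MB.
    return max(int(executor_memory_mb * 0.10), 384)

def fits_in_yarn(memory_mb, overhead_mb, node_manager_memory_mb):
    # The container YARN allocates must hold heap + overhead, and stay
    # under yarn.nodemanager.resource.memory-mb.
    return memory_mb + overhead_mb < node_manager_memory_mb

executor_mb = 5 * 1024  # a 5 GB executor heap
overhead_mb = default_memory_overhead_mb(executor_mb)
print(overhead_mb)  # 512: the 10% rule wins over the 384 MB floor
print(fits_in_yarn(executor_mb, overhead_mb, 5632))  # False: container too big
print(fits_in_yarn(executor_mb, overhead_mb, 6144))  # True
```

In other words, a 5 GB heap plus its default 512 MB overhead only fits if the node manager is sized for both, which is exactly the constraint in the caution above.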
You have to change the property by editing the spark-defaults.conf file on the master node.
sudo vim /etc/spark/conf/spark-defaults.conf

spark.driver.memoryOverhead 1024
spark.executor.memoryOverhead 1024
You can specify the above properties cluster-wide for all the jobs or you can also pass it as a configuration for a single job like below
spark-submit --class org.apache.spark.examples.WordCount --master yarn --deploy-mode cluster --conf spark.driver.memoryOverhead=512 --conf spark.executor.memoryOverhead=512 <path/to/jar>
If this doesn’t solve your problem, try the next point
Reducing the number of Executor Cores
If you have a higher number of executor cores, the amount of memory required goes up. So, try reducing the number of cores per executor which reduces the number of tasks that can run on the executor, thus reducing the memory required. Again, change the configuration of driver or executor depending on where the error is.
sudo vim /etc/spark/conf/spark-defaults.conf
spark.driver.cores 3
spark.executor.cores 3
Similar to the previous point, you can specify the above properties cluster-wide for all the jobs or you can also pass it as a configuration for a single job like below:
spark-submit --class org.apache.spark.examples.WordCount --master yarn --deploy-mode cluster --executor-cores 5 --driver-cores 4 <path/to/jar>
If this doesn’t work, see the next point
Increase the number of partitions
If there are more partitions, the amount of memory required per partition is smaller. Memory usage can be monitored with Ganglia. You can increase the number of partitions by invoking .repartition(<num_partitions>) on an RDD or DataFrame.
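The intuition is simple arithmetic (assuming records split roughly evenly): each task only has to hold about total/num_partitions records in memory at once. An illustrative sketch:

```python
def records_per_partition(total_records, num_partitions):
    # With a roughly even split, each task only holds
    # total/num_partitions records in memory at once.
    return -(-total_records // num_partitions)  # ceiling division

total = 10_000_000
print(records_per_partition(total, 200))   # 50000 records per task
print(records_per_partition(total, 1000))  # 10000 records per task
```

Quintupling the partition count here cuts each task's share to a fifth, which is why repartitioning often resolves per-container memory pressure without any config changes.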
No luck yet? Increase executor or driver memory.
Increase Driver or Executor Memory
Depending on where the error has occurred, increase the memory of the driver or executor
Caution:
spark.driver/executor.memory + spark.driver/executor.memoryOverhead < yarn.nodemanager.resource.memory-mb
sudo vim /etc/spark/conf/spark-defaults.conf
spark.executor.memory 2g
spark.driver.memory 1g
Just like other properties, this can also be overridden per job
spark-submit --class org.apache.spark.examples.WordCount --master yarn --deploy-mode cluster --executor-memory 2g --driver-memory 1g <path/to/jar>
Most likely by now, you should have resolved the exception.
If not, you might need more memory-optimized instances for your cluster!
Happy Coding!
Reference: https://aws.amazon.com/premiumsupport/knowledge-center/emr-spark-yarn-memory-limit/ | https://medium.com/analytics-vidhya/solving-container-killed-by-yarn-for-exceeding-memory-limits-exception-in-apache-spark-b3349685df16 | ['Chandan Bhattad'] | 2019-11-01 04:46:53.356000+00:00 | ['Spark', 'Big Data', 'Distributed Systems', 'Data Engineering', 'Apache Spark'] |
An Honest Conversation With My Mum Looking Back At My Eating Disorder | An Honest Conversation With My Mum Looking Back At My Eating Disorder Refinery29 UK Follow Mar 29 · 8 min read
By Eve Simmons
Photographed by Eylul Aslan
I am a strong woman and it’s all thanks to my mother, a staunch feminist who spent the majority of her 20s reclaiming the night and her 30s dressing her baby daughter in anything other than pink dresses. The first sentence I ever learned was “more food please”. A little further down the line I learned how to ask (politely) for seconds, whenever I wanted them. Following the unwritten rule of feminism, the word ‘diet’ was forbidden. So when I developed a tormenting, tyrannical eating disorder at the age of 22 my mum was, understandably, shocked.
As was I — not to mention anyone who had ever shared a “shall we order one of everything?” meal with me.
I was living in my north London family home at the time, having just landed my first job in fashion journalism as an intern. A combination of a mild identity crisis, slotting myself into the skinny model set and an anxious disposition led to me clutching for some sense of control, when all else felt uncontrollable.
The rise of clean eating was a convenient curse. Manipulating and later, restricting, my diet was the focus I’d been looking for.
It took all of two months for Mum to notice and march me to the doctor’s surgery. And it took her all of five months to come to the heartbreaking realisation that this was something she couldn’t fix.
Now, five years on, we’ve just completed my third Eating Disorders Awareness Week as a fully recovered and functioning adult.
It’s only now, after starting my own eating disorder support website and having written a book on the subject, that I’ve begun to read stories from parents, carers and other loved ones, and come to terms with what my disorder must have been like for my nearest and dearest.
Despite her unwavering love and support, I know that my mother — like every mother who has ever lived — still harbours a pernicious guilt. And given the enormous portion of pudding she now slops on my plate at family dinners, I know she’s terrified it’ll happen again.
Her words, spoken in one particularly poignant family therapy session, still linger. “It was my job to protect you. And I couldn’t. I’ll never forgive myself for that.”
It’s been four years since we had that conversation and we haven’t spoken in great detail about it since. I’ve been petrified to bring it up — hearing her utter those words was hard enough the first time.
Now, I want to relieve her of those feelings. So last week, as we tucked into an apple tart, I attempted to do just that.
Eve: This tart is lovely. Remember when I never used to eat tart?
Mum: The day the doctors told us that you had to go into hospital. You were terrified. They said they were going to take you in and you just stared at me as if to say, Please, just make it better. And I knew if it were down to me, I wouldn’t be able to do it. I remember looking at you and saying, ‘I think you have to do what they’re saying. You have to go into hospital.’ It was heartbreaking.
Eve: God yeah, I’m so sorry Mum. And then there was the time you bought me a collection of teeny tiny chocolate bars.
Mum: I was so petrified of overwhelming you. My approach was always ‘slowly slowly’, so I would collect little boxes of raisins and mini nuts and put them in your handbag, thinking, hoping, you might get tempted. Then a few weeks later I was putting your clothes away and found everything I’d bought unopened, stuffed at the back of your cupboard. I just sat on the stairs and cried.
Eve: Well that’s nice and depressing. Look at me now though! [said through mouthful of pastry]
Mum: Well exactly. The one thing I always said about both my children was that they loved their food. You both grew up with healthy appetites and adored your food. And I loved watching it. You’d eat wholeheartedly. Then suddenly, you didn’t.
Eve: And it’s especially weird considering I’m your child…and my brother’s sister.
Mum: Yes. I did wonder how on Earth it could happen to us…and where I went wrong. Obviously I blamed myself, because I always blame myself.
Eve: [Teary] But you know Mum, from what I’ve learned in the past few years about this illness, sometimes there really is no explanation. It just happens, just like any other illness.
Mum: I know, I know. But while I know that I couldn’t have necessarily prevented it, as a mother what hurts is not being able to make it better. You grew up generally listening to what I said. I always hoped that I’d had a positive influence on what you thought and you’d come to me expecting answers. The worst moment was when I realised no matter what I did, I couldn’t make it better this time.
Eve: When did you realise that?
Mum: The day the doctors told us that you had to go into hospital. You were terrified. They said they were going to take you in and you just stared at me as if to say, Please, just make it better. And I knew if it were down to me, I wouldn’t be able to do it. I remember looking at you and saying, ‘I think you have to do what they’re saying. You have to go into hospital.’ It was heartbreaking.
Eve: Did you ever think about what would happen if…the…worst…
Mum: I didn’t let myself think about it. I couldn’t bear to. That’s why I knew you had to go to hospital — as scary as it was. Your brother was living in the US, my husband had been dead for a decade. You were my…everything. I wasn’t losing another person I loved.
Eve: I guess being in hospital sheltered me from whatever was going on at home — and how you were dealing with it.
Mum: I was absolutely frantic. Leaving you there was one of the hardest things I’ve ever had to do. I spent hours on end on the phone to the hospital, trying to find out what was going on and make sure you were seeing a professional, rather than being isolated in your room. I knew that few of the staff had professional training and a lot of them had actually come from working in prisons. And that’s how they treated you — like prisoners.
Eve: I couldn’t have got through it without that. But it wasn’t too bad in the end, food-wise. As soon as I started eating — because I didn’t have a choice — it became less scary and I was able to eat pretty much everything quite quickly.
Mum: Not from where I was sitting. You had good days and bad days. If you were ever stressed out or upset or worried, you wouldn’t eat much and then your weight would drop, just like that. I came to see you after you’d been in hospital for a month and you took off your jumper — and I could see all your bones. I was with your brother and he was so shocked, he couldn’t speak for an hour after we left.
Eve: That’s so weird because I remember feeling like I was getting better at that point — and that I looked okay.
Mum: [Raises eyebrows] You didn’t that day. But then I started to see that you still had fight in you. The hospital was so horrid that you pledged to do whatever you could to get out of there — and you did. You fought to escape so you could tell the story — like a true Simmons.
Eve: Here’s an uncomfortable question. Despite always teaching me that all food was good food, did my illness make you question your own eating habits?
Mum: Well, for the past 10 years I’ve lived with inflammatory bowel disease and have had to eat very small portions, otherwise I could be in agonising pain. And I know that’s something you picked up on. There were times when I’d force myself to eat a bigger meal to set a good example and end up awake all night, writhing in pain. But I was confident in the knowledge that I never had a problem with food. I never even worried about size growing up, like so many girls my age.
Eve: What? Never?
Mum: Nope. I was always quite curvy but didn’t ever obsess over it. I didn’t get on well [with my mother] so I rejected everything she did — including diets. Oh and [giggling] when it comes to exercise, I think I’ve done about five sit-ups in my whole life.
Eve: Yes, we never were [a family] for exercise, were we?
Mum: No, which is why I thought it was the weirdest thing when I saw you doing sit-ups on your bedroom floor when you became ill. It just wasn’t us — it wasn’t you.
Eve: See, how can you feel guilty when you couldn’t have possibly passed anything on to me?
Mum: Because mothers always blame themselves don’t they? And I’m convinced it’s something to do with the early death of your father — him being ill with cancer for so long — and I’ll always carry guilt that you didn’t have the carefree childhood I felt you should have had. Whether it was my fault or not. You were the good girl who never complained and I always felt that there would be a time when the anxiety would catch up with you. And I was right, it did.
Eve: Maybe. But who knows why it happened. It isn’t anyone’s fault. And at least there’s something good to come out of it — it’s given me a sense of purpose, of passion.
Mum: Absolutely. And for that I am immensely proud of you. I think the way you help other people is wonderful. You want to stop people going through what you did — what we all did.
Eve: But you were worried about me writing about it at first!
Mum: Yes, because I know how journalism works. And I knew that the moment you spoke out, you’d always be ‘the girl with the eating disorder’. I worried that you’d become so consumed with it all, you wouldn’t have a chance to pursue other opportunities and experiences.
Eve: But if anything it’s given me more experiences.
Mum: You’re right. And — having been so private and not told anyone — I realised that my daughter had been so brave in speaking about it to help others, I ought to do the same too.
Eve: As I sit here shovelling spoonfuls of apple tart into my mouth, can you honestly, seriously tell me that you still worry about my relationship with food?
Mum: It’s something I’ll always think could rear its ugly head again. Just like any illness. As your mother, I don’t think I’ll ever stop worrying about that.
If you or someone you love is struggling with an eating disorder, please call on 0808 801 0677. Support and information is available 365 days a year. | https://medium.com/refinery29/an-honest-conversation-with-my-mum-looking-back-at-my-eating-disorder-b8894dd82751 | [] | 2020-03-29 19:01:00.889000+00:00 | ['Wellness', 'Living', 'Eating Disorders', 'Health'] |
Which framework is better: Angular.js, React.js, or Vue.js?

Before I answer, if you’re reading this article to pick a framework “to learn”, don’t. Read this article instead.
If you want to pick a framework to use (in an actual project), you may proceed :) | https://medium.com/edge-coders/which-framework-is-better-angular-js-react-js-or-vue-js-77c67d00d410 | ['Samer Buna'] | 2019-01-29 22:50:39.831000+00:00 | ['React', 'Programming', 'JavaScript', 'Angularjs', 'Vuejs'] |
The Inspirational Fiction Books that Changed Me More Than Self-Help | The Inspirational Fiction Books that Changed Me More Than Self-Help
The last book changed me for a reason you wouldn’t expect.
Photo by Justin on Unsplash
Escapism helped me cope with a year plagued by its bitter reality. I escaped the fear of being stuck inside by exercising outdoors. I escaped the fear of negative thoughts by minimizing my news intake. I escaped the fear of the world’s certain uncertainty by diving into the undemanding world of inspirational fiction.
Inspirational fiction allowed me to live in a world where hope and prosperity run rampant. They introduced me to characters with which I could empathize, characters with which I could grow for the entirety of the 200-or-so pages. These stories built up my faith muscles and reminded me that there is still positivity; still good; still happiness circulating around the world.
The books below can do the same for you. They can pluck you out of the four walls you’ve been staring at for the last year and introduce you to a world of potential. By reading them, you will feel an unmatched delight near impossible to gain by simply reading self-help books. Inspirational fiction allows you to feel the help in action, not just be told it.
(Note: These are not affiliate links. I just wanted to make it easy for you to go and purchase some darn good reading material that has the possibility of uplifting your day) | https://medium.com/mind-cafe/the-inspirational-fiction-books-that-changed-me-more-than-self-help-2e3bc6e7e0be | ['Jordan Gross'] | 2020-12-29 16:39:31.658000+00:00 | ['Life Lessons', 'Inspiration', 'Self Improvement', 'Creativity', 'Books'] |
Finland’s New Free AI Courses | Finland’s New Free AI Courses
How to get a certificate and take advantage of the course by Elements AI.
Photo by Arttu Päivinen on Unsplash
Besides being the home of Santa Claus, Finland is known as a tech leader, even ahead of the US, according to the UNDP. Indeed, tech operations constitute “over 50% of all Finnish exports.”
We even owe technologies like Linux and the first web browser to Finland. Today, Finland is keeping up its tech legacy with its free Elements of AI online course.
Overview
Elements of AI is a set of two online courses made by Reaktor and the University of Helsinki, combining theory and practice to teach as many people as possible about AI. The two courses are titled Introduction to AI and Building AI.
The course is well on its way to achieving its mission of making AI accessible: over 550,000 people have already signed up as of writing.
Introduction to AI
The first course is split into six chapters:
What is AI?
AI problem solving
Real-world AI
Machine learning
Neural networks
Implications
Screenshot of “Elements of AI” course progress section, captured by the author.
The course is very well designed, with simple explanations, nice visualizations, and exercises at the bottom of most chapters to solidify your learning.
Both courses feature a “Course Progress” ribbon to show you how you’re progressing through the course and to keep you motivated.
Building AI
The second course will take around 50 hours and is split into five chapters:
Getting started with AI
Dealing with uncertainty
Machine learning
Neural networks
Conclusion
This time, the exercises are more in-depth and practical, so they’ll be more challenging than before. Be sure to check out the community below if you get stuck.
Community
Elements of AI comes with an awesome, highly active community at Spectrum, where you can discuss and ask questions about each chapter.
As of writing, the community has almost 8,000 members you can ask questions of and study with. I've found it's an invaluable resource for making sure I truly understand the material. Best of all, it's free!
Certificate
Upon completion, you can purchase a certificate for each course for just 50 euros. This shareable certificate would make a great addition to any CV or LinkedIn profile, although it's totally optional, and the course itself is free.
The Final Project
For the final project, you're expected to demonstrate your skills and creativity. While it's not required, it's a great opportunity to put your skills into practice and share with a community of thousands of other learners.
Elements of AI gives a lot of inspiration and ideas for final projects, such as “Sources Checker” — a bot that checks the sources of news articles online.
Other ideas include noise pollution forecasting, predicting stock criteria like growth and reliability, matching ideas and doers, automating applications to relevant jobs, making expert recommendations, assessing financial risk, recommending healthy meals, and many more.
Perhaps my favorite idea is the “AI credit-risk management for social lending” project, which uses AI to predict credit risk. Models like these are already being used in the real world.
For instance, the micro-loan company Creditt uses Obviously.AI’s API to score customer profiles and find out how much to credit users. | https://medium.com/towards-artificial-intelligence/finlands-new-free-ai-courses-b75c1d53ac84 | ['Frederik Bussler'] | 2020-12-10 19:07:29.138000+00:00 | ['AI', 'Artificial Intelligence', 'Learning', 'Data Science', 'Education'] |
Why you should never agree to use teleportation | Why you should never agree to use teleportation
Spoiler: because it’ll probably kill you…at least for a little while.
If you’ve seen any sort of science fiction movie — you’ve probably come across the notion of teleportation. The ability to instantly be transported from one side of the planet — to the other.
Imagine a world where you could be in Paris for breakfast, Buenos Aires for lunch, and the newest restaurant on the moon for dinner. Pure fantasy right?
It may have been fantasy…until 2018 anyway.
Scientists in China successfully teleported a photon from Earth onto a satellite 300 miles away. This moved the concept of teleportation from being impossible to simply being a herculean endeavour.
Before we start tasting that freshly baked French bread each morning — we first need to work out how to teleport larger particles, small inanimate objects, “lesser” forms of life, and finally humans.
That is to say nothing of the seemingly astronomical amount of computing power and transmission bandwidth we will need to be capable of harnessing in order to teleport a human.
One day, a century or two from now, this technology will be mature. The question then arises — should you use a transporter, or will it mean your instant death with your life being taken over by a doppelganger? How do you know that whoever steps into the transporter is the same person who steps out?
Let us consider four ways in which a transporter might work, and whether that would mean that “you” come out the other end or a copy.
Facsimile
Body Transmission
Mind Transmission
Wormholes
Facsimile
Your body is scanned by the teleporter in your lounge room and deconstructed. You are reprinted at the destination with new “ink”.
Whilst atomically (and genetically) identical — the person at the destination would be a copy as the base materials used are different “instances” of those elements. You, of course, are dead — and will stay dead.
To demonstrate with another example — imagine transporting a house from point A to point B using this method. The house in point A has been destroyed, and while the bricks being printed in Point B look identical — they are mere copies.
Body Transmission
Your body is scanned, and deconstructed into its constituent “Lego blocks” (read: atoms). These same blocks are then fed through some sort of pipe (or via quantum entanglement) and drop out at the destination — where they are reassembled into yourself.
Unlike the previous example, the very same atoms in the original you have made it to the destination.
In this scenario, you were definitely killed — but were you brought back to life and consciousness? Or was a new instance of your consciousness "booted up"?
Does it even matter if it’s a different instance of consciousness?
Mind Transmission
Your body is scanned. A replica is reprinted at the destination — including all the data in your brain (memories, facts, relationships, and neural pathways).
The electrochemical impulses that course through your brain are transmitted (similar to a data file over Bluetooth or wi-fi) and into your new brain.
This way, while the body is new, the original "spark of life" has been transmitted over to Point B. The consciousness of the individual may have effectively just blanked out (as it would in a coma or deep sleep) for a few milliseconds.
Wormholes
The teleportation device creates and opens a wormhole under your feet that creates a tunnel through space-time, with the other end of the wormhole terminating at your destination.
In this way — you and your atoms remain wholly intact, and you effectively walk through a door or get onto a slide which takes you to where you need to go.
This solution saves you from any death and preserves the continuity of your consciousness. | https://medium.com/predict/why-you-should-never-agree-to-use-teleportation-cec3a3de58f2 | ['Kesh Anand'] | 2019-06-26 20:20:59.707000+00:00 | ['Consciousness', 'Future', 'Science Fiction', 'Technology', 'Science'] |
This is why your read-eval-print-loop is so amazing | One of the things that makes the tech community so special is that we are always looking for ways to work more efficiently. Everyone has their favorite set of tools which makes them run better. As a professional UI dev, the Chrome DevTools and the Node.js read-eval-print-loop (REPL) became my favorite tools early on. I noticed that they enabled me to work more efficiently and allowed me to learn new things more quickly.
The three phases of the REPL process
This actually made me curious to investigate why this tool is so useful. I could easily find plenty of blog posts which explained what REPLs are and how to use them, for example here or here. But this post here is dedicated to the why (as in why are REPLs such a great tool for developers).
“The number one reason that schools move away from Java as a teaching language is the high bars to Hello-world programs.” — Stuart Halloway
What is a REPL?
REPL stands for read-evaluate-print-loop and this is basically all there is to it.
Your application runtime is in a specific state and the REPL helps you to interact with it. The REPL will read and evaluate the commands and print the result and then go back to the start to read your next input. The evaluate step might change your runtime. This process can be seen as an interview with your application to query its current state.
In other words, the REPL makes your runtime more tangible and allows you to test hypotheses about it.
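To make the read-evaluate-print cycle concrete, here is a minimal sketch of the loop in Python. The function name run_repl and the list-of-commands input are my own simplifications so the sketch is self-contained, not part of any real REPL implementation:

```python
def run_repl(commands, env=None):
    """A minimal read-eval-print loop: read a command, evaluate it against a
    shared environment (which the command may mutate), print the result, repeat.
    `commands` stands in for interactive keyboard input."""
    env = env if env is not None else {}
    outputs = []
    for line in commands:             # read
        result = eval(line, env)      # evaluate (may change the runtime state)
        outputs.append(repr(result))  # print
        print(repr(result))
    return outputs

# An "interview" with the runtime: each answer reflects its current state.
history = run_repl(["1 + 1", "len('repl')"])
```

Real REPLs add error handling, statement support, and persistence of results, but the core loop really is this small.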
According to Stuart Halloway, the absence of a REPL in Java is the most significant reason why schools started to move to other languages to teach programming. Some people even use the REPL to write better unit tests.
Do I already use a REPL (-like tool) today?
This basic explanation might have reminded you of some tools which you use every day. If you know and use one of the following tools, the answer is “yes”:
The dev tools of your browser (like Chrome DevTools)
Your terminal/shell
Jupyter Notebooks
The REPL process in Clojure
Repl.it, jsfiddle.net, or jsbin.com
Online regex validators
Why is the REPL so helpful?
This question kept me up at night because I didn’t understand what makes us inefficient in the first place. I started to research some common psychological effects and tried to link them to my daily interactions with the REPL. Here are my top three hypotheses:
Being in the flow
Flow is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. (source)
I think all of us are familiar with this state, it makes us extremely productive and time flies basically. Unfortunately, it’s fairly easy to “lose” the flow, for example when you get interrupted or when you have to wait for some period. I learned this can happen very fast: Researchers found out that one second is about the limit for the user’s flow of thought to stay uninterrupted.
The REPL doesn’t need to compile or deploy your code. This leads to a very short response time (<100ms). Thus, you are able to test your hypotheses without losing the flow.
This is what we want to avoid (source: XKCD)
Positive Reinforcement
Positive reinforcement involves the addition of a reinforcing stimulus following a behavior that makes it more likely that the behavior will occur again. (source)
This is the effect that appeals the most to me. Your brain learns to favor certain actions when they were rewarded in the past. This reward could be a bonus from your boss after an outstanding month or a simple “Great job!” from your skiing instructor.
Every time your REPL experiment succeeds and you solved a puzzle/problem, your brain feels rewarded as well! This also takes place when you code in a common IDE. But the REPL responds way faster and allows you to iterate more often. So, more experiments lead to more reinforcement. This effect makes you use the REPL more often and keeps your eye on the ball (instead of distracting yourself by checking for emails).
Digital Amnesia
The tendency to forget information that can be found readily online by using Internet search engines. (source)
I have to admit, I often mix Java, Python and JavaScript syntax, because that information can be found all over the internet. I would ask myself “Do I need to use add(), append() or push() to add a new element to an array in JavaScript?”. Thus for me, an example of this effect is recalling method names of API and language references.
In the REPL, I can see the available functions immediately with autocomplete:
The code-completion feature of the Node.js REPL
The great thing is, this works beyond the standard objects of programming languages. It works for all frameworks and modules, which makes the REPL mightier than your IDE! There's no need to compare the version numbers of modules and API references anymore:
“Truth can only be found in one place: the code.” – Robert C. Martin, Clean Code
I hope this article helped you to understand how your brain works and how the REPL can help you to be more productive.
I’m curious to see if you agree with my hypotheses or if you know more tools to be a more efficient developer.
Update 2/13/2019:
I’ve also written a blog post about the usage of REPLs in Cloud Foundry Environments.
Check out this video by DJ Adams if you’d like to see the REPL in action :) | https://medium.com/free-code-camp/this-is-why-your-read-eval-print-loop-is-so-amazing-cf0362003983 | [] | 2019-02-13 17:29:32.137000+00:00 | ['Programming', 'JavaScript', 'Psychology', 'Tech', 'Productivity'] |
Creating Good UX for Better AI | Creating Good UX for Better AI
How to design a product that benefits both the user and the AI model
As you’ve probably noticed, machine learning and artificial intelligence are here to stay and will continue to disrupt the market. Many products have inherently integrated AI functions (e.g., Netflix’s suggestions, Facebook’s auto-tagging, Google’s question answering), and by 2024, 69% of managers’ routine workload will be automated, as Gartner forecasts.
A lot of work has been done around designing products that make AI accessible for users, but what about designing a product that improves the AI model? How does UX approach the development of better AI?
I’ve always been very excited about AI, and for the past couple of months, I’ve been working on the Product Management and UX of several highly technical and advanced AI products. In my experience, bridging the gap between the science behind Machine Learning(ML) and the end-user is a real challenge, but it’s crucial and valuable. Humans have a huge responsibility when it comes to teaching the different models — it can either turn into something great or go horribly wrong.
In this article, I will focus on the two sides of an AI product, and then combine them into one approach that will benefit both the end-user and the ML model.
So, first, let’s focus on the two sides of the experience:
User-centered design Model-centered design
After becoming familiar with these, I’ll combine them into one Machine Learning Experience — Model-User Design.
User-Centered Design — Creating a good product
User-centered design is the shared goal of everyone interested in UX. If the product is centered around a real user’s needs, it is far more likely to create a product-market fit and generate happy customers.
AI is pretty new to people. Many people are afraid of it for many reasons — from giving false predictions to taking away their jobs (not to mention their lives, but that’s some Terminator stuff). That’s why creating a good experience for the user is crucial.
There are a couple of tools we can use in order to create a good experience in AI products. We’ll cover some of them, including finding the right problem to solve in order to provide value, how to explain the model running “under the hood”, keeping the user involved in the learning process and preparing for mistakes.
Find a good problem to solve
The basic rule of product-market fit, which applies to all other products, applies to AI. For the product to succeed, a real problem needs to be solved. If we create the most complicated state-of-the-art AI product that predicts the flying route of a fly, that would be a great model, but no problem is being solved and no value is being created. AI should add value to users and optimize the way they work.
“The only reason your product should exist is to solve someone’s problem.” — Kevin Systrom, Co-Founder of Instagram
Explainability
Explainable AI explains what AI does to the user. The user has the right to understand why the algorithm predicted something. Explaining the why creates a more reliable connection and a feeling of trust. There are many examples such as product content suggestions on Netflix and YouTube — “Because you liked X:”, or “Based on your watch history:”.
These sentences make you understand why Netflix suggested Ozark — because you watched Breaking Bad!
You should also be aware that it’s not just about the experience, but that it’s a regulation ‘thing’. GDPR includes the right of an individual to ask for a human review of the AI’s prediction, to understand if the algorithm has made a mistake.
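As a toy illustration, attaching a "Because you watched X" reason to a suggestion can be as simple as keeping track of which watched title triggered it. The function and field names below are my own, not Netflix's actual system:

```python
def explain_recommendation(suggestion, watch_history, similar_to):
    """Attach a human-readable reason to a recommendation.
    `similar_to` maps a suggested title to the watched title that triggered it,
    so the user sees *why* the model made its prediction."""
    trigger = similar_to.get(suggestion)
    if trigger in watch_history:
        return f"Because you watched {trigger}: {suggestion}"
    return f"Recommended for you: {suggestion}"

msg = explain_recommendation(
    "Ozark",
    watch_history=["Breaking Bad"],
    similar_to={"Ozark": "Breaking Bad"},
)
```

The point is not the lookup itself but that the explanation is carried alongside the prediction, so the UI never shows a suggestion without its "why".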
Control & User feedback
We should keep in mind that the model doesn’t always know what’s best for the user, and that users should feel they have the power to affect the model and “teach” it. For example — create opportunities for the user to provide feedback on whether the prediction is right or not.
These types of messages enable feedback from the user, which will eventually help the prediction improve.
Prepare for mistakes
An AI algorithm won’t be 100% correct all the time. That’s why the algorithm should be able to communicate its confidence in a prediction — if a prediction isn’t very confident, the user should know about it and take it with a grain of salt. Also, be ready to handle mistakes and errors. The user is more likely to accept mistakes in AI if they are followed by an explanation of why the model came to its prediction (as mentioned before — explainability!). This statement should also be followed by information on how to improve the model in the future.
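One way to surface that uncertainty is to attach the confidence to the prediction and flag low-confidence results for review; a minimal sketch (the 0.8 threshold and the wording are assumptions of mine, not a standard):

```python
def present_prediction(label, confidence, threshold=0.8):
    """Format a prediction for the user, flagging low-confidence results
    so they can be taken 'with a grain of salt' and reviewed by a human."""
    if confidence >= threshold:
        return f"Prediction: {label} ({confidence:.0%} confident)"
    return (f"Prediction: {label} ({confidence:.0%} confident); "
            "low confidence, please review and send feedback")

confident_msg = present_prediction("lemur", 0.95)
shaky_msg = present_prediction("lemur", 0.51)
```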
It’s really important to remember AI has a huge impact on people’s lives. That’s why AI models’ predictions and mistakes have a colossal effect on people’s lives — wrong predictions may be highly offensive to the user (e.g., Google’s horrible false classification) or cause physical damage and even death (e.g., accidents made by self-driving cars).
Model-Centered Design — Creating a good AI
Now that we’re aligned about what user-centered design is, let’s talk about how to make the design centered around the ML model — how to improve the model and make the learning process as efficient and beneficial as possible.
When we talked about user-centered design, our goal was to make the model understand the user. Now, let’s try to make sure the user understands the model.
To make this generic and straightforward, let’s establish a very high-level flow of the machine learning process:
In order to think about Machine Learning Experience, let’s forget for a second what we know about user interface components. Let’s talk about the process and how it meets humans.
Training a model
The training part of the ML process is essentially taking a lot of data and uploading it so that the algorithm can learn from it. Let’s say we want to train a model to identify lemurs in pictures. A training process can include uploading 1,000 images, some labelled and some not, and then waiting for the model to learn. At the end of the process, the model will be trained and can identify a lemur!
As users, we’d like to make sure the algorithm learned. That’s why it’s important to visualize and clarify the training process — things like the accuracy of the model, the number of epochs that it took for it to learn, etc.
Also, if we want to make sure the model works as we want it to, we can move to the inference phase.
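The kind of training signal worth surfacing can be sketched with a toy learner. Below, a 1-D threshold "classifier" reports its accuracy after each epoch, the sort of per-epoch progress a user should be shown (purely illustrative, not a real image model; labels 1/0 stand in for "lemur"/"not a lemur", and the starting threshold and learning rate are arbitrary):

```python
def train(examples, epochs=5, lr=0.5, start=3.0):
    """Toy training loop for a 1-D threshold classifier.
    Each epoch nudges the threshold toward misclassified points and records
    accuracy, giving the user visible evidence that learning happened."""
    threshold, history = start, []
    for epoch in range(1, epochs + 1):
        for x, label in examples:
            predicted = 1 if x > threshold else 0
            threshold -= lr * (label - predicted)  # move threshold toward errors
        accuracy = sum((1 if x > threshold else 0) == y
                       for x, y in examples) / len(examples)
        history.append((epoch, accuracy))  # the per-epoch signal to surface
    return threshold, history

data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
threshold, history = train(data)
```

Plotting `history` gives exactly the accuracy-per-epoch curve users need to trust that the model converged.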
Inference
In this part, we’d like to test the understanding of the model. Inferring, to put it in very simple words, is pressing the “run” button on the AI model, with a given input. If we take the lemur example from before, at this point, we would upload a picture and check that the model understands what a lemur is and what isn’t.
After seeing the result, the user should have the ability to provide feedback, so the model will learn and improve.
Monitoring
In order to make sure the model is performing well, monitoring is needed. It’s essential to understand the relevant metrics in order to monitor the model well. For a deeper understanding of the subject, I highly recommend reading this article:
Model-User Design — Creating a good AI Product
Now, when we know both sides of the AI-Product equation, we’re able to identify the guidelines for creating a good AI product:
When thinking about the product’s users, we need to take into consideration the ML researcher who will feed and train the algorithm. With that in mind, we have some key takeaways:
Quality Control — Help the user understand the model
To give good predictions and provide an actual value, the top motivation for the ML researcher is to make sure the algorithm is as accurate as possible. For that to happen, we need the user to have a comprehensive understanding of the model’s inputs and outputs. E.g., users should understand the importance of labelling training data and giving feedback on the predictions. The better users understand the important metrics of the model, the better they’ll be able to improve the model and get better results. In other words, in order to improve the model, users need to understand the “needs” of the model.
Feedback Feedback Feedback — Help the model understand the user
In order to improve the model, it’s important to make the user’s feedback as intuitive as possible and make it a big part of the user flow. There’s only so much an algorithm can understand about human needs without actual human input (imagine expecting a baby to learn how to speak without teaching it what’s right and what’s wrong).
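A feedback loop like this boils down to recording the user's verdict on each prediction and folding corrections back into the training data. A minimal sketch (class and field names are mine, not from any particular framework):

```python
class FeedbackStore:
    """Collect user verdicts on predictions so they can be folded back
    into the next training run."""
    def __init__(self):
        self.records = []

    def record(self, input_id, prediction, user_says_correct, correction=None):
        # If the user confirms, the prediction becomes a label; if they
        # reject it, their correction (when given) becomes the label.
        label = prediction if user_says_correct else correction
        self.records.append({"input": input_id, "label": label})

    def as_training_data(self):
        # Only keep records where we actually ended up with a label.
        return [(r["input"], r["label"]) for r in self.records
                if r["label"] is not None]

store = FeedbackStore()
store.record("img_001", "lemur", user_says_correct=True)
store.record("img_002", "lemur", user_says_correct=False, correction="cat")
```

The UX side of this is making `record()` calls effortless: one tap on "right"/"wrong" in the flow shown in the screenshots above.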
Make it personal
Making users feel like they’re taking an active part in a product’s functioning is highly beneficial, for two reasons:
If users feel their contribution is driving the model’s improvement, they will be much more invested. The more the users feel the model knows them and understands their needs, the more they will enjoy the effects of the model, get precise predictions, and trust the model.
Extra reading on the subject can be found on this great post about the IKEA effect:
Learn from the best (inputs)
It’s a shared motivation for the model to learn from the best quality of input. A good design can encourage the user to upload high-quality inputs and remark when and why low-quality inputs aren’t good enough. e.g., a message saying the input image’s quality is too low in a way that the user understands and “believes”, therefore, wants to upload better images. | https://medium.com/beyondminds/creating-good-ux-for-better-ai-fefae1d9ac2f | ['Omri Lachman'] | 2020-10-01 07:50:46.432000+00:00 | ['AI', 'Artificial Intelligence', 'Technology', 'UX', 'Machine Learning'] |
Hierarchical Clustering on Categorical Data in R | Dissimilarity Matrix
Arguably, this is the backbone of your clustering. Dissimilarity matrix is a mathematical expression of how different, or distant, the points in a data set are from each other, so you can later group the closest ones together or separate the furthest ones — which is a core idea of clustering.
This is the step where data types differences are important as dissimilarity matrix is based on distances between individual data points. While it is quite easy to imagine distances between numerical data points (remember Eucledian distances, as an example?), categorical data (factors in R) does not seem as obvious.
In order to calculate a dissimilarity matrix in this case, you would go for something called Gower distance. I won’t get into the math of it, but I am providing links here and here. To calculate it, I prefer to use daisy() with metric = c("gower") from the cluster package.
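Before handing things over to daisy(), it helps to see what Gower distance actually computes: per column, a normalized absolute difference for numeric (or rank-coded ordinal) variables and a 0/1 mismatch for categorical ones, averaged across columns. Below is a bare-bones Python sketch of that idea (illustrative only; I pass column kinds and ranges in by hand, and the real cluster package handles ordered factors and weights far more carefully):

```python
def gower_distance(a, b, kinds, ranges):
    """Gower distance between two records of mixed types.
    kinds[i] is 'num' (numeric / rank-coded ordinal) or 'cat' (categorical);
    ranges[i] is the column's value range for 'num' columns (ignored for 'cat')."""
    parts = []
    for x, y, kind, rng in zip(a, b, kinds, ranges):
        if kind == "num":
            parts.append(abs(x - y) / rng if rng else 0.0)
        else:  # categorical: 0 if equal, 1 otherwise
            parts.append(0.0 if x == y else 1.0)
    return sum(parts) / len(parts)

# Two customers: (budget rank 1..3, origin, area)
d = gower_distance((1, "x", "area1"), (3, "x", "area2"),
                   kinds=("num", "cat", "cat"), ranges=(2, None, None))
```

Here d works out to (1.0 + 0.0 + 1.0) / 3, i.e. roughly 0.67: the two customers differ maximally in budget and area but share an origin.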
#----- Dummy Data -----#
# the data will be sterile clean in order to not get distracted with other issues that might arise, but I will also write about some difficulties I had, outside the code

library(dplyr)

# ensuring reproducibility for sampling
set.seed(40)

# generating random variable set
# specifying ordered factors, strings will be converted to factors when using data.frame()

# customer ids come first, we will generate 200 customer ids from 1 to 200
id.s <- c(1:200) %>%
  factor()

budget.s <- sample(c("small", "med", "large"), 200, replace = T) %>%
  factor(levels = c("small", "med", "large"),
         ordered = TRUE)

origins.s <- sample(c("x", "y", "z"), 200, replace = T,
                    prob = c(0.7, 0.15, 0.15))

area.s <- sample(c("area1", "area2", "area3", "area4"), 200,
                 replace = T,
                 prob = c(0.3, 0.1, 0.5, 0.2))

source.s <- sample(c("facebook", "email", "link", "app"), 200,
                   replace = T,
                   prob = c(0.1, 0.2, 0.3, 0.4))

## day of week - probabilities are mocking the demand curve
dow.s <- sample(c("mon", "tue", "wed", "thu", "fri", "sat", "sun"), 200, replace = T,
                prob = c(0.1, 0.1, 0.2, 0.2, 0.1, 0.1, 0.2)) %>%
  factor(levels = c("mon", "tue", "wed", "thu", "fri", "sat", "sun"),
         ordered = TRUE)

# dish
dish.s <- sample(c("delicious", "the one you don't like", "pizza"), 200, replace = T)

# by default, data.frame() will convert all the strings to factors
synthetic.customers <- data.frame(id.s, budget.s, origins.s, area.s, source.s, dow.s, dish.s)

#----- Dissimilarity Matrix -----#

library(cluster)
# to perform different types of hierarchical clustering
# package functions used: daisy(), diana(), clusplot()

gower.dist <- daisy(synthetic.customers[ ,2:7], metric = c("gower"))

# class(gower.dist)
## dissimilarity , dist
Done with a dissimilarity matrix. That’s very fast on 200 observations, but can be very computationally expensive in case you have a large data set.
In reality, it is quite likely that you will have to clean the dataset first, perform the necessary transformations from strings to factors and keep an eye on missing values. In my own case, the dataset contained rows of missing values, which nicely clustered together every time, leading me to assume that I found a treasure until I had a look at the values (meh!).
Clustering Algorithms
You may have heard that there are k-means and hierarchical clustering. In this post, I focus on the latter as it is a more exploratory type, and it can be approached differently: you could choose to follow either an agglomerative (bottom-up) or divisive (top-down) way of clustering.
Agglomerative clustering will start with n clusters, where n is the number of observations, assuming that each of them is its own separate cluster. Then the algorithm will try to find most similar data points and group them, so they start forming clusters.
In contrast, divisive clustering will go the other way around — assuming all your n data points are one big cluster and dividing most dissimilar ones into separate groups.
If you are thinking which one of them to use, it is always worth trying all the options, but in general, agglomerative clustering is better in discovering small clusters, and is used by most software; divisive clustering — in discovering larger clusters.
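To see what "bottom-up" means mechanically, here is a tiny Python sketch of agglomerative clustering over a precomputed distance matrix. It is didactic only: hclust() in the R code below is the real tool, and the helper name agglomerative is mine:

```python
def agglomerative(dist, linkage="complete"):
    """Bottom-up clustering on a distance matrix `dist` (list of lists).
    Starts with every point as its own cluster, then repeatedly merges the
    two closest clusters, recording each merge (like a dendrogram would)."""
    agg = max if linkage == "complete" else min  # complete vs single linkage
    clusters = [frozenset([i]) for i in range(len(dist))]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = agg(dist[p][q] for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((sorted(clusters[i]), sorted(clusters[j]), d))
        clusters = [c for k, c in enumerate(clusters)
                    if k not in (i, j)] + [clusters[i] | clusters[j]]
    return merges

# Points 0 and 1 are close; point 2 is far from both.
dist = [[0, 1, 10],
        [1, 0, 9],
        [10, 9, 0]]
merges = agglomerative(dist)
```

Running divisive clustering is the mirror image: start from one big cluster and split the most dissimilar group off at each step.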
I personally like having a look at dendrograms (a graphical representation of the clustering) first to decide which method I will stick to. As you will see below, some of the dendrograms will be pretty balanced, while others will look like a mess.
# The main input for the code below is dissimilarity (distance matrix)
# After dissimilarity matrix was calculated, the further steps will be the same for all data types
# I prefer to look at the dendrogram and find the most appealing one first - in this case, I was looking for a more balanced one - to further continue with assessment

#------------ DIVISIVE CLUSTERING ------------#
divisive.clust <- diana(as.matrix(gower.dist),
                        diss = TRUE, keep.diss = TRUE)
plot(divisive.clust, main = "Divisive")

#------------ AGGLOMERATIVE CLUSTERING ------------#
# I am looking for the most balanced approach
# Complete linkages is the approach that best fits this demand - I will leave only this one here, don't want to get it cluttered

# complete
aggl.clust.c <- hclust(gower.dist, method = "complete")
plot(aggl.clust.c,
     main = "Agglomerative, complete linkages")
Assessing clusters
Here, you will decide between different clustering algorithms and a different number of clusters. As it often happens with assessment, there is more than one way possible, complemented by your own judgement. It’s bold and in italics because your own judgement is important — the number of clusters should make practical sense, and the way data is divided into groups should make sense too. Working with categorical variables, you might end up with nonsense clusters because the combination of their values is limited — they are discrete, so is the number of their combinations. Possibly, you don’t want to have a very small number of clusters either — they are likely to be too general. In the end, it all comes down to your goal and what you do your analysis for.
Conceptually, when clusters are created, you are interested in distinctive groups of data points, such that the distance between them within clusters (or compactness) is minimal while the distance between groups (separation) is as large as possible. This is intuitively easy to understand: distance between points is a measure of their dissimilarity derived from dissimilarity matrix. Hence, the assessment of clustering is built around evaluation of compactness and separation.
I will go for 2 approaches here and show that one of them might produce nonsense results:
Elbow method: start with it when the compactness of clusters, or similarities within groups are most important for your analysis.
Silhouette method: as a measure of data consistency, the silhouette plot displays a measure of how close each point in one cluster is to points in the neighboring clusters.
In practice, they are very likely to provide different results that might be confusing at a certain point — a different number of clusters will correspond to the most compact / most distinctively separated clusters, so judgement and understanding of what your data is actually about will be a significant part of making the final decision.
There are also a bunch of measurements that you can analyze for your own case. I am adding them to the code itself.
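For intuition, the avg.silwidth measurement used below is the mean of per-point silhouette widths: s(i) = (b - a) / max(a, b), where a is a point's mean distance to its own cluster and b its mean distance to the nearest other cluster. Here is a small Python sketch of that calculation with a hand-made distance matrix (illustrative only; the R code obtains the same quantity from cluster.stats()):

```python
def silhouette_widths(dist, labels):
    """Per-point silhouette width from a precomputed distance matrix.
    Values near 1 mean a point sits deep inside its own cluster;
    values near 0 or below suggest it is on a cluster boundary."""
    n = len(labels)
    widths = []
    for i in range(n):
        own = [dist[i][j] for j in range(n) if j != i and labels[j] == labels[i]]
        a = sum(own) / len(own) if own else 0.0
        others = {}
        for j in range(n):
            if labels[j] != labels[i]:
                others.setdefault(labels[j], []).append(dist[i][j])
        if not others:
            widths.append(0.0)
            continue
        b = min(sum(v) / len(v) for v in others.values())  # nearest other cluster
        widths.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return widths

# Two tight, well-separated clusters: {0, 1} and {2, 3}.
dist = [[0, 1, 8, 9],
        [1, 0, 8, 9],
        [8, 8, 0, 2],
        [9, 9, 2, 0]]
widths = silhouette_widths(dist, [0, 0, 1, 1])
```

With well-separated clusters like these, every width is high, which is exactly what a good avg.silwidth in the table below indicates.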
# Cluster stats comes out as list while it is more convenient to look at it as a table
# This code below will produce a dataframe with observations in columns and variables in rows
# Not quite tidy data, which will require a tweak for plotting, but I prefer this view as an output here as I find it more comprehensive

library(fpc)

cstats.table <- function(dist, tree, k) {
  clust.assess <- c("cluster.number", "n", "within.cluster.ss", "average.within", "average.between",
                    "wb.ratio", "dunn2", "avg.silwidth")
  clust.size <- c("cluster.size")
  stats.names <- c()
  row.clust <- c()

  output.stats <- matrix(ncol = k, nrow = length(clust.assess))
  cluster.sizes <- matrix(ncol = k, nrow = k)

  for (i in c(1:k)) {
    row.clust[i] <- paste("Cluster-", i, " size")
  }

  for (i in c(2:k)) {
    stats.names[i] <- paste("Test", i - 1)

    for (j in seq_along(clust.assess)) {
      output.stats[j, i] <- unlist(cluster.stats(d = dist, clustering = cutree(tree, k = i))[clust.assess])[j]
    }

    for (d in 1:k) {
      cluster.sizes[d, i] <- unlist(cluster.stats(d = dist, clustering = cutree(tree, k = i))[clust.size])[d]
      dim(cluster.sizes[d, i]) <- c(length(cluster.sizes[i]), 1)
      cluster.sizes[d, i]
    }
  }

  output.stats.df <- data.frame(output.stats)

  cluster.sizes <- data.frame(cluster.sizes)
  cluster.sizes[is.na(cluster.sizes)] <- 0

  rows.all <- c(clust.assess, row.clust)
  # rownames(output.stats.df) <- clust.assess
  output <- rbind(output.stats.df, cluster.sizes)[ , -1]
  colnames(output) <- stats.names[2:k]
  rownames(output) <- rows.all

  is.num <- sapply(output, is.numeric)
  output[is.num] <- lapply(output[is.num], round, 2)

  output
}

# I am capping the maximum amount of clusters by 7
# I want to choose a reasonable number, based on which I will be able to see basic differences between customer groups as a result

stats.df.divisive <- cstats.table(gower.dist, divisive.clust, 7)
stats.df.divisive
Notice that average.within, which is the average distance among observations within clusters, is shrinking, and so is the within-cluster SS. Average silhouette width is a bit less straightforward, but the reverse relationship is nevertheless there.
See how disproportionate the cluster sizes are. I wouldn't rush into working with such incomparable numbers of observations within clusters. One reason is that the dataset may be imbalanced, and some group of observations could outweigh all the rest in the analysis; this is not good and is likely to lead to biases.
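To make the two headline measures concrete, here is a minimal, dependency-free Python sketch (my own illustration, not the fpc implementation; the 1-D points and cluster labels below are made up) of within-cluster sum of squares and the average within-cluster pairwise distance:

```python
from itertools import combinations

def within_cluster_ss(points, labels):
    """Sum, over clusters, of squared distances to the cluster mean."""
    total = 0.0
    for c in set(labels):
        members = [p for p, l in zip(points, labels) if l == c]
        mean = sum(members) / len(members)
        total += sum((p - mean) ** 2 for p in members)
    return total

def average_within(points, labels):
    """Average pairwise distance between observations sharing a cluster."""
    dists = []
    for c in set(labels):
        members = [p for p, l in zip(points, labels) if l == c]
        dists += [abs(a - b) for a, b in combinations(members, 2)]
    return sum(dists) / len(dists)

points = [1.0, 1.2, 0.8, 5.0, 5.4]
labels = [1, 1, 1, 2, 2]
print(within_cluster_ss(points, labels))  # tight clusters give a small SS
print(average_within(points, labels))
```

Both numbers shrink as clusters get tighter, which is why you want to read them together with the cluster sizes rather than in isolation.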
stats.df.aggl <- cstats.table(gower.dist, aggl.clust.c, 7)  # complete linkage looks like the most balanced approach
stats.df.aggl
Notice how much more balanced agglomerative complete-linkage hierarchical clustering is in terms of the number of observations per group.
# --------- Choosing the number of clusters ---------#
# Using the "Elbow" and "Silhouette" methods to identify the best number of clusters
# To better picture the trend, I will go for more than 7 clusters

library(ggplot2)

# Elbow
# Divisive clustering
ggplot(data = data.frame(t(cstats.table(gower.dist, divisive.clust, 15))),
       aes(x = cluster.number, y = within.cluster.ss)) +
  geom_point() +
  geom_line() +
  ggtitle("Divisive clustering") +
  labs(x = "Num. of clusters", y = "Within clusters sum of squares (SS)") +
  theme(plot.title = element_text(hjust = 0.5))
So, we've produced the "elbow" graph. It shows how the within-cluster sum of squares, a measure of how close the observations are (the lower it is, the closer the observations within the clusters), changes as the number of clusters grows. Ideally, we should see a distinctive "bend" in the elbow where splitting clusters further gives only a minor decrease in the SS. In the case of the graph below, I would go for something around 7. Although in this case one of the clusters will consist of only 2 observations; let's see what happens with agglomerative clustering.
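Eyeballing the "distinctive bend" can be made less of a judgement call with the common max-distance-to-chord heuristic: pick the point on the SS curve farthest from the straight line joining its endpoints. This is my own illustrative Python sketch with a hypothetical SS curve, not part of the article's R pipeline:

```python
def elbow_index(ss):
    """Return the 0-based index of the point farthest from the straight
    line joining the first and last points of the SS curve."""
    n = len(ss)
    x1, y1, x2, y2 = 0, ss[0], n - 1, ss[-1]
    best_i, best_d = 0, -1.0
    for i, y in enumerate(ss):
        # perpendicular distance from (i, y) to the chord
        num = abs((y2 - y1) * i - (x2 - x1) * y + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        d = num / den
        if d > best_d:
            best_i, best_d = i, d
    return best_i

ss = [100, 45, 20, 12, 10, 9, 8.5]   # hypothetical within-cluster SS for k = 1..7
print("suggested k:", elbow_index(ss) + 1)  # → suggested k: 3
```

On a real curve like ours the heuristic is only a starting point; cluster sizes and interpretability still decide the final k.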
# Agglomerative clustering provides a more ambiguous picture
ggplot(data = data.frame(t(cstats.table(gower.dist, aggl.clust.c, 15))),
       aes(x = cluster.number, y = within.cluster.ss)) +
  geom_point() +
  geom_line() +
  ggtitle("Agglomerative clustering") +
  labs(x = "Num. of clusters", y = "Within clusters sum of squares (SS)") +
  theme(plot.title = element_text(hjust = 0.5))
The agglomerative "elbow" looks similar to the divisive one, except that it is smoother, with less abrupt "bends." As with divisive clustering, I would go for 7 clusters; between the two methods, I prefer the cluster sizes produced by the agglomerative method, since I want groups comparable in size.
# Silhouette
# Divisive clustering
ggplot(data = data.frame(t(cstats.table(gower.dist, divisive.clust, 15))),
       aes(x = cluster.number, y = avg.silwidth)) +
  geom_point() +
  geom_line() +
  ggtitle("Divisive clustering") +
  labs(x = "Num. of clusters", y = "Average silhouette width") +
  theme(plot.title = element_text(hjust = 0.5))
When it comes to silhouette assessment, the rule is you should choose the number that maximizes the silhouette coefficient because you want clusters that are distinctive (far) enough to be considered separate.
The silhouette coefficient ranges between -1 and 1, with 1 indicating good consistency within clusters and -1 indicating poor consistency.
From the plot above, you would not go for 5 clusters — you would rather prefer 9.
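For intuition, the silhouette of a single observation can be computed directly from any distance matrix, which is why it works with Gower distances just as well as with Euclidean ones. Here is a minimal illustrative Python sketch with a toy distance matrix of my own (not our Gower matrix); it assumes every cluster has at least two members:

```python
def silhouette(i, dist, labels):
    """Silhouette of observation i: (b - a) / max(a, b), where a is the mean
    distance to i's own cluster and b the mean distance to the nearest other cluster."""
    own = labels[i]
    same = [dist[i][j] for j in range(len(labels)) if j != i and labels[j] == own]
    a = sum(same) / len(same)
    b = float("inf")
    for c in set(labels) - {own}:
        other = [dist[i][j] for j in range(len(labels)) if labels[j] == c]
        b = min(b, sum(other) / len(other))
    return (b - a) / max(a, b)

# toy symmetric distance matrix for 4 observations, clusters {0, 1} and {2, 3}
dist = [[0.0, 0.1, 0.9, 0.8],
        [0.1, 0.0, 0.8, 0.9],
        [0.9, 0.8, 0.0, 0.2],
        [0.8, 0.9, 0.2, 0.0]]
labels = [1, 1, 2, 2]
print(silhouette(0, dist, labels))  # close to 1: tight, well-separated clusters
```

The average of these per-observation values over the whole dataset is the avg.silwidth that cstats.table reports.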
As a comparison, for an "easy" case, the silhouette plot is likely to look like the graph below. We are not quite there, but almost.
ggplot(data = data.frame(t(cstats.table(gower.dist, aggl.clust.c, 15))),
aes(x=cluster.number, y=avg.silwidth)) +
geom_point()+
geom_line()+
ggtitle("Agglomerative clustering") +
labs(x = "Num.of clusters", y = "Average silhouette width") +
theme(plot.title = element_text(hjust = 0.5))
What the silhouette width graph above is saying is, "the more you break the dataset up, the more distinctive the clusters become." Ultimately, you will end up with individual data points, and you don't want that; if you try a larger k for the number of clusters, you will see it. For example, at k = 30, I got the following graph:
So-so: the more you split, the better it gets, but we can't keep splitting down to individual data points (remember that the graph above uses 30 clusters, and we have only 200 data points).
Summing it all up, agglomerative clustering in this case looks way more balanced to me — the cluster sizes are more or less comparable (look at that cluster with just 2 observations in the divisive section!), and I would go for 7 clusters obtained by this method. Let’s see how they look and check what’s inside.
The dataset consists of 6 variables which need to be visualized in 2D or 3D, so it's time for a challenge! The nature of categorical data poses some limitations too, so using pre-defined solutions can get tricky. What I wanted was to a) see how observations are clustered and b) know how observations are distributed across categories. So I created a) a colored dendrogram and b) a heatmap of observation counts per variable within each cluster.
library("ggplot2")
library("reshape2")
library("purrr")
library("dplyr")

# Let's start with a dendrogram
library("dendextend")

dendro <- as.dendrogram(aggl.clust.c)
dendro.col <- dendro %>%
  set("branches_k_color", k = 7, value = c("darkslategray", "darkslategray4", "darkslategray3",
                                           "gold3", "darkcyan", "cyan3", "gold3")) %>%
  set("branches_lwd", 0.6) %>%
  set("labels_colors", value = c("darkslategray")) %>%
  set("labels_cex", 0.5)

ggd1 <- as.ggdend(dendro.col)

ggplot(ggd1, theme = theme_minimal()) +
  labs(x = "Num. observations", y = "Height", title = "Dendrogram, k = 7")
# Radial plot looks less cluttered (and cooler)
ggplot(ggd1, labels = T) +
scale_y_reverse(expand = c(0.2, 0)) +
coord_polar(theta="x")
# Time for the heatmap
# The 1st step here is to have 1 variable per row
# Factors have to be converted to characters in order not to be dropped

clust.num <- cutree(aggl.clust.c, k = 7)
synthetic.customers.cl <- cbind(synthetic.customers, clust.num)

cust.long <- melt(data.frame(lapply(synthetic.customers.cl, as.character), stringsAsFactors = FALSE),
                  id = c("id.s", "clust.num"), factorsAsStrings = T)

cust.long.q <- cust.long %>%
  group_by(clust.num, variable, value) %>%
  mutate(count = n_distinct(id.s)) %>%
  distinct(clust.num, variable, value, count)

# heatmap.c will be suitable in case you want to go for absolute counts - but it doesn't tell much to my taste
heatmap.c <- ggplot(cust.long.q, aes(x = clust.num,
                                     y = factor(value, levels = c("x", "y", "z",
                                                                  "mon", "tue", "wed", "thu", "fri", "sat", "sun",
                                                                  "delicious", "the one you don't like", "pizza",
                                                                  "facebook", "email", "link", "app",
                                                                  "area1", "area2", "area3", "area4",
                                                                  "small", "med", "large"), ordered = T))) +
  geom_tile(aes(fill = count)) +
  scale_fill_gradient2(low = "darkslategray1", mid = "yellow", high = "turquoise4")

# Calculating the percent of each factor level in the absolute count of cluster members
cust.long.p <- cust.long.q %>%
  group_by(clust.num, variable) %>%
  mutate(perc = count / sum(count)) %>%
  arrange(clust.num)

heatmap.p <- ggplot(cust.long.p, aes(x = clust.num,
                                     y = factor(value, levels = c("x", "y", "z",
                                                                  "mon", "tue", "wed", "thu", "fri", "sat", "sun",
                                                                  "delicious", "the one you don't like", "pizza",
                                                                  "facebook", "email", "link", "app",
                                                                  "area1", "area2", "area3", "area4",
                                                                  "small", "med", "large"), ordered = T))) +
  geom_tile(aes(fill = perc), alpha = 0.85) +
  labs(title = "Distribution of characteristics across clusters", x = "Cluster number", y = NULL) +
  geom_hline(yintercept = 3.5) +
  geom_hline(yintercept = 10.5) +
  geom_hline(yintercept = 13.5) +
  geom_hline(yintercept = 17.5) +
  geom_hline(yintercept = 21.5) +
  scale_fill_gradient2(low = "darkslategray1", mid = "yellow", high = "turquoise4")

heatmap.p
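The count-then-percentage aggregation that feeds the heatmap can also be sketched without dplyr. This is a toy Python version of the same idea (the records below are made up and stand in for the synthetic customers; it counts rows, which matches the original's distinct-id count when each customer appears once per variable):

```python
from collections import Counter

records = [  # (customer_id, cluster, variable, value)
    (1, 1, "basket.size", "small"),
    (2, 1, "basket.size", "small"),
    (3, 1, "basket.size", "large"),
    (4, 2, "basket.size", "large"),
]

# count customers per (cluster, variable, value) -- like cust.long.q
counts = Counter((clust, var, val) for _, clust, var, val in records)

# share of each factor level within its (cluster, variable) group -- like cust.long.p
totals = Counter()
for (clust, var, _), n in counts.items():
    totals[(clust, var)] += n
perc = {key: n / totals[(key[0], key[1])] for key, n in counts.items()}

print(perc[(1, "basket.size", "small")])  # 2 of the 3 cluster-1 customers
```

Plotting `perc` per cluster and factor level is exactly what heatmap.p does with geom_tile.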
Having the heatmap, you can see how many observations fall into each factor level within the initial factors (the variables we started with). The deeper blue corresponds to a higher relative number of observations within a cluster. Here you can also see that day of the week and basket size have almost the same number of customers in each bin; it might mean those are not definitive for the analysis and could be omitted. | https://towardsdatascience.com/hierarchical-clustering-on-categorical-data-in-r-a27e578f2995 | ['Anastasia Reusova'] | 2019-03-26 16:08:19.820000+00:00 | ['Data Science', 'Clustering', 'Segmentation', 'Visualization']
My Agoraphobic Life | My name is Heather, and I have a problem
I live with agoraphobia, and it keeps me bound to geographical and emotional areas. Agoraphobia, directly translated, means "fear of the marketplace." Interesting. I dream of a life where I could enjoy the marketplace or any public place. But I can't. Not yet, anyway.
It started when…
Many therapists are confident my agoraphobia is a result of childhood sexual trauma. Makes sense. I did live with ongoing sexual abuse between ages six and 14. Those years, I spent most of my life looking over my shoulder and gauging my perpetrator’s true intentions. But I got out of that situation, and I feel pretty healed.
I think my agoraphobia is the result of a medical problem. I was 18 years old and nursing my firstborn when, suddenly, my heart started pounding wildly in my chest. My friend drove me to the hospital, where the doctors and nurses acted very suspicious, as though I were on drugs. The male nurse leaned over my shoulder and whispered into my ear, "Would it be alright if I undress you with my hands?" Then they left me hooked up to a heart monitor for a few hours. My heart rate fluctuated around 230 beats per minute. That is fast. Too fast, they told me as they quickly pushed a bit of adenosine into my IV. I remember the feeling of bricks on my chest, and my vision fading into one pinpoint of light that reminded me of turning off a tube television. I lost consciousness for a moment.
The doctors said this was likely an isolated episode. Still, I was traumatized. I was constantly aware of my heartbeat. Was it too fast or too slow? Would tonight be the night it stops altogether? I was what my first therapist would call hypervigilant before she diagnosed me with panic disorder. It was a good call, but no amount of therapy or medication seemed to help.
The panic grew into insomnia, and it didn’t take long for me to lose the desire to leave my home. I couldn’t locate the source of this panic. It wasn’t a tangible thing that I could hit or run from. What I could do is retreat into my zone.
Defining the Zone
When people learn that I suffer from agoraphobia, I think they imagine me cowering in a dark corner of my hoarder house and mumbling to myself. This is not the case at all. I have built my life in such a way that I can actually live it.
Here's a little-known fact: There is a somewhat secret border surrounding Los Angeles' Jewish communities called an eruv. This line is made up of walls, hills, and, partially, a thin string. The string, somewhat like a fishing line, is secured inconspicuously between existing poles. Orthodox Jews aren't permitted to push, pull, or carry outside of their home on the Sabbath. So rather than stay inside, the eruv expands the idea of "home." Not that these people live in the street, but the eruv marks a common vs. public area where they are safe to conduct necessary activities, like carrying their child to temple or pushing grandma in a wheelchair. They are safe to do so without fear of sin. They are safe.
I have done something similar to contain my fear and expand my home. I selected a house that I love and filled it full of comfort. I don’t have a cowering corner, but I do have a little Harry Potter closet beneath the stairs if I need to be in a small space. I can walk to the park with my children. My doctor, the grocery store, hospital, pharmacy, and library are all within steps of each other. I have picked a coffee shop and a restaurant where the people aren’t threatening, and the environment is cozy. This is my zone. I am never far from professional help, should I need it, and I am safe.
Yet somehow, this system is imperfect. I have been married for ten years and have never been to my in-laws' home, because it is eight hours away. Not even once. My relationships suffer.
Goals. Photo by Matic Kozinc on Unsplash
Relationships
Living with agoraphobia makes it nearly impossible to forge new relationships. The second I set foot out of my zone, I am overcome with panic — my heart rate increases, I am suddenly starved for air, my head is dizzy, and I am positive I will die. Not figuratively. I am suddenly faced with a few choices: Fight it, run from it, or retreat. So I choose to retreat, and when I do, I forgo meeting new people.
Getting around is a bit of a problem. I drive, like most everybody. But I can't drive out of the zone, and I definitely cannot use the freeway. Try that out sometime in Southern California.
Even as a passenger outside of the zone, I get the urge to jump out of a moving car. Let's be clear: I don't want to die, and I am not suicidal. Since I do want to live, I have to be child-locked into a car.
The rough part is, not everyone wants to be my chauffeur. Especially when I’m in the back seat having an entire come apart. And public transportation? Nope. I can’t do it. For this reason, my circle has become extremely small.
I have friends. Like…two friends. No, seriously. Their names are Tanya and Amy. I used to have many, but a friendship with me lacks a particular quid-pro-quo element common to typical relationships. You might want me to drop by, for instance. Only I can’t. Not unless you want to meet me at my coffee shop, where it is safe. The friends I do have know they need to come to my house, and any excursion may end with me begging to go back home. And they love me anyway.
Photo by Joseph Pearson on Unsplash
Making new friends can be hard on me too. Once a person finds out about my situation, they want to fix me. Whether it be through prayer or multi-level-marketing snake oil that they think I should buy. Strangers are sincere in their desire to fix me, I guess. I just feel like they should try to understand me first. Am I really that broken?
In life, though, sometimes stuff happens — unavoidable stuff like funerals, weddings, and grandbabies. These are milestones that you can’t miss — even an agoraphobic mess such as myself.
So I get into a car, or a plane and grit my teeth, white-knuckling it the whole way. There is not enough Xanax in the entire world to make me go if I didn’t have to.
It isn’t all sadness and tears, though. I can promise that Tanya and Amy are genuinely my friends. And my husband, boy, does he ever love me. They are an integral part of my tribe, along with my family. My tribe respects my self-imposed boundaries and sees my panic attacks as just some quirky thing that comes with loving me. They understand I am not trying to have a one-sided relationship.
My health suffers a little. My most recent bloodwork showed my Vitamin D to be a whopping 13. So I spend more time writing on my balcony or at my park.
The future
My future is brighter than it has ever been. No one should feel sorry for me, because I have progressed in leaps and bounds. I’ve come from not leaving my home, to creating a zone. Every day, I try to push those boundaries out, just a little. It doesn’t take long for a bit of work to become a lot of progress. The world isn’t trying to hurt me. It is merely waiting for me to become part of it. | https://heathermonroe.medium.com/my-agoraphobic-life-af38d326ea22 | ['Heather Monroe'] | 2019-10-22 23:03:18.003000+00:00 | ['Self-awareness', 'Community', 'Mental Health', 'Self Improvement', 'Abuse'] |
Dispelling Three Common Myths of Machine Learning Personalization | Photo by Glenn Carstens-Peters on Unsplash
Before we get all worked up about the future of AI and the inevitable singularity, we should be clear about what exactly machine learning personalization (MLP) is. It turns out that what it is and what it does is probably not what you thought. In what follows, I’ll try to explain and dispel three of the most common myths I see when reading and discussing MLP with academics and practitioners.
Keep in mind that these myths apply to collaborative filtering and hybrid approaches to personalized recommendations, which rely on behavioral big data and make up most of what we see deployed in industry today.
Myth 1: MLP Works by Predicting Your Needs, Preferences, and Desires
This misconception is understandable, as we generally view persons as having both inner desires, needs, and preferences and outer-facing behavior. If we use the word personalization, then we might assume we are referring to one's inner world of needs and preferences — that unique, narrative soup of personal history, values, goals, desires, and wants that makes you, you. But we aren't, and we can't. Many academic articles, and patents for recommender systems by companies like Google and IBM, make this mistake. For example, a highly cited paper by Basu et al. (1998) states:
This paper presents an inductive learning approach to recommendation that is able to use both ratings information and other forms of information about each artifact in predicting user preferences.
Or another, more recent example from the first sentence of Yeomans et al. (2019):
Computer algorithms are increasingly being used to predict people’s preferences and make recommendations.
I can very quickly tell you why this cannot possibly be how MLP works. Preferences don’t buy things. Needs don’t click ads. Desires don’t churn. People do these things and these things are recorded in the form of observable behaviors. The training data used in MLP is really just a thin slice of your observed behavior, behavior which is afforded by the design of the app or device and happens to be measured. Smart designers and data scientists can collect the right kind of measurements to make the inferential leap from observed behavior to mental state fairly accurate, however.
Another reason why MLP can’t predict your preferences is because we have no method for actually knowing what your “true” preferences are. Do we mean your considered, conscious, verbally-reported preferences, or those unconscious goal-directed preferences we share with our evolutionary ancestors? Without a ground truth, we cannot compute a loss function and we therefore cannot optimize the parameters of the predictive model to minimize this cost function.
Despite this discrepancy, many in industry and academia have seemingly fallen into the trap of radical behaviorism, whether they’re aware of it or not.
Here’s a more concrete example. When you train a machine learning model, there is no outcome column labeled “needs” or “interests” and a list of possible discrete values such a variable might have, such as “toilet paper” or “trip to Italy.” Instead the outcome column will simply say “Buy” or “Add” or “Churn” and the value for these columns will be typically either a 1/0. These are all very narrowly defined behaviors that are the result of a near infinity of prior mental states. But, strictly speaking, mental states are not equivalent to behaviors. (See Daniel Dennett’s Inverted Spectrum thought experiment for a nice example of why the “meaning” of behaviors is over-determined.)
Conflating a behavior with a mental state is sloppy thinking and, at worst, scares laypeople into believing that machines can predict their thoughts. MLP should not be mistaken for a predictive theory of mind.
Myth 2: Personalized Recommendations are Unique to You
This myth might take the most unpacking, and there are several angles to this, but I will focus here on just a couple. In many cases, the predictions and recommendations are based on models trained (optimized) on aggregate data that may not even include any of your personal data. If you can be said to receive a “personalized” recommendation or prediction at all, then it is only because recommendations were not made using a pre-set list and given to all at once. This is roughly how advertising was done prior to the Internet, when everyone saw the same billboards and newspaper ads. Let’s call this the “naive view” of MLP.
Many people seem to believe that if each row in a data table is assigned a prediction (instead of globally assigning one to everyone, say based on a global average), then that prediction is personalized. But this is an extremely thin understanding of personalization. The concept of personalization deserves a much richer examination.
A New Taxonomy For Evaluating Personalization
I suggest we instead think of personalization using a dual taxonomy of properties of either 1) the data or 2) the model, or some combination of both. Data used in MLP can be broken down into input and output properties. For example, we might ask what percentage of a recommendation's input features are behavioral, or what proportion of its input features are classified as personal data under the GDPR. It stands to reason that personalization requires personal data, and personal data are defined differently under each legal regime (the GDPR, the CCPA, or the FTC's Fair Information Practice Principles, for instance).
Conversely, we might quantify the degree of personalization by reference to output data properties, such as the uniqueness of values. A more personalized prediction means that fewer people share the same recommendation. Surely a system that resulted in everyone getting the same recommendation isn’t really personalized (or is it?). This will of course depend on the size of ranked lists, the inventory size, and the number of users you’re recommending for.
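One crude way to quantify that output-side notion is the share of users whose recommendation list is shared with no one else. The following Python sketch is my own illustrative metric with toy data, not an established standard:

```python
from collections import Counter

def uniqueness(recs):
    """Fraction of users whose recommended list is shared with no other user."""
    freq = Counter(tuple(r) for r in recs.values())
    unique_users = sum(1 for r in recs.values() if freq[tuple(r)] == 1)
    return unique_users / len(recs)

recs = {
    "ann":  ["a", "b", "c"],
    "bob":  ["a", "b", "c"],   # same list as ann, so neither counts as unique
    "carl": ["d", "a", "b"],
    "dina": ["c", "e", "a"],
}
print(uniqueness(recs))  # 0.5: two of the four users got a one-of-a-kind list
```

A system where everyone receives the identical list scores 0, regardless of how "personalized" the pipeline that produced it was.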
Substantive vs. Procedural Personalization
Another interesting way of viewing personalization is by considering whether it is substantive or procedural. I've borrowed this idea from political philosophy, where scholars debate whether procedural or substantive justice is preferable. I'll sidestep those thorny questions for now.
Substantive personalization refers to properties of the output (e.g., all unique outputs), irrespective of the process which led to this output. Procedural personalization refers to the process (e.g., all rows are input to the same process), irrespective of the particular output such a process might generate. This distinction is useful because there might be cases where we have a highly homogenous group of data subjects and, even though we have trained a model on each unique data subject’s data, we end up with the same (or very similar) output recommendations. From an outsider’s perspective, it might seem like we haven’t personalized our recommendations since nearly every data subject got the same recommendation.
But we could reply by saying our recommendations were procedurally personalized.
Various Other Ways We Might Conceptualize Personalization
The following list is neither exhaustive nor mutually exclusive. We might decide that our personalized recommendations are personalized because they combine personal data (defined under the GDPR) and each user receives a unique recommendation, for example.
With this in mind, we might also classify a prediction or recommendation as personalized based on properties of the model used to generate it. At the most basic level, learning an individual model for each user using just that user’s own data would seem to be very personalized, though not practical for most organizations. Most data controllers simply don’t collect this much behavioral data (…yet). One consequence of this view is that users with the exact same profiles would get the exact same model. Again, this would be an example of input-based procedural, not substantive, personalization.
Another approach we might take is to quantify personalization as the uniqueness of model parameter values. So if our models have different parameter values, then the resulting predictions are personalized, even if the results are the same. This would represent input-based substantive personalization. Currently, most industry models trained on aggregate data wouldn’t satisfy this criterion.
Or we might quantify personalization as the type of model used to generate the prediction: maybe some will get a neural network, while others get a random forest. Perhaps some users do not care so much about the “best possible prediction” and a linear regression would be preferable to a deep neural network (it would also be more explainable…). As long as data subjects were input into a unique procedure for assigning the particular model, we might call this model-based procedural personalization, even if the resulting recommendations were all similar (perhaps because we only can recommend a small set of items).
Finally, maybe a personalized prediction means that the model generating a personalized recommendation for you had a unique set of input features. This will increasingly occur as data subjects under the GDPR opt-out of specific forms of data collection (e.g., certain kinds of tracking cookies or GPS locations). The behavioral data you permit the data controller to collect may mean you need different models based on different feature sets, determined by regulatory pressures. We could classify this case as an instance of input-based substantive personalization.
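A toy sketch of that consent-driven model routing might look like the following. The feature names and routing rules here are entirely hypothetical, chosen only to illustrate how opting out of data collection could change which model a user is assigned:

```python
def pick_model(consented_features):
    """Route a user to a model family based on which inputs they allow.
    Hypothetical policy, for illustration only."""
    feats = set(consented_features)
    if {"clicks", "gps", "purchase.history"} <= feats:
        return "deep.net"              # rich behavioural signal available
    if "purchase.history" in feats:
        return "random.forest"         # partial signal
    return "popularity.baseline"       # no personal data consented

print(pick_model(["clicks", "gps", "purchase.history"]))  # deep.net
print(pick_model([]))                                     # popularity.baseline
```

Under the taxonomy above, every user passes through the same routing procedure, but the feature sets, and therefore the models, differ per user.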
Myth 3: MLP Knows You Better Than You Know Yourself (or Your Friends)
Be wary when researchers and industry claim that their MLP systems “outperform humans.” In some cases, the researchers may have artificially reduced the scope of the prediction context to make it more amenable to a machine. Doing this contextual sleight of hand can make MLP seem more powerful and accurate than it really is, especially when predictive performance is evaluated. For example, Yeomans et al. (2019) compared the predictions by friends and spouses to those from a simple collaborative filtering (CF) system for predicting a focal user’s ratings of jokes. The study found a basic CF system was able to predict more accurately than a friend or spouse.
Yet, the experiment included a set of 12 jokes pre-selected by the researchers. The much more difficult problem of selecting 12 jokes from a nearly infinite set of possible jokes in all cultures and languages was left to the humans. In essence, the researchers had already personalized a list of jokes to each subject in the study, given their linguistic background, country of origin, and current location. Once narrowed to such a small recommendation space, the algorithm’s performance appears quite impressive, but nevertheless hides the fact the hardest task had already been done by humans. A similar argument can be made for personalization on e-commerce sites: by going to a website, a person has already self-selected into a group who would be interested in products offered by the website. Consequently, when we hear impressive accuracy or recall scores, we need to keep in mind how specific and narrow the prediction context is. | https://medium.com/datadriveninvestor/three-myths-surrounding-machine-learning-personalization-9b1a7133e6db | ['Travis Greene'] | 2020-04-17 10:17:41.935000+00:00 | ['AI', 'Advertising', 'Marketing', 'Data Science', 'Machine Learning'] |
These Allusions Are Real | Photo by Julius Drost on Unsplash
How would you feel if someone referred to you as “Scrooge”? Or, how would you react if you were speaking and a listener said, “Your nose is getting longer”? Finally, what would you say if someone called you “The Scarecrow”?
In each case, you would probably be offended — and rightfully so. After all, the first person is comparing you to the miserly employer in Charles Dickens's novel A Christmas Carol. The second person is calling you a liar by referring to the classic children's story "Pinocchio" by Carlo Lorenzini. And the third person is saying you need a brain, like Dorothy's friend in The Wizard of Oz by L. Frank Baum. No, the purpose of this essay is not to teach you how to trade literary insults, but to emphasize the use of allusions.
An allusion is an indirect reference to a well-known person, place, or event from history, from mythology, from literature, or from other works of art. Allusions are often used for three reasons: to catch the reader’s attention, to provide a short but vivid description, and to make a strong connection.
Photo by Matt Popovich on Unsplash
To Catch the Reader's Attention. People who write newspaper and magazine headlines use allusions frequently to catch the reader's attention. For instance, articles about Daylight Saving Time might allude to the Biblical verse "Let there be light" (Genesis 1:3). Stories of betrayal might refer to William Shakespeare's line in Julius Caesar: "Et tu, Brute?" And situations that defy logic might be described as a "Catch-22," after the 1961 novel by Joseph Heller. One more obvious example is the title of this essay, which alludes to the homonym "illusion," which, like a mirage, is not real.
To Provide a Short but Vivid Description. Speakers and authors often use allusions as a shortcut. Instead of having to describe how cheap someone is, the speaker or author can just say the person is a "Scrooge." Then, the listener or reader who is familiar with A Christmas Carol will immediately understand the comparison.
Photo by JC Gellidon on Unsplash
One example of an allusion that appears every spring involves the National Collegiate Athletic Association’s basketball tournament. Certain schools — like Duke, Michigan, and Kansas — are traditional powerhouses, and they usually qualify for the tournament each year. Other schools, however, seldom make it to the tournament. As a result, when these schools unexpectedly qualify, sportswriters across the country refer to them as “Cinderella” teams. “Cinderella,” of course, is the fairy tale about the young housemaid who wasn’t even expected at the ball. Yet, when she arrived in a beautiful dress and glass slippers, she attracted the attention of the handsome prince. When these Cinderella teams eventually lose, the allusion is extended. The sportswriters will write that the clock has struck midnight, and these teams have to return to reality.
To Make a Strong Connection. As a writer, you, too, may want to use an allusion occasionally to make a strong connection with your reader. If you want to emphasize an extremely important day in your life, for instance, you might refer to it as “D-day.” This allusion applies to the World War II Allied invasion that liberated France from German occupation and served as a major turning point in the War (June 6, 1944). Or, if you want to describe a particular failure in your life, you may call it your “Waterloo,” a reference to Napoleon Bonaparte’s final defeat in Belgium on June 18, 1815.
An allusion is similar to an inside joke between the writer and the reader. Thus, before you use an allusion, you should be reasonably sure that your intended reader will understand it. If, for instance, your reader is young and not interested in history, references to D-day and Waterloo will not be understood or appreciated. But, if your reader is young and familiar with popular music, you could introduce a story about failure by alluding to the Britney Spears’ song “Oops, I Did It Again.”
If you use an allusion, do you have to document the source? No. If you’re simply referring to a person, place, event, or work of art, no documentation is necessary. Thus, allusions can add life to your writing without making you feel as if you’re writing a research paper.
Photo by John Ruddock on Unsplash
As a baseball fan, I am tempted to conclude this essay by saying this is the “bottom of the ninth,” an allusion to the last inning of a typical game. However, since this may be the first time some of you have ever thought about using allusions in your writing, I’d rather refer to the beginning of the game. Thus, as the umpire says right after the playing of the national anthem, “Play Ball!” | https://jimlabate.medium.com/these-allusions-are-real-b28af318100d | ['Jim Labate'] | 2019-06-20 11:01:01.227000+00:00 | ['Literary', 'Writing Prompts', 'Writing', 'Imagination', 'Creativity'] |
2020 AI Open-Source Software and Mission-Critical Platforms | As we are approaching the end of an unusual year, a difficult 2020 with a global pandemic and high unemployment that has disrupted the lives of so many people, I’m reflecting on some of the positives that we can take from this year. In my world of technology and open-source software, innovation didn’t stop; in fact, we can argue that there was an increase in productivity by having millions of people working from home, reducing commute times, travel, and unnecessary meetings.
Software innovation is happening in the open: this year, once again, most of the latest innovations are open-source software projects built with one or many other open-source components. Augmented reality, virtual reality, autonomous cars, artificial intelligence (AI), machine learning (ML), deep learning (DL), and more are all growing as open-source software. Needless to say, nearly all major programming languages and frameworks are open source, too. Open-source building blocks such as Python, TensorFlow, and PyTorch, to name a few, are powering the latest innovations.
I like to keep an eye on the growth of the different open registries and repositories. GitHub has surpassed 100 million repositories and more than 50 million users this year. NPM, where open-source JavaScript/Node.js packages are published, surpassed 1.4 million packages; NuGet, for open-source .NET code, surpassed 220,000 packages; and Python packages available in PyPI surpassed 270,000 [1].
The number of open-source projects in the AI and Data space is growing exponentially. It is now hard to create categories to classify all the open-source software available in this space, from libraries, frameworks, databases, and automation to directly infused AI and tooling.
With a growing number of open-source software to create AI applications, we also have an increase in real-life use cases. Businesses across industries are adopting AI to address real business challenges and opportunities. Healthcare providers using ML and DL for faster and better diagnoses, telcos using AI to optimize network performance, the financial services industry reducing fraud, and generating better predictions are just a few examples of use cases we see now every day across every industry vertical.
There are many more examples to add for insurance, transportation, government, and the utility industries. One common denominator across these important industries is that all have mission-critical applications with very valuable data running on mission-critical platforms.
Traditionally known as mainframes, IBM Z and IBM LinuxONE platforms host the most crucial business functions in all of these industries. For decades, they have continued to improve their technology in high-speed transaction processing, capacity for very large volumes of transactions, best-in-class security, and second-to-none resiliency.
In the banking industry, 44 of the top 50 global banks are running IBM Z and 2/3 of the Fortune 100 use IBM Z or LinuxONE. This is an impressive coverage that tells us that our daily lives are supported by these mission-critical platforms.
All of this mainframe information brings us back to AI. When enterprises need AI applications in the best platform for I/O intensive transactions of structured or unstructured data, there is an ideal mission-critical platform; when AI applications need high-performance access to storage and databases, there is an ideal mission-critical platform; when AI applications need to secure data in transit, at rest and in use with confidential computing, there is an ideal mission-critical platform; when AI applications need a resilient platform that provides 99.999% availability or more, there is an ideal mission-critical platform designed to deliver on all of these criteria.
Mainframes are this ideal mission-critical platform that can tightly integrate AI/ML/DL applications with data and core business systems that reside in the same platform. In other words, they provide a secure high-performance environment to bring AI, ML, and DL to existing transactional applications and deliver real-time insights and predictions.
The ecosystem of open-source software for IBM Z and LinuxONE (s390x processor architecture) continues to grow. I believe it is at its best in 2020, and I have great hopes for the upcoming 2021 to be a year of continuous growth in the open-source software ecosystem for this mission-critical platform.
The most popular open-source software for AI has only existed for a few years. As we are coming to the end of this difficult 2020, we see that it has been a strengthening year for many open-source projects. TensorFlow and PyTorch are used more than ever, and a number of open-source projects are becoming very popular, for example, Egeria, Pandas, Jupyter Notebook, Elyra, ONNX, Kubeflow, and others that I hope will continue to grow and be available across all platforms in 2021.
Open source is not a trend; it is here stronger than ever. We are going to continue to see innovation and enhancements in the AI and Data open-source ecosystem. The data that resides in mission-critical platforms such as IBM Z and LinuxONE is a valuable asset for businesses and can be used for creative AI solutions.
AI open-source software and mission-critical platforms introduce exciting possibilities in 2021 and beyond.
The LF AI & Data landscape explores open source projects in Artificial Intelligence and Data and their respective sub-domains
[1] Source: Nov 2, 2020 www.modulecounts.com
[2] Free image by iXimus from Pixabay | https://medium.com/ibm-data-ai/2020-ai-open-source-software-and-mission-critical-platforms-ecdc69475193 | ['Javier Perez'] | 2020-12-08 20:26:46.597000+00:00 | ['Open Source', 'AI', 'Artificial Intelligence', 'Mainframe', 'Mission Critical'] |
Artificial Intelligence in Construction — TechVirtuosity | [Copyright : Pop Nukoonrat] © 123RF.com
Revolutionizing Construction
Construction and the methods we use are crucial to our success in modern architecture. We build houses and massive structures using our computers, and we harness that processing power to create new solutions. But artificial intelligence in construction takes things to a whole new level!
It’s a tool that can help us push the boundaries further and it can do a lot to the industry as a whole. So then why haven’t we seen more innovation?
The Construction Industry is Stagnating
This isn’t to say that there hasn’t been a lot of improvements throughout the years, but construction has remained slower to adapt.
In the past, we often assumed that productivity equaled larger machines, and that theory worked for a while. But nowadays we need something more than bigger machines; we need smarter machines and solutions.
And while several other industries, such as retail, medicine, and business in general, have advanced, construction has fallen a bit behind. We simply need to adopt and use more technology.
But what if we had more artificial intelligence in construction? Would this technology help lead us to a utopia?
How Artificial Intelligence Helps Construction
While it’s still early on some parts, AI has proven to show some promise in reducing costs. There is also software out there known as building information modeling, or BIM for short. AI can be trained to help suggest improvements and build solutions early on.
It can also be used in risk management/mitigation, by providing safer alternatives. Construction robots are becoming more popular along with 3D printing, but add AI to it and we have a new advantage.
AI can take on the tasks that are too risky for us to risk our lives on. While using AI in this way is still new and very early, there's a multitude of other areas it can help in. Of course, being a young technology also brings risks for those adopting it early on…
The Early Risks Involved
Artificial intelligence in construction is a great solution, but more technology also brings different risks that need to be considered. Anytime technology is involved, we typically also inherit the risk of getting hacked.
If construction software or robots were to get hacked it could jeopardize an entire project. The argument is that it’s safer to have physical workers doing the actual work than to have robots or AI trying to take over. This is only partially true though.
Construction steadily accounts for more than 20% of yearly workplace deaths. AI poses the risk of hackers, but as it stands, the death toll is already high without these life-saving technologies involved.
Hacking risks aside, AI isn't perfect and makes mistakes too! This isn't always the case, but it's important to recognize that mistakes happen with new technologies. Implementing AI could end up costing more money if it's done wrong. But it's not all bad!
[Copyright : Kittipong Jirasukhanont] © 123RF.com
Machine Learning can Mitigate Risks
Machine learning is an important aspect of using artificial intelligence in construction. It allows a program to continually test and essentially “learn”. We’ve seen AI used in the field of medicine with success in the recent past which shows promise. But machine learning gives us some control.
But first, in case you didn’t know what machine learning is… It’s a method used to teach an AI how to accomplish something. It is given parameters to gauge success and failure, a way to remember the results and a way to improve the numbers. Think of it like a race car, if it crashes it fails, if it completes the course it succeeds.
Machine learning can take it a step further though. It can take that concept of the race car and find the most optimal way to complete the course, and that’s why it can be more productive than humans. We use machine learning to run thousands of trials and errors to succeed.
This makes the AI more capable than a human in this respect, which is why it can benefit construction. A single person can only try so many times, whereas an AI can run tests or simulations thousands of times. Construction can benefit from an AI that can continually learn the most efficient and safest way to build or solve problems.
Machine learning can also incorporate previous knowledge in its tests, improving the outcome or immediate start of the training. As the AI grows it’ll also show us improved ways of doing things.
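To make that trial-and-error loop concrete, here is a toy sketch, purely illustrative, with a made-up scoring function standing in for a real simulation. The agent proposes a setting, scores it, remembers its best result, and keeps improving:

```python
import random

def run_trial(speed):
    """Simulated 'course': going too fast crashes, and the best score is at speed 60."""
    if speed > 90:  # crash: the trial fails outright
        return 0.0
    return 100.0 - abs(60 - speed)  # the closer to 60, the higher the score

def learn(trials=5000, seed=42):
    """Random-restart hill climbing: remember and refine the best-known setting."""
    rng = random.Random(seed)
    best_speed, best_score = 0.0, float("-inf")
    for _ in range(trials):
        if rng.random() < 0.8:
            # small tweak to the best-known setting
            candidate = best_speed + rng.uniform(-10, 10)
        else:
            # occasional blind exploration of the whole range
            candidate = rng.uniform(0, 120)
        score = run_trial(candidate)
        if score > best_score:  # remember the improvement
            best_speed, best_score = candidate, score
    return best_speed, best_score

best_speed, best_score = learn()
print(f"best speed {best_speed:.1f}, score {best_score:.1f}")
```

After a few thousand simulated trials, the agent homes in on the sweet spot far faster than any person could by hand, which is exactly the advantage described above.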
Smart AIs Equal Smarter Solutions
The future demands solutions that are safe, cheaper, and ultimately faster. Construction is always time-sensitive because a lot of it happens outside! The weather impacts it just as much as the efficiency of the workers doing it.
AI provides solutions that are faster and cheaper which means there is less risk of weather delaying a project. The faster a task is completed the less likely something will go wrong, in theory.
AI can help with many different areas by…
Providing a solution to labor shortages through the use of software automation and solutions.
Reducing risks to safety by spotting flaws or creating safer alternatives.
Actively monitoring work environments and regulations.
Can be used to collaborate on building plans while making smart suggestions (BIM software mentioned above).
Providing analytics and statistics online for clients or workers.
[Copyright : Preechar Bowonkitwanchai] © 123RF.com
Artificial Intelligence in Construction Should be Embraced
We talked about a lot of the areas that AI can help with but it needs to be given the opportunity. While some companies are already using this technology, there are still many more that are not quite there.
AI can be used to help keep track of projects and to reassure clients, whether the client is a city contracting the work or any of a variety of other businesses. Clients could benefit from seeing the progress online. AI has a lot of potential, and even the user interface could use AI to learn what clients find most useful to view online.
In the end, AI is going to be in the future of the construction industry. Whether we want it or not, it will help push innovation forward. But what do you think? Should we use more AI or avoid using those technologies in construction? Drop a comment below! | https://medium.com/swlh/artificial-intelligence-in-construction-techvirtuosity-124de131f26 | ['Brandon Santangelo'] | 2019-09-21 17:14:25.582000+00:00 | ['Construction', 'AI', 'Artificial Intelligence', 'Technology', 'Machine Learning'] |
The Overlooked Conservative Case for Reining in Big Tech | The Overlooked Conservative Case for Reining in Big Tech
Democrats aren’t the only ones ready to rewrite the antitrust rules for internet platforms
Photo: SOPA Images/Getty Images
Never in world history has one sector of the global economy risen to such global dominance, so fast, as Big Tech has in the past 20 years.
In 2000, Amazon was an online bookseller, Apple was still an underdog, Google was a scrappy startup with little revenue, and Facebook didn’t exist. Today, along with Microsoft, they are the world’s five most valuable companies, and their decisions carry a level of global influence rivaled only by nation-states. They exert control over what we can say, how we can say it, what we buy, and what we read, and they wield unilateral power over the countless smaller businesses that rely on their platforms.
Until about five years ago, a prevailing 21st-century view was that the internet sector was so dynamic that upstarts could come along at any point and depose the giants: Just look at how Google and Apple blew past Microsoft, or how Facebook conquered MySpace. That view is no longer tenable, as the top platforms’ network effects, lock-in, access to data, diversification of business lines, and ability to buy or copy rivals has given them advantages that now appear nearly insurmountable. The relevant business question is no longer, “Will they stay on top?”, but rather, “What markets will they conquer next?” (The one competitive threat that still looms is that China-based giants could outmaneuver them with products such as WeChat and TikTok. But the Trump administration’s crackdown on Chinese tech has abruptly curtailed that threat domestically, and India’s crackdown has mitigated it in the largest non-aligned market.)
What to do about that concentration of power, if anything, is a question that has rapidly grown in urgency. There is an emerging consensus that antitrust action in some form is warranted, including among Republicans who are naturally skeptical of government intervention in markets. But there has been little clarity or agreement as to what form that action should take — until now.
The Pattern
We finally have a blueprint for regulating Big Tech. Or rather, two blueprints.
Undercurrents
Under-the-radar trends, stories, and random anecdotes worth your time.
Facebook and Twitter are taking some precautionary measures ahead of the U.S. election. The most interesting came from Twitter, which announced on Friday that it will take three previously untried steps to pump the brakes on misinformation and polarizing content, starting October 20. First, it will default to a quote-tweet when you go to retweet something, encouraging you to stop and think about what you want to add to the conversation rather than simply amplifying a viewpoint. Second, it will stop surfacing tweets from people you don’t follow in your feed or notifications. Finally, it will only show trending topics that come with editorial context. You can read its full announcement here. Facebook, for its part, announced an indefinite ban on political ads starting after November 3, along with other measures aimed at thwarting misinformation around who won the election or incitements to violence in its wake.
Cambridge Analytica didn't unduly influence Brexit, a U.K. commission concluded, wrapping a three-year investigation into the political consultancy's use of Facebook data in the campaign. The Financial Times reports that probe found that the methods used by a Cambridge Analytica affiliate were "in the main, well recognised processes using commonly available technology," and that the resulting targeting of voters was not uniquely effective. The report was taken as vindication by some who felt the Cambridge Analytica scandal was overblown all along. Some privacy advocates were quick to reply that the real scandal was always more about how the data was gathered and obtained than how it affected election outcomes. (Both can be true; I made a version of this argument in 2018.)
Headlines of the week
Five Years of Tech Diversity Reports — and Little Progress
— Sara Harrison, Wired
How Excel may have caused loss of 16,000 Covid tests in England
— Alex Hern, The Guardian
QAnon high priest was just trolling away as a Citigroup tech executive
— William Turton and Joshua Brustein, Bloomberg | https://onezero.medium.com/the-overlooked-conservative-case-for-reining-in-big-tech-5d1942d79a26 | ['Will Oremus'] | 2020-10-10 12:55:38.805000+00:00 | ['Pattern Matching', 'Antitrust', 'Facebook', 'Technology', 'Apple'] |
There Are 3 Big Misconceptions About Medium Going Around | There Are 3 Big Misconceptions About Medium Going Around
Don’t let them confuse you.
I just wrote a piece about how each Medium writer should do their own legwork when it comes to finding their way on the platform. And it’s true, the more you learn by yourself, the better. There are, however, many misconceptions going around about Medium, especially on outside forums, which can seriously impede a writer’s progress on this journey.
And that’s not good. When it comes to using Medium to further your writing career, any misunderstanding can set you back a lot when not corrected in time. So let’s get into it.
A few Medium concepts a lot of people have been getting wrong
It’s time to stop the confusion once and for all.
1. Fans and followers are not the same thing
Sometimes you’ll see a successful Medium writer talking about how the stat she pays more attention to is her number of fans, so you think she must be obsessed with her follower count, right? Wrong!
On Medium, followers and fans are NOT the same thing. We don’t use those two words interchangeably because they are completely different concepts.
A follower is a person who went into your profile and clicked on the “follow” button. This person is more likely to receive your content on her feed because she is actively indicating to Medium that she likes your writing and would like to see more from you.
When you click on Follow, you become a follower
A fan is simply someone who claps for your story. Everyone who claps for one of your stories becomes a fan, it doesn’t matter if they gave you 1 clap or 50. It also doesn’t matter if they’re following you or not.
Clicking on the clap button makes you a fan.
Therefore, sometimes your followers will be your fans because they clapped for your story, but not every one of your fans will necessarily be a follower, they can be just people who came across your article and happened to like it.
You can go to your profile to see who follows you. To check your fans, go to your stats page. It will show you total fans for the month (nº of people who clapped for your stories) and number of fans per story.
2. Publications, Medium magazines, member features and curation are not the same thing
Publications
Anyone on Medium can create a publication. Just click on your round profile picture in the top right corner, click "publications," then "create new."
I’ve created one. It’s called Mariposa and it’s awesome.
There are Medium publications of all sizes, each with its own editors and catering to its own specific niche.
Each publication has its own submission guidelines and rules to accept writers. I haven’t yet come across a publication which doesn’t feature their “how to submit” page in a very obvious place on their homepage. If you wish to submit, read and follow the instructions carefully.
Medium Magazines (or Collections)
These are especially put together by Medium following a theme. Some of the most recent ones were: “Can we Talk?”; “For the Record”; “Office Politics” and “Reasonable Doubt.”
The good news is that Medium will occasionally send out emails to its Partner Program Members with specific calls for submissions, but unlike publications, these magazines or collections don’t have easily accessible guidelines or submissions open year-round. All you can do, really, is keep checking your inbox.
Member Feature Story
These are the stories Medium editors pick to feature at the homepage. When you click on one of them, it will have the nice “Member Feature Story” up there near the title. There’s no way to submit or apply for those. All you should do is to write and post a story as usual, then hope Medium editors will see it and like it enough to want to feature it. If they do, you’ll get a notification by email.
It never happened to me, but other writers who had their pieces featured have confirmed that this is the process.
Update: when I wrote this story, I was under the impression that Member Feature meant featured stories BY members, which meant you'd have to be a member to have a story featured. As I have recently learned, Member Feature means the story is featured TO the members, which means the writer herself doesn't have to be a member to be featured.
Curation (or Story Distributed by Curators)
Curation is the term for having your story picked by the Medium curators to be distributed under one or more tags. Getting a story curated means it will show up on thousands of people’s feeds, including those who don’t follow you.
Getting curated is also an endorsement of your story by Medium editors. It means they have read it and found it worthy of sharing. Medium now notifies writers when their stories are curated. You can also know if a story has been curated when you look at individual story stats and see something like this:
Medium makes it pretty obvious when you’re curated.
Any story posted behind the paywall can get curated, whether you post them on publications or just on your profile, just make sure you keep the box for the Partner Program checked when you publish. It looks like this:
For more detailed insights into curation, make sure to read Shannon Ashley’s piece on the subject here.
3. There’s no “normal” when it comes to Medium — each writer has her own journey
Another common misconception about Medium is the idea that you can predict what your experience is going to be like (or how much money you're going to make) based on the experiences of others.
Because each voice here is unique, each writer is going to have a different experience.
You can ask however many questions you want.
Is it normal that I haven’t been curated yet? How much can I expect to make in my first week? Is it normal to only get 3 claps on your first story?
These questions don’t even make sense. Or they do, only they all have the same answer: when it comes to Medium, there is no normal.
Some writers sign up on Medium with Facebook, and bring along their friends as their first audience. Some writers sign up for Medium and start with a 0 follower count. Some will submit to publications, some won’t. Some will get accepted, some won’t. Some will have well-received stories, some won’t.
What’s normal? All of it is.
We’re all unique people, with unique voices and a particular way to experience the platform. There’s no way to predict how your experience is going to be like based on someone else’s. You can achieve similar results by taking similar steps, but please, don’t get attached to comparisons, and forget the idea that there is a “normal.”
You make your own Medium journey. You make your own normal. | https://medium.com/sunday-morning-talks/there-are-3-big-misconceptions-about-medium-going-around-3f63e090f3c3 | ['Tesia Blake'] | 2019-02-28 17:33:33.395000+00:00 | ['Medium', 'Writing', 'Self', 'Creativity', 'Writing Tips'] |
A Look Behind the Mask | Not all of these characteristics need be present to constitute an abusive relationship, and there are certainly others that were likely not mentioned. Although abuse follows a similar pattern, it is important to note they can manifest in certain individualized behaviors. Understanding our personal experience is key to moving forward and planning a safe exit.
As victims, we can use our knowledge and awareness of our partners’ behavior patterns during our unique cycle of violence. From this, we can determine indicators of upcoming episodes and plan suitable responses to keep us safe. In this way, we learn to adapt in order to survive. But eventually as time goes on, the cycle of violence becomes shorter, faster, and more intense.
Breaking the cycle of abuse means breaking the denial that something is wrong. It means that we must forfeit the illusion of what we have accepted our lives to be. It means we have to gain a conscious awareness that we are actually being abused. It means we have to finally let go of the fairytale that turned into a nightmare.
No one wants to face that pain, no one. Healing takes time, and it hurts, especially at the beginning. If you are considering that now is the time for you to leave, more than likely you have been depressed, felt trapped, or even felt like death was your only way out. I am here today as living proof that you can survive this. You are brave and strong. Look what you have already endured. We must accept this disillusionment. It is only when we are at the end of the road that we can truly begin to heal.
If you feel your life is in imminent danger or you are being threatened or physically harmed, call local law enforcement for immediate assistance. Please reach out to your local domestic violence shelter or call the national hotline at 1-800-799-7233 to begin safety planning and to obtain information on domestic violence restraining and protective orders.
How to Implement Logging in Your Python Application | Enter Python’s Logging Module
Fortunately, the importance of logging is not a new phenomenon. Python ships with a ready-made logging solution as part of the Python standard library. It solves all the aforementioned problems with using print . For example:
Automatically add context, such as line number and timestamps to logs.
It’s possible to update our logger at runtime by passing a configuration file to the app.
It is easy to customise the log severity and configure different logging levels for different environments.
Let’s try it out and set up a very basic logger:
Running this gives:
INFO:__main__:Getting some docs...
INFO:__main__:Doc count 2
INFO:__main__:Finished
Easy peasy!
Here, we have imported the logging module from the Python standard library. We then updated the default log level so that INFO messages are handled. Next, logger = logging.getLogger(__name__) instantiates our logging instance. Finally, we passed an event to the logger with a log level of INFO by calling logger.info().
At first glance, this output might appear suspiciously similar to using print() . Next, we’ll expand our example logger to demonstrate some of the more powerful features that the Python standard logging module provides.
Log levels
We can configure the severity of the logs being output and filter out unimportant ones. The module defines five constants throughout the spectrum, making it easy to differentiate between messages. The numeric values of logging levels are given in the following table:
Logging levels and their numeric values, from Python's documentation:

CRITICAL = 50
ERROR = 40
WARNING = 30
INFO = 20
DEBUG = 10
NOTSET = 0
It’s important not to flood your logs with lots of messages. To achieve concise logs, we should be careful to define the correct log level for each event:
logger.critical("Really bad event")
logger.error("An error")
logger.warning("An unexpected event")
logger.info("Used for tracking normal application flow")
logger.debug("Log data or variables for developing")
I tend to use the debug level to log the data being passed around the app. Here is an example of using three different log levels in the few lines of code responsible for sending events to Kafka:
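A sketch of how that can look. The send_to_kafka function below is a stubbed stand-in for a real producer call, and the function names and topic are made up for illustration:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def send_to_kafka(topic, event):
    """Stand-in for a real Kafka producer call."""
    if not isinstance(event, dict):
        raise ValueError("event must be a dict")
    return {"topic": topic, "offset": 0}

def publish(topic, event):
    logger.debug("Event payload: %r", event)  # data, useful while developing
    try:
        result = send_to_kafka(topic, event)
    except ValueError:
        logger.error("Failed to send event to %s", topic)  # an error
        return None
    # normal application flow
    logger.info("Event sent to %s at offset %s", topic, result["offset"])
    return result

publish("user-events", {"action": "login"})
```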
Formatting logs
The default formatter of the Python logging module doesn’t provide a great amount of detail. Fortunately, it is easy to configure the log format to add all the context we need to produce super-useful log messages.
For example, here we add a timestamp and the log level to the log message:
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
It’s best practice to add as much context as possible to your logs. This can easily be achieved by adding structured data to the log message’s metadata. For example, you may have scaled your application to run with multiple workers. In this case, it might be important to know which worker was logging each event when you’re debugging, so let’s add a worker ID context to the log metadata:
# Create the log formatter
formatter = logging.Formatter(
    '%(asctime)s - %(worker)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)

logger.info('Querying database for docs...', extra={'worker': 'id_1'})
The output becomes:
2020-09-02 22:06:18,170 - id_1 - INFO - Querying database for docs...
Log handlers
Now that we have perfectly formatted logs being fired at us from all over our application code, we need to consider where those logs end up. By default, the logs are written to the console (stderr), but Python's logging module provides us with the functionality to push logs to alternative locations. For example, to save logs to the example.log file on disk:
# create a file handler
handler = logging.FileHandler('example.log')
handler.setLevel(logging.INFO)
There are several types of handlers that can be used. For the complete list, see the documentation for handlers. It is also possible to define custom logging handlers for different use cases. For example, this library defines a log handler for pushing logs to Slack!
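Defining your own handler only takes subclassing logging.Handler and implementing emit. As an illustrative sketch, this toy handler collects formatted records in a list; a real one might push each record to a webhook instead:

```python
import logging

class ListHandler(logging.Handler):
    """A custom handler that collects formatted records in memory."""

    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        # format() applies whatever formatter was attached to this handler
        self.records.append(self.format(record))

logger = logging.getLogger("custom_demo")
logger.setLevel(logging.INFO)

handler = ListHandler()
handler.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))
logger.addHandler(handler)

logger.info("Hello from a custom handler")
```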
To summarise. We’ve set up the Python standard logging module and configured it to log to different locations with custom log formats. You can find the final code for the example logger below: | https://medium.com/better-programming/how-to-implement-logging-in-your-python-application-1730315003c4 | ['Leo Brack'] | 2020-09-09 14:33:01.296000+00:00 | ['Programming', 'Software Development', 'Python', 'Startup', 'Data Science'] |
Veganism in 5 Easy Steps | Photo Credit: Creatv-Eight--Unsplash
Veganism in 5 Easy Steps
A simple guide to eating more eco-friendly
This is not another one of those preachy “convert to veganism” articles. This is for people who are sincerely unaware of what is happening to animals on a daily basis. This is for the people who say, “I love animals,” but continue to eat them turning a blind eye to the injustice their eating is doing to these sentient beings, our bodies, and the planet.
It was twenty years ago when I first learned about the dangers of cow’s milk and the effect dairy products have on humans bodies, ie, extra mucous, skin problems, inflammation to name a few.
It was 15 years ago that I watched the documentary "Super Size Me" and gave up McDonald's indefinitely. My 4-year-old son had never had McDonald's, and he once went with a friend whose parents, without my knowledge, took him there for pancakes. He literally threw up the food. It is not fit for human consumption, and when the body is not used to it, it will reject it. This is what many kids are eating day after day.
It was a decade ago that I took a class called “Food and Mental Health,” taught by a naturopath in Seattle.
I was a meat-eater and a dairy consumer at this point. I had given up “red meat” for the most part and thought it was better to use ground turkey, but I had no idea where the turkeys came from. I didn’t think twice about my “healthier” alternative to red meat. I was fortunate that in this class I learned about turkeys stuffed together in metal cages. I saw footage of chickens and turkeys being debeaked so they wouldn’t peck each other to death, and of birds so “plumped” up with hormones they could no longer walk. I watched them try to lurch themselves over other dead birds while they sat in piles of their own excrement.
I saw footage showing the baby male chickens being dumped in a huge garbage bin alive with their soft yellow down to later be incinerated as they were no use to the factory farm as they couldn’t lay eggs to be sold.
Photo by Jason Leung on Unsplash
I won’t continue, but you can imagine scenarios like this across the board within the meat and dairy industries. And if you still think dairy is okay because it doesn’t kill the animal, listen to the cries of the mother cows and their calves before slaughter, and when the calves are taken away to be bottle-fed while the mother is milked so that humans can use her milk. It doesn’t make much sense when you really stop to think about it. We harm innocent creatures for a taste or flavor that goes well with our nightly glass of wine or our dinner out, but make no connection to the sentient beings we are massacring on a daily basis.
But, what will I eat?
For those of you still with me, with curiosity, what does this meat-free life look like?
What will I eat? What will my family eat? How will I get my protein?
I’ve got you. Below are some resources to get started on your journey. Do you have to be perfect? No. A beginning would simply be learning about the animals we say we care so much about. Watch a documentary. Start following vegetarian and vegan recipe bloggers. Give up meat one day a week (Meatless Mondays is a thing). Give up meat and dairy for a month (Veganuary). You won’t die, and you might just like it! What is the worst thing that can happen when you incorporate more whole foods into your diet and maybe try something that you traditionally wouldn’t have? | https://medium.com/illumination/veganism-in-5-easy-steps-cb1bece2173c | ['Melissa Steussy'] | 2020-12-07 04:11:29.308000+00:00 | ['Veganism', 'Health', 'Vegan', 'Plant Based', 'Animal Rights'] |
Abbott’s Rapid-Response Covid-19 Test; Is the Approval Good News? | Abbott’s Rapid-Response Covid-19 Test; Is the Approval Good News?
Unanswered questions may impede the rollout
What if millions of people could get a quick, reliable test and find out if they are Covid-19 carriers? The Food and Drug Administration granted emergency-use authorization to Abbott Laboratories for a $5 rapid-response Covid-19 test.
Reliable high-frequency testing may present the world with a viable path forward. A widely available test would help kids get back to school safely and allow workers to return to the office. A rapid test might enable us to eat inside a restaurant, take a vacation, or go to a football game.
Is BinaxNOW Covid-19 Antigen Card the solution we have all been waiting for?
Maybe, but we need answers to some critical questions before we hop on a cruise ship.
BinaxNow’s emergency use authorization is approved for use in symptomatic patients in a healthcare setting. But the coordinated release of a free digital health app along with Abbott’s claim to be able to test “millions of people per day” acknowledges this test will be used beyond its limited approval.
The “who, what, when, where, and how” of BinaxNow utilization must be addressed.
Abbott’s BinaxNOW Covid-19 Ag Card is about the size of a credit card and doesn’t require added equipment. Photo: Abbott Laboratories
What Covid-19 tests are available now?
There are three categories of Covid-19 tests. Each works in different ways to detect evidence of SARS-CoV-2 infection.
Antibody testing detects a past infection and potential immunity.
Molecular testing (PCR) detects genetic material from the virus to determine if someone has the virus right now.
Antigen testing detects the fragmented pieces of the virus that trigger an immune response. Like PCR testing, antigen testing is used to detect an active infection but can be done much faster. The recently FDA-approved rapid test BinaxNow uses antigen detection.
BinaxNow is a step in the right direction
Rapid-responding tests are certainly a positive step in the right direction. Getting reliable results as fast as possible will help us reopen our economies and stop the pandemic spread.
Here are the valuable BinaxNow features:
Fast results. Abbott’s rapid antigen test provides results within 15 minutes.
Accurate results. The test is highly accurate when testing symptomatic patients within seven days of the onset of symptoms. The data reported to the FDA shows a sensitivity of 97.1% and a specificity of 98.5%.
Pain-free nasal swab. This technology does not require the tickle-your-brain deep nasopharyngeal swab like the PCR tests. A simple, painless nose swab is used to collect the testing specimen.
No instrumentation required. This test does not require a medical practice to purchase expensive or complicated equipment. The lack of capital investment makes it ideal for CLIA-waived point-of-care testing.
NAVICA™ app. Abbott released a complimentary digital health tool to pair with the new COVID-19 antigen test to facilitate use.
Let’s tap the brakes on BinaxNow
Before we get too excited, we need to understand the limitations of this specific rapid antigen testing technology.
This specific test has a few problems and unanswered questions.
1. Who performs the test? BinaxNow is only approved for clinical use by health care professionals. The Abbott press release makes it clear this test is not approved for use by the general public outside of a healthcare provider’s oversight. The press release states millions of tests can be done per day. If these tests require a healthcare provider, then infrastructure for a scalable rollout of “millions of tests per day” needs to be implemented.
2. Antigen testing has limitations. Antigen tests look for pieces of the virus. They are less accurate than traditional molecular PCR testing, which looks for the virus’s genetic material.
3. BinaxNow is not FDA-approved as a screening test. The test is meant to be used only on people with symptoms of COVID-19 and within seven days of the onset of their symptoms.
4. The accuracy in asymptomatic patients is unpublished. The FDA approved this test based on a study of 102 symptomatic patients. The results show a sensitivity of 97% and a specificity of 98%. These patients, who were within seven days of the onset of symptoms, would have had high levels of viral shedding. These numbers indicate BinaxNow is an accurate way to test sick people, but how effective is it when testing asymptomatic individuals?
5. BinaxNow will be used off-label on asymptomatic people. The entire world has been waiting for a low-cost test. BinaxNow can and will be used legally off-label on asymptomatic individuals. Health professionals need to know the accuracy beyond the reported specificity and sensitivity in symptomatic patients suspected of having Covid-19. Before off-label use occurs, we must know how to interpret the results.
6. The NAVICA™ app creates a blurry line between screening and diagnostics. The NAVICA™ Press Release makes an excellent case “to help facilitate easier access to organizations and other locations where people gather.” If BinaxNow is limited to symptomatic individuals within seven days of the onset of symptoms, the app would have limited utility. The creation of NAVICA™ reveals Abbott Labs is counting on the widespread use of their rapid antigen test. If so, we need to know the accuracy of testing asymptomatic individuals.
7. The economics of BinaxNow is unclear. The Abbott press release highlights the test will cost $5, but this test is not a direct-to-consumer product. What will this test actually cost, and who is paying for it?
Doctors and hospitals purchase tests through a supply chain and then bill a third-party payer for the cost. BinaxNow is not being released as a direct-to-consumer product. Essential questions for an effective, scalable, and rapid roll-out must be answered before medical offices, hospitals, and consumer lab companies can begin to offer this potentially game-changing testing option.
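The screening-accuracy concern above can be made concrete with Bayes’ rule. The sensitivity and specificity below are the figures reported from the symptomatic validation study; the 1% prevalence is a purely hypothetical assumption for an asymptomatic screening population, which is precisely the setting where the test’s true performance is unpublished:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(infected | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Reported symptomatic-study figures; 1% prevalence is a hypothetical
# assumption for an asymptomatic screening population.
ppv = positive_predictive_value(0.971, 0.985, prevalence=0.01)
print(round(ppv, 2))  # -> 0.4
```

In this hypothetical low-prevalence scenario, only about 40% of positives would be true infections, which is why knowing the real-world accuracy in asymptomatic people matters so much before widespread screening.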
Here are the practical questions for medical office integration:
What is the appropriate Current Procedural Terminology (CPT code)? There are currently two approved antigen testing codes (86328 and 86769). Will BinaxNow use one of these or a new one? Will Medicare, Medicaid, and private insurance companies honor and reimburse for BinaxNow? What is the rate of reimbursement for the CPT code?
The reimbursement rates must justify the costs. Medical practices have to evaluate the financial impact of any new technology. If BinaxNow costs $5/unit and Medicaid reimburses $4, then a medical office will not be able to afford to offer the service.
Medical practices will be highly motivated to provide rapid testing to their patients. Without a fair reimbursement rate, practices may find themselves testing their way to bankruptcy.
Rapid antigen testing through BinaxNow could be a game-changing technology. As with many things in Operation Warp Speed, we are missing a nationally coordinated strategic plan.
Binaxnow will be welcomed by the public and the medical community, but we deserve to know how well it works as a screening test and who is going to pay for it.
— | https://medium.com/beingwell/abbotts-rapid-response-covid-19-test-is-the-approval-good-news-2b27c0b536b3 | ['Dr Jeff Livingston'] | 2020-09-01 21:59:56.722000+00:00 | ['Covid 19', 'Health', 'Testing', 'Coronavirus', 'Pandemic'] |
Is Technology Destabilizing Reality? | Yes and no. Nature is destabilizing reality.
Nature (Conservation of a Circle) (NASA)
Is technology destabilizing reality? Yes and no. Nature is destabilizing reality, for sure.
How do we know this? The constant (re) circulation in Nature destabilizes everything.
It looks like (has to look like) (can only look like) this:
Nature
Reality
Technology
Where, reality is stabilized, and, also destabilized, by the conservation of a circle.
Conservation of a circle.
Explaining the genesis of technology (the basis for, both, and-or, either, zero, and-or, one).
Zero and-or One (Both and-or Either)
Eliminating (exposing) the redundancy present in any ‘gate’ (and-or) (if-then).
If-then.
And-or.
Corrupting our ‘understanding’ of a circuit. And, therefore, then, eventually, disrupting, everything we ‘know’ about (are relying on in) technology.
Circuit.
And there it is. Technology. Reality. Nature stabilizing, and, also, destabilizing, both.
Conservation of the circle is the core (only) dynamic in Nature (reality included). | https://medium.com/the-circular-theory/is-technology-destabilizing-reality-d45a51bcde92 | ['Ilexa Yardley'] | 2019-08-03 16:07:41.363000+00:00 | ['Society', 'Quantum Computing', 'Culture', 'Digital Transformation', 'Books'] |
How to Maintain a State of Creative ‘Flow’ | Josh Waitzkin, chess prodigy and author of The Art Of Learning, once described a conversation he had with skiing legend Billy Kidd, in which he asked Kidd about the three most important turns on a ski run:
… the three most important turns of the ski run are the last three before you get on the lift. And it’s a subtle point. That’s when the slope is leveled off, there’s less challenge. Most people are very sloppy. They’re taking the weight off the muscles they’ve been using. They have bad form. The problem with that is that on the lift ride up, they’re unconsciously internalizing bad body mechanics. As Billy points out, if your last three turns are precise, you’re internalizing precision on the lift ride up.
And so it goes with flow. When we walk away from our work drained, dazed, and confused, we internalize those feelings. That all-nighter where you worked until you literally couldn’t anymore? It may have yielded production, but the brain drain you felt when you walked away followed you back to your desk the next day.
The bitter gambler will always tell you the same story, “If it wasn’t for that last hand, I’d be rich!” But the gambler who laughs all the way home after doubling her money knows it’s because she walked away before her luck ran out.
Hemingway knew that flow wasn’t a ghost to be strangled to death on every chance encounter. He walked away from his typewriter while he still had gas in the tank and inspiration on his side.
Many artists and entrepreneurs think of beating their head against the wall in search of inspiration as a rite of passage. But Hemingway never allowed those feelings to enter his workspace. He walked away long before those feelings of brain drain could be internalized. This helped him return to his work knowing exactly where to start again.
I always worked until I had something done, and I always stopped when I knew what was going to happen next. That way I could be sure of going on the next day. —Ernest Hemingway
There is still a strong undercurrent in our society, particularly amongst entrepreneurs, that continues to celebrate and glamorize the grind. Just like with conversations around how much sleep is best to have each night, there is an unspoken competition around who can stay in the pressure cooker, working the longest and the hardest.
But anyone can learn how to outlast the others. The real discipline comes from walking away before you’re cooked. It takes a cool, Hemingway-like confidence to tell the muses, “We’ve worked enough today. I’m sure I’ll see you around tomorrow.”
So the question remains, for athletes, creatives, writers, producers, and thinkers alike: When you find your flow today, will you have the discipline to walk away before it’s all gone? | https://medium.com/s/story/how-to-master-the-flow-state-one-simple-yet-difficult-trick-56854fca9109 | ['Corey Mccomb'] | 2018-09-11 20:19:01.331000+00:00 | ['Life Lessons', 'Inspiration', 'Personal Development', 'Creativity', 'Productivity'] |
GAN — CycleGAN (Playing magic with pictures) | In addition, the two sets of images are not paired, i.e. we do not have the real images corresponding to the same locations where Monet painted the pictures. CycleGAN learns the style of his images as a whole and applies it to other types of images.
CycleGAN
The concept of applying GAN to an existing design is very simple. We can treat the original problem as a simple image reconstruction. We use a deep network G to convert image x to y. We reverse the process with another deep network F to reconstruct the image. Then, we use a mean square error (MSE) loss to guide the training of G and F.
However, we are not interested in reconstructing images. We want to create y resembling certain styles. In GAN, a discriminator D is added to an existing design to guide the generator network to perform better. D acts as a critic between the training samples and the generated images. Through this criticism, we use backpropagation to modify the generator to produce images that address the shortcoming identified by the discriminator. In this problem, we introduce a discriminator D to make sure Y resembles Van Gogh paintings.
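The reconstruction idea above can be sketched in a few lines. The toy G and F below are stand-in functions, not real convolutional generators, and what is computed is the article’s MSE reconstruction term:

```python
import numpy as np

def cycle_loss(x, G, F):
    """Mean square error between x and its round trip F(G(x)),
    the reconstruction term that guides the training of G and F."""
    return float(np.mean((F(G(x)) - x) ** 2))

# Hypothetical stand-ins for the two mapping networks:
G = lambda x: 2.0 * x + 1.0    # photo -> painting domain
F = lambda y: (y - 1.0) / 2.0  # painting -> photo domain (G's exact inverse)

x = np.array([0.0, 0.5, 1.0])
print(cycle_loss(x, G, F))  # a perfect inverse pair reconstructs exactly: 0.0
```

Training pushes this loss toward zero so that F(G(x)) ≈ x, while the discriminator term (not shown) separately pushes G(x) toward the target painting style.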
Network design
CycleGAN transfers pictures from one domain to another. To transform pictures between real images and Van Gogh paintings, we build three networks.
A generator G to convert a real image to a Van Gogh style picture.
A generator F to convert a Van Gogh style picture to a real image.
A discriminator D to identify real or generated Van Gogh pictures.
For the reverse direction, we just reverse the data flow and build an additional discriminator Dx to identify real images. | https://jonathan-hui.medium.com/gan-cyclegan-6a50e7600d7 | ['Jonathan Hui'] | 2018-07-28 15:09:53.586000+00:00 | ['Deep Learning', 'Artificial Intelligence', 'Computer Vision', 'Data Science', 'Machine Learning'] |
Stop Trying to be Original | This week I had the privilege of attending the International Boys’ Schools Coalition annual conference. The topic of the conference was the arts. How can we engage boys in the arts and integrate the arts meaningfully into our students’ experiences to help them succeed personally, academically, socially, and emotionally? It was an inspiring few days that filled me with ideas to bring back to my own school, and it got me thinking about the difference between creativity and originality.
Several times I heard presenters offer the disclaimer that what they were presenting “wasn’t original.” Some said that they were offering ideas they have adapted from other sources. Others explained that they had done something in their classes that felt very creative, and then they learned lots of other teachers do similar things. They seemed to think that this diminished their “originality,” even though they were combining ideas in a new way or came up with an idea with no knowledge that others shared it. Further, they seemed to feel that this perceived lack of originality diminished their personal creativity.
The first session where I heard a presenter make a self-deprecating remark about his lack of originality was a session about how we can teach students to think creatively. The presenter demonstrated an exercise he does with his classes and then explained that after doing this activity for several years, he learned about a famous art teacher who had done something very similar back in the 1960s. The presenter implied that, because someone else had had this idea before him, he wasn’t being original. Maybe he wasn’t, although he didn’t even know about this other teacher from the 1960s, but that fact seemed totally beside the point to me. Whether or not he was original, he was undoubtedly creative.
Originality is not the same thing as creativity.
We could debate whether it is even possible for anything to be original. To conflate being creative with being original is to make creative thought impossible for most mere mortals, and so, I humbly suggest, that we don’t make that mistake.
Creative thinking requires that we take what we know — things we have learned through experience, through reading and study, through witnessing the lives of those around us — and apply our own unique perspective to those things to produce something that makes our individual way of seeing understandable to others.
Being creative is not inventing new ideas out of thin air. None of us exists in a vacuum. We live in a rich social context that informs our thoughts. We are shaped by the world around us. Sometimes we are conscious of this shaping, and sometimes we are not. Even if you were raised by wolves with no human contact, this would be true.
Thus, being creative is seeing old ideas new ways or combining two or more existing ideas in ways that are unexpected, surprising, and interesting.
Take, for example, Kurt Vonnegut’s masterpiece Slaughterhouse-Five, which celebrates its fiftieth anniversary this year. It is a stunning work of creative genius. In it, Vonnegut combines his first-hand experience in war, anti-war satire, and science fiction in the form of both space aliens and time travel. There are many war stories, but how many have aliens? There are many science fiction books, but how many are satires? There are many books with time travel, but how many also comment on the author’s lived experience? What makes Slaughterhouse-Five unique is a combination of familiar genres. He created something unique and exciting by mixing ingredients that are not usually paired together, much like a chef creating a new dish. This is the genius of his creativity.
The structure of the novel is incredibly complex and, at a quick glance, it seems unlike anything else I’ve ever read. But on closer inspection, I see that it is a frame story, a structure at least as old as 1001 Nights, which dates back to the ninth century. Vonnegut took an existing structure and used it creatively, with the first and final chapters narrated in the first person by Vonnegut himself, speaking directly to the reader, and the interior chapters narrated primarily from the third person point-of-view describing the life of Billy Pilgrim, a character who is “unstuck in time.” Because of Billy’s strange experience of time, the novel is told out of sequence, jumping from present to past and, at one point, to the future. As creative as Vonnegut is in conveying Billy’s “unstuck” nature through a story divorced from linear narrative (I could go on and on about the patterns he employs in what at first seems like a random smattering of events), this is hardly the only story ever written where the events are conveyed out of chronological order. It’s not that Vonnegut has done something totally original, but rather that he has executed a concept with such skill that we feel as if we’re experiencing something brand new.
Lest you misunderstand, my comments are not meant as a criticism of Vonnegut. Not in the least. Slaughterhouse-Five is one of my favorite novels of all time (not something an English teacher can say lightly). It’s one of few books that becomes more interesting with each rereading, not because it’s original, but because it is creative.
And isn’t that good news for the rest of us creative-types? If the standard for being creative is originality, we can’t possibly begin to measure up. We can’t learn or teach originality. Originality requires divine intervention.
But if the standard of creativity is finding new angles and new combinations, we can practice ways of seeing and we can look at the world with curiosity and wonder, always seeking to connect the dots between disparate areas of our experience and knowledge. To be creative is to be interested in everything, to be hungry for information, to be willing to try new things. Being creative is not just about spending hours in an art studio or at your computer. Being creative is a way of life. | https://dianevmulligan.medium.com/stop-trying-to-be-original-e3fa4179cad0 | ['Diane Vanaskie Mulligan'] | 2019-07-01 00:06:45.386000+00:00 | ['Authenticity', 'Advice and Opinion', 'Writing', 'Kurt Vonnegut', 'Creativity'] |
How Self-Driving Vehicles Think: Navigating Double-Parked Cars | Written by Rachel Zucker, Software Engineer, and Shiva Ghose, Staff Software Engineer
Every day, San Franciscans drive through six-way intersections, narrow streets, steep hills, and more. While driving in the city, we check mirrors, follow the speed limit, anticipate other drivers, look for pedestrians, navigate crowded streets, and more. For many of us who have been driving for years, we do these so naturally, we don’t even think about it.
At Cruise, we’re programming hundreds of cars to consider, synthesize, and execute all these automatic human driving actions. In SF, each car encounters construction, cyclists, pedestrians, and emergency vehicles up to 46 times more frequently than in suburban environments, and each car learns how to maneuver around these aspects of the city every day.
To give you an idea of how we’re tackling these challenges, we’re introducing a “How Self-Driving Vehicles Think” series. Each post will highlight a different aspect of teaching our vehicles to drive in one of the densest urban environments. In our first edition, we’re going to discuss how our Cruise self-driving vehicles handle double-parked vehicles (DPVs).
How Cruise autonomous vehicles maneuver around double-parked vehicles
Every self-driving vehicle “thinks” about three things:
Perception: Where am I and what is happening around me?
Planning: Given what’s around me, what should I do next?
Controls: How should I go about doing what I planned?
One of the most common scenarios we encounter — that requires the sophisticated application of all three of these elements — is driving around double-parked vehicles. On average in San Francisco, the odds of encountering a double-parked vehicle are 24:1 compared to a suburban area.
The Cruise fleet typically performs anywhere between 200 to 800 oncoming maneuvers each day!
Since double-parked vehicles are extremely common in cities, Cruise cars must be equipped to identify and navigate around them as part of the normal traffic flow. Here is how we do it.
Perception
Recognizing whether a vehicle is double-parked requires synthesizing a number of cues at once, such as:
How far the vehicle is pulled over towards the edge of the road
The appearance of brake and hazard lights
The last time we saw it move
Whether we can see around it to identify other cars or obstacles
How close we are to an intersection
We also use contextual cues like the type of vehicle (i.e. delivery trucks, who double-park frequently), construction activity, and scarcity of nearby parking.
To enable our cars to identify double-parked vehicles, we collect the same information as humans. Our perception software extracts what cars around the Cruise autonomous vehicle (AV) are doing using camera, lidar, and radar images:
Cameras provide the appearance and indicator light state for vehicles, and road features (such as safety cones or signage)
Lidars provide distance measurements
Radars provide speeds
All three sensors contribute to identifying the orientation and type of vehicle. Using advanced computer vision techniques, the AV processes the raw sensor returns to identify discrete objects: “human,” “vehicle,” “bike,” etc.
By tracking cars over time, the AV infers which maneuver the driver is making. The local map provides context for the scene, such as parking availability, the type of road, and lane boundaries.
But to make the final decision — is a car double-parked or not — the AV needs to weigh all these factors against one another. This task is perfectly suited for machine learning. The factors are all fed into a trained neural network, which outputs the probability that any given vehicle is double-parked.
In particular, we use a recurrent neural network (RNN) to solve this problem. RNNs stand out from other machine-learning implementations because they have a sense of “memory.” Each time it is rerun (as new information arrives from the sensors), the RNN includes its previous output as an input. This feedback allows it to observe each vehicle over time and accumulate confidence on whether it is double-parked or not.
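As an illustration of that feedback loop, here is a single recurrent step that feeds its previous probability back in as an input. The features, weights, and numbers below are invented for the sketch; Cruise’s real inputs and trained network are far richer:

```python
import numpy as np

def rnn_step(features, prev_output, W_f, W_r, b):
    """One update of a recurrent classifier: the previous output is fed
    back in as an extra input, which is what gives the network 'memory'."""
    z = W_f @ features + W_r * prev_output + b
    return 1.0 / (1.0 + np.exp(-z))  # P(this vehicle is double-parked)

# Hypothetical per-frame features and hand-picked weights, invented for
# the sketch: [how far pulled over, hazard lights on, seconds stationary].
W_f, W_r, b = np.array([0.8, 1.5, 0.6]), 2.0, -4.0

p = 0.5  # start uncertain
for frame in (np.array([0.2, 1.0, 1.0]),
              np.array([0.3, 1.0, 3.0]),
              np.array([0.3, 1.0, 5.0])):
    p = rnn_step(frame, p, W_f, W_r, b)

print(round(float(p), 2))  # confidence accumulates frame over frame (~0.86)
```

Because each step reuses the previous output, evidence observed over several frames compounds, which is exactly the “accumulate confidence over time” behavior described above.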
Planning & Controls
Getting from A to B without hitting anything is a pretty well known problem in robotics. Comfortably getting from A to B without hitting anything is what we work on in the Planning and Controls team. Comfortable isn’t just defined by how quickly we accelerate or turn, it also means behaving like a predictable and reasonable driver. Having a car drive itself means we need our vehicles’ actions to be easily interpretable by the people around us. Easy-to-understand (i.e. human-like) behavior in this case comes from identifying DPVs and reacting to them in a timely manner.
Once we know that a vehicle in front of us is not an active participant in the flow of traffic, we can start formulating a plan to get around the vehicle. Oftentimes, we try to lane change around or route away from the obstacle. If that is not possible or desirable, we try to generate a path that balances how long we are in an oncoming lane with our desire to get around the DPV. Every time the car plans a trajectory around a double-parked vehicle, the AV needs to consider where the obstacle is, what other drivers are doing, how to safely bypass the obstacle, and what the car can and cannot perceive.
Here, we’re navigating around a double-parked truck in the rain, with other vehicles approaching in the oncoming lane. During this maneuver, the AV yielded right-of-way to the two vehicles, which in turn were going around a double-parked vehicle in their own lane.
Every move we plan takes into account the actions of the road users around us, and how we predict they will respond to our actions. With a reference trajectory planned out, we are ready to make the AV execute a maneuver.
There are many ways to figure out the optimal actions to perform in order to execute a maneuver (for example, Linear Quadratic Control); however, we also need to be mindful of the constraints of our vehicle, such as how quickly we can turn the steering wheel or how quickly the car will respond to a given input. To figure out the optimal way to execute a trajectory given these constraints, we use Model Predictive Control (MPC) for motion planning. Under the hood, MPC algorithms use a model of how the system behaves (in this case, how we have learned the world around us will evolve and how we expect our car to react) to figure out the optimal action to take at each step.
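A drastically simplified receding-horizon sketch shows the MPC pattern of planning a short action sequence but applying only its first step. This is a toy 1-D speed model with brute-force search in place of a real optimizer and vehicle model, and all constants are invented:

```python
from itertools import product

def mpc_step(v, v_ref, horizon=3, a_max=2.0, dt=0.1):
    """Toy receding-horizon control: enumerate candidate acceleration
    sequences, score each against the reference speed, and apply only
    the first action of the best sequence (illustrative only; real MPC
    uses a numerical optimizer and a far richer vehicle model)."""
    best_cost, best_first = float("inf"), 0.0
    for seq in product((-a_max, 0.0, a_max), repeat=horizon):
        vel, cost = v, 0.0
        for a in seq:
            vel += a * dt                # 1-D point-mass "model"
            cost += (vel - v_ref) ** 2   # tracking error
            cost += 0.01 * a ** 2        # penalize harsh inputs
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

print(mpc_step(v=0.0, v_ref=5.0))  # below the reference speed -> 2.0 (accelerate)
```

At the next time step the whole search is rerun from the newly observed state, which is how MPC stays responsive to a world that keeps evolving around the plan.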
Finally, these instructions are sent down to the controllers, which govern the movement of the car. Putting it all together, we get:
In this example, after yielding to the cyclist, we see an oncoming vehicle allowing us to complete our maneuver around the double-parked truck. It is important to recognize these situations and complete the maneuver so we support traffic flow.
San Francisco is famously known to be difficult to drive in, but we at Cruise cherish the opportunity to learn from the city and make it safer. With its mid-block crosswalks, narrow streets, construction zones, and steep hills, San Francisco’s complex driving environment allows us to iterate and improve quickly, so we can achieve our goal of making roads safer.
Over the coming months, we look forward to sharing more “How Self-Driving Vehicles Think” highlights from our journey.
If you’re interested in joining engineers from over 100 disciplines who are tackling one of the greatest engineering challenges of our generation, join us. | https://medium.com/cruise/double-parked-vehicles-4f5ac8fc05a9 | ['Rachel Zucker'] | 2020-02-13 18:33:29.853000+00:00 | ['Software Engineering', 'San Francisco', 'Self Driving Cars', 'Engineering', 'Robotics'] |
AWS CLI— Know its Applications and Benefits | AWS CLI — Edureka
Amazon Web Services(AWS) is the market leader and top innovator in the field of cloud computing. It helps companies with a wide variety of workloads such as game development, data processing, warehousing, archive, development and many more. But, there is more to AWS than just the eye-catching browser console. It’s time that you check out Amazon’s Command Line Interface — AWS CLI.
Before digging in, let’s take a look at the topics covered in this article.
What Is AWS CLI ?
Uses of AWS CLI
Installing AWS CLI
How to use AWS CLI?
What is AWS CLI?
AWS Command Line Interface (AWS CLI) is a unified tool with which you can manage and monitor all your AWS services from a terminal session on your client.
Although most AWS services can be managed through the AWS Management Console or via the APIs, there is a third way that can be very useful: the Command Line Interface (AWS CLI). AWS has made it possible for Linux, MacOS, and Windows users to manage the main AWS services from a local terminal session’s command line. So, with a single step installation and minimal configuration, you can start using all of the functionalities provided by the AWS Management Console using the terminal program. That would be:
Linux shells: You can use command shell programs like bash, tcsh, and zsh to run commands in operating systems like Linux, macOS, or Unix.
Windows Command Line: On Windows, you can run commands in PowerShell or in the Windows command prompt.
Remotely: You can run commands on Amazon EC2 instances through a remote terminal such as PuTTY or SSH. You can even use AWS Systems Manager to automate operational tasks across your AWS resources.
Apart from this, it also provides direct access to AWS services public APIs. In addition to the low-level API equivalent commands, the AWS CLI offers customization for several services.
This article will tell you everything that you need to know to get started with the AWS Command Line Interface and to use it proficiently in your daily operations.
Uses of AWS CLI
Listed below are a few reasons which are compelling enough to get you started with AWS Command Line Interface.
Easy Installation
Before AWS CLI was introduced, the installation of toolkits like old AWS API involved too many complex steps. Users had to set up multiple environment variables. But the installation of AWS Command Line Interface is quick, simple and standardized.
Saves Time
Despite being user-friendly, the AWS Management Console can be quite a hassle. Suppose you are trying to find a large Amazon S3 folder. You have to log in to your account, search for the right S3 bucket, find the right folder, and look for the right file. But with AWS CLI, if you know the right command, the entire task takes just a few seconds.
Automates Processes
AWS CLI gives you the ability to automate the entire process of controlling and managing AWS services through scripts. These scripts make it easy for users to fully automate cloud infrastructure.
Supports all Amazon Web Services
Prior to AWS CLI, users needed a dedicated CLI tool for just the EC2 service. It worked properly, but it didn’t let users control other Amazon Web Services, like, for instance, AWS RDS (Relational Database Service). But AWS CLI lets you control all the services from one simple tool.
So now that we have understood what AWS CLI is let’s get started with the installation process.
Installing AWS CLI
AWS Command Line Interface can be installed in three ways:
Using pip
Using a virtual environment
Using a bundled installer
In this article, we will see how to install AWS CLI using pip.
Prerequisites
Python 2 version 2.6.5+ or Python 3 version 3.3+
Windows, Linux, macOS, or Unix operating system
Installing the AWS CLI Using pip
The common way to install AWS CLI is using pip. pip is a package management system which is used to install and manage software packages written in Python.
Step 1: Install pip (on Ubuntu OS)
$ sudo apt install python3-pip
Step 2: Install CLI
$ pip install awscli --upgrade --user
Step 3: Check installation
$ aws --version
Once you are sure that AWS CLI is successfully installed, you need to configure it to start accessing your AWS Services through AWS CLI.
Configure AWS CLI
Step 4: Use the below command to configure AWS CLI
$ aws configure
AWS Access Key ID [None]: AKI************
AWS Secret Access Key [None]: wJalr********
Default region name [None]: us-west-2
Default output format [None]: json
As a result of the above command, the AWS CLI will prompt you for four pieces of information. The first two are required: your AWS Access Key ID and AWS Secret Access Key, which serve as your account credentials. The other two are the region and output format, which you can leave as the defaults for the time being.
NOTE: You can generate new credentials within AWS Identity and Access Management (IAM) if you do not already have them.
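Behind the scenes, aws configure stores these four values in two plain-text files in your home directory (shown here with the Linux/macOS paths); with the sample values above, they look like this:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKI************
aws_secret_access_key = wJalr********

# ~/.aws/config
[default]
region = us-west-2
output = json
```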
All set! You are ready to start using AWS CLI now. Let’s check out how powerful AWS CLI can be with the help of a few basic examples.
How to use AWS CLI?
Suppose you have some services running on AWS and you made it happen using the AWS Management Console. The exact same work can be done, but with a whole lot less effort, using the Amazon Command Line Interface.
Here’s a demonstration,
Let’s say you want to launch an Amazon Linux instance from EC2.
If you wish to use AWS Management Console, to launch an instance, you’ll need to:
Load the EC2 Dashboard
Click Launch Instance
Select AMI and instance types of choice
Set network, life cycle behavior, IAM, and user data settings on the Configure Instance Details page
Select storage volumes on the Add Storage page
Add tags on the Add Tags page
Configure a security group on the Configure Security Group page
Finally, review and launch the instance
And, don’t forget the pop up where you’ll confirm your key pair and then head back to the EC2 Instance dashboard to get your instance data. This doesn’t sound that bad, but imagine doing it all when working with a slow internet connection or if you have to launch multiple instances of different variations multiple times. It would take a lot of time and effort, wouldn’t it?
Now, let’s see how to do the same task by using AWS CLI.
Step 1: Creating a new IAM user using AWS CLI
Let’s see how to create a new IAM group and a new IAM user & then add the user to the group using AWS Command Line Interface
First, use create-group to create a new IAM group
$ aws iam create-group --group-name mygroup
Use create-user to create a new user
$ aws iam create-user --user-name myuser
Then add the user to the group using add-user-to-group command
$ aws iam add-user-to-group --user-name myuser --group-name mygroup
Finally, assign a policy (which is saved in a file) to the user by using command put-user-policy
$ aws iam put-user-policy --user-name myuser --policy-name mypoweruserrole --policy-document file://MyPolicyFile.json
If you want to create a set of access keys for an IAM user, use the command create-access-key
$ aws iam create-access-key --user-name myuser
Step 2: Launching Amazon Linux instance using AWS CLI
Just like when you launch an EC2 instance using AWS Management Console, you need to create a key pair and security group before launching an instance
Use the command create-key-pair to create a key pair, and use the --query option to pipe your key directly into a file
$ aws ec2 create-key-pair --key-name mykeypair --query 'KeyMaterial' --output text > mykeypair.pem
Then create a security group and add rules to the security group
$ aws ec2 create-security-group --group-name mysecurityg --description "My security group"
$ aws ec2 authorize-security-group-ingress --group-id sg-903004f8 --protocol tcp --port 3389 --cidr 203.0.113.0/24
Finally, launch an EC2 instance of your choice using the command run-instance
$ aws ec2 run-instances --image-id ami-09ae83da98a52eedf --count 1 --instance-type t2.micro --key-name mykeypair --security-group-ids sg-903004f8
That looks like a lot of commands, but you can achieve the same result by combining them all into one script. That way you can modify and run the code whenever necessary, instead of starting from the first step as you would in the AWS Management Console. This can drop a five-minute process down to a couple of seconds.
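For example, here is a minimal sketch of that scripting idea, using the sample values from the commands above. It only echoes the assembled command (a dry run), so nothing is launched until you uncomment the last line and supply your own real IDs:

```shell
#!/bin/sh
# Sample values from the examples above; replace with your own.
AMI_ID="ami-09ae83da98a52eedf"
INSTANCE_TYPE="t2.micro"
KEY_NAME="mykeypair"
SG_ID="sg-903004f8"

# Assemble the launch command once, so it can be reviewed and reused.
LAUNCH_CMD="aws ec2 run-instances --image-id $AMI_ID --count 1 --instance-type $INSTANCE_TYPE --key-name $KEY_NAME --security-group-ids $SG_ID"

echo "$LAUNCH_CMD"   # dry run: print the command instead of running it
# eval "$LAUNCH_CMD" # uncomment to actually launch the instance
```

Saved as a script, changing the AMI or instance type for the next launch is a one-line edit instead of another trip through the console wizard.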
So, now you know how to use AWS CLI to create an IAM user and launch an EC2 instance of your choice. But AWS CLI can do much more.
So folks, that’s the end of this article on AWS CLI. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, and Ethical Hacking, you can refer to Edureka’s official site.
Do look out for other articles in this series which will explain the various other aspects of AWS. | https://medium.com/edureka/aws-cli-9614bf69292d | ['Vishal Padghan'] | 2020-09-10 10:02:58.196000+00:00 | ['Amazon Web Services', 'AWS', 'Cloud Computing', 'Aws Certification', 'Aws Cli'] |
React and MobX — Lessons Learned. Get started with MobX as your state… | Observables
What allows that to happen is the use of observables. Quite simply, an observable adds to an existing data structure the possibility of being “observed” by someone. It is similar to the Pub/Sub or Mediator design patterns, where part A asks to be notified when something happens in part B, except that here everything happens automatically (without the need to “subscribe”), and what is observed is the value itself rather than callbacks created by you.
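To make the idea concrete, here is a toy sketch in plain JavaScript. This is not MobX itself or its real API, just an illustration of a value that re-runs its observers automatically when it changes:

```javascript
// A toy observable (not MobX's implementation): a Proxy that re-runs
// registered reactions whenever any property of the wrapped object changes.
function makeObservable(target) {
  const reactions = new Set();
  const proxy = new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      reactions.forEach((fn) => fn()); // notify every "observer"
      return true;
    },
  });
  // Register a reaction and run it once immediately.
  proxy.observe = (fn) => {
    reactions.add(fn);
    fn();
  };
  return proxy;
}

// Usage: the reaction below re-runs on its own when `name` changes;
// nothing at the assignment site subscribes or publishes explicitly.
const state = makeObservable({ name: "world" });
let rendered = "";
state.observe(() => {
  rendered = `Hello, ${state.name}!`;
});
state.name = "MobX"; // rendered is now "Hello, MobX!"
```

MobX does far more than this (it tracks which observables each reaction actually reads), but the core contract is the same: change the value, and the observers react.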
The use of decorators in MobX is optional: It is just a way to write a little less code and maintain the current structure. Every decorator has a corresponding function. To enable decorators, you may need to change the Babel settings.
An example of that is the use of decorators @observable and @observer :
Please note that without having to write anything specific, the observer will react alone when the observable name changes its value. Even though you have a lot of complex observables, MobX internally only records what is being used by you in the render method.
Cool, right? Very easy and straightforward.
Previous example without the decorator:
You can find more examples of proceedings with or without the decorator in the documentation. | https://medium.com/better-programming/react-and-mobx-lessons-learned-427a8e223c93 | ['Caio Vaccaro'] | 2020-10-01 18:22:36.884000+00:00 | ['Mobx', 'React', 'Programming', 'JavaScript', 'Development'] |
A Tale of a Journey Across Low-Code | Last year I had just landed my first job as Software Developer at Signify (the former Philips Lighting) and, after a few weeks, a colleague asked me if I wanted to go to a No-Code conference with him. “A conference with everything paid? Nice!” I ran to my laptop and started my desk-research on what Low-Code was. It kind of reminded me of the MIT App Inventor, but wider, more feature-complete. It triggered my curiosity: “Is this the future of the job?”. I did not expect that I was about to embark on a one-year-long exploration, that would have exposed me to external vendors, to other departments, and could have potentially changed the way we develop in our company.
In this article, I am going to describe the exploration process we followed, what were our expectations and learnings, and what we see looking forward.
How and Why it started
Low Code Platforms: Software that provides an environment to create end-to-end web or mobile applications and lift infrastructure through graphical user interfaces and (possibly) traditional programming.
In our continuous effort to improve our process and technology competencies, increase our productivity, and reduce the time it takes to get from an idea to a prototype, some colleagues started looking into the Low-Code world. The choice for this technology came from past experiences in different companies in which the technology was successfully adopted, and its impressive presence at Gartner’s conference in 2018.
Gartner Magic Quadrant for Enterprise High-Productivity Application Platform as a Service, 2019, from DZone
There were two main desires: the ability to quickly build prototypes that could be easily integrated with the existing backend infrastructures, and the ability to co-create with UX/UI designers and let them use the tools to co-create with the customers.
Desk research depicts Low-Code in conflicting ways, from the future of the development to a disaster. A lot depends on the context, and on how these tools fit in the company culture.
From many to few
There are many Low-Code platforms in the market. Trying them all would take just too much time.
Low-Code selection process
We started by walking around some conferences, talking with employees, partners, and customers. At first, we were impressed: big audiences, from half a thousand people to a few thousand, a large number of applications running on the platforms, and speed and agility repeated in almost every keynote. Later, we realized that something was missing: many demos and claims were quite generic, and we left each conference without a real feeling of what is possible and how. Something more was needed.
We picked some of the vendors from the Quadrant and got in touch with them. We asked for some technical information (Does your platform support REST calls? And OAuth2?) and a short face-to-face demo. Not the nicest process, but it already started highlighting some differences:
Native Low(No)-Code platforms (created to do that and that only) versus platforms that are evolving towards Low-Code: the former with greater flexibility and complexity, the latter with some flexibility on top of their earlier scope. We did not have an application scope in mind, as we usually don’t when we come up with a new idea, so we picked the first category.
“Citizen Developer”, “Citizen Developer and Developer”, and “Developer” platforms: ranging from the most graphical/blocks-oriented and least flexible to the ones that seem more like graphical coding. Citizen developer is a recurrent expression when looking at Low-Code, and it represents an application developer without a software background. Given the complexity we were looking for, the “Citizen Developer and Developer” and “Developer” categories suited us better.
We chose two platforms and moved to the next step: development.
Hands-On: some premises
So, we picked two platforms: the fun could start. The attention points:
The feeling: we really wanted to feel the platforms. As a bunch of developers, we wanted to do some training, read some documentation, and bend the technology to our needs. We absolutely did not want any consultant developing for us or sitting next to us daily to help us develop. Most developers learn one or two frameworks per year, and it’s highly uncommon to have consultants help you do that. Why would we treat Low-Code differently?
The community: we wanted to join the community. Relying on the platform’s technical support is nice, but for day-to-day development you need a community. If I am stuck writing JS code, I know the solution is on StackOverflow. Is there a SO for Low-Code?
The learning curve: we wanted to perceive the learning curve. If we adopted the platform, we would bring on board as many colleagues as possible. How much time would that take?
Flexibility: what can we do on these platforms? How far can we take our application?
Ease of design: can we give the platform to designers and let them put the text box in the right position instead of sending JPG designs to the developers? How cool would that be!
Best practices: if the prototype application becomes a product, can best practices (peer review, testing, …) be enforced?
Hands-On: Planned vs Realized | https://medium.com/swlh/a-tale-of-a-journey-across-low-code-248facb897f7 | ['Massimo Tumolo'] | 2020-06-27 14:12:58.502000+00:00 | ['Innovation', 'Software Development', 'Development', 'Technology', 'Productivity'] |
How Many Startups Can You Manage At Once? | “I’m managing three companies,” “Jason” said to me. It was our first conversation, and warning lights started going off.
Picture: Depositphotos
“Tell me about the three companies?” I said.
Jason said, “The first company is a software company which is doing about $5 million in revenue. The second company is a SaaS company doing about $1 million in revenue. And the third company is my law practice.”
“Interesting,” I responded. “How many direct reports do you have across the three companies?”
Jason said, “Let me think about that.” Then after a very long pause, he said, “17.”
“Wow. That’s a lot.”
You’ll need to leverage yourself if you’re going to manage multiple startups.
“Think of your multiple companies like one big company,” I said to Jason. “Each of the companies are then like divisions of the main company, run by you.
“Ideally, you want to develop infrastructure in each division (the individual companies), so that you’re free to manage all three divisions at once. The only way this is going to happen is if you reduce the number of direct reports you have.”
Jason nodded his head in agreement. “I know. 17 is too many. How many should I reduce it to?”
“My magic number is seven,” I said. “Things usually break down for most people when they get above seven direct reports.
“Normally, you’d have a management team for each of the businesses. My bet is this hasn’t been built out yet.”
Jason quickly realized the true problem he had. “I don’t really have the teams I need, so they can manage the more junior people. I’m having to do that myself.”
You’ll need great teams at each startup.
“I’m not surprised,” I said. “What you’re going through is normal if you were managing just one startup.
“It’s pretty common that somewhere between $1 million and $10 million in revenue, you end up building out your management team. In your case, you have to build out two management teams, maybe three.”
“I get it,” Jason said.
Fortunately, Jason truly did get it. Over the next several months, Jason recruited the management teams he needed. Slowly, but surely, the number of direct reports Jason had dropped to the magic number of seven.
In addition, Jason divested his law practice. Now, Jason at least had a manageable problem.
However, you’ll need to remain vigilant to keep your leverage.
About six months later, Jason said, “I’m worried again because my direct reports are up to eleven. I know what I need to do.”
That’s the challenge for you as a CEO, regardless of whether you’re running multiple companies or one company. You’ll need to start anticipating when you’ll need more senior managers, and your senior managers will need more managers, to maintain your leverage and their leverage.
In short, it’s a never ending battle. The best CEOs plan ahead, so they are constantly recruiting or building their management talent pool.
It takes discipline to pull this off. And you need to teach your team to have the same discipline.
No matter how hard you try, one of the startups will demand most of your attention.
Jason successfully got his direct reports down to seven again. Then, the inevitable happened.
Jason had the high-class problem of one of the companies going into hypergrowth, doubling in revenue each year. And, of course, eighty percent of his time went to running the hypergrowth startup.
That’s okay. As long as you follow the rule of seven and keep yourself and your team focused on recruiting top talent, you can keep managing multiple businesses as you scale.
For more, read: https://www.brettjfox.com/what-are-the-five-skills-you-need-to-be-a-great-ceo | https://medium.com/swlh/how-many-startups-can-you-manage-at-once-c05227e86ad6 | ['Brett Fox'] | 2020-12-30 05:55:54.156000+00:00 | ['Leadership', 'Entrepreneurship', 'Business', 'Startup', 'Venture Capital'] |
Former Google CEO Eric Schmidt: Let’s Start a School for A.I. | Former Google CEO Eric Schmidt: Let’s Start a School for A.I.
Uncle Sam might want you… to code.
If you’re interested in becoming a technologist for the federal government, former Google CEO Eric Schmidt wants to teach you how to code.
According to OneZero, Schmidt has partnered up with former U.S. Secretary of Defense Robert O. Work to create a school for folks who want to become government coders. This U.S. Digital Service Academy would operate like a regular school, offering coursework and degree tracks, and focus on cutting-edge technology subjects such as cybersecurity and artificial intelligence (A.I.).
As OneZero points out, the federal government is very interested in technologists who can craft new innovations in A.I. “We are engaged in an epic race for A.I. supremacy,” the publication quotes Rick Perry, secretary of the Department of Energy, as telling an NSCAI conference in 2019. “As I speak, China and Russia are striving to overtake us. Neither of these nations shares our values or our freedoms.”
Despite that urging, however, the U.S. government has “fallen short” when it comes to actually funding artificial intelligence research, according to a report issued by NSCAI: “AI is only as good as the infrastructure behind it. Within DoD in particular this infrastructure is severely underdeveloped.”
But the U.S. Digital Service Academy isn’t a done deal; first, Congress must approve NSCAI’s recommendation that the university be created. Then, it would actually need to be built, staffed, accredited, and launched. In order to fulfill the vision presented by Schmidt, the school would also need to forge partnerships with a variety of private companies and public institutions, in order to give students the necessary internships and other opportunities.
And even if all those goals are met, the U.S. Digital Service Academy would need to persuade young technologists to opt for it over other schools that are specializing in A.I. instruction, including Stanford and MIT.
Over the past several years, Eric Schmidt has shaped himself as an expert and advisor on U.S. technology policy. Last year, for example, he suggested that the U.S. government’s attempts to restrict hiring from China wouldn’t do this country’s technology industry any good.
“I think the China problem is solvable with the following insight: we need access to their top scientists,” he told the audience, according to Bloomberg. He also added that “common frameworks” such as Google’s TensorFlow benefit from input from scientists and researchers in other countries.
The U.S. Digital Service Academy is clearly his latest attempt to try to guide policy and discussion. If he can actually get it off the ground, though, it could provide yet another venue for technologists to learn intensely valuable A.I. and machine learning skills. | https://medium.com/dice-insights/former-google-ceo-eric-schmidt-lets-start-a-school-for-a-i-1a709e61e22b | ['Nick Kolakowski'] | 2020-07-31 13:01:01.564000+00:00 | ['Artificial Intelligence', 'Google', 'Eric Schmidt', 'Education', 'Machine Learning'] |
TikTok’s Most Recent Viral Trend Is Headed To Broadway | The so- called Ratatouille musical is based on the the 2007 Disney-Pixar film that tells the story of Remy, a talented French rat with an impressive palate, who learns to cook from old TV shows and cookbooks made by a famous human chef, Auguste Gusteau, whose motto is that “anyone can cook”.
It all started when an audio from Tiktok User @e_jaccs started to make rounds on TikTok. The now viral audio is of user @e_jaccs singing about how Remy, the rodent protagonist of the film, was the rat of our dreams. The Audio soon started to gain traction and now has over 18.5 thousand videos using it.
The Musical, organised by production company Seaview, will supposedly star credited Broadway performers. Proceeds from ticket sales to the digital event will raise money for the Actors Fund.
The musicals origination and creation, however is not accredited to one individual.Unlike traditional broadway shows, the composition of the Ratatouille musical is one that relies on the collaboration of strangers over the internet to create canonical music and lyrics.
The power and populariy of tiktok as a free creagtive space has birthed a collaborative enivromnet like no other. Hundreds of fans of the pixar movie along with musicl theatre lovers have built off eachother’s songs and ideas to create professional running orders and song instrumentation.
Tiktok being the home to creatives from evry relm and genre has allowed there to be collaboriom regarding every step of the process including the set and playbill designs.
The cast of the Broadway version has not yet been announced and it is still unclear whether it will be staged inside a physical Broadway theatre or from the homes of the actors chosen.
Disney has historically used many of its tales, such as Beauty and the Beast and The Lion King, for musical adaptations, but has clarified that it will not be doing the same for Ratatouille, saying,
“ Disney does not have development plans for the title”.
While that may still be true, Disney has not caused any trouble with the fan-made adaptation of the film. In fact, the company has even given its blessing to the production, saying,
“we love when our fans engage with Disney stories” and “we thank all of the online theater makers for helping to benefit the Actors Fund in this unprecedented time of need”.
It has also been confirmed that the creators of the songs used in the musical will be credited and “compensated”. You can buy tickets to the musical at TodayTix! | https://medium.com/illumination/tiktoks-most-recent-viral-trend-is-headed-to-broadway-72ca9d457539 | ['Zo Sajjad'] | 2020-12-28 16:33:17.027000+00:00 | ['Pop Culture', 'Broadway', 'Tik Tok', 'Startup', 'Music']
Using Map Bearings and Trigonometry to Style Custom Mapbox GL Draw Tools | As a team that builds tools for the often-complicated world of Urban Planning in NYC, we run into a number of unique engineering challenges typically related to web mapping. For our newest application, Applicant Maps, we built our own custom draw tools through combining mapbox-gl-draw line and symbol layers. Users are able to draw five different “annotations” on their project map — such as our Parallel Measurement tool, which we created by placing a custom symbol on both ends of a line.
Our Parallel Measurement Tool which consists of a line and symbols on both ends
Symbol layers in Mapbox GL are point/marker layers on which developers can define their own icon image. Our custom arrow symbols ➤ are PNG files we created specifically for our annotations. We set the location of the arrows to match the coordinates of the line. We then rotate the arrows using the bearing of the line, lineBearing , which is the angle of a line from true north.
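The `bearing(...)` call seen in the snippet below typically comes from a geospatial library (Turf.js, for example, provides one with this signature). Purely as an illustrative sketch, the great-circle bearing between two GeoJSON-style [longitude, latitude] points can be computed like this:

```javascript
// Illustrative sketch (not the library implementation): bearing in
// degrees clockwise from true north, range (-180, 180], between two
// [longitude, latitude] coordinates.
function computeLineBearing(start, end) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const toDeg = (rad) => (rad * 180) / Math.PI;
  const [lon1, lat1] = start.map(toRad);
  const [lon2, lat2] = end.map(toRad);
  const y = Math.sin(lon2 - lon1) * Math.cos(lat2);
  const x =
    Math.cos(lat1) * Math.sin(lat2) -
    Math.sin(lat1) * Math.cos(lat2) * Math.cos(lon2 - lon1);
  return toDeg(Math.atan2(y, x));
}

computeLineBearing([0, 0], [0, 1]); // due north: 0
computeLineBearing([0, 0], [1, 0]); // due east: 90
```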
Here is our symbol layer, startArrowLayer , which was placed at the first coordinate of our line.
const { coordinates } = lineFeature.geometry;
const lineBearing = bearing(coordinates[0], coordinates[1]);

const startArrowLayer = {
type: 'symbol',
source: {
type: 'geojson',
data: {
type: 'Feature',
geometry: {
type: 'Point',
coordinates: lineFeature.geometry.coordinates[0],
},
properties: {
rotation: lineBearing + 180,
},
},
},
layout: {
'icon-image': 'arrow',
'icon-size': 0.04,
'icon-rotate': {
type: 'identity',
property: 'rotation',
},
'icon-anchor': 'top',
'icon-rotation-alignment': 'map',
'icon-allow-overlap': true,
'icon-ignore-placement': true,
},
};
Learn more about how to style layers with the Mapbox GL Style Specification. And check out how we build the entire annotation in this JavaScript file.
Centerline Annotation
A drawing displaying the new centerline tool from a meeting we had with planners
I ran into an interesting engineering issue while building the Centerline annotation tool. Planners from our Technical Review Division wanted this tool to consist of an arrow as well as a custom centerline icon. I built the tool to mirror that of the Parallel Measurement annotation shown above, by placing an arrow on one end of the line and our centerline symbol on the other end.
It was easy enough to replicate the code we used for the Parallel Measurement tool, and replace the startArrowLayer with the centerlineLayer .
And there it was! All I had left to do were a couple of minor styling changes: resize the icon and move it a little further away from the line. While the size modification only required a simple fixed value change, offsetting the icon ended up being a little more complicated.
Dynamic Offsetting with Trigonometry
Mapbox GL’s icon-translate property allows developers to offset an icon relative to its anchor (the location where the point is originally placed) based on fixed x and y values. Because our users can draw a line in any direction, a fixed offset would produce something like this:
Example of a fixed offset [10, 0] with icon-translate
Similarly to how we used lineBearing to calculate the rotation of arrows, we can use this same angle to calculate a dynamic offset for our centerline icons and avoid the above situation.
After console logging the lineBearing of several lines in different directions, I created this graphical depiction. I drew in the x and y input values that would translate the icon, an example of the line bearing (represented by 45°), and the distance between the initial location of the icon and the offset location (represented by c).
In Mapbox GL, a negative y value implies a translation UP, and a positive y value implies a translation DOWN
While we have to calculate new x and y values every time the line is drawn, there are two variables that are always known: (1) the distance in pixels that the icon should travel from the end of the line, which I called c , and (2) the angle of the line from true north, or the lineBearing , represented by ɵ.
Revisiting my trigonometry days, I then calculated x and y using the pythagorean theorem and the equation of the tangent.
Using the substitution method, I was able to isolate y , remove x , and produce an equation with just the lineBearing (ɵ) and c .
I then plugged this new y value into the pythagorean theorem in order to find x . Note: I had to convert the lineBearing to radians before finding its tangent and a double asterisk ** represents exponents in JavaScript.
const radiansBearing = (lineBearing * Math.PI) / 180;

let x = null;
let y = null;

y = Math.sqrt((c ** 2) / ((Math.tan(radiansBearing) ** 2) + 1));
x = Math.sqrt((c ** 2) - (y ** 2));
I now had formulas for the x and y values needed to situate the icon correctly on the map. In Mapbox GL, a positive x value means a translation to the RIGHT, and a negative x value means a translation to the LEFT. A positive y value means a translation DOWN, and a negative y value means a translation UP. In order to assure that the icon was being translated appropriately based on the quadrant where the line existed, I had to set some of the x and y values to negative.
Depending on the quadrant, the x and y values will need to be made negative or positive. Quadrant 1: the icon will be translated right and up [+x, -y]. Quadrant 2: right and down [+x, +y]. Quadrant 3: left and down [-x, +y]. Quadrant 4: left and up [-x, -y].
icon-translate is a weird property. It’s defined by Mapbox as: “Distance that the icon’s anchor is moved from its original placement. Positive values indicate right and down, while negative values indicate left and up.” As mentioned earlier, the anchor is the location where the point was originally placed by the user. So while we are physically translating the icon away from the line (the line will not move unless the user explicitly moves it), icon-translate is measuring the translation as a movement of the anchor not a movement of the icon. Therefore, I had to set the x and y values to the opposite of what I initially expected.
if (lineBearing > 0 && lineBearing < 90) { // quadrant I
x = -x;
} else if (lineBearing < -90) { // quadrant II
y = -y;
} else if (lineBearing > 90 && lineBearing < 180) { // quadrant IV
y = -y;
x = -x;
}
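Packaged as a single helper (the function name `iconTranslateFor` is mine, not part of the app), the whole computation from line bearing to the `icon-translate` pair looks like this:

```javascript
// Hypothetical helper wrapping the math above: given the line bearing
// (degrees from true north) and the offset distance c (pixels), return
// the [x, y] pair for the icon-translate paint property. Sign flips
// follow the quadrant rules described in the article.
function iconTranslateFor(lineBearing, c) {
  const radiansBearing = (lineBearing * Math.PI) / 180;
  let y = Math.sqrt((c ** 2) / ((Math.tan(radiansBearing) ** 2) + 1));
  let x = Math.sqrt((c ** 2) - (y ** 2));
  if (lineBearing > 0 && lineBearing < 90) { // quadrant I
    x = -x;
  } else if (lineBearing < -90) { // quadrant II
    y = -y;
  } else if (lineBearing > 90 && lineBearing < 180) { // quadrant IV
    y = -y;
    x = -x;
  }
  return [x, y];
}

// A 45° line offset by 10px: the anchor is translated left and down,
// and the offset distance is preserved (x² + y² = c²).
iconTranslateFor(45, 10); // ≈ [-7.07, 7.07]
```

One nice property of packaging it this way is that the invariant x² + y² = c² can be checked for any bearing, which makes the quadrant sign logic easy to sanity-test.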
I then added these x and y values to the icon-translate paint property on the centerline symbol layer.
const centerlineLayer = {
type: 'symbol',
source: {
type: 'geojson',
data: {
type: 'Feature',
geometry: {
type: 'Point',
coordinates: lineFeature.geometry.coordinates[0],
},
},
},
layout: layoutCenterline,
paint: {
'icon-translate': [
x,
y,
],
},
};
The offset distance will now be the same regardless of the direction of the line. And that’s how we were able to create this cool centerline annotation on our maps! | https://medium.com/nyc-planning-digital/using-map-bearings-and-trigonometry-to-style-custom-mapbox-gl-draw-tools-455123abb68c | ['Taylor Mcginnis'] | 2019-03-29 19:00:18.288000+00:00 | ['Design', 'Ember', 'Mapbox', 'Engineering', 'Nyc Planning Labs'] |
What I Wish Someone Told Me When I Had My First Abnormal Pap | What I Wish Someone Told Me When I Had My First Abnormal Pap
The mantra I didn’t know I needed.
Photo by Gemma Chua-Tran on Unsplash
The day I got my first abnormal Pap results is in the top 10 worst days of my life. I was sitting at my desk at work when I got a call from an unknown number. I answered and was told I had atypical squamous cells and was positive for HPV.
I knew zero about what this meant, but I knew it wasn’t what I wanted to hear. I immediately walked back to my desk, told my boss I was leaving, drove home, and cried. I called my mom after some Google searches, proclaiming, “I think I have cancer.”
Growing up female, we all learn the dreaded pap smears will come. They are scary at first but become routine. What we don’t do, though, is educate young women about what it will be like if the results come back abnormal.
Because here is a scary statistic according to Dr. Hugh DePaulo: “as many as one in 10 pap smears come back abnormal nationwide”.
I repeat — one in ten.
That is a lot of abnormal pap smears daily in a population of over 328 million people. This also means there is a lot of fear if we don’t start talking about what an abnormal pap is and what it means and does not mean about you.
Here is what I wish someone told me the second I received the bad news about my abnormal pap.
It’s very unlikely to be cancer — don’t think the worst.
I just wish someone would have used those exact words, immediately. When I was given my results, a lot of medical jargon was used, jargon that sounded like cancer.
It was not until two more doctor’s appointments and a procedure later that I finally just straight up asked, “Do I have cancer?” Only then was I given a clear, straight answer, “No, you do not have cancer,” and could finally breathe again.
By the time I had heard it though, I had shed plenty of tears and lost many nights of sleep over the matter, so I am here to tell you that an abnormal pap does not equate to cancer. And only 1% of abnormal pap smears ever do.
Let your abnormal pap stay just that, a pap smear that is not normal. One you and the doctors are going to look more into. Do not let the scary medical terms and talk of changing cells make you fear the worst.
It is not the C-word until it is the C-word. And when it is, you will be told it.
You are not alone — women of all ages, including people you know have been through this.
I felt like I was the only 20-something-year-old who had ever gotten this news. I had never heard from a friend or a family member that they had an abnormal pap. I had never seen one on TV.
I immediately felt alone. I immediately felt dirty. I immediately felt like something was very wrong with me. Then that same day, similar stories started coming out of the woodwork.
My mom told me she had an abnormal pap after I was born. My roommate told me she had gone through a similar experience a few years back. A good friend let me know she had her first abnormal pap that year too.
Again, 1 in 10 women go through this — we just don’t talk about it. We don’t post our abnormal results on our Instagram reels, nor bring it up at girls' brunch. But I bet you if you are brave enough to ask, you will find so many women who will share they have been through the same exact thing.
You are not alone in this experience.
You will never know what “caused” this — so don’t waste your energy digging through your past.
My thoughts immediately spiralled into my sexual history. I felt like my abnormal past must be due to something I did or something I had not done in my past.
I went through thoughts of every male I had ever slept with and thought maybe him? Maybe he gave it to me! I thought of the HPV vaccine I had as a child and thought, goshdarnit, it must be that.
I even thought about the sexual abuse I had been through — my trauma. I was sure my past negative experiences manifested this into my body. This is where all that unhealed pain was going to be showing up.
Here is the kicker, though: neither I, nor anyone else, will ever know the root of these abnormal cells in our bodies. Sure, we can jump to conclusions, but our bodies are miraculous things beyond our understanding. Often beyond scientists’ understanding as well.
Do not waste your precious energy, trying to find the root of these abnormal cells. Waste your precious energy instead on healing.
Control what you can — and let your body and doctors take care of the rest.
Just as you will never know the cause of the abnormal pap, now that you know the cells are there, you can’t control them either. You likely cannot control the medical treatment you will receive or how long it may take to get that desired “normal” pap again.
The good news, however, is that you can control many things about your body, such as what you put into it and the tools you use to heal. You also can maintain your mindset and stress level while you go through this experience. Hello, meditation, yoga, and lots of sleep.
I google-searched a lot. I wanted to find the cure and control these cells deep inside my body. If someone would have told me to stick some herbs up my vagina, and this abnormal pap talk would be over — I probably would have.
But the way out of this situation is through it… and through it depends on you and your specific circumstance. Lean into your doctor. Do your research, so you feel empowered about your body and choices. Control your stress levels.
And let go of the rest. Your mental health will thank you later.
You stressing about your pap smear (now yearly) will only cause anxiety — twice.
I wish someone would have told me what a journey an abnormal pap is. There are paps and re-paps and procedures and waiting. For me, it was a three-year journey to finally hear those magic words: your pap is normal.
According to CDC guidelines on Pap smears, the recommendation for those with normal pap results is every 3 years. Unfortunately, but necessarily, that changes to every year for those with abnormal results.
Thus, you get to stress about hearing about your pap results every single year. But don’t worry about it twice! Stress about the results, sure, but don’t stress about making the appointment or the appointment itself.
Yes, the process is not fun, but it is to make sure you are healthy. It is to make sure it does not become the big scary C word. And you know what a leading factor of cancer is? Stress.
The number one thing you can do to support yourself through this is to manage that. | https://medium.com/fearless-she-wrote/what-i-wish-someone-told-me-when-i-had-my-first-abnormal-pap-10b9042f7fc1 | ['Alexandra Ringer'] | 2020-09-20 02:44:25.617000+00:00 | ['Mental Health', 'Women', 'Health', 'Self', 'Medicine'] |
Behavioral Similarities between React and Vue | Easy read, easy understanding. Good writing is writing that can be understood in easy ways
| https://medium.com/easyread/persamaan-perilaku-antara-react-dan-vue-f16ae8644e98 | ['Alif Irfan Anshory'] | 2019-01-21 15:42:24.964000+00:00 | ['React', 'JavaScript', 'Front End Development', 'Web Development', 'Vuejs'] |
4 Ways To Help Your Employees Build Their Confidence | Do you have a high achieving performer on your team that is talented, hard-working, and intelligent; but remains silent in group meetings and freezes in crucial calls? Freezing happens, but for some, freezing is a frequent obstacle to professional well-being. Every employee wants to feel seen, heard, and celebrated in the workplace, but for some, sharing ideas, thoughts, and accomplishments creates a total body and mind shut down.
They could be experiencing “destructive perfectionism.” Brené Brown defined this kind of perfectionism in her book, The Gifts of Imperfection, as ‘a self-destructive and addictive belief system that fuels this primary thought:
‘If I look perfect, live perfectly, and do everything perfectly, I can avoid or minimize the painful feelings of shame, judgment and blame.’
While that keeps them safe, it prevents them from showing up with vulnerability and courage to step into feeling confident and connected in the workplace. Here are four suggestions to create a supportive and connected environment for employees to thrive:
1. Improv Exercises to Get Out of the Head & Into the Body
Curio specializes in creating expert-led interactive virtual experiences to help employees get out of their heads and into their bodies through creative, low-pressure improv exercises. These moments give employees permission to relax and show up unrehearsed.
2. Quieting the Loud Inner Dialogue
A relaxed mind feels at ease to connect authentically in a casual conversation or a board room. Some inner dialogues are so critical and loud, it leads to constant overdrive for the mind, and it drowns out all other thoughts. Mindfulness is not about clearing the mind. Mindfulness techniques slow down and quiet the chatter, leaving room for present connection with the self and others. Choose a breath awareness meditation, shifting awareness from the stream of thoughts to the breath.
3. Body-based Relaxation Techniques
The body goes into fight, flight, freeze mode when it feels that a situation is a high risk. For someone with ‘destructive perfectionism’, the body sees many moments (like meetings, calls, interviews, presentations) as another risk to be seen as imperfect. The threat feels high, so they shut down. Choose a body scan meditation, shifting awareness to the physical body.
4. Rewiring Through Journaling
Popular Psychologist Dr. Nicole LePera @The.Holistic.Psychologist has an incredible free resource called the Future Self Journal that takes you through a daily writing practice of rewiring thought patterns and creating new pathways to help achieve new habits and mindset.
Try sharing these techniques with your team or adding a new well-being activity to your employee health programs. Creating a supportive and open environment builds more creative and collaborative teams. | https://medium.com/joincurio/4-ways-to-help-your-employees-build-their-confidence-573ddcf64a81 | ['Melissa Schwartz'] | 2020-11-23 15:01:10.665000+00:00 | ['Leadership', 'Mental Health', 'Culture', 'Mindfulness', 'Creativity'] |
The Amazing Benefits of Being in Nature | The Amazing Benefits of Being in Nature
Better health. Lower stress. Enhanced creativity. Sheer joy. It’s all out there, and it doesn’t take long.
When my son recently announced he wanted to try fishing, I jumped all over it, dug out my old fly rod, and we headed out beyond the city and suburbs to a stretch of river reputed to have some nice trout. We didn’t catch a thing, and we’ve had little luck on multiple, marvelous return trips.
See, it’s not just about the fish.
The Lower Salt River near Phoenix near sunset. Photo by Robert Roy Britt
At our favorite little stretch of river, red rock walls rise gloriously from the surprisingly verdant desert canyon, poking into predawn clouds one morning, glowing like fire one evening. The flutter of water lapping over rocks is interrupted by the sharp squawk of a heron. A bald eagle swoops down to outfish us in a real live David Attenborough moment. There’s no cell reception. The mind drifts like the laziest sections of river. Thoughts come unexpectedly, or not at all. The next riffle beckons. We breathe deep and move on.
“Nature holds the key to our aesthetic, intellectual, cognitive and even spiritual satisfaction,” said E.O. Wilson, the Pulitzer Prize-winning Harvard biologist.
That’s what I mean to say. And after decades of accumulating evidence, science suggests he’s onto something.
From hiking in the wilderness to living near urban green spaces, experiences with nature are linked to everything from better physical health and longer life to improved creativity, lower stress levels and outright happiness. One new study even suggests brief interludes in natural green spaces should be prescribed, like a nature pill, for people who are stressed. With the number of people around the globe living in urban areas expected to grow from 54 percent in 2015 to 66 percent by 2050, preserving or creating green space will be a key to overall human well-being.
We know all this intuitively. It’s why so many vacations center around camping, hiking or putting toes in the sand. We crave a connection with nature from deep in our primordial beings. And for good reason.
The Good of Green
In 2012, a group of backpackers set out on a multi-day excursion into the wild, with no phones or other electronics. Before departing, they took a test measuring creativity and problem-solving ability. After four days in the wild, the test was given again. Scores were up by 50 percent, from an average of 4.14 correct answers out of 10 before the hike to 6.08.
Like many psychology studies, this one could not prove cause and effect. It could not determine whether the improvement owed to nature itself, or if the disengagement from technology boosted scores, or if the physical activity perhaps played a role. But the researchers — University of Kansas psychologists Ruth and Paul Atchley and David Strayer of the University of Utah — shared their intuition at the time:
“Our modern society is filled with sudden events (sirens, horns, ringing phones, alarms, television, etc.) that hijack attention,” they wrote in the journal PLOS ONE. “By contrast, natural environments are associated with gentle, soft fascination, allowing the executive attentional system to replenish.”
“Spending time in, or living close to, natural green spaces is associated with diverse and significant health benefits.”
Other studies by then had already shown that the benefits of green space, however they accrue, are not reserved for the likes of Marlin Perkins or Bear Grylls. Any bit of green seems to help.
Back in 2006, research led by Jolanda Maas, a behavioral scientist now at Vrije University Amsterdam, found that the amount of green space within a roughly 2-mile radius “had a significant relation to perceived general health.” The conclusion was based on actual measurements of greenery compared to questionnaires filled out at doctor’s offices by 250,782 people in the Netherlands.
Maas and her colleagues did a similar study in 2009, looking instead at morbidity data. Of 24 diseases considered, the prevalence of 15 was lower for people living in areas with more green space. “The relation was strongest for anxiety disorder and depression,” they reported in the Journal of Epidemiology & Community Health.
Other research has shown that a room with a garden view and other access to green space can reduce stress and pain among hospital patients, boosting their immune systems and aiding recovery.
Likewise, gardening can reduce stress, one small study found in 2011. Interestingly, it outdid reading as a destresser. In the test, 30 people were made to perform a stressful task, then spent 30 minutes outside gardening or indoors reading. Levels of cortisol, a hormone released by stress, were measured repeatedly, and the subjects were asked about their mood before and after.
“Gardening and reading each led to decreases in cortisol during the recovery period, but decreases were significantly stronger in the gardening group,” the scientists wrote in the Journal of Health Psychology. “Positive mood was fully restored after gardening, but further deteriorated during reading.”
Fast forward to last year, when the benefits of nature on physical health were spelled out in a broad review of studies that involved data on more than 290 million people in 20 countries.
“We found that spending time in, or living close to, natural green spaces is associated with diverse and significant health benefits,” said lead author Caoimhe Twohig-Bennett of the University of East Anglia in England. “It reduces the risk of type II diabetes, cardiovascular disease, premature death, and preterm birth, and increases sleep duration. People living closer to nature also had reduced diastolic blood pressure, heart rate and stress,” as measured by cortisol levels, Twohig-Bennett said.
Nature or Nurture?
There’s an important caveat to many of these studies: Being outdoors often means being active.
Whether backpacking, gardening or simply walking briskly through an urban park, the subjects of studies like these may also be engaging in what other scientists call “moderate physical activity,” which even in small doses is known to improve mood, boost cognitive ability, benefit physical health and up the odds of living longer.
“People living near green space likely have more opportunities for physical activity and socializing,” Twohig-Bennett said, acknowledging the struggle to determine cause-and-effect.
The science indeed remains inconclusive on whether it’s nature itself or the physical activity associated with being in nature that brings health benefits, said Douglas Becker, a grad student at the University of Illinois who just published a study on the effects of nature on health care costs.
“Although it is strongly suggestive of both of those things… proximity and contact with nature leading to improved health outcomes and being around nature promoting physical activity,” Becker told me.
Becker examined health and environmental data from nearly all of the 3,103 counties in the continental U.S. He found that counties with more forests and shrublands had lower Medicare costs per person. The difference was not tiny. Each 1 percent of a county’s land covered in forest was associated with $4.32 in savings per person per year, on average.
Becker kindly did some additional math that I don’t fully understand, but it adds up to a boatload of money:
“If you multiply that by the number of Medicare fee-for-service users in a county and by the average forest cover and by the number of counties in the U.S., it amounts to about $6 billion in reduced Medicare spending every year nationally,” Becker said.
So. Plant more trees, right? Well…
The analysis, to be detailed in the May 2019 issue of the journal Urban Forestry and Urban Greening, does not prove that having more trees and shrubs directly lowers health care costs, Becker said. Rather, it’s one more bit of evidence pointing to possible proof that green spaces (especially forests, he notes) are good for our health.
“Being in sight of nature does indeed confer benefits,” he said.
Twohig-Bennett added another potential factor, gleaned from her review of the literature, suggesting trees may have as-yet unrecognized value in promoting well-being.
“Exposure to a diverse variety of bacteria present in natural areas may also have benefits for the immune system and reduce inflammation,” she said, pointing out that research has suggested there may be benefits to “forest bathing,” a popular therapy in Japan that involves just walking or even lying down in a forest.
“Much of the research from Japan suggests that phytoncides — organic compounds with antibacterial properties — released by trees could explain the health-boosting properties of forest bathing,” Twohig-Bennett said. The jury is still out on this therapy, but “our study shows that perhaps they have the right idea,” she said.
All in Your Head
In 2015, researchers at Stanford University added to evidence there are distinct benefits to nature itself, not just the walking that might get you there and back. They looked at the effects of hiking in a natural area (oak trees and shrubs) versus hiking in an urban setting (along a four-lane road). Before and after the hikes, they asked the participants a bunch of questions, and, importantly, they measured participants’ heart rates and respiration and did brain scans.
There were no notable differences in the physiology of the two groups after their hikes, the researchers reported in the Proceedings of the National Academy of Science. But those who hiked in nature had, afterward, less activity in a part of the brain called the subgenual prefrontal cortex. That’s where we ruminate repeatedly on negative emotions. Less is good.
“It demonstrates the impact of nature experience on an aspect of emotion regulation — something that may help explain how nature makes us feel better,” said lead author Gregory Bratman, then a graduate student at the university.
Bratman’s co-author, Stanford psychology professor James Gross, took the interpretation a step further, looking at the flip side of all this:
“These findings are important because they are consistent with, but do not yet prove, a causal link between increasing urbanization and increased rates of mental illness.”
And apparently, it’s never too soon to start an immersion in nature.
People who grew up in greener surroundings have up to a 55 percent lower risk of mental disorders as adults, according to a study of nearly 1 million Danes published earlier this year in the US journal Proceedings of the National Academy of Sciences.
“There is increasing evidence that the natural environment plays a larger role for mental health than previously thought,” said study leader Kristine Engemann of Aarhus University. “With our dataset, we show that the risk of developing a mental disorder decreases incrementally the longer you have been surrounded by green space from birth and up to the age of 10.”
Educators have long recognized the benefits of nature on childhood well-being. And as science increasingly supports the premise, the number of nature-based preschools and so-called “forest kindergartens” in the US has grown 60 percent or more in each of the past two years. More and more children are getting almost their entire early education in the great outdoors.
Nature Pill?
How much time do you need to spend in nature to see benefits? While few would argue that more isn’t better, it doesn’t take much, a new study finds.
Slipping away for just 20–30 minutes to sit or stroll in a natural environment reduces levels of cortisol, the stress hormone, according to a small study published April 4, 2019 in the journal Frontiers in Psychology.
Researchers had 36 urban dwellers take a break for 10 minutes or more, three times a week over eight weeks, and go to a place that “made them feel like they’ve interacted with nature.” Importantly, the volunteers were instructed not to do any aerobic exercise during the breaks and to avoid reading, conversations and using their phones.
The stress-reducing efficiency of the outings was greatest among those who spent 20 to 30 minutes in their happy places, the researchers concluded.
“We know that spending time in nature reduces stress, but until now it was unclear how much is enough, how often to do it, or even what kind of nature experience will benefit us,” said the lead author of the paper, MaryCarol Hunter of the University of Michigan. “Our study shows that for the greatest payoff, in terms of efficiently lowering levels of the stress hormone cortisol, you should spend 20 to 30 minutes sitting or walking in a place that provides you with a sense of nature.”
Hunter and her colleagues suggest healthcare practitioners could prescribe a “nature pill” based on this finding.
Combined with exercise, good sleep and a good diet, a nature pill — or whatever you prefer to call it — could be viewed as a pillar of science-based well-being. For my son and me, trekking to our favorite fishing hole every day isn’t practical. But there’s a hiking trail that starts not far from our home, leading out into the desert and up a mountain. We’ll be out there. | https://medium.com/luminate/the-amazing-benefits-of-being-in-nature-e998d93f51a0 | ['Robert Roy Britt'] | 2019-04-19 12:54:12.689000+00:00 | ['Nature', 'Happiness', 'Health', 'Wellbeing', 'Science'] |
A Guide to Medium Curation | I love the word curation. It makes me think of museums, of course. But also, it makes me think of the art of pulling together disparate things and figuring out how they fit together.
Curation is the biggest buzz word in the community of Medium writers right now. Several times a day, questions about curation pop up in Facebook groups I belong to for Medium writers.
What is it? Why does it happen? How does it happen? What if it doesn’t happen? Is there some kind of magic bean involved?
I thought I’d see if I could curate a post that answers some of those questions. (Cute, right?)
Let’s start with a definition.
Curation is when Medium’s elevators — the people tasked with checking out posts and ‘elevating’ them by curating them into the platform’s many topics — choose a post to share more widely with readers.
When a post is curated, it shows up on the page for the topic or topics that it has been curated in. It can also be distributed to Medium members who follow those topics.
For instance, this post of mine was recently curated into the writing topic.
I can see that it’s been curated because the word ‘writing’ appears above it on my stats page.
And when I click on ‘details’ I can see it there, too. If it was curated into more than one topic, all of the topics would show up on the detailed stats page.
And, when I click on that little boxed word ‘writing’ I’m taken to the topics page, where I can see my post listed.
You don’t have to do anything to submit your post for curation. Elevators automatically look at posts and decide whether or not to curate them.
Some posts are passed over without being reviewed, due to time constraints on Medium’s part. I’m not sure what causes this or how posts are sorted into this category.
I do know that if my posts were regularly getting that note, I would work toward figuring out how to stop that from happening by increasing the quality of my posts over time.
The way a post is actually shared is called distribution.
When a post link shows up at the bottom of the post you’re reading or you get an email or text notification about a new post — that’s Medium distributing your post to readers.
Medium will share this particular post by distributing it to some readers who follow the ‘writing’ topic, as well as people who follow me and people who follow the publication I posted this article in.
If people read and respond to it, they’ll distribute it more.
Medium offers guidelines for curation.
Medium cares most about the quality of the post. Is your writing clear? Is it grammatically correct and free of errors? Is it an interesting read?
They also like to see posts with clear headlines, subheads, and photographs that are properly cited.
At least as important as all of that technical stuff is this: Medium strives to be ad-free. They charge their members a monthly subscription, and those readers expect an ad-free reading experience.
If you have affiliate links or less-than-subtle calls to action (say, to join your email list), or you are selling something in your post, it is unlikely to be curated even though you’re not breaking any rules.
Medium allows sign up forms, for instance, in posts that are part of the Medium Partnership Program (behind the paywall.) They are just less likely to curate those posts.
Medium also allows affiliate links in posts behind the paywall, as long as you use a disclosure letting readers know that you’ve used those links. But, again, they are unlikely to curate those posts.
And Medium is also unlikely to curate any post that looks like it’s part of a series that is not their own. If you write a weekly series, for instance, those posts aren’t against any rule, but Medium will probably not curate them into their topics.
Medium rarely curates posts that are about writing on Medium, by the way. I do not expect this post to be curated.
Medium does a good job of letting you know whether you’ve been curated.
On the detailed stats page for each of your posts, you’ll see a message like this if a post has been curated:
If your post was not curated, you’ll see a message like this:
Medium does not curate every post. Sometimes well-written posts that meet all the criteria are passed over. However if you’re finding that most of your posts aren’t being curated, here are a few ideas.
First — not being curated is not the end of the world.
Your post is still made available to your followers. It’s still comes up in searches or if someone flips through posts in a tag you’ve used.
You’ll likely get less traffic if your post isn’t curated — but your post hasn’t been shipped off to Siberia.
Take a hard look at the quality of your writing.
Your posts should be clearly written and as free from grammatical and spelling errors as possible.
Large chunks of narrative are hard to read online (and in print, actually), so make sure you’re breaking your posts up with lots of white space.
Use subheads in your text, to help with the white space and add to the reading experience. Bullet points help with this as well.
Make sure that you’re digging deep enough in your work. If you’re writing something that has been said lots of times, by lots of writers, and not adding anything new to the conversation, that could be why your posts are not curated.
Medium suggests asking for peer feedback on your writing and that’s a good idea. It might be tough to hear, but if you’re not being curated it could be because your writing isn’t up to par.
That doesn’t mean you’re a bad writer or that you should quit. It means that you should read a lot and implement what you learn into your work. Medium is a unique platform that lets you publish while you’re learning. Take advantage of that.
Use proper formatting.
Medium has let us know that they like a clear headline written with title case (most of the words capitalized, no end punctuation.) They also like a subhead that gives more information and is written in sentence case (just like it sounds, like a sentence that starts with a capital and ends with punctuation.)
They like an interesting photograph at the top of your post that’s properly cited. They even have a built in way to do that.
If your posts are not being curated and you’re not following these basic formatting guidelines, that could be why.
Make sure you’re not advertising.
This one impacted me quite a lot. I sometimes use affiliate links in my posts and building my email list is very important to me. Having multiple income streams is always high on my priority list.
I had to decide which posts I wanted to optimize for Medium curation. For those posts, I don’t include any email sign-up forms or affiliate links. For a while I had a link in the bio I put at the bottom of each of my posts that I finally realized was keeping me from being curated more often. When I took that link out, my curation rate increased.
Write stand-alone posts.
Medium is unlikely to curate a post that feels like it is part of a series.
I happen to be the kind of writer who really enjoys writing in series. Sometimes I just write my series and realize that Medium isn’t going to help me promote those posts as much as some of my other work.
Other times, I try to keep the fact that I’m writing a series more subtle. If I want my posts to be curated, I don’t name the series, for instance. I try to make the post feel like someone could read it by itself and not feel lost.
I might post those under a tag in my own publications, to make them stand out as a series. Or call it a series in my own promotion efforts (for instance, when I post my links to Facebook or my email list.) But the actual post needs to read as complete all by itself if I want it to have a chance at curation.
Don’t rely on curation as your own form of promotion.
You do not have control over whether or not Medium curates your posts, beyond making sure that you meet their guidelines.
Meeting those guidelines is not a guarantee.
One of the best things you can do is focus on the things you can control. Promote your own posts via your social media channels. Start to build an email list, so that you can distribute your posts to readers on your own. Make a Medium publication for your posts so that you can use Medium’s ‘letters’ feature to reach out to followers.
Also remember that not every post is a great fit for Medium. For instance, I’ve found that reviews, recipes, and tactile how-to articles don’t gain much traction here. I’ve also written some posts here that didn’t get much Medium-specific traffic, but ranked on Google (which brings readers, but usually not much Medium income). I’ve started moving some of those posts to Hubpages, where SEO and Google ranking matter more, to see how they do there. | https://medium.com/the-write-brain/a-guide-to-medium-curation-7d5be2dd97db | ['Shaunta Grimes'] | 2019-11-28 22:09:17.054000+00:00 | ['Medium', 'Freelancing', 'Writing', 'Money', 'Creativity'] |
Exploring Important Feature Representations in Deep One-Class Classification | ICLR 2021
Photo by niculcea florin on Unsplash
The data we routinely collect contains only a small amount of anomalous data — a pleasing fact of everyday life :-) Only a few defective products turn up on factory production lines, and medical data on rare cases is presented at medical society meetings as new discoveries. In other words, collecting anomalous data is a very costly task.
It is clearly more reasonable to train on normal data alone to detect anomalous cases than to spend a lot of money collecting every possible anomalous pattern. This learning method — training on a dataset of normal cases only, so that the various anomalous cases can be flagged as deviations from it — is called one-class classification.
In this story, Learning and Evaluating Representations for Deep One-class Classification, by Google Cloud AI, is presented. It was published as a conference paper at ICLR 2021. The paper proposes a two-stage framework for deep one-class classification, composed of state-of-the-art self-supervised representation learning followed by generative or discriminative one-class classifiers [Sohn et al., 2020]. The major contribution of this paper is the proposal of a novel distribution-augmented contrastive learning method. The framework not only learns a better representation, but also permits building one-class classifiers that are more faithful to the target task.
They even made the code available for everyone on their GitHub!
Let’s see how they achieved that. I will explain only the essence of DROC, so those who want to know more should refer to the DROC paper.
What does this paper say?
In this paper, an anomaly detection approach with a two-stage framework for Deep Representation One-class Classification (DROC) is proposed. In the first stage, a deep neural network is trained with self-supervised learning, yielding a mapping f from the data to a generic high-level latent representation. In the second stage, the mapping f obtained in the first stage is used to map the data into the latent space, and a traditional one-class classifier such as OC-SVM or KDE is applied to the resulting representations [Sohn et al., 2020].
Fig. 1 Overview of two-stage framework for building a deep one-class classifier. (a) In the first stage, learning representations from one-class training distribution using self-supervised learning methods, and (b) in the second stage, training one-class classifiers using learned representations.
Where is the novelty in this paper?
・In order to adapt contrastive learning [Chen et al., 2020] to one-class classification, the authors propose a distribution-augmented contrastive learning method. Specifically, the system learns by identifying the type of augmentation applied to the data, using geometric transformations of the image [Gidaris et al., 2018]: horizontal flips and rotations (0°, 90°, 180°, 270°). This makes it possible to deal with outliers (anomalous data) that are rotated versions of normal images. The method optimizes a self-supervised loss function that minimizes the distance between samples from the same image under different data augmentation functions, and maximizes the distance between samples from different images under the same augmentation function. This reduces the uniformity of the representation across the hypersphere and allows for separation from outliers.
・The idea that “the less uniformity, the better for one-class classification” is wrong!! The authors identify a fundamental trade-off between the amount of information and the uniformity of the representation. It is often thought that lower uniformity is always better for one-class classification, but DistAug effectively shows that this is in fact not true.
What is contrastive learning?
Contrastive learning [Chen et al., 2020, Le-Khac et al., 2020] formulates a task of telling similar and dissimilar examples apart. A model first learns a general representation of images on an unlabeled dataset, and is then fine-tuned on a small dataset of labeled images for a specific classification task. Using this approach, a machine learning model can be trained to classify similar and dissimilar images.
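To make this concrete, here is a minimal numpy sketch (my own illustration, not the paper's code) of the NT-Xent-style objective that SimCLR optimizes for a single positive pair — the function name and the toy batch below are assumptions:

```python
import numpy as np

def nt_xent_pair_loss(z, i, j, tau=0.5):
    """Contrastive loss for one positive pair (i, j), given L2-normalized
    embeddings z of shape [batch, dim]: attract z[j], repel everyone else."""
    sim = (z @ z.T) / tau           # pairwise cosine similarities (z is normalized)
    np.fill_diagonal(sim, -np.inf)  # a sample is never its own negative
    # -log( exp(sim_ij) / sum_k exp(sim_ik) )
    return -sim[i, j] + np.log(np.exp(sim[i]).sum())
```

The loss is small when the two augmented views of the same image are closer to each other than to every other sample in the batch.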
The SimCLR framework [Chen et al., 2020] is a powerful example: it learns representations by maximizing the agreement between different augmented views of the same data example via contrastive learning in the latent space. For more details, I refer you to the excellent descriptions by Aakash Nain and Thalles Silva.
Fig. 2 Simple framework for contrastive learning of visual representations.
Distribution-augmented contrastive learning
In some cases, training a deep one-class classifier results in a degenerate solution that maps all data onto a single representation, which is called hypersphere collapse [Ruff et al., 2018]. The authors propose distribution-augmented contrastive learning, motivated by reducing uniformity across the hypersphere to allow separation from outliers.
As shown in Figure 3, DistAug is used to augment the set of training images. The model not only learns to identify different instances from the original distribution, but also identifies the type of augmentation applied (such as rotation), distinguishing instances drawn from the different augmented distributions.
Fig. 3 Distribution-augmented contrastive learning
Distribution augmentation (DistAug)
Fig. 4 (a) When representations are uniform, isolating outliers is hard. (b) Reducing uniformity makes the boundary between inlier and outlier clear. (c) Distribution augmentation makes the inlier distribution more compact.
Distribution augmentation (DistAug) training is a distribution augmentation approach for one-class contrastive learning inspired by RotNet-style geometric transformations [Golan et al., 2018]. It models not the training data distribution itself, but the union of the training data with copies augmented by geometric transformations such as rotations and horizontal flips. As shown in Figure 4, to isolate outliers it is more effective to augment the distribution as in (c) than merely to decrease the uniformity as in (b). The authors make it clear that their argument is not “less uniformity is better for OCC.”
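As a rough illustration (my own sketch, not the authors' code), augmenting the training distribution with rotated copies of a batch can be as simple as:

```python
import numpy as np

def augment_distribution(images):
    """Return the union of a batch with its 90/180/270-degree rotations.
    images: array of shape [N, H, W, C]; assumes square images so shapes line up."""
    return np.concatenate([np.rot90(images, k, axes=(1, 2)) for k in range(4)])
```

Per the description above, the model is then also trained to identify which of the four rotations each sample came from, which keeps the learned representation from spreading uniformly over the hypersphere.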
Results
From the table, we can see that the distribution-augmented contrastive learning method has improved on previous studies in experimental tests of detection and localization, in both the object and texture categories.
This experiment shows that methods that rely on geometric transformations are particularly effective in detecting anomalies in the “object” category, since they learn to represent visual objects.
Experimental results using the MVTec dataset
Figures 5 and 6 visualize the localization of defects in industrial products from the MVTec dataset. Each figure shows, from left to right, the defective input image from the test set, the ground-truth mask, and the heatmap visualization of the localization.
Fig. 5 Visualization of localization using the MVTec dataset
Fig. 6 Visualization of localization using the MVTec dataset
Reference
[Chen et al., 2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. ``A simple framework for contrastive learning of visual representations,’’ arXiv, abs/2002.05709, 2020
[Sohn et al., 2020] Sohn, Kihyuk, C. Li, Jinsung Yoon, Minho Jin and T. Pfister. ``Learning and Evaluating Representations for Deep One-class Classification,’’ ArXiv, abs/2011.02578, 2020
[Gidaris et al., 2018] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. ``Unsupervised representation learning by predicting image rotations,’’ In Sixth International Conference on Learning Representations, 2018.
[Ruff et al., 2018] Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. ``Deep one-class classification,’’ In International conference on machine learning, pages 4393–4402, 2018
[Golan et al., 2018] Izhak Golan and Ran El-Yaniv. ``Deep anomaly detection using geometric transformations,’’ In Advances in Neural Information Processing Systems, pages 9758–9769, 2018.
Past Paper Summary List
Deep Learning method
2020: [DCTNet]
Uncertainty Learning
2020: [DUL]
Anomaly Detection
2020: [FND]
One-Class Classification
2019: [DOC]
2020: [DROC]
Image Segmentation
2018: [UOLO]
2020: [ssCPCseg]
Image Clustering
2020: [DTC] | https://medium.com/swlh/exploring-important-feature-repressions-in-deep-one-class-classification-droc-d04a59558f9e | ['Makoto Takamatsu'] | 2020-12-19 14:23:05.870000+00:00 | ['Machine Learning', 'Anomaly Detection', 'Artificial Intelligence', 'Computer Vision', 'Deep Learning'] |
Modularizing the logic of your Vue.js Application | As an application grows, it is, unfortunately, common to see poorly designed components, with a lot of duplicate code, business logic scattered across methods, complex logic embedded in the templates, and so on. The components become large, brittle, and hard to change and test. The application becomes increasingly hard to evolve, sometimes reaching a point where the developers are eager to start from scratch, preferring a costly and risky rewrite than handling the current application state.
It doesn’t have to be that way. We can and should do better. In this article, we will discuss moving the bulk of the application’s business logic into a functional core that will be easy to reuse, easy to test and change, and which will lead to smaller, leaner, and more maintainable components.
We will pick up from where we left in our previous article, so you might want to check that first if you still haven’t.
Interfaces and Functional Modules instead of Classes
When we discussed adopting TypeScript in Vue.js applications, we took a somewhat unconventional route. Instead of modeling our data around classes, we have defined very lean interfaces to add type-annotations to our data. We have only used the fields that make up our objects in the interfaces — we have not mentioned methods or any operation over the data yet.
This article does not aim at doing an in-depth debate about Functional vs. Object-Oriented programming paradigms. Both have pros and cons, but I tend to prefer a functional style, because it is easier to follow and to test, in my opinion. Thus, we will use a functional approach to build our application core, and we will try to show how it leads to a modular, testable, and reusable codebase.
We will continue developing the simplified Invoice application that we started in the previous article.
Planning the app functionality
Before we jump right into the code, let’s talk about what functionalities we need in our application. In a real scenario, we would probably receive the requirements from a task description developed by a product team, or, if working on a side-project that we fully control, we would define that ourselves.
For our simple app, we will need ways to create and manipulate invoices. This will involve adding, removing, and changing line items, selecting products, and setting rates and quantities. We will also need a way to instantiate User and Product objects easily.
As we did for the types definitions, we want a modular way of building these functionalities.
Building our modules
We will put our modules inside a modules directory under src . We will split the functionality into as many files as it is sensible to do, grouping related functionality into single modules.
Let’s start with the User and Product modules:
User module
Product module
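The embedded gists did not survive here, so below is a reconstructed sketch of the two modules. The field names (`name`/`email` for `User`, `name`/`description` for `Product`) and the `createUser`/`createProduct` names are assumptions — in the article's codebase each module file simply exports a `create` function and the interfaces live in `src/types`:

```typescript
// Hypothetical interfaces (the real ones come from the previous article's types)
interface User {
  name: string;
  email: string;
}

interface Product {
  name: string;
  description: string;
}

// modules/user.ts — default values let us call create() with no arguments
function createUser(name = "", email = ""): User {
  return { name, email };
}

// modules/product.ts — same shape, deliberately kept separate from the user module
function createProduct(name = "", description = ""): Product {
  return { name, description };
}
```

Calling `createUser()` with no arguments still yields a valid, fully-typed `User` object, which is exactly what the default parameters are for.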
These two modules are very simple and similar, but they serve as a container for all the functionality related to users or products we might need down the road. Even though it looks like we are repeating code, we should not try to unify these create functions in any way — that would cause coupling between unrelated concepts and would make the code harder to change.
Notice how we have defined default values for all the parameters. This will allow us to call the create functions without passing arguments and still have a valid object of the appropriate type.
One thing you might be concerned about in the code above is that we are listing all of the fields as individual parameters. We only have a couple of arguments in each of the create functions, but the number of parameters could grow a lot as we make our models more complex. We will ignore it for now, but we will revisit this when we discuss defining a clear application boundary in a future article.
Even though we have declared the LineItem interface in the same file as the Invoice , we will use a separate file for the Invoice and LineItem modules. We could group the invoice and the line item modules using a directory, but we will keep it simple and flat for now. You can use any folder structure that suits your particular situation.
The lineItem module will be pretty simple as well:
LineItem module
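The gist is missing, so here is a sketch of what the `LineItem` module might look like (field names as assumed above; in the real code this would export `create` from `modules/lineItem.ts`):

```typescript
interface Product {
  name: string;
  description: string;
}

// A line item ties a product to a quantity and a rate
interface LineItem {
  product: Product | null;
  quantity: number;
  rate: number;
}

// modules/lineItem.ts — defaults make create() callable with no arguments
function createLineItem(
  product: Product | null = null,
  quantity = 0,
  rate = 0
): LineItem {
  return { product, quantity, rate };
}
```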
Let’s move on to the Invoice module now. It will be a more complex module, so we are going to stub out the functions before implementing them.
Invoice module stub
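A sketch of how that stub might look — `create` already returns an empty invoice, while the mutation functions throw until they are implemented (names and shapes are assumptions, condensed into one snippet):

```typescript
interface LineItem {
  product: unknown;
  quantity: number;
  rate: number;
}

interface Invoice {
  lineItems: LineItem[];
  total: number;
}

// modules/invoice.ts — stubbed out before writing the tests
function createInvoice(): Invoice {
  return { lineItems: [], total: 0 };
}

function addLineItem(_invoice: Invoice, _lineItem: LineItem): Invoice {
  throw new Error("not implemented");
}

function removeLineItem(_invoice: Invoice, _index: number): Invoice {
  throw new Error("not implemented");
}

function updateLineItem(
  _invoice: Invoice,
  _index: number,
  _lineItem: LineItem
): Invoice {
  throw new Error("not implemented");
}
```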
Developing the Invoice module with TDD
When we modify the line items in an invoice, by adding, removing, or changing a line item, we have to recalculate the invoice total. This is critical data in our application — we cannot afford to have the wrong amount calculated for the invoice — so we should test the invoice module thoroughly. With our modular core logic, it will be pretty straightforward to add tests.
When we scaffolded this app, we didn’t add any of the unit test features available, but vue-cli makes it very easy to add plugins to existing projects. We will use jest to write our tests, and we can add it to our project by running:
$ vue add unit-jest
That will take care of installing and configuring jest to work in a Vue project. Let’s write a few tests for our Invoice module.
Invoice module tests
These tests are a little bit lengthy, but they are easy to follow. We start by ensuring that our create function in the invoice module returns an empty invoice. Then we move on to test the other parts of our Invoice module. We have added a testData function to help creating objects used in the tests.
In a production-grade application, we would add more tests, especially to cover edge cases, making sure our module would work in every possible scenario. But for this article, this is good enough.
We should now run these tests:
Failing tests
As expected, the tests fail because we haven’t implemented our functions yet. Let’s do that now.
Invoice module implementation
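A reconstructed sketch of the implementation described below — the `calculateTotal` and `setLineItem` helpers plus the public functions, condensed into one self-contained snippet (names follow the article's prose; the exact code is an assumption):

```typescript
interface Product {
  name: string;
  description: string;
}

interface LineItem {
  product: Product | null;
  quantity: number;
  rate: number;
}

interface Invoice {
  lineItems: LineItem[];
  total: number;
}

// modules/lineItem.ts — new helper used by the invoice module
function calculateLineTotal(lineItem: LineItem): number {
  return lineItem.rate * lineItem.quantity;
}

// modules/invoice.ts
// Helper: sum the line totals of every line item
function calculateTotal(invoice: Invoice): number {
  return invoice.lineItems
    .map(calculateLineTotal)
    .reduce((sum, lineTotal) => sum + lineTotal, 0);
}

// Helper: return a new invoice with the given line items and a recalculated total
function setLineItem(invoice: Invoice, lineItems: LineItem[]): Invoice {
  const updated = { ...invoice, lineItems };
  return { ...updated, total: calculateTotal(updated) };
}

function createInvoice(): Invoice {
  return { lineItems: [], total: 0 };
}

function addLineItem(invoice: Invoice, lineItem: LineItem): Invoice {
  return setLineItem(invoice, [...invoice.lineItems, lineItem]);
}

function removeLineItem(invoice: Invoice, index: number): Invoice {
  return setLineItem(
    invoice,
    invoice.lineItems.filter((_, i) => i !== index)
  );
}

function updateLineItem(
  invoice: Invoice,
  index: number,
  lineItem: LineItem
): Invoice {
  return setLineItem(
    invoice,
    invoice.lineItems.map((item, i) => (i === index ? lineItem : item))
  );
}
```

Every public function returns a fresh invoice instead of mutating the input, which keeps the module easy to test and plays nicely with Vue's reactivity.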
We have created two helper functions to avoid repeating code. The first one was the calculateTotal function. It takes the invoice and returns the total amount. It does so by first calculating the subtotal for each line item, using a new function we have added to the LineItem module, then summing all the line item totals. Let's see what the LineItem module looks like now.
Adding the calculateLineTotal function to the LineItem module
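The gist is gone, but per the description below it boils down to a one-liner (a sketch; the surrounding module matches the earlier `LineItem` assumptions):

```typescript
interface LineItem {
  product: unknown;
  quantity: number;
  rate: number;
}

// modules/lineItem.ts — the newly added helper
function calculateLineTotal(lineItem: LineItem): number {
  return lineItem.rate * lineItem.quantity;
}
```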
The calculateLineTotal function is very simple. It just multiplies the rate by the quantity. Still, having it in a separate function makes our code easier to follow and easier to change.
Back to the invoice module, we can see that the setLineItem helper function takes an invoice and a list of line items and then returns an updated invoice with the given line items and the calculated total amount.
With these helper functions in place, implementing our public functions is very simple — they just need to generate the new list of line items (based on the operation) and use the helper functions to return an updated invoice.
And now our tests pass!
Tests now succeed
Using the modules in a Vue component
Let’s rewrite our createInvoice method in the HelloWorld.vue component, just to have a taste of how we use our modules in a component.
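The component gist did not survive; here is a sketch of what the rewritten method might look like, flattened to plain functions for readability. Every name, field, and sample value below is an assumption — in the component this logic would sit under `methods`, importing from the modules instead of redefining them:

```typescript
// Condensed types and module functions (in the app these come from src/types and src/modules)
interface User { name: string; email: string }
interface Product { name: string; description: string }
interface LineItem { product: Product | null; quantity: number; rate: number }
interface Invoice { lineItems: LineItem[]; total: number }

const createUser = (name = "", email = ""): User => ({ name, email });
const createProduct = (name = "", description = ""): Product => ({ name, description });
const createLineItem = (
  product: Product | null = null,
  quantity = 0,
  rate = 0
): LineItem => ({ product, quantity, rate });
const calculateLineTotal = (li: LineItem): number => li.rate * li.quantity;
const createInvoice = (): Invoice => ({ lineItems: [], total: 0 });
const addLineItem = (invoice: Invoice, lineItem: LineItem): Invoice => {
  const lineItems = [...invoice.lineItems, lineItem];
  const total = lineItems.map(calculateLineTotal).reduce((s, t) => s + t, 0);
  return { ...invoice, lineItems, total };
};

// What the createInvoice method in HelloWorld.vue might become:
function createInvoiceExample(): Invoice {
  const user = createUser("Jane Doe", "jane@example.com"); // would be the authenticated user
  const product = createProduct("Widget");                 // would come from a product selector
  const lineItem = createLineItem(product, 2, 50);         // rate/quantity would come from UI inputs
  console.log(`Creating invoice for ${user.name}`);
  return addLineItem(createInvoice(), lineItem);
}
```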
Again, this is a contrived example, but it already looks better than before. We now have the objects with the appropriate type from the modules’ create functions (instead of having just the type inference). In a more realistic scenario, the user would be the authenticated user; the product would come from some selector that reads from a product list; the rate and quantity would be set in the UI using inputs; and it would be possible to add/remove/update line items directly in the UI. We will build those components in the next article.
Wrapping up
At this point, we can have a fair degree of confidence that our invoice related logic is working. We should probably add some more tests, but we have a great baseline to develop our invoice application.
We have built a solid functional core for our application logic. We are not spreading the business rules across components and, when the time comes to wire this functionality up with the UI, the components will end up being a skinny layer to connect the user actions to our core modules.
Let me know what you think of this approach in the comments!
Shameless Plug: If you liked this article and there are openings in your company, I’m currently looking for a job as a Senior Full Stack Engineer. You can check my Linkedin and drop me a line at vinicius0026 at gmail dot com if you think I’m a good fit. Cheers! 😃 | https://medium.com/swlh/modularizing-the-logic-of-your-vue-js-application-5b920e17c25e | ['Vinicius Teixeira'] | 2020-06-02 16:16:51.759000+00:00 | ['JavaScript', 'Typescript', 'Vuejs'] |
First Confirmed Case of Coronavirus Reinfection Doesn’t Mean We’re All Doomed | First Confirmed Case of Coronavirus Reinfection Doesn’t Mean We’re All Doomed
The case is one in 23 million
Photo: Li Zhihua/China News Service/Getty Images
Scientists in Hong Kong reported today the first confirmed case of reinfection with SARS-CoV-2, the virus that causes Covid-19. Since the beginning of the pandemic, there have been concerns about long-term immunity to the novel coronavirus, and several possible cases of reinfection were reported in the media. But until now, none were confirmed scientifically.
The question has always been whether reports of a person testing positive, recovering from the virus and testing negative, and then testing positive again weeks or months later are because of faulty testing, “dead” viral RNA lingering in the body, a reemergence of the same infection, or a genuine instance of reinfection. The Hong Kong report is the first to use genetic testing to confirm that the two cases in the same person were caused by slightly different strains of the virus.
According to a manuscript leaked by South China Morning Post reporter Lilian Cheng on Twitter, the patient, a 33-year-old man with no preexisting conditions, first got sick in March, presenting with a cough, sore throat, fever, and headache. He tested positive for SARS-CoV-2 on March 26 and was monitored in the hospital for two weeks (standard protocol for patients in Hong Kong regardless of disease severity) until he was discharged on April 14 following two negative tests. The second time he tested positive was after returning to Hong Kong from Spain on August 15, when he was screened at the airport as part of standard reentry procedures. This time, however, he was completely asymptomatic and never developed a cough, fever, or any other signs of Covid-19.
Scientists at the University of Hong Kong sequenced the genome of the virus from the tests taken in March and August and discovered that they differed in several key areas, indicating that they were two different strains of SARS-CoV-2. Specifically, 24 nucleotides — the “building blocks” that make up the virus’s RNA — were different between the two infections. The August strain was a variation of the virus known to be circulating primarily in western Europe, suggesting the man was reinfected while abroad.
While the news of a legitimate reinfection is worrying, virologists and immunologists took to Twitter to reassure people that this doesn’t mean we’re all doomed. In fact, scientists have been expecting reinfection to occur all along. Akiko Iwasaki, PhD, a professor of immunobiology at Yale University, tweeted, “This is no cause for alarm — this is a textbook example of how immunity should work.”
Virus-specific antibodies created by the immune system are central to the question of immunity, and varying reports have emerged over the past few months about the quantity, quality, and duration of antibodies produced in response to SARS-CoV-2. The vast majority of people who’ve recovered from Covid-19 do develop antibodies to the virus, and typically the more severe the infection, the more antibodies they produce, providing them with protection against reinfection. However, according to the leaked manuscript, which is under review at the academic journal Clinical Infectious Diseases, the Hong Kong patient had no detectable antibodies after his first infection.
It’s possible this man had a very mild initial case of Covid-19 or an abnormal immune response that resulted in fewer antibodies being produced. Either way, the absence of antibodies after the man’s first infection could explain how he became infected with a different strain a second time. In contrast, a preprint study published earlier this month and covered in the New York Times reported that three people who tested positive for antibodies were spared in a large outbreak that infected 104 people on a fishing boat from Seattle.
Given that most people do develop antibodies to the virus, Angela Rasmussen, PhD, a virologist at Columbia University, tweeted that the Hong Kong case “doesn’t have major implications for immunity since most people DO have IgG [antibodies] after recovering from infection.”
The fact that the second case was asymptomatic is also a good sign because it suggests that there is some protection (perhaps from T cells) that made the second infection less severe. “While immunity was not enough to block reinfection, it protected the person from disease,” Iwasaki tweeted. What’s more, the man developed a robust antibody response after the second infection.
Finally, it’s important to remember that this is one confirmed case of reinfection out of the more than 23 million cases of Covid-19 worldwide, and one in 23 million is pretty good odds. As Rasmussen pointed out on Twitter, “How many people were screened to find this single case of reinfection? There’s no indication that this is anything other than a rare case of someone getting reinfected after not developing immunity to the first infection.” | https://coronavirus.medium.com/first-confirmed-case-of-coronavirus-reinfection-doesnt-mean-we-re-all-doomed-85bde2ab9e72 | ['Dana G Smith'] | 2020-08-24 22:12:30.261000+00:00 | ['Hong Kong', 'Immunity', 'Health', 'Covid 19', 'Coronavirus'] |
How to manage files in Google Drive with Python | As a Data Analyst, most of the time I need to share my extracted data to my product manager/stakeholder and Google Drive is always my first choice. One major issue over here is I have to do it on weekly or even daily basis, which is very boring. All of us hate repetitive tasks, including me.
Fortunately, Google provides API for most of its service. We are going to use Google Drive API and PyDrive to manage our files in Google Drive.
Using Google Drive API
Before going into coding, you should get your Google Drive API access ready. I have written an article on how to get your Google Service Access through Client ID. You should end up with a JSON file that contains the secret key to access your Google Drive.
Getting Started with PyDrive
Installing PyDrive
We will use the Python package manager to install PyDrive
pip install pydrive
Connecting to Google Drive
PyDrive has made the authentication very easy with just 2 lines of code.
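The referenced snippet (its gist embed is missing here) amounts to the following. It expects `client_secrets.json` in the working directory, and `LocalWebserverAuth()` opens a browser window for consent, so it only runs interactively — wrapping it in a function (with the imports deferred so the sketch can be pasted anywhere) is my own addition:

```python
def connect_to_drive():
    """Authenticate against Google Drive and return a client object.

    Imports are inside the function only so this sketch stands alone;
    at the top of a real script you would hoist them."""
    from pydrive.auth import GoogleAuth
    from pydrive.drive import GoogleDrive

    gauth = GoogleAuth()         # picks up client_secrets.json automatically
    gauth.LocalWebserverAuth()   # fires up the browser and asks for authorization
    return GoogleDrive(gauth)    # Google Drive object used to handle files
```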
You have to rename the JSON file to “client_secrets.json” and place it in the same directory with your script.
gauth.LocalWebserverAuth() will fire up the browser and ask for your authentication. Choose the google account you want to access and authorize the app.
drive = GoogleDrive(gauth) create a Google Drive object to handle file. You will be using this object to list and create file.
Listing and uploading file in Google Drive
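The gist with the listing/upload code is missing; here is a sketch of the same flow as two helper functions. The function names are mine; `drive` is the `GoogleDrive` object from the authentication step, and the query and metadata shapes follow PyDrive's conventions:

```python
def list_files(drive, folder_id="root"):
    """Print the title and ID of every file/folder under the given folder."""
    query = f"'{folder_id}' in parents and trashed=false"
    for f in drive.ListFile({"q": query}).GetList():
        print(f"title: {f['title']}, id: {f['id']}")

def upload_csv(drive, folder_id, local_path):
    """Upload a local CSV into the Drive folder identified by folder_id."""
    file1 = drive.CreateFile({
        "mimeType": "text/csv",
        "parents": [{"kind": "drive#fileLink", "id": folder_id}],
    })
    file1.SetContentFile(local_path)  # sets the content; nothing is uploaded yet
    file1.Upload()                    # completes the upload
    return file1
```

`list_files(drive)` prints everything in the Drive root so you can copy the ID of the target folder, then `upload_csv(drive, folder_id, "small_file.csv")` drops the file into it.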
Listing your files will get you the list of files/folders in your Google Drive, along with the details of those files/folders. We capture the file ID of the folder we would like to upload files to. In this case, To Share is the folder I would upload the files to.
The file ID is important, as Google Drive uses file IDs to specify locations instead of file paths.
drive.CreateFile() accepts metadata (a dict) as input to initialize a GoogleDriveFile. I initialized a file with "mimeType": "text/csv" and the folder’s ID passed under "parents". This ID specifies where the file will be uploaded to. In this case, the file will be uploaded to the folder To Share.
file1.SetContentFile("small_file.csv") will open the specified file name and set the content of the file to the GoogleDriveFile object. At this moment, the file is still not uploaded. You will need file1.Upload() to complete the upload process.
Accessing files in folders
What if you would like to upload files into a folder inside another folder? Yes, again you would need the file ID! You can use ListFile to get the files, but this time replace root with the folder’s file ID.
file_list = drive.ListFile({'q': "'<folder ID>' in parents and trashed=false"}).GetList()
Now we can get into the folder picture inside the folder To Share.
Other than uploading files to Google Drive, we can delete them too. First, create a GoogleDriveFile with the specified file ID. Use Trash() to move the file to the trash. You can also use Delete() to delete the file permanently.
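Sketched as helpers (the function names are mine; `Trash()`, `UnTrash()`, and `Delete()` are PyDrive's own methods):

```python
def trash_file(drive, file_id):
    """Move an existing file to the trash (recoverable with UnTrash())."""
    file1 = drive.CreateFile({"id": file_id})  # point at an existing file by ID
    file1.Trash()
    return file1

def delete_file_forever(drive, file_id):
    """Permanently delete a file — this cannot be undone."""
    file1 = drive.CreateFile({"id": file_id})
    file1.Delete()
    return file1
```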
Now you have learnt how to manage your Google Drive files with Python. I hope this article is useful to you. Do drop me a comment if I made any mistake or typo.
You can view the complete script in my Github. Cheers! | https://towardsdatascience.com/how-to-manage-files-in-google-drive-with-python-d26471d91ecd | ['June Tao Ching'] | 2020-09-07 15:28:50.569000+00:00 | ['Python', 'Google', 'Google Drive', 'Data Science', 'Programming'] |
How to Find the Time to Pursue Your Passions | How to Find the Time to Pursue Your Passions
Building a meaningful side hustle while working a 9-to-5
Photo by Blake Cheek on Unsplash
While my girlfriend and I were getting ready for bed yesterday, she rolled over and quietly asked me, “What time are you getting up tomorrow?”. I could already tell where this conversation was headed.
“5:30", I responded.
To be honest, I’m not sure why she continues to ask me this question because I get up at the same time every single day. Nonetheless, she picked up her phone to check the time:
11 pm.
I could tell she was doing some mental math in her head, silently adding up how much sleep I’ll get if I do get up at 5:30. After coming to the answer, she then tried to reason with me.
“I think you should sleep until 6:30. That’s still an hour and a half before you have to work — think of everything you could get done!”
She was right — sometimes I do actually need a little more sleep. But the other part of her argument is the exact reason I get up at 5:30 in the first place.
“Yeah, but if I don’t get up at 5:30, I won't have enough time to get everything done that I want to throughout the day.”
Getting up early allows me to focus on the things I want to focus on when I’m at my best. Trying to write an article after a full day of work is like beating my head against the wall — it never ends well.
Getting up early creates an opportunity to give my best effort to the things I care about most. Before the stress of the day begins, I can devote 2.5 hours to myself — journaling, reading, writing, and exercising.
And when I finally log onto my 9-to-5 job at 8 am, I feel like I've already accomplished so much.
The Biggest Source of Underutilized Time
I get it, finding the time to pursue what you love is hard. You’re swamped with work, family, exercise, or (let’s be honest) Netflix.
The truth is, if you can’t find time to work, you won’t ever be able to pursue what you want.
Most of my evenings after a long day are filled with exercise, family time, dinner, and a little relaxation. One of the last things I want to do is work some more.
But the mornings are the complete opposite. They’re some of my most creative hours, and where 80% of my work gets done.
The biggest source of underutilized time is in the morning.
No one actually likes getting up early, but there’s a reason why some of the most successful people in the world do it.
Benjamin Franklin once said, “The early morning has gold in its mouth.” And Aristotle, the famous Greek philosopher, said this about the mornings:
“It is well to be up before daybreak, for such habits contribute to health, wealth, and wisdom.”
Success requires work. And if you can’t find the time to work, you won’t be able to build anything successfully.
If you don’t want to put in the work at night, try waking up early. Sure, you may have to go to sleep a bit earlier, but I guarantee that you’ll feel invigorated and more creative than you ever have before.
Photo by Chris Curry on Unsplash
Waking Up Early Makes You Less Tired
“Okay, I get it,” my girlfriend responded after I told her why I get up at 5:30.
“I just want to make sure you’re getting enough sleep.”
This is one of the things I love about her — she’s constantly worried about me. I reassured her that I get plenty of sleep, and then followed up with something I hadn’t even recognized before, it just kind of spilled out of me.
“When I sleep in, I end up feeling more tired, not less.”
I know, it sounds counterintuitive. but bear with me.
Let’s do the math:
To bed at 11 pm
Wake up at 5:30 am
Total sleep time: 6.5 hours (if I fall asleep at 11 on the dot)
According to the CDC, the average adult needs at least 7 hours of sleep per night. Based on the above, I’m nearly spot on. More often than not, when we’re “tired” it’s due to oversleeping, not undersleeping.
Sure, maybe you could say I need a little more sleep. But when I get up early knowing that I get to work on something I love — I spring out of bed.
It’s addicting starting my day by working towards the person I want to become and building something ridiculously cool. | https://medium.com/change-your-mind/how-to-find-the-time-to-pursue-your-passions-283d7b2dbb33 | ['Devin Arrigo'] | 2020-12-07 15:11:39.524000+00:00 | ['Advice', 'Writing', 'Self', 'Success', 'Creativity'] |