Ever tire of the new tab screen in Chrome? It’s what you’re presented with after pressing Ctrl+T.
Well, with the magic that is extensions, you’re free to change this. There are a host of replacements available on the Chrome Web Store. But I’ve always appreciated a low-tech, unobtrusive approach. I’ve used the Google Art Project screen, which puts a new great work of art on your new tab. For days when I feel overstimulated, I’ve opted for simply a blank tab.
But recently I found the Google Earth new tab, and I’m in love with it.
It’s mesmerizing, yet subtle somehow. It doesn’t take over my screen; it just invites me to take a moment before racing off to the next website and simply gaze upon our planet’s beauty. That may only be one or two beats, but at least it’s a bit of pause in a busy online life.
But here I am again, ready to get back into the work of expressing myself… and getting more organized. The upshot of the rather long hiatus in this series of articles on productivity management is that I have a nice big data set from which to draw my conclusions, which is rather rare for me. Typically, when I find some “new solution” to an old problem, I’m too quick to conclude that the new is better.
Well, this time I can pretty highly recommend my new take on the old way. And what is this new way?
Inbox by Gmail
Once again, I’m hardly cutting edge on this bit of software. It’s been around for a while now, and it rather obviously reverse-engineered some of the coolest features of the competing email app known as Mailbox. Inbox is Google’s novel take on its existing email platform, Gmail. It re-imagines your email as possible “todos”, allowing you to add reminders to your email workflow. Each email can have an associated task date. If you add a reminder to an email, it will show up over on your Google Calendar as well, or in Google Now as a card (for mobile users). So there’s very good cross-product integration.
Marking an email “done” in Inbox translates to archiving it over in Gmail. The genius of Google’s approach here is that you don’t have to sacrifice your Gmail experience and commitment to use Inbox. You can fluidly go back and forth if you want to. Although what I found over the past 12 months is that by month 2 or so, I was using Inbox exclusively.
And of course, Google has baked in very good keyboard shortcuts so that your workflow can be as fast as you want it to be. On mobile devices, each email or Reminder can be swiped right for completion and left for rescheduling. It’s a powerful and fast workflow. And when you’ve conquered your tasks/emails — which is to say, addressed all the stuff that’s in your inbox — Inbox presents you with the most pleasing trophy you could want: virtual sunshine.
Obviously, having all these features integrated tightly into Inbox (and Calendar, and Keep, and Drive, etc.) makes for a great overall user experience. Gone are the days of buying 3rd party plugins to a Mac OS-only mail client just to set a reminder on an email. I couldn’t really be much happier with this solution, since it’s all right there at my various fingertips (whether on desktop or mobile). And the fact that such powerful software is essentially (troublingly?) free makes it all the more compelling.
Looking back, I’m amazed that I ever did email differently. I had a set of pretty good solutions, cobbled together with 3rd party tools and utilities. It all got infinitely better when switching to Gmail. But now with Inbox, I’m in organization nirvana.
What I’m reading a lot on social media is a very determined effort to falsely equate Obama or Hillary Clinton with Trump (whether their characters, their campaigns, or their future presidencies). In my view, this is particularly disingenuous. To put Obama’s presence and stature, or Clinton’s experience and dignity, up against Trump’s impulsiveness and braggadocio and call them basically the same thing just isn’t being honest with one’s self.
This, I think, is probably the most insidious rationalization voters made, because it assumes a “pick your poison” baseline: that both are bad. Further, a vote for the “lesser of two evils” excuses all the other bad traits about Trump. It essentially doesn’t matter how bad Trump was, is, or will be: at least he’s not as evil as “that nasty woman.”
But the Hillary-as-evil narrative painted so well during the campaign wore thinner and thinner as it went on. What evil are we really talking about? That she and her staffers made the tragic misstep of using a private email server? This was a decision that I’m sure Clinton will rue for a long time, but as the FBI has repeatedly cleared her of treasonous intent, it’s hardly evil. That the Benghazi attacks were bungled? Absolutely. It was tragic, and security lapses were made. Mistakes happen, even at the highest level. Is she evil in her mishandling? I don’t think so. She’s worked hard to establish stability in the area since, and her tone has proved to be one of calm in the face of calamity.
The Trump we’ve all seen during the campaign itself (forget 10-15 years prior) has shown himself to be frightening. What I can’t wrap my brain around is why so many Christians, children of the Reagan GOP, would turn a blind eye to his enabling of very bad behavior. Here’s a man who can’t lose gracefully. He sues the press when he doesn’t like how they cover him in the headlines. He lashes out publicly at women and minorities. He has no sense of decorum befitting the office.
And yet still I hear how basically they’re all the same. That one choice is just as bad as another. That’s just not true, and you know it, no matter how badly you want that square peg to fit.
Text mining is widely recognized by companies as one of the major tools provided by A.I. technology to extract valuable “structured” data from text, helping businesses filter through and condense valuable project-oriented information. If this sounds a little too “sci-fi” for you, that is exactly the point. A.I.’s capacity to recognize and respond to human speech, and to mimic the neural pathway activities of the human brain to develop independent cognitive abilities and behavioural responses, is only the beginning. More technological advancements are being introduced into the field, and artificial intelligence software is gaining momentum and demand among the many industries connected to the IT/computer/mobile world: transport, banking, social services, government, and medicine, among many more.

While the first attempts to develop intelligent thinking machines can be traced historically to Raymond Lull in the 14th century (and “automatons” are present in the ancient mythology of Greco-Roman, Egyptian, and Babylonian/Mesopotamian literature), text mining goes back to the WWII era, when governments started adopting “content analysis”: assigning numerical codes to public concepts and ideas found in the media (newspapers, magazines, letters, documents, etc.) with the objective of analyzing and monitoring trends in mass behaviour by tracking the popularity and development of those concepts and ideas. This practice has also developed into another branch known as open source intelligence, used by governments and the intelligence communities to sift through all the pertinent information available on the World Wide Web for reasons of national security, especially in response to the current major crises facing the modern world.
Unfortunately for the general public, it is also something that has led to the infringement of personal privacy and civil rights, as demonstrated by its current use in global spy networks set up by agencies like the NSA and the CIA.
A.I.’s ability to extract concepts out of written text through text analytics, by analyzing and processing all the data present on the world wide web (including social media sites like Facebook, Twitter, LinkedIn, and YouTube, as well as independent professional blogs and websites), has enriched a company’s potential to accelerate decision- and policy-making. It also helps restructure internal organization, reduce budget costs by saving time, and respond faster to (or anticipate) internal and external crises, such as changes in consumer trends or demand, or important deadlines for a business venture. Even more importantly, it enhances customer engagement by delivering a unique experience that can identify and predict customers’ tastes and cater to them individually.
Its value for the general public is particularly evident in its applications in the medical industry, where A.I. will be able to monitor a person’s health, predict a heart attack days or even years before it occurs, and respond appropriately by administering therapeutic treatment on the spot. The main difference compared to modern-day search engines is that text mining can help a business or an individual find a solution for a need they did not even know existed! It can help you find innovative sources that provide the answer to a problem before you even know the problem exists, and that, once again, is science fiction in the making! Integrated A.I. assistants deployed through a user interface on all your media devices (including smartphones and tablets) and robot receptionists and waiters are a perfect example of what we can expect in years to come. Sentiment analysis, also referred to as “opinion mining”, is another process whereby A.I. is able to analyze, extract, and understand the emotional response of a subject in context. In other words, A.I. is truly the realization of our collective sci-fi archetypal imagery, as any fan of Star Trek would surely know!
Artificial Intelligence (AI) has been a hot topic in the tech and innovation world as of late. It has fueled the stuff of great sci-fi movies for generations, but only now is gaining traction in real, marketable products and services.
Yet, video games that feature AI aren’t particularly appealing; at least, not yet anyway. AI is functional, yet still lacks the flexibility and common sense of a real human. This is evident in plenty of online multiplayer games, where most computer-generated characters lack human reactions in complex situations.
Google’s Atari-playing algorithm was initially said to be the future of AI, one we could learn a lot from. However, just like much of the artificial intelligence of the past, the platform and software weren’t able to understand what’s going on in the game the way a human does.
Recent news, however, revealed that Google’s AI was able to win the fifth and final game against Go genius Lee Sedol, defeating the Korean grandmaster four games to one and marking a significant moment for artificial intelligence. Although machines have beaten the best humans at checkers, chess, and even Jeopardy!, this is the first time a machine has topped the very best at Go.
Beyond video games, AI has been able to connect us with musicians of the past who sadly left us prematurely.
We have seen how AI has been used to bring musicians back from the dead, as reported by Popsci, from Tupac at a recent Coachella performance to other memorable artists such as Jimi Hendrix and Kurt Cobain. Software created in 2010 works like ‘a Pandora for live music’: it analyzes a musician’s voice and sound based on their archival recordings, then reconstructs a song using those musicians’ voices as if they had recorded in a modern studio. Thus, it becomes easier for modern musicians to collaborate with their favorite music icons.
Reliving the lives of past musicians is not at all a new concept. In fact, many mobile and online games have been inspired by these icons, such as the Jimi Hendrix slot game that takes you back in time with its groovy ’60s design, with concepts akin to the popular Guitar Hero game, which has spawned offshoots such as the Metallica version for console platforms. And it’s not only musical legends that have games tailored for them; recent celebrities, from Kim Kardashian to NBA players, make their own titles too. Artificial intelligence, though, turns these games into a whole different experience, transforming them into a more immersive, realistic, and engaging proposition entirely.
Andrew Moore, dean of CMU’s School of Computer Science, said AI requires large volumes of data and statistical computation to function, making it difficult to easily produce. However, it can unlock plenty of new avenues for technology in the future.
“Teaching AIs to process scenarios with hidden information will unlock whole new vistas of applicability for the technology.” — Andrew Moore
One sector said to benefit from AI is the medical industry, where experts are now looking at AI-driven 3D hologram avatars to help care for the elderly.
“Although this project is at an early stage, with a number of technical, moral and ethical issues to be addressed, the development of Rita (artificial intelligence) in the form of a humanised avatar could revolutionise how an individual’s personal, social emotional and intellectual needs are met in the future,” said Dr Jane Reeves, co-director of the University of Kent’s Centre for Child Protection, in an interview with International Business Times.
For the past 3 years, I’ve been working full-time as a software engineer. This has been a substantial, if not calculated, change for me. I’d been a hardware engineer for longer than I care to think about.
Perhaps the biggest, yet subtlest, difference between the two career paths that I didn’t see coming is this: determinism. I simply love the relatively absolute nature of software. I’m sure some might argue me on that one. But you get my point. For the most part, the outputs of any software project can be clearly predicted; the inputs can be nicely quantized, packaged, and displayed in automated fashions.
I love going to work.
Even on the tedious days of writing error handling code, it’s still all fun. Exception handling is one of those topics that is grossly underestimated. It’s hard work, it’s time-consuming (and no one appreciates that investment), and its rewards are always deferred.
I’m reminded of all of those points when I witness — virtually everywhere in “real life” — examples of horrendous error handling.
Case in point
While attempting to post a mobile deposit to my bank account, I got an error. The transaction didn’t post, for whatever reason. This was on my Android phone, from which I’ve made dozens of successful deposits in the past to the same bank with the same app.
Fair enough, errors happen.
But the error message I was greeted with was the following:
Not much help there, huh?
The point of error handling is twofold:
1. Assist the user in resolving the problem
2. Provide the developer with the conditions leading up to the error, from which a fix can be found
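To illustrate both goals, here’s a minimal Python sketch. Everything in it (the `DepositError` class, the validation step, the field names) is hypothetical and not from the bank’s actual app; the point is just that one exception can carry both a user-facing message and developer-facing context.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("deposit")

class DepositError(Exception):
    """Raised when a mobile deposit cannot be posted."""
    def __init__(self, user_message, **context):
        super().__init__(user_message)
        self.user_message = user_message  # goal 1: shown to the user
        self.context = context            # goal 2: logged for the developer

def post_deposit(account_id, amount_cents):
    # Hypothetical validation standing in for a real submission step.
    if amount_cents <= 0:
        raise DepositError(
            "The deposit amount must be greater than zero. "
            "Please re-enter the amount and try again.",
            account_id=account_id,
            amount_cents=amount_cents,
            step="validate_amount",
            timestamp=time.time(),
        )
    return {"status": "posted", "amount_cents": amount_cents}

try:
    post_deposit("ACCT-1234", -500)
except DepositError as err:
    # Point 1: a message the user can actually act on.
    print(err.user_message)
    # Point 2: the pre-error conditions, preserved for engineering.
    log.error("deposit failed: %s", json.dumps(err.context))
```

Contrast that with a bare “An error occurred” dialog: the user gets no next step, and the engineering team gets nothing to reproduce the failure with.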
From the above screenshot, there’s next to nothing for the bank’s engineering team to go on. The tech support tips I got amounted to, “Have you tried uninstalling?”
It’s no wonder that most people’s relationship with software is terrible. And I’m a software engineer (now)!
I’ve got most of the bugs ironed out in my display interface, but not all have been squashed in the driver portion. In other words, the method by which I can input text into the OSRAM is working nicely (I’m using a serial port console), but the nuts and bolts of how strings are sent to the display — arguably the most important part of this project — remain slightly broken.
The problem is that I was lazy. I should have paid more attention to the WR and CE lines for proper data latching into the display at the right times.
But this challenge has been fun. It’s always fun to work under a deadline to see what you can do. This forced me to learn more about Arduino. And despite my first impression, I’ve come to see that it’s pretty great. I especially love the C++ class support. For instance, its string and bitwise libraries are awesome. There are things that aren’t so great, like the editor. I had consistent undo (CTRL+Z) wonkiness that scared me (I was afraid of code-eating), so I switched quickly to Notepad++ with a good syntax language profile.
Since recently switching from OS X on an iMac to Windows 10 on a laptop, I sorely miss file tagging. I’ll admit, this is one feature that I had not given much thought when I was preparing for the big leap to another operating system.
Though I’m happy with my switch, I’m also trying not to live in denial. This is still Microsoft we’re talking about. They have made incredible advancements as of late with their Windows 10 version. And yet, in some areas they are very much behind in innovation compared to Apple.
File tagging is a glaring example.
If you’re at all interested in the Getting Things Done ethos, then you probably know all about this computer software feature. On an Apple computer, you can tag a file or folder with a color and/or keyword. These tags are then searchable. They can help your workflow dramatically.
For instance, in a folder of downloaded bank statements, it would be incredibly handy to know which ones I’ve balanced against my personal finance software, and which still need to be done. Tag the files accordingly!
But after my switch, I can’t do this on Windows 10. And I use Google Drive to be able to do my personal work anywhere, so a file-tagging solution that is platform independent is pretty necessary.
Hence, I began looking for a solution, 3rd party or homemade.
Then I found this 3rd party solution, which sounds very promising. But it’s not platform universal, so apparently your tags get vaporized when you email the files or open them on some other OS. You can apparently export your tagging database as an XML file for importing on another computer, but that’s not a very seamless solution. I do like how it plugs itself into Windows Explorer and the context shell menu!
But ultimately, I think that this won’t be a future-proof solution for my needs.
So instead, I built my own workaround. And I did it with scripting: AutoHotkey (AHK), to be exact. It’s a really fun, easy-to-use scripting language that runs exclusively on Windows. Don’t even get me started on my frustrations with the native scripting on OS X. I always intended to learn it one day… until the day I got out of the Mac world altogether.
Platform independent. This means I could use the files that I tag both on Windows and OS X (I don’t happen to ever use Linux, so that wasn’t a priority for me). Their tags won’t become lost when opened on another platform, though the actual tagging process will only be conducted on a Windows computer.
Transferable. This is a slightly different requirement than platform independence. The tags shouldn’t get lost when files are emailed, messaged, or synced across cloud services.
Searchable. The tagging architecture must be plainly identifiable in some way, such that they can be searched easily.
Non-destructible. The tags must not interfere with the files’ usability.
Extensible. The tags and tokens should be configurable, such that users can set up their own tagging schemes and change them over time.
I came up with the following:
It’s as inelegant a solution as I am old. But the longer I thought about it, the more I realized it’s the easiest to implement, the quickest to set up, and it meets all the above requirements. In the scheme that works for me, I have three tags:
tagged with some sort of “todo” keyword
tagged with a “done” keyword
tagged: bank_statement_07232015 @TODO.pdf
retagged: bank_statement_07232015 @Done.pdf
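The author’s actual implementation is in AutoHotkey, but the rename logic itself is simple enough to sketch in Python. This is an illustrative sketch only; the ` @` token and the `TODO`/`Done` keywords are the configurable pieces, per the extensibility requirement above.

```python
import re
from pathlib import PurePath

TOKEN = " @"              # separator between base name and tag (configurable)
TAGS = ("TODO", "Done")   # tag keywords (configurable)

def tag_file(filename, tag):
    """Return the filename with `tag` applied, replacing any existing tag."""
    path = PurePath(filename)
    # Strip any existing " @Keyword" suffix from the stem before re-tagging.
    base = re.sub(rf"{re.escape(TOKEN)}\w+$", "", path.stem)
    return f"{base}{TOKEN}{tag}{path.suffix}"

def untag_file(filename):
    """Return the filename with any tag removed."""
    path = PurePath(filename)
    base = re.sub(rf"{re.escape(TOKEN)}\w+$", "", path.stem)
    return f"{base}{path.suffix}"

# tag_file("bank_statement_07232015.pdf", "TODO")
#   -> "bank_statement_07232015 @TODO.pdf"
# tag_file("bank_statement_07232015 @TODO.pdf", "Done")
#   -> "bank_statement_07232015 @Done.pdf"
```

Because the tag lives in the filename itself, it survives email, cloud sync, and cross-platform opens, and a plain search for “@TODO” finds everything still pending.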
For this script to work as painlessly as possible, I used global shortcut keys to tag the files one way or the other. One or more files can be tagged or untagged simultaneously. Alternatively, you can bring up a GUI to do the tagging.
You can find the source code on my GitHub. Here is the source…