blueFug Announces Design of EML3 Framework and Platform

New Open-Source Language and Toolkit to Revolutionize Computer-to-Computer (c2c) and Person-to-Person (p2p) Communication

Seattle, WA, USA – 11:59 AM PDT, April 1, 2017 – blueFug, the world’s leading provider of technology communications services, today announced the availability of a revolutionary new computing paradigm and communication protocol known simply as EML3.

“Enterprises have long struggled to find a way to communicate critical business data in terms that anyone can understand,” said Jason English, blueFug’s CXO and the Father of EML. “With EML we can encapsulate virtually any form of business data in an open form that is scalable, extensible, sharable, object-oriented and more. Best of all, both developers and non-technical employees can easily learn this new language.”

The new scripting language, English Markup Language III (or EML 3.0), named after its proud father, provides an incredibly simple syntax for communicating any form of data collection through both EMP and SOC formats. Virtually any Business Process or Test Process can be described in EML and recognized by both computers and the naked eye. EML is the first “Pure 1:1 Metadata” language, in which each EML tag is 100% analogous to its inherent real-world meaning.

The new EML Toolkit will include:

  • EML Code Library: A virtual “dictionary” collection of 150,000+ valid EML word tags with meta-descriptions.
  • EML Processor: A fully standardized IDE that provides an ideal environment for writing and editing EML programs in a “page layout” type format.
  • EML Syntax Checker: An automated plug-in that works with most leading EML processors, underlining syntactic or structural errors in EML code in green or red.
  • EML Parser AI: Machine-learning macro daemon that analyzes non-EML data, replacing spaces and punctuation with EML brackets to translate it into proper EML script (a playful sketch of this transformation follows the list).
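
For the curious, the Parser AI’s stated transformation -- replacing spaces and punctuation with EML brackets -- can be sketched in a few lines of Python. This is purely a playful illustration; the eml_encode function and its tokenizing rule are our own invention, not part of the (equally invented) EML Toolkit:

    import re

    def eml_encode(text):
        # Split text into word and punctuation tokens, then wrap each token
        # in EML brackets, per the Parser AI's stated rule.
        tokens = re.findall(r"[\w']+|[^\w\s]", text)
        return " ".join(f"[{t}]" for t in tokens)

    print(eml_encode("You're already using it."))
    # [You're] [already] [using] [it] [.]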

“We are already rolling out our first advertising campaign for EML this month in North America, Australia and India, markets where there are already mature EML user communities,” said English. “Our initial go-to-market message will be ‘[EML] [You’re] [Already] [Using] [It][.]’” To facilitate the widespread adoption of EML, a bevy of technology and SI service provider partners have signed up for the EML Council, and they will convene in committee sometime in early 2018 to decide whether EML is a toolkit, a platform, a framework, or all three.

“We all used to make fun of Jason because he looks funny, but now we realize his EML innovation is a critical strategic lynchpin of our Enterprise Platform initiative,” said the Chief Scientist. “More than 90% of the time spent in the average business is expended on inefficient verbal or written communications that could be expressed far more efficiently in EML script instead [I][think].”

While the EML3 scripting language is open-source, the EML tag library already contains approximately 150,000 patented innovations or “EMLwordtags” that will remain part of blueFug’s proprietary codebase. The company will continue to roll out its integrated EML Platform across all business and public organizations, supporting development and testing efforts for more than 2,000,000,000 multi-national users both within the company and among global technology partner teams.

Said French Renaissance author François Rabelais in response: "Most things can be paid for with words."


About blueFug Technology Marketing 

blueFug helps enterprise technology customers reshape their message delivery lifecycles. Our EML3 framework optimizes complex and Cloud-based application explanations throughout the world, eliminating costly constraints and misunderstandings, while improving agility in an environment of constant change. For more information, visit www.bluefug.com, read our blog at bluefug.com/blog, or follow us on Twitter at @bluefug. Just don’t call.


# # #

Press Contacts:

blueFug Ventures

Sue D. Nimm

Certified EML3 Communications Professional Level 9

Photo Credit: Trey Jones, Wikimedia Commons



Your Smartphone is Showing: The Mobile Threat Exposure

Phones - They're What Hackers Crave

We love our phones. We stare deeply into glowing rectangles at every opportunity. We love messenger apps. We love free Wi-Fi and expect businesses to offer it. We might love a friendly P2P game, or playing Internet radio over the Bluetooth system in a rental car. We love the idea of paying for just about anything from an app rather than fumbling for money.

Don’t think this love affair has gone unnoticed. Cybercriminals increasingly want to come between us and our phones. And who can blame them? With as much as 75% of US Internet traffic coming from mobile devices, that is where the most valuable data -- and the money -- is moving. Why would they stick to hacking standard computers?

You might think your mobile device is rather secure. Indeed, it does have some design advantages over the PCs and servers where most security and antivirus activity has focused to date. Unlike conventional computers, smartphones handle much of their operation at the hardware and firmware level, use solid-state memory rather than hard drives, and run a leaner OS … but they are still fully functional, powerful computing devices in their own right, with enough sophistication and constant change to leave doors open for hackers.

The most recent Symantec ISTR report (Dec 2016), which is rich in security stats, shows that while most forms of email phishing and web attacks are stagnant or in decline, new mobile malware variants jumped by 214%. Expect this growth trend to continue, as mobile devices have become the new cyber attack surface of choice.

MDM, EMM and BYOD

Around 2011, as smartphones joined the mainstream, we started seeing huge investments driving a new class of vendors supporting secure mobility -- companies like AirWatch (now VMware AirWatch) and MobileIron. Citrix, CA and BlackBerry began expanding their corporate security and mobility initiatives to include BYOD (bring-your-own-device) management.

The main thrust of these MDM (Mobile Device Management) or EMM (Enterprise Mobility Management) solutions was the ability to manage multiple employee devices and the apps on them to improve compliance with corporate standards, which should lead to safer usage behaviors and lower mobile data costs.

For instance, the MDM system can require you to set or reset a phone password before you can access company email. It might put all of the required “corporate” apps in a controlled folder, prevent a user from installing non-approved apps or playing huge media files over the air, or pop up a warning if they connect to an unknown network. It could remotely wipe the data from a phone if it gets compromised, lost or stolen.
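
In practice, rules like these usually boil down to a declarative policy pushed to enrolled devices, with the client enforcing it locally. Here is a minimal sketch in Python of what such a policy and compliance check might look like; the field names are hypothetical, not any particular MDM vendor’s schema:

    # Hypothetical MDM policy -- field names are illustrative only,
    # not any specific vendor's schema.
    corporate_policy = {
        "require_passcode": True,           # set/reset a password before email access
        "managed_app_folder": "Corporate",  # required apps grouped under IT control
        "app_whitelist_only": True,         # block non-approved app installs
        "max_ota_media_mb": 50,             # discourage huge media files over the air
        "warn_on_unknown_network": True,    # pop a warning on untrusted Wi-Fi
        "remote_wipe_enabled": True,        # wipe if compromised, lost or stolen
    }

    def is_compliant(device_state, policy):
        """Check a device's reported state against policy before granting access."""
        if policy["require_passcode"] and not device_state.get("passcode_set"):
            return False
        if policy["app_whitelist_only"] and device_state.get("unapproved_apps"):
            return False
        return True

    print(is_compliant({"passcode_set": True, "unapproved_apps": []}, corporate_policy))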

Not all control and security functions are available to MDM software, especially in a BYOD scenario. Several countries have regulations against companies accessing private data on an employee’s personal device. Even if a privacy mandate doesn’t apply in your region, you run the risk of ticking off your entire workforce if the corporate HQ imposes heavy-handed demands on their phones. More than half of employees surveyed by Bitglass said they would refuse a corporate MDM install on their personal devices because of privacy concerns. (Kind of easy to say if your current job isn’t riding on such a requirement though!)

Through the glass: The mobile attack surface

Note that while all of the above capabilities contribute to device security, they do not specifically address all of the exploits that can hit smartphones at the device, network and application layers.

These exploits can be stupidly simple -- sending an email or text message with a bad link asking the user to enter their password or account number -- yes, it still works occasionally. Or they can be quite beautifully sophisticated -- for instance, loading an SMS image in the preview window that executes remote code and quietly establishes root control of the device without alerting the user in any way. See the infamous Stagefright exploit discovered on Android (now patched, though the hole is still there on many un-updated phones).

Some adware and malware providers have taken to creating realistic, but unsanctioned third-party app stores outside of Google Play and Apple App Store. Popular game titles like Pokemon Go and retail apps on these sites look like the real thing, but they might be sending more of your personal data to unknown locations than you’d like.  

The quantity of new threats to mobile devices is more than doubling every six months. If you read the latest TrendLabs 2016 Mobile Threat Report, you get an immediate picture of how fast-moving these exploits can be. Once hackers have used a novel zero-day exploit and it has been identified and patched, they move on to the next one. Right now, ransomware is one of the hottest growth areas -- attackers remotely encrypt or “lock up” the data on your device, then demand a payment to restore it. Hopefully you backed it up! There are no guarantees you’ll ever see your data again even if you do pay.

Exploit defense, delivered to your door by MTD

You need to have something on the device that can protect against these advanced new threats, and that’s where a new class of Mobile Threat Defense (MTD) tools come in. Some of the bigger players in security such as Symantec, Trend Micro and Intel have recently bought or delivered new solutions geared for endpoint security, but a lot of the excitement in this space is around newer, more MTD-specialized firms such as Zimperium, Lookout and Skycure.

Basically, an MTD solution has three components:

1. Some kind of app running on the device that can detect a possible threat.

2. Some kind of cloud-based service for gathering alerts and threat data for reporting, and updating devices with the latest exploit definitions. 

3. Some way of taking action to remediate the threat and reduce its impact.

For threat detection, some tools employ a technique called “sandboxing,” which is basically a way to maintain surveillance of the device from a cloud-based service, then have the application step in to remediate the threat if an offending message or potential malware is detected. Another way is to install an on-device detection and self-service remediation app, which uses the cloud service only for reporting and for pushing updated threat definitions back to the phone. This approach offers some user-data privacy advantages and still works without an Internet connection.
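
To make the on-device approach concrete, here is a minimal sketch of the kind of loop such an agent might run: scan locally against cached definitions, remediate on the spot, and report to the cloud only when a connection exists. The signature check and names are simplified assumptions for illustration, not any vendor’s actual implementation (real MTD agents also do behavioral analysis):

    import hashlib

    # Locally cached threat definitions, refreshed from the cloud service when
    # a connection is available. The hash below is a placeholder (MD5 of an
    # empty payload), standing in for real exploit signatures.
    THREAT_SIGNATURES = {"d41d8cd98f00b204e9800998ecf8427e"}

    def scan_payload(payload: bytes) -> bool:
        """On-device check: does this message/app payload match a known signature?"""
        return hashlib.md5(payload).hexdigest() in THREAT_SIGNATURES

    def handle_event(payload: bytes, reporter=None):
        if scan_payload(payload):
            print("threat detected: payload quarantined locally")  # self-service remediation
            if reporter is not None:
                reporter.send({"alert": "signature_match"})  # cloud used only for reporting

    handle_event(b"")  # the empty payload matches the placeholder signature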

You know how a Trojan horse or worm can “weaponize” a computer or device and use it to spread itself across a network? What’s cool about today’s MTD solutions is how the detection capability can turn millions of immunized devices into early-warning defense beacons and sources of data on mobile attack vectors. If a known or unknown cyber attack starts being detected in a certain region, or starts exploiting a specific device/OS/app/network combination, that gets filtered back to the lab, where security researchers can define the exploit, determine workarounds, and even alert OS and device manufacturers and the global security community, if necessary.

You can’t patch mobile security complacency

Despite software innovation and collaboration among mobile network operators (MNOs), device manufacturers and international standards groups, don’t get your hopes up that we’re about to become threat-free anytime soon. A recent Ponemon Institute study on mobile cyberattacks says 60% of respondents have already experienced some kind of security breach due to mobile attacks. Enterprises know they are vulnerable to mobile attacks, but many seem to lack the wherewithal to do much to prevent them.

To make the problem more confounding, that recent Symantec report mentions that as many as 85 percent of corporate data breaches now go unreported -- a rapid rise from just 2014, when more than half were reported. Less costly to sweep embarrassing security lapses under the rug and hope they aren’t noticed for a couple of quarters?

You would think CIOs and CISOs would be looking beyond the standard network security perimeter, firewalls, anti-virus and email filtering stuff and investing to get ahead of this attack vector, but no: the latest Gartner Predicts 2017 report on Endpoint Mobile Security estimated that by 2019, only 25% of mobile-ready enterprises will deploy mobile threat defense capabilities on enterprise-issued mobile devices. That's company equipment, not bring-your-own.

Clearly, complacency is the greatest threat to mobile security, and it will likely require a few more high profile mobile attacks in the headlines to change that. Until then, watch your phones.

Checking in on CA’s Continuous Transformation

When I heard about the CA DevOps and Cloud Forum regional event here in Seattle, I decided this would be a great opportunity to stop by the EMP Museum and hear about the state of continuous delivery from CA Technologies, their customers and Forrester analysts, and maybe catch a little of the Star Trek exhibit.

CA is continuing with its brand mantra of Digital Transformation; advancing that theme, on June 15 the company announced an Open Ecosystem for Continuous Delivery that incorporates its product suite, along with containerization (Docker), CI (Jenkins, etc.), other common tools (JIRA, git, etc.), and cloud service providers that can host elements of the solution.

“DevOps is the new factory driving business transformation,” said Kieran Taylor, CA’s product marketing head for the division. Rather than focus on known disruptors like Airbnb and Uber, Taylor presented several customer examples of more established companies like GE, Nike and Bosch that are delivering innovations such as deep analytics and IoT devices through better automation and more nimble release timelines.

The solutions map is quite broad now – encompassing their well-established CA Release Automation (formerly Nolio), Service Virtualization (ITKO LISA from my alma mater), API management (formerly Layer7) and Application Performance Management solutions, as well as the more recently named solutions of CA Test Data Manager (formerly Grid-Tools TDM), CA Agile Requirements Designer and Agile Management (formerly Rally), and a Mobile Cloud for building/testing mobile apps.

Stephen Feloney, CA’s product management VP for the unit, described how the new Continuous Delivery toolchain is not just about deploying faster, but automating testing with test data and services across every phase of the SDLC to avoid risk. “94% of executives face pressure to release faster, but you can’t claim ‘Assume the Risk’ as a badge of courage if automated testing is not built into every release.”

Forrester analyst Milan Hanson framed the current market for more agile development. “Simply driving IT costs down is no longer the top priority – 68% of companies now rate customer experience (CX) highly.” Success in CX is measured not just by satisfying customers with business technology, but through growth delivered by delighting customers.

The need for speed in delivering the applications customers want can itself negatively impact customer experience. “Many companies are basically doing faster releases, with QA in production, atop constantly changing environments that are hard to replicate.” Even when faster releases are done as quick canary deployments with rollback capability, failures can lead to costly customer losses, demoralizing extended-hour break-fix exercises, and war-room scenarios for IT teams.

Then Forrester presented some TEI (Total Economic Impact) studies they conducted with a sampling of several large deployed customers using CA’s TDM, service virtualization and release automation solutions. [Reports available lower on the release page here.]

The payback on these ranged from 3 to 6 months from implementation, with 3-year ROIs ranging from 292% to 389% per solution. Release automation reduced deployment times by as much as 20x, and the use of service virtualization and test data management produced some equally astounding results -- saving 640 developer hours per release, finding more than 150 defects in earlier phases…

Man, I have been either marketing or writing about software for a long time, and have never seen a major analyst present those kinds of numbers for me. The results make sense though, when you visit the customers who have fully embraced and championed the value of these solutions for their SDLC.

My favorite part of the program was a customer Q&A, which could have used more time on the agenda, in my opinion. Practitioners from a major state healthcare payer and the online automotive marketplace AutoTrader.com fielded questions from CA and the audience.

Adam Mills of AutoTrader said they used to spend two weeks of every six-week test cycle waiting for environments to be ready; now they are not only out of that game, they are doing some cool what-if testing scenarios, including something like Netflix’s famous “Chaos Monkey” project.

“We set up ‘Chaos as a Service’ to simulate the behavior of systems working improperly in our testing – slow performance, no response, multiple responses, garbled data,” said Mills. “We immediately found we were breaking things like error handling that you can’t test without generating that kind of data. We get a lot of benefit from testing what third parties might do. Now that we can simulate whatever we want – it’s a lot of fun.”
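
A “Chaos as a Service” endpoint like the one Mills describes can be approximated with a stub that randomly misbehaves. Here is a minimal sketch in Python covering three of the failure modes he lists; the implementation is our illustration, not AutoTrader’s:

    import random
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ChaoticHandler(BaseHTTPRequestHandler):
        """Stub dependency that randomly simulates misbehaving systems."""
        def do_GET(self):
            mode = random.choice(["slow", "none", "garbled", "ok"])
            if mode == "slow":
                time.sleep(10)                  # slow performance
            if mode == "none":
                self.connection.close()         # no response at all
                return
            self.send_response(200)
            self.end_headers()
            # garbled data versus a normal, well-formed reply
            body = b"\x00\xff!!garbage!!" if mode == "garbled" else b'{"status": "ok"}'
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ChaoticHandler).serve_forever()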

A post-session reception was perched in the fantastic little Blue Lounge atop the EMP theater room. Looking at some APM demos and talking to some of their current and potential customers there, I definitely felt the presence of a “chicken or the egg” dilemma for established IT shops in prioritizing which aspects of their software delivery toolchain to modernize first.

One thing is for sure: all established companies are struggling with test environments and the time it takes to get them provisioned well at each phase. Should they start with a move to cloud-based labs or containers, or by making the assets themselves leaner and more repeatable with test data management and virtual services? Should more performance and test insight be embedded into the software itself, so that real-time feedback occurs and problems are found earlier? All I can say is yes – start somewhere!


Guest Post: On-Demand SV Breakfast in the Cloud

Here's a fun "complete breakfast" piece I prepared, photographed, ate and then wrote for the ServiceVirtualization.com community site sponsored by CA; it ran Dec. 16, 2015. (See original post at http://servicevirtualization.com/247-complete-devtest-breakfasts-service-virtualization-in-cloud-environments/) - JE

It’s been a while since Service Virtualization (and this SV.com site, for that matter) came out, both as a practice and a technology. Since this site was launched back in 2010, it seems like another trend has emerged: fast-food restaurants selling breakfast 24/7. I don’t know about you, but breakfast is still by far my favorite meal. If everything you want is there, no meal beats breakfast. So why not have a complete breakfast whenever you want it?

The invention of Service Virtualization in 2007 was huge for resolving dependencies in development and testing, so those teams could move forward without the “complete breakfast” available. At the same time, SV inadvertently resolved some of the primary constraints to serious enterprise adoption of public dev/test cloud environments. We used to describe this phenomenon of constrained components that you can’t simply import as “wires hanging out of your cloud.”

Take any system that you need to have ready for testing but that is not readily available. It could be a heavy mainframe that is too bulky to image as a VM, or a third-party service you don’t have permission to copy. It would be much easier if you could realistically simulate just the behavior and data you need to run tests against those components.

So SV gave us a lightweight way to eliminate these constraints by replacing them with Virtual Services. This new technology is now a standard practice in large enterprises, with several major vendors offering solutions in the space. SV is proven to “cut the wires” of dependencies in dev/test environments.

That’s great for traditional on-premises environments, but it is especially useful in cloud dev/test scenarios, where dynamic availability – anywhere, anytime – is of the essence. Cloud infrastructure has come a long way in the last few years as well, offering increased capacity and performance at decreasing cost. We're seeing some huge environments running in cloud labs now at every phase of the lifecycle, including CA applications, SAP test regions, Oracle RAC servers, the list goes on. You can even invoke and specify these environments via API with a host of new IaC (Infrastructure as Code) and automated deployment solutions like CA’s Release Automation, and CI/CD tools like Jenkins and many others. Containers have also become a new, happy-meal player in this complete SDLC breakfast.
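
As a flavor of what “environments via API” looks like, here is a minimal sketch; the endpoint, payload fields and template names are entirely hypothetical, standing in for whichever IaC or lab-management API you actually use:

    import requests  # third-party HTTP client: pip install requests

    # Hypothetical provisioning call -- the URL and fields are illustrative,
    # not any specific vendor's API.
    resp = requests.post(
        "https://cloudlab.example.com/api/v1/environments",
        json={
            "template": "sap-test-region",                  # pre-built environment image
            "services": ["virtual-mainframe", "tdm-feed"],  # simulated dependencies
            "ttl_hours": 8,                                 # auto-teardown after the run
        },
        timeout=30,
    )
    resp.raise_for_status()
    print("environment ready:", resp.json()["environment_id"])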

Having the real thing in a 24/7 cloud is great, but who really wants a Coke with breakfast? In many cases, even if you could get the whole production-style application architecture imported, you may not always want the real thing in your dev/test cloud. Production systems may respond and perform unpredictably. If you are developing an application that will talk to production systems, you will likely need to suss out all the boundary conditions in your battery of tests. For instance, what if the mainframe responds in 30 seconds instead of 3 seconds, or 0.3 seconds? What if my partner’s service returns my form request with an unknown error, or a bunch of SQL injection statements?

It takes too much work and coordination to try to make every other team’s system respond exactly as you want. But you can easily make a virtual service learn the behavior, and have Test Data Management (TDM) supply exactly the right data, so that you can have it all dynamically provisioned in a cloud environment that does exactly what you want, on demand. Other teams can get that same level of customization in separate on-demand environments so they can work in parallel, without having to become experts in how the whole application infrastructure and network are set up. Better to focus on the aspects of development, integration and performance testing that are in the scope of your requirements, and simulate the rest with SV.
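
Conceptually, a virtual service “learns” by recording real request/response pairs and then replaying them with whatever timing and data you dial in. Here is a toy sketch of that record-and-replay idea; the class and method names are hypothetical, and real SV products add protocol awareness, data-driven matching and much more:

    import time

    class VirtualService:
        """Toy record-and-replay stand-in for a constrained dependency."""
        def __init__(self, latency_seconds=0.0):
            self.recordings = {}                # request -> canned response
            self.latency_seconds = latency_seconds

        def record(self, request, response):
            self.recordings[request] = response

        def respond(self, request):
            time.sleep(self.latency_seconds)    # dial in 30s, 3s or 0.3s behavior
            return self.recordings.get(request, {"error": "unknown request"})

    # Capture one real interaction, then replay it with 0.3-second latency.
    mainframe = VirtualService(latency_seconds=0.3)
    mainframe.record("GET /account/42", {"balance": 100.00, "currency": "USD"})
    print(mainframe.respond("GET /account/42"))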

The same concern applies to virtualizing test data. Unless you are getting close to the production phases of your SDLC, you shouldn’t have to deal with extracting and loading huge volumes of production data into dev/test environments. When policy demands that test data be scrubbed of sensitive info before being loaded, TDM provides another lightweight asset that supplies valid “virtual test data” and can be spun up in a public dev/test cloud.
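
In miniature, the scrubbing step might look like the function below; the field names and masking rule are hypothetical, and real TDM tools go much further, generating synthetic data and preserving referential integrity across whole schemas:

    def mask_record(record, sensitive_fields=("name", "ssn", "dob")):
        """Replace sensitive values with format-preserving dummies before the
        record is loaded into a shared dev/test environment."""
        masked = dict(record)
        for field in sensitive_fields:
            if field in masked:
                masked[field] = "X" * len(str(masked[field]))
        return masked

    print(mask_record({"name": "Jane Doe", "ssn": "123-45-6789", "plan": "gold"}))
    # {'name': 'XXXXXXXX', 'ssn': 'XXXXXXXXXXX', 'plan': 'gold'}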

What’s cool is that all of these modern development and test technologies can be loaded into an environment in Skytap, along with all the pre-configured servers, network and domain settings, and management controls needed. Then you can instantly stamp out clones of the exact environments needed to advance development, test and release activities faster than we ever imagined.

Yes, SV and Environments-as-a-Service in cloud infrastructure are great technologies separately, but pulled together, with automation, they make up a complete breakfast of dev/test champions that is available 24/7. Don’t settle for a serial (or cereal) breakfast just because it is late. Customers won’t wait for it. So why should you?



A love letter from Cloud to Service Virtualization

[I originally wrote this one for a Parasoft roundup with a Valentine's Day theme on Service Virtualization – seek out the original post if you like – but it is certainly entertaining enough to share here!]

Dear Service Virtualization,

Hey, I know it’s been a while since we started being “a thing.” When we met, everyone said you were just mocking, and that I wasn’t real enough to make a living, with my head in the clouds. Yet, here we are, a few years later.

Service Virtualization, you complete me.

As a young Dev/Test Cloud, I always wanted to try new things. And what better use for Cloud than experimenting with software for startup companies? I was flexible, and I thought I had the capacity to handle anything. I’d stay up all night studying or partying, but sometimes I’d crash. So what if some college kid’s cloud-based photo-sharing site experiment goes down? It wasn’t going to impact anyone’s life.

But when it came to serious business, there was always something missing. What was I going to make of myself? Who could trust their future to me, and develop things that really matter in the cloud? Clearly I didn’t have everything I needed – I was lacking certain critical systems and data, and it was preventing me from maturing. But you came along and together, we changed all that.

One thing I’ve learned is that I don’t always have to handle everything by myself. A dev/test cloud environment is not just a place to store and run VMs for application work—it needs the same clustering, network settings, load balancers, security and domain/IP control as you have in production. I can handle a lot, for sure.

But there are certain items developers and testers need that don’t image so well. Like a secure data source that should be obscured due to HIPAA regulations, or a mainframe system the app needs to talk to, but which would be unwieldy to represent in a Cloud like me. That’s when I say Service Virtualization makes every day a great day.

We’ve come a long way since then, and we’ve handled increasingly serious challenges: simulating some very complex interaction models between systems, and deploying those into a robust cloud environment of real VMs and virtual services that can be copied, shared and stamped out at will across teams. We work together so well, we can practically finish each other’s sentences.

Hard to believe all this started less than 10 years ago. Here’s to us, Dev/Test Cloud and Service Virtualization standing the test of time. Now let’s go make some history together.

Yours faithfully,

Cloudy