Developing an Enterprise Artificial Intelligence or AI Strategy

learn solutions architecture

How can an organization develop its Artificial Intelligence or AI strategy? What are the various facets of developing an organization’s AI strategy? What is the CIO’s role in developing that strategy?

In an earlier episode, I had covered the basics of AI, its related technologies and the impact that AI is having on organizations in general and touched upon certain business use cases. So, if you are looking to get a quick review on the basics of AI, then please look for the episode published before this one.

In this episode, we will take the discussion forward and go deeper and start discussing the overall strategy or strategies that CIOs and CTOs need to pursue to start building an effective AI-enabled organization. We will also discuss topics that should be of importance for any CIO or technology executive when defining an organization’s AI strategy.

So, let’s get started.

It’s no secret that CIOs and CTOs across all industries are under pressure to accelerate the adoption of AI within their organizations to gain a technological and operational edge over their competition. Strong AI capabilities, like digital transformation efforts, are becoming essential for survival, and not having an AI strategy and the corresponding capabilities can be very risky. Failing to build the needed AI capabilities will leave organizations stuck with old systems and processes, and as those legacy systems and processes rapidly lose their relevance in this digital economy, the organization could be put at great peril. On the other hand, we know that, applied properly, the right AI capabilities can provide organizations with real competitive advantage: AI can deliver deep business insights from data, automate operations, forecast business demand, and help create better products, positioning organizations to compete effectively in the new AI- and digitally-enabled marketplace.

As AI is fairly new, CIOs and CTOs should also ensure that there is collaboration and cohesion among the various departments of the organization wherever AI-related work and experimentation may be going on. This collaboration ensures that the organization as a whole can learn faster, in turn accelerating AI’s adoption within the enterprise. Shadow AI activities scattered throughout the organization can prevent executives from formulating a cohesive strategy, which can deliver more impactful results than standalone or siloed initiatives.

CIO’s role in the adoption of ML and AI capabilities

Before we get into the various dimensions of developing an AI strategy, let’s make sure we understand a CIO’s role and responsibility in the development of such a strategy. As CIOs increasingly become part of the organization’s top executive suite, we should remember that CIOs are also responsible for reinventing their organization’s business models as demanded by external market forces. CIOs can do so more effectively by using AI and its related technologies to extract relevant insights and knowledge from operational data that can support executive decision making.

CIOs are also responsible for working with the business to identify the business problems and use cases where ML can benefit the enterprise, and for deploying the relevant ML solutions across the enterprise. In this context, CIOs must select the right tools, technologies, architectures, and patterns, identify the right AI models, and institute a process through which deeper data analysis can yield insights. In a nutshell, all of this helps CIOs accelerate the creation of customer value and increase business growth.

Keeping this in mind, we will review seven key dimensions that contribute to the creation of an organization-wide AI strategy. They are the following:

  1. Getting clear on an organization’s overall strategy and objectives
  2. Investment in AI research including the need for experimentation and democratization of AI in the organization
  3. Identifying the business use cases for AI within organizations
  4. Considerations in building the AI-enabled technology platform
  5. AI-enabling various business applications
  6. Developing an AI focused Data Strategy
  7. Building the right set of skills in the organization

In the rest of this episode, we will cover each of these in greater detail.

Get clear on overall strategy and objectives

The first dimension we will look at has to do with getting clear on the organization’s overall strategy and objectives. As with adopting any type of strategy, executives such as CIOs and CTOs should stay focused on the overall business strategy of their organizations, and on specific business outcomes, as they develop their AI strategy. As we will note later in the episode, implementing an AI technology infrastructure and its processes can be an expensive proposition, and before embarking on any such initiative it’s important that business executives get a clear idea of how AI is going to benefit their organization and secure commitment to pursuing such initiatives. As we noted in an earlier episode, AI can be implemented at pretty much every level of the organization, from automating simple processes to bringing in robotics and running real-time Deep Learning algorithms against large volumes of disparate data collected from various sources. As a business, you can’t afford to just buy a set of tools and technologies and then hope to get the right ROI from that investment. It’s important to spend the time to understand the business outcomes you desire, discuss them with your stakeholders, and understand your organization’s readiness in terms of data and the underlying platform; only then should you start to formulate strategies and invest in those capabilities.

It’s therefore essential that an organization take a methodical and systematic approach to AI adoption while ensuring that the approach is aligned with the organization’s overall strategy. At this stage, an organization can also develop a maturity model and roadmap that shows the adoption of various AI capabilities over time.

Taking a strategic approach doesn’t mean that enterprises should stop themselves from addressing the obvious problems they could resolve readily, or ignore the low-hanging fruit, but it does mean keeping the overall organizational strategy in perspective. AI should, therefore, be a core component of an organization’s overall digital transformation strategy.

Invest in AI research

The second dimension of developing an organization’s AI strategy is to invest in the right research as a precursor to building larger AI capabilities within the organization. Industry surveys, including research by McKinsey, confirm that organizations have collectively been investing billions in AI over the past few years. Depending on the potential business cases that you as a technology executive have identified, you should start experimenting to gain basic capabilities in ML, DL, natural language processing, speech recognition, computer vision, and so on.

When investing in research and experimentation, a CIO’s focus should be to democratize AI and its capabilities across the organization. Carried out properly, this can eventually change the organization’s overall mindset. As an example, consider analytics, which is widely used in all parts of the organization. An organization can progressively take its analytics from non-AI to AI-enabled to extract more insight from its data. Most organizations are used to descriptive analytics, which reports historical information and explains ‘what happened’. The next step up is diagnostic analytics, which delves into the root causes of events and explains ‘why it happened’. The third step, predictive analytics, forecasts future outcomes from existing information and data, and the level after that, prescriptive analytics, has the system recommend actions. Finally, we enter the domain of advanced AI, where systems become self-healing and find solutions to problems on their own. Although a number of such self-healing solutions have been built into various computing and telecom hardware, they are showing up in other user applications as well. So we see that by democratizing the principles of AI and the insights it can provide, the business can enable an organization’s users to start thinking at a totally different level and augment the organization’s overall intelligence.
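To make that analytics ladder concrete, here is a minimal sketch. The sales figures, capacity threshold, and trend rule are all hypothetical, and the "models" are deliberately naive; the point is only to show how each rung answers a different question.

```python
# Illustrative ladder of analytics on a toy monthly-sales series.
sales = [100, 104, 110, 113, 120, 125]  # hypothetical data

# Descriptive: what happened?
total = sum(sales)
average = total / len(sales)

# Predictive: what will happen? (naive linear trend from first and last points)
monthly_growth = (sales[-1] - sales[0]) / (len(sales) - 1)
forecast_next = sales[-1] + monthly_growth

# Prescriptive: what should we do? (hypothetical business rule)
capacity = 123
action = "add capacity" if forecast_next > capacity else "hold steady"

print(total, average, forecast_next, action)
```

In a real AI-enabled analytics stack, the predictive and prescriptive steps would be backed by trained ML models and optimization logic rather than a two-point trend and a single if-statement.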

As part of this research, an organization should also start experimenting with building a digital platform that can host AI applications spanning various devices, the cloud, and the edge. Starting the experimentation process with Proofs of Concept and pilots tied to the identified use cases, based on a coherent strategy, can help accelerate AI’s adoption within the enterprise.

Leading the research and experimentation process can thus trigger the needed conversations at all levels of the organization about what is possible, and help the organization roll out AI and augmented intelligence across all of its business processes.

Identify the Business Need for AI

Another key input to developing an organization’s strategy is for the CIO, along with other key business stakeholders, to start the conversation and identify the specific business use cases where the organization can benefit from AI. This point is worth reiterating: a CIO should not get too bogged down in tools and algorithms, but should instead focus on the specific business use cases the organization can benefit from. It doesn’t hurt to discuss the use cases in the context of the available tools and technologies, but the focus should be on the use cases themselves and on the overall business case for piloting AI solutions against them. CIOs should also broaden the conversation by including staff from IT and the lines of business, along with perspectives from industry players, the competition, and other business stakeholders. As a CIO, it’s useful to start by asking the big questions, identifying the real business problems, and discussing the opportunities where AI can bring the most impact. As you have these conversations, you will see your strategy start taking shape; as that happens, ensure that it stays aligned with the overall goals and objectives of the organization. Once you have that clarity, you can start a deeper discussion about the relevant AI tools and frameworks.

Most commonly, CIOs start by implementing AI in their own departments, usually the data center. AI has been used in data centers for the past few years in a number of cases, such as forecasting hardware and computing requirements, optimizing space and energy usage, and dealing with infrastructure management issues such as detecting or even preventing hardware failure. We also see AI being used to the extent that a majority of data center tasks are not only completely automated, but AI solutions also provide recommendations for further optimizing various aspects of a data center’s operations. AI is also used extensively in cybersecurity to fend off online attacks. AI technologies allow not only for the detection of attacks but, based on historical incidents and other data from across the net, can also suggest various prevention strategies.

Before we leave the topic of use cases, keep in mind that the AI platform or infrastructure you end up building, and its corresponding costs, will depend largely on your business use cases. For example, if you are implementing an AI system to provide real-time insights into the health of a manufacturing operation or, say, a trading operation, you will need to invest in the right storage and processor hardware to overcome the latency and scalability limitations of regular network-attached storage systems. Similarly, Deep Learning based solutions often require more processing power than traditional CPUs can provide; in that case, you will need GPUs or comparable accelerators to handle those AI workloads and services.

Building the AI-enabled technology platform

Once an organization identifies the opportunities where AI can help, CIOs should start focusing on building the right technical infrastructure to deliver those capabilities. Again, depending on the specific use cases, this can have wide-ranging implications. Decisions involved in building the right technological infrastructure include the choice of tools, service providers, and the ML architectures that may be needed for the various business areas of the organization.

As with putting together any other digital or technology solution, implementing an enterprise-wide AI system and infrastructure requires a well-thought-out and cohesive strategy. After all, such a system touches enterprise-wide data, requires massive processing power and large, continually growing storage, involves numerous interfaces and pipelines that bring together data from various sources, needs high-bandwidth networks, and runs algorithms far more sophisticated than the other workloads an enterprise may be running. All of this needs strategic thinking and planning and a clear implementation roadmap.

Accordingly, as you build sophisticated AI systems and applications, you will need to ensure that the surrounding systems management tools and processes are upgraded as well to handle the enhanced capabilities of those systems and applications.

As for building the AI applications and services, one obvious choice is to start with AI service providers, which offer various ready-made AI services. These include services from the large service providers such as Microsoft, Amazon, Google, IBM, and others.

For example, Microsoft’s Azure ML enables the building of ML and DL models. Microsoft also offers pre-built AI by providing access to finished services on the cloud, which organizations can start consuming with little effort. If your organization wants to build a chatbot, for example, Microsoft has a framework for building bots and provides tools for developing conversational AI. In fact, Azure provides a number of solution templates, reference architectures, and design patterns in its AI gallery.

Amazon’s AWS, IBM’s Watson, Google, and other service providers also offer a number of pre-built AI services as well as tools to build AI applications. Depending on your business situation, you can explore these providers and their tools and see which fits your situation best. Perhaps in subsequent episodes, we will have a more in-depth conversation about the various types of tools and some of their pros and cons. If you are interested, make sure you subscribe to this podcast so you are notified of future episodes.

AI-enable existing applications

Next, let’s look at another dimension of developing an organization’s AI strategy: how to AI-enable the various applications within the enterprise. Organizations can start by using the AI capabilities embedded in their internal enterprise systems, alongside developing their own AI applications for their more sophisticated business use cases. I discussed this topic briefly in an online meet-up when answering a specific question. Here is that clip.

For example, a number of enterprise systems have already started to incorporate AI capabilities, augmenting those tools and essentially providing users with augmented intelligence. We see AI functionality being built into systems such as SalesForce, MS Dynamics, Workday, and others. AI turbocharges the capabilities of these tools, giving organizations better insight into their customers, suppliers, operations, and other business processes, so one strategy might be to use AI in that context. More specifically, consider SalesForce, an established CRM tool in the market. SalesForce has added a layer of artificial intelligence through a product called Einstein, which brings various AI capabilities to its users: it lets users perform sophisticated data analysis, predict business outcomes, and build chatbots that can be trained on the organization’s CRM data, making chats with customers more personable. Microsoft’s SharePoint is another example; Microsoft has packed a number of AI features into it, especially around image recognition and text extraction. Users can teach SharePoint to recognize a certain image, and when the tool scans documents and detects that image, it can treat those documents in a certain way using rules defined in the system. Such capabilities can help in managing contracts, invoices, and more.

So, we see that many tools have already incorporated AI under the hood, and while it may not be obvious to end users, many of the reports and analytics presented to them already make use of machine learning algorithms to deliver more sophisticated analytics.

Organizations can also develop AI applications of their own by incorporating the specific AI technologies most relevant to their business use cases and needs. For example, companies like Second Spectrum offer technology that uses ML to watch every second of video of a basketball game, understand it, and then deliver insights about the game and potential improvements to teams and coaches. Earlier, this was accomplished by interns spending hours watching games and tagging videos to make them available for coaches. Now all of that can be squeezed into minutes and seconds. Organizations that see relevance in such technology can deploy appropriate ML models to extract insights from all types of data, including video, images, and text, and use those insights to improve their business processes.

Organizations can also enable their other apps and cloud workloads with AI and ML technologies to enhance their customers’ experience. AWS, for example, provides what it calls “pre-trained” AI services through AI APIs, which offer ready-made intelligence for use cases including, but not limited to, personalized recommendations, customer engagement, security recommendations, image and video analysis, and advanced text analysis.
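As a small illustration of what consuming such a pre-trained service looks like, here is a sketch that calls Amazon Comprehend’s sentiment API through boto3. This is not a full integration: it assumes boto3 is installed and AWS credentials are configured, and the region and wrapper function name are my own choices for the example.

```python
def detect_sentiment(text, region="us-east-1"):
    """Sketch: ask Amazon Comprehend (a pre-trained AWS AI service) for the
    sentiment of a piece of text. Requires boto3 and valid AWS credentials."""
    import boto3  # imported inside the function so the sketch reads without boto3 installed
    client = boto3.client("comprehend", region_name=region)
    resp = client.detect_sentiment(Text=text, LanguageCode="en")
    # The service returns a label (POSITIVE / NEGATIVE / NEUTRAL / MIXED)
    # plus per-label confidence scores; no model training was needed on our side.
    return resp["Sentiment"], resp["SentimentScore"]

# Example (would call AWS when run with credentials):
# label, scores = detect_sentiment("The new checkout flow is fantastic.")
```

The appeal of this model is exactly what the episode describes: the organization consumes finished intelligence through an API call instead of building and training its own NLP model.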

Developing an AI-focused Data Strategy

The sixth dimension in developing an organization’s AI strategy has to do with data. Organizations have now realized the massive insights and intelligence trapped in their processes and data. CIOs, therefore, must adopt a comprehensive data strategy to get that data organized, leverage the right tools to extract insights, and enact processes to act on that intelligence. A CIO has the responsibility to establish a platform that takes data across its lifecycle, from initial creation through to democratization, easy consumption, and analysis. Considering the complexity of the process and the vast stores of enterprise data, that’s easier said than done, which is why a solid data strategy is required. As we know, data is the fuel that powers AI and its applications, and the problems at the enterprise level in making data available for AI are manifold. First, we need to prepare and organize data assets appropriately to make them AI-ready. Second, at the enterprise level we are usually dealing with large volumes and multiple stores of data; digital transformation initiatives are already inundating enterprises with data, and the nature of the AI system you choose may further increase the volume your organization must handle and process. Third, we need to ensure that the various types of data are organized so that ML and AI can extract the needed insights. Finally, digital transformation means enterprises generate huge amounts of data constantly, and an enterprise’s AI infrastructure needs to cater for this non-stop data streaming as well. In many cases, data is arriving faster than the organization can derive insights from it.
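One common answer to that last challenge, data arriving faster than batch analysis can keep up, is to compute insights incrementally as records stream in. Here is a minimal sketch using Welford’s online algorithm (the sensor readings are hypothetical); the key property is that nothing is re-scanned and raw records need not be stored.

```python
class RunningStats:
    """Incrementally track count, mean, and (population) variance of a data
    stream using Welford's online algorithm, without storing the raw records."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0

stream = RunningStats()
for reading in [10.0, 12.0, 11.0, 13.0]:  # stand-in for a live sensor feed
    stream.update(reading)
print(stream.n, stream.mean, stream.variance)
```

Production streaming platforms (Kafka, Kinesis, Flink, and the like) generalize this same idea to much richer aggregations, but the principle is the same: update the insight per record instead of re-processing the whole store.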

These challenges necessitate that an enterprise and its senior leadership develop a relevant data strategy. Having the right data architecture can make the development of AI applications faster and easier, and can also pave the way for future AI applications as they are identified based on the organization’s business needs. So, in the next few minutes, I will touch on various points that a CIO should consider when developing an organization’s data strategy for AI.

  • First, we should point out that the data strategy for AI is part of an enterprise’s overall data strategy, which is usually created with a number of requirements in mind, AI being one of them. Organizations that lack a data strategy may have gotten away without one until this point, but it is difficult to establish a coherent and well-aligned AI strategy without an underlying data strategy, so such organizations often find themselves scrambling to develop one as they venture to bring AI into the enterprise.
  • The second part of developing a data strategy is to identify potential data sources across the enterprise, and even external to it, that the organization intends to join as part of its AI efforts to derive the needed business insights. These sources could be structured or unstructured and could include ERP systems, mainframes, legacy databases, IoT devices and sensors, and more. The analyzed datasets could also span a number of business areas such as product complaints, literature reviews, external research, product malfunctions, and social media data. So, if you are preparing and organizing a data model for marketing, you could pull data from omnichannel sources, internal platforms, media partners, agencies, online transactions, and other sources.
  • Another aspect of the data strategy is integrating all that data into repositories that can be used for AI. This may involve integrating various data sources into a data lake or into traditional data warehouses. Data aggregation involves integrating data lakes and other sources, building data connectors for data access, ingesting data from IoT and other sensors, and so on. Consolidating enterprise data to gain better insight into business operations, such as bottlenecks in processes or errors in manufacturing, can be the first step toward AI. ML can then be applied against that data to identify patterns and provide enhanced forecasts and predictions.
  • As part of developing the right data strategy, it’s also important to establish the right data flows. An organization needs repeatable data flows that pull in data from its business processes, covering all devices, sensors, and so on. These flows may need to be established to bring data into the enterprise’s analytics or overall AI platform.
  • Another point about preparing and organizing data for AI: because of the nonlinearity of the data and the variety of structured and unstructured sources that must be integrated, developing a data strategy for AI can be more complex than traditional approaches. Data preparation for AI also usually involves statistical and mathematical processing of the aggregated data, including normalization and data cleansing. CIOs should therefore understand that appropriate data science and AI skills may be needed to develop the right data strategy for the organization.
  • Data quality is also an important factor to consider when developing an organization’s data strategy. Feeding low-quality data into your AI system will yield correspondingly poor insights, no matter how sophisticated your AI algorithms are. Techniques such as data scrubbing and data cleansing therefore become essential. Various quality checks ensure that data is fit for ML training and testing, and also allow for addressing security, privacy, and governance issues.
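To tie the cleansing and preparation points together, here is a minimal sketch of the kind of steps involved. The records, field names, and the choice of min-max scaling are all hypothetical; real pipelines would use tools like pandas or a data-quality framework rather than hand-rolled loops.

```python
raw = [
    {"id": 1, "revenue": 100.0},
    {"id": 1, "revenue": 100.0},   # duplicate record
    {"id": 2, "revenue": None},    # missing value
    {"id": 3, "revenue": 300.0},
]

# Cleansing: drop duplicate ids and records with missing values.
seen, clean = set(), []
for rec in raw:
    if rec["id"] in seen or rec["revenue"] is None:
        continue
    seen.add(rec["id"])
    clean.append(dict(rec))

# Preparation: min-max normalize revenue into [0, 1] so it is ML-ready.
values = [r["revenue"] for r in clean]
lo, hi = min(values), max(values)
for r in clean:
    r["revenue_scaled"] = (r["revenue"] - lo) / (hi - lo)

print(clean)
```

Even this toy pipeline shows why quality checks matter: the duplicate and the missing value would otherwise have skewed both the statistics and any model trained on the data.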

Build the right set of skills in the organization

Another dimension of building an organization-wide AI strategy is ensuring that the organization has the right skills to build its AI capabilities. The skills needed can vary quite a bit and mostly depend on the types of capabilities that you as a CIO decide to build within the organization. CIOs and technology executives should also not overlook the need for formal training for their key staff. This training can start with the basics of AI, and once the organization has a basic strategy in place, more advanced training can be provided in line with that strategy. For example, if the organization plans to use AI services from an external provider such as Amazon, it can focus its training on that front.

The various types of skills that an organization may need include the following:

  • Skills related to Image Processing, Image Analysis, and/or Computer Vision
  • People with skills and training in machine learning, data mining, and other quantitative research analytics, such as: Non-Linear Regression Analysis, Multivariate Analysis, Bayesian Methods, Generalized Linear Models, Decision Trees and Random Forests, Non-Parametric Estimation, Neural Networks, Ensemble Models, etc.
  • Strong background in statistical languages and technologies (e.g., R, SAS) and deep experience with Hadoop and open-source languages (Python, Spark)
  • Experience with computer development languages (Java, C++).
  • As most of these solutions are built on the cloud, skills are also needed in delivering solutions across a mix of cloud and hybrid cloud environments
  • Skills are also needed in the area of data techniques such as Data Warehouse, Data Engineering, Advanced Analytics, and Data Science
  • And there are other such skills needed.

 

With this we come to the end of this podcast. I will have more topics on AI and how to overcome various challenges in bringing AI in organizations in a later episode. Again, please subscribe to this podcast to ensure you are notified of those episodes. This is Wasim Rajput and thanks for listening.

Digital Technologies and Trends for 2019


What are the key digital and technology trends for this year? What business outcomes can organizations expect from these technology trends and how are these technologies expected to benefit organizations?

So, with that, let’s get started and review the top seven trends that I see on CIOs’ agendas for this year.

There’s nothing new in saying that we live in a world where change is the only constant. What is surprising, however, is the rapid pace of change that industries and organizations are experiencing. The impact of this change is being felt globally and across all organizations and industries, as organizations rush to alter their business models to minimize the impact of the disruption triggered by this constant and rapid change. A number of new technologies are causing this disruption and triggering new business models. To stay ahead of the curve, organizations should get aggressive about exploring opportunities and the relevance of these technologies to their businesses.

Technologies that are enabling this change and our new world are many. But in today’s episode we will focus on technologies that are at the forefront and are changing the way organizations operate and compete in the new economy. Any organization that hasn’t yet identified potential use cases for the application of these technologies should do so before they are left behind by their competition.

CIOs across industries are applying these technologies to bring business benefits such as increased automation, improved user experiences, new products and services, and other such business outcomes for their organizations.

A review of these technological trends can help you learn not just the buzzwords but also their disruptive potential, the opportunities they offer, and what organizations need to do to start weaving them into the fabric of their organizations.

Before I jump in and cover them in detail, here are the 7 trends:

  1. Advanced and Augmented Analytics
  2. Digital Twins
  3. Blockchain
  4. New trends in Cloud Computing
  5. XaaS (Anything as a Service)
  6. Internet of Things
  7. Mixed Reality (which includes Augmented and Virtual Reality)

 

Advanced and Augmented Analytics

The first trend I would like to highlight today concerns the advances occurring in the field of analytics. We can call it advanced analytics or, as research organizations like Gartner refer to it, augmented analytics. Essentially, advanced or augmented analytics is analytics on steroids: analytics coupled with Artificial Intelligence and automation.

Over the past few years, we have seen how numerous technologies in AI, ML, and data science have made it possible for organizations to run sophisticated analytics on the massive amounts of data they generate, along with the historical data they already hold and the external data they can obtain from social media and other sources. However, the amount of preparation and other manual work involved has made this process quite tedious. As the demand for advanced analytics and business insights goes up, so does the work of data preparation, data discovery, ML model selection, and searching and querying for insights. To tackle the laborious effort that accompanies these tasks, many systems and tools today embed features that automate a number of these steps. This helps businesses focus on getting key business insights and intelligence more quickly, and thus on making the right decisions, rather than struggling through the many steps of getting to that point.

So, getting back to augmented or advanced analytics: the discipline refers to a suite of technologies that brings more automation and intelligence to the overall process of data preparation, discovery, model creation for analyzing and extracting insights, and searching for or querying insights through a natural language interface. Again, the idea is to help organizations spend more time acting on the insights and intelligence they derive from data, rather than spending manual time and effort preparing the data and extracting insights from it.
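One of the steps augmented analytics automates is model selection. Here is a deliberately tiny sketch of the idea: instead of an analyst picking a forecasting model by hand, the system scores candidate models on held-out data and picks the best one. The series and the two naive candidate models are hypothetical stand-ins for what a real tool would try.

```python
# Hypothetical automated model selection over a toy time series.
series = [10, 12, 13, 15, 16, 18, 19, 21]
train, holdout = series[:-2], series[-2:]

def mean_model(history, steps):
    """Forecast every future point as the historical average."""
    avg = sum(history) / len(history)
    return [avg] * steps

def trend_model(history, steps):
    """Forecast by extending the average slope of the history."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + slope * (i + 1) for i in range(steps)]

def mae(forecast, actual):
    """Mean absolute error of a forecast against held-out actuals."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

candidates = {"mean": mean_model, "trend": trend_model}
scores = {name: mae(m(train, len(holdout)), holdout) for name, m in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

Real augmented-analytics platforms do this across far larger model families (and automate the data preparation and insight narration around it), but the selection loop above is the core pattern.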

For CIOs, this should especially matter if they have been investing in standalone data science, data integration, and other analytics solutions, as they may find that a number of the functions and features they are paying for are now being addressed by their systems of record or ERP vendors, and that these functions and features are becoming an accessible part of other tools as well. For sophisticated AI applications they may still need those standalone tools, but that's something that should be looked at more closely on a case-by-case basis for each business. If, for example, we look at enterprise systems related to HR, CRM, finance, procurement, customer service, and others, we will see that a number of them now incorporate such capabilities to help their users with decision making. To cite a specific example, Salesforce has introduced Einstein Analytics, which is essentially advanced analytics powered by AI. This suite of tools on Salesforce packs a lot of functionality related to predictive insights and prescriptive recommendations, and it provides apps that allow organizations not only to visualize their overall sales and marketing pipelines but also to make complex forecasting decisions.

We should note that there is no one tool per se that delivers all such capabilities for an organization. Rather, it's something organizations must understand as part of their overall AI, analytics, and decision-making framework, so they can start making the right changes to their processes and to their overall strategy for acquiring and operationalizing technology that yields more intelligent insights. By understanding the overall process and its inherent complexities, organizations can select the right business tools, which will make it easier for them to get the right insights and to automate the many tasks involved in deriving those insights so they can make faster decisions.

We also see analytics advancing to a level where organizations are enabling constant streaming of data from their business processes and analyzing that data in real time to give them real-time intelligence and business insights. A number of technologies come together to make this happen, including Artificial Intelligence, ML, DL, data management, data science, and others. The idea is to have an organization's systems and processes create new intelligence constantly to help it make instant decisions.
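As a rough illustration of this kind of real-time analysis, the sketch below keeps a rolling window over a simulated metric stream and flags readings that drift far from the recent average. The window size and threshold are arbitrary assumptions; a production streaming stack (Kafka, Flink, and the like) would of course be far more involved.

```python
from collections import deque

class StreamMonitor:
    """Keeps a rolling window over a live metric stream and flags
    readings well above the recent average -- a minimal stand-in
    for real-time streaming analytics."""
    def __init__(self, window=5, threshold=1.5):
        self.window = deque(maxlen=window)   # most recent readings only
        self.threshold = threshold

    def ingest(self, value):
        # Flag the value if the window is full and the new reading
        # exceeds the recent average by the threshold factor.
        alert = (len(self.window) == self.window.maxlen and
                 value > self.threshold * (sum(self.window) / len(self.window)))
        self.window.append(value)
        return alert

m = StreamMonitor(window=3)
readings = [10, 11, 10, 10, 25]          # the last reading is a spike
alerts = [m.ingest(v) for v in readings]
print(alerts)  # -> [False, False, False, False, True]
```

The point is that each reading is evaluated the moment it arrives, so the "insight" (the alert) is available in real time rather than after a batch job runs.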

Digital Twins

The next trend I would like to cover is that of digital twins. A digital twin is a digital representation of any physical entity that needs to be monitored. Physical entities include, but are not limited to, people, processes, equipment, and places. Creating a digital twin (or having a digital representation of the entity) allows an organization to study the behavior of the actual physical entity and to run various types of analysis to understand its behavior, improve its functioning, perform diagnostics, and more.

Given the nature of a digital twin, it's important in many cases to have data related to the entity constantly transmitted to the system using sensors and IoT devices. For example, to maintain a digital twin of a piece of equipment such as a jet engine or large machinery, sensors on it would constantly transmit data to the main system, providing engineers deep insights into the equipment's behavior. They can then use that data to predict failures, test configurations, and perform other such analysis.
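A minimal sketch of the idea, assuming made-up sensor fields and an illustrative temperature limit, might look like this: the twin object mirrors the physical engine's readings, and diagnostics run against the digital copy rather than the physical asset.

```python
class EngineTwin:
    """A toy digital twin of a jet engine: mirrors sensor readings
    and exposes a simple diagnostic. The field names and the 90-degree
    limit are illustrative assumptions, not real thresholds."""
    def __init__(self, engine_id, temp_limit_c=90.0):
        self.engine_id = engine_id
        self.temp_limit_c = temp_limit_c
        self.history = []

    def update(self, reading):
        # Each transmitted IoT reading refreshes the twin's state.
        self.history.append(reading)

    def needs_inspection(self):
        # Diagnostic run on the twin, not the physical engine.
        return any(r["temp_c"] > self.temp_limit_c for r in self.history)

twin = EngineTwin("engine-42")
twin.update({"temp_c": 75.0, "rpm": 3000})
twin.update({"temp_c": 95.0, "rpm": 3100})  # over the assumed limit
print(twin.needs_inspection())  # -> True
```

Real digital twin platforms layer simulation, physics models, and predictive analytics on top of this basic mirror-and-analyze loop.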

Although the idea is not new and we have seen its applications on a number of platforms, its adoption is still not widespread. For example, GE's Predix platform is a digital industrial IoT platform and cloud-based service that maintains digital representations, or digital twins, of various industrial equipment and gives its users the ability to run analytics and learn more about the devices and equipment it monitors. The concept of digital twins is also used in managing assets in the energy sector, where the lifecycle of physical assets can be studied and improved. Organizations, for example, can monitor offshore oil rigs and study variables that can help improve their performance without being at the physical rig itself.

IoT and sensors have further popularized the concept of digital twins and its use is expanding to pretty much all industries where there’s a need to monitor physical entities.

Blockchain

Moving on, the next technology that we will discuss is Blockchain. Blockchain is a distributed ledger technology that provides decentralized trust across a network of untrusted participants. Cryptocurrencies like Bitcoin and Ethereum were founded on these technologies, and since then, interest in Blockchain technologies and related investments has grown rapidly. Statista, a leading provider of market and consumer data, forecasts that blockchain technology revenues will grow to more than $23 billion by 2023. The organizations that have shown the most interest in Blockchain are from the financial industry, and for good reason: it's in this industry that organizations carry out financial transactions across a larger ecosystem and have to trust each other. Blockchain, with its distributed ledger technology, solves that problem for them.

Organizations from other industries are also experimenting with and implementing blockchain in their businesses. We see this technology being applied to solve supply chain issues in manufacturing, in the food and agriculture industries, and elsewhere.

A number of technology service providers have built platforms that allow organizations to build and deploy blockchain solutions. IBM, Microsoft, and Amazon provide cloud-based blockchain services, which many organizations have been experimenting with for the past couple of years.

Building enterprise blockchain solutions can be more challenging, as it requires not only a strong technology platform but also cooperation from the various participants. It also requires extensive testing to ensure security, performance, trust, and scalability issues are appropriately addressed. It's for this reason that it sometimes takes longer, and relatively more extensive planning, to roll out these solutions.
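To see where the "decentralized trust" comes from, here is a toy hash-chained ledger in Python. It demonstrates only the tamper-evidence idea at the heart of a blockchain; real platforms add consensus protocols, peer-to-peer networking, and much more.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Link a record to its predecessor by hashing both together --
    the core mechanism behind a blockchain's tamper evidence."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Recompute every link; any edited block breaks the chain."""
    for i in range(1, len(chain)):
        payload = json.dumps({"data": chain[i]["data"],
                              "prev": chain[i]["prev"]}, sort_keys=True)
        ok = (chain[i]["prev"] == chain[i - 1]["hash"] and
              chain[i]["hash"] == hashlib.sha256(payload.encode()).hexdigest())
        if not ok:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block({"from": "A", "to": "B", "amount": 10},
                        chain[-1]["hash"]))
print(verify(chain))               # -> True
chain[1]["data"]["amount"] = 999   # a participant tampers with a record...
print(verify(chain))               # -> False: the chain exposes it
```

Because every participant can independently re-verify the chain, no single party has to be trusted to keep the ledger honest.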

Cloud Computing

Another technology worth mentioning again is cloud computing. The move to cloud computing has been going on for a few years now, but we also know that not all organizations have moved to the cloud, and many who have started the process have moved only a fraction of their workloads. Due to the successes of this computing paradigm, however, the trend continues to be hot, and enterprises will continue to direct a sizable chunk of their investments toward migrating their old and new workloads to the cloud. Although the initial move to the cloud was triggered by cost reasons and the appeal of consuming software and applications as a utility, many other advantages have come to the fore over the past few years. A number of technical innovations of recent years simply would not have been possible without cloud computing. This includes, but is not limited to, Artificial Intelligence applications, which deal with a lot of data and need massive processing power. Technologies such as Blockchain and IoT have also gained a lot from having a cloud-based backend. So, in general, we can say that enterprises have plenty of reasons to move to the cloud.

Before discussing some of the trends related to cloud computing, let's review some of its basics. Cloud computing is a computing paradigm that allows for network access to a number of computing services made available through shared physical and virtual resources. Cloud computing services are available in different configurations. First, in a non-cloud environment, an organization manages all of the resources and services related to networking, storage, servers, virtualization, O/S, middleware, data, and applications. In an IaaS configuration, the cloud service provider manages the bottom layers of the stack, namely networking, storage, servers, and virtualization, while the organization manages everything from the operating system up, including middleware, data, and applications. In the PaaS configuration, everything except the applications and data is managed by the cloud service provider. Finally, in the SaaS model, all layers are managed by the service provider. Which model you or your organization chooses depends on your specific business case.

Having said that, we should mention a number of trends in that arena. First, we see that hybrid cloud environments are becoming more popular, and many organizations, especially the larger ones, are settling on this paradigm. A hybrid cloud environment is one where an organization uses a mix of public clouds, private clouds, and on-premises environments for its computing needs. Although a number of enterprises have started the move to the cloud, many have come to realize that they won't be able to (or have reasons not to) migrate all their workloads. A hybrid cloud configuration gives them the best of the different worlds out there and the flexibility to capitalize on cloud technologies for the applications that can benefit most. Another trend we see within the realm of cloud computing is serverless computing, because it gives organizations the flexibility to get and pay for software and compute services without worrying about the underlying infrastructure. In serverless computing the servers are still there, but customers don't have to worry about them; they focus on consuming software and compute services from the cloud services provider. This is another step up from the original cloud computing model, as customers pay only for the compute they actually use rather than leasing cloud resources that might sit idle. In the past couple of years, the industry has seen rising popularity of this model, especially for services that organizations can consume from the cloud services provider without provisioning the underlying infrastructure.
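The shape of a serverless function can be sketched as follows. This is modeled loosely on the handler style popularized by AWS Lambda; the event fields here are made up, and locally we simply call the function ourselves, whereas in production the provider would invoke it on demand in response to an event.

```python
def handler(event, context=None):
    """A minimal function-as-a-service handler sketch. The 'event'
    fields are illustrative assumptions, not a real provider schema.
    The provider runs this on demand -- there is no server for the
    customer to provision, patch, or scale."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally we can invoke it directly; in production the cloud provider
# would call it in response to an HTTP request, queue message, etc.
print(handler({"name": "CIO"})["body"])  # -> Hello, CIO!
```

The billing follows the invocations: the customer pays per call and per millisecond of execution, which is what "paying only for the compute they actually use" means in practice.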

Another cloud computing model that has been on the rise is edge computing. Edge computing is an architectural construct in which certain programs run near the edge of the network rather than everything running in the cloud. This helps with latency, bandwidth, and other performance issues, and compute tasks and information can be allocated across the overall architecture more intelligently. This model has become more popular with the emergence of IoT, where computing can run on or near the IoT devices at the edge without overloading the cloud.

XaaS (Anything as a Service)

The next trend that we will cover has to do with CSPs providing Anything as a Service, or XaaS as it's usually written. XaaS is a general term that refers to the various services available through Cloud Service Providers. Over the years, with the successful adoption of cloud computing and the services one can get from the various cloud service providers, the world has seen service providers steadily increase the number of such services. We are all familiar with IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). In addition to these, other types of services have started to emerge. Here we will review some of the services that are becoming increasingly popular.

The first one I will mention is Storage as a Service. With demand for storage constantly growing, organizations are increasingly running into the limitations of managing their own storage, irrespective of the type of cloud environment they have. Organizations are therefore turning to their CSPs for their storage needs, as through them they find better performance, scalability, flexibility, and manageability options.

Another cloud service worth mentioning is DBaaS (Database as a Service). As organizations' needs related to databases constantly increase, it's becoming equally difficult and costly for them to provision, manage, configure, consume, and operate their databases. In such cases, organizations are turning to their CSPs, who can provide a better, more efficient, cost-effective, and overall more agile way to perform such activities.

A third service we will mention here is Analytics as a Service. We know how the fields of business intelligence and analytics have exploded over the past few years. In line with that trend, CSPs have been maturing the services they offer, and we therefore see many organizations turn to CSPs for their business intelligence and analytics needs.

Finally, in this context, we will also mention AI as a Service. As we have mentioned in the previous episodes on this show, many CSPs such as Amazon, Microsoft, Google, IBM, and others offer a number of AI based services through their public cloud environments and the consumption of such services is on the rise.

Besides these services, there are many others that CSPs provide. We will cover more of these services in other episodes of the CIOtechCentral podcast shows.

Internet of Things

The next technology I would like to focus on is the Internet of Things, or IoT as it's more popularly referred to. Although IoT and related technologies have been used in the marketplace for a number of years, maybe even a couple of decades, their use has skyrocketed more recently due to innovations such as cloud computing, faster Internet, and the ability to store the data collected from those IoT devices. If we look at manufacturing organizations, we will observe that they have been using these and related technologies for more than two decades, with intelligent devices attached to certain manufacturing and factory assets, collecting data on the health of those assets and then making that data available for analysis. However, the miniaturization of devices and the other technologies I just mentioned have enabled IoT devices to spread across the world by the billions, and this number will likely go even higher. Intelligent devices are making their way into a wide range of business processes. The CIO's challenge is not so much learning the basics of this technology, because, as I mentioned earlier, it has been around for a number of years; the challenge is more about how the larger acceptance and adoption of IoT will impact an organization's technology infrastructure. So, that's what we will look at today.

Let’s first take a quick look at the potential use cases in the market. If we look at the data from the various research organizations, it’s clear that the market for IoT is growing without bounds. Most electronic devices are now getting connected to the Internet and coming online providing more opportunities for them to communicate with each other. These devices include laptops, household appliances, automobiles, vending machines, and others.

IoT technologies are also finding their use in smart homes, wearables, the industrial internet, retail, supply chain, etc. Cities across the world are deploying sensors in cameras, streetlights, and other electronics across the city to track various types of activity. These sensors capture and relay all types of data, including audio, video, and others. IoT is also being used in industrial settings to ensure the effective operation of industrial equipment and to facilitate a safe working environment. The industry for Industrial IoT, or IIoT, is expected to grow rapidly until it makes its way into most industrial operations. These applications are already helping organizations control costs, increase operational efficiency, and facilitate safer industrial operations, impacting industries such as Oil and Gas, Healthcare, Electric and Water, Transportation, and others.

Key considerations that CIOs should look into have to do with implementing the right IoT platform. This, obviously, is driven by the extent of the expected IoT use within the organization. The overall IoT platform architecture usually comprises IoT devices communicating with edge devices and then with some type of gateway. The gateway connects to the cloud, where most of the data collected from IoT devices is stored. Depending on the scale of the overall operation, organizations could be looking at storing massive amounts of data, so scale should be considered. Some devices, for example, generate and transmit millions of pieces of information to the backend, which necessitates careful consideration of backend data storage and processing.
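A tiny sketch of the gateway idea, with hypothetical device names and fields: raw readings are aggregated per device before being forwarded, so the cloud backend stores summaries rather than every individual transmission.

```python
import json

# Hypothetical telemetry readings a sensor might send to a gateway;
# the device names and fields are illustrative, not an IoT standard.
readings = [
    {"device": "pump-1", "temp_c": 61.2},
    {"device": "pump-1", "temp_c": 63.8},
    {"device": "pump-2", "temp_c": 55.0},
]

def gateway_batch(readings):
    """Aggregate raw readings per device before forwarding to the
    cloud backend -- reducing the volume the backend must store."""
    summary = {}
    for r in readings:
        dev = summary.setdefault(r["device"],
                                 {"count": 0, "max_temp_c": float("-inf")})
        dev["count"] += 1
        dev["max_temp_c"] = max(dev["max_temp_c"], r["temp_c"])
    return json.dumps(summary, sort_keys=True)  # payload sent upstream

print(gateway_batch(readings))
```

Pushing this kind of aggregation toward the edge is one practical answer to the scale question raised above: the backend then handles summaries per time window rather than every raw transmission.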

Besides scalability, security should be another consideration, to ensure end-to-end protection of all the data. Also, with all this data being generated, chances are that CIOs will want to process and store the data in a way that helps them get the right analytics and insights. In that case, a number of system integration issues will also have to be considered, to ensure that everything, from data transmission from IoT devices, to storage and processing in the cloud, to making the data available for analysis, works seamlessly.

So, these are some of the issues that organizations will be grappling with this year and next related to IoT. Accordingly, they will be looking at the right solutions to ensure they can address these issues to maximize returns from this technology.

Mixed Reality

The last technology trend that we will look at today is Mixed Reality. Mixed Reality refers to two related technologies: Augmented Reality (or AR as it's called) and Virtual Reality (VR). Although these technologies are still in the early stages of their development and use, according to Statista (a market research organization), the market for Mixed Reality is expected to reach around $4 billion by 2025.

So, let's review these two technologies. First, VR, or Virtual Reality, is a technology that uses computer simulation to give the user the experience of being in another location or space. Its common applications to date have been in gaming and similar entertainment, but sophisticated headsets that enable more immersive experiences are allowing customers and organizations to explore new opportunities in the enterprise space. VR has potential applications in retail, where it allows customers to experience products or services in multiple dimensions and in greater detail, helping them decide quickly about buying those products and services. VR is also making its way into the education market, where learners can go through richer experiences for learning and training. The military, too, makes extensive use of VR in training soldiers for tough terrains and situations.

AR, on the other hand, differs from VR in that it provides a blend of the real and virtual worlds: users wearing special headsets see a projection of virtual content (such as graphics) onto the real world around them. It essentially augments the user's experience, allowing them to experience virtual items in physical spaces. So, augmented reality enhances the real-life environment and provides immersive experiences. It can be used in construction and architecture, for example, where structures can be superimposed as 3D visuals onto real space to give an idea of how they would look in reality. It can also be used in education, where supplementary information can be superimposed on the actual physical learning materials. Microsoft's HoloLens and Google Glass are examples of products that provide such capabilities. A number of other AR headsets, apps, glasses, and devices have started to appear in the market, and their use is becoming popular in industries such as manufacturing, retail, energy, and others. These devices can even help in performing complex surgeries, where surgeons get an enhanced, 3D view of the area being operated on, with the devices pointing out various details to assist in the surgery.

Shoppers can walk into showrooms, don one of these headsets, and see their choice of configurations before making a selection. For example, a shopper can visualize a car in a specific color and with particular accessories before choosing. Within a manufacturing setting, imagine an inspector walking onto a manufacturing floor wearing one of these devices, which can point out certain processes, people, equipment, and other details on the shop floor to help in the inspection. Their use is becoming even more popular in product design, where designers can visualize a number of models and make changes before finalizing a product's design. Headsets are available that allow users and tourists to visit places virtually, walk on a beach or through a city's downtown, and experience the place without actually going there, or get a taste of it before buying tickets. Similarly, tourists can experience hotel rooms and other spaces before making their accommodation reservations.

As the technology leader of your organization, depending on its business of course, you can find applications for mixed reality in a number of areas. You can also develop your own customized mixed reality experiences using various developer toolkits; Microsoft HoloLens, among others, provides that type of support. These platforms also support an open API surface and driver model in line with open standards, making it easier to build applications for your business that provide immersive experiences. Microsoft, for its part, supports developing and deploying these applications on its Azure cloud platform, where you can build cross-platform, spatially aware mixed reality experiences and connect them with other services as well.

 

A Review of Artificial Intelligence (AI), related technologies and business use cases


What is Artificial Intelligence or AI? What are the related technologies of AI? And what are some of its basic use cases?

In this episode, we will review AI, or Artificial Intelligence, and how it's being used by organizations across industries to boost their performance by doing things smarter, better, faster, and cheaper. As this is a large topic, one episode won't be able to do it justice. So, we will begin this episode with the basics of AI, its underlying technologies, and their use cases. In later episodes on this show, we will slowly build up to more specialized topics, such as developing an AI strategy within the organization, the types of applications and overall capabilities that organizations can start building, and other such topics.

So, let’s get started with the basics of the topic.

In the past few years, especially since about 2016, interest in AI has picked up enormously as organizations for the first time have started witnessing AI's real business benefits and its direct impact on overall performance. If we look at some of the use cases, we can easily see that a lot of organizations have now successfully deployed AI in various forms within their organizations. In fact, many industry pundits are saying that organizations that don't adopt AI fairly soon will be left far behind, resulting in the erosion of their competitive positioning. The National Science Foundation clearly states on its website and other channels that "To stay competitive, all companies will, to some extent, have to become AI companies." What that essentially means is that all organizations will have to build certain AI capabilities to be able to compete effectively. For CIOs, this means that not only will they have to start building AI capabilities within their organizations and show their application in real business situations, but they should also have a longer-term strategy for making this happen.

So, to see what AI is able to do, consider Xiaoice (spells as xxx), a chatbot released by Microsoft that can talk and interact with its users. Into this chatbot, Microsoft has brought its best research across its AI technologies related to speech recognition, natural language processing, machine and deep learning, and more. This Chinese celebrity chatbot is learning so fast from its interactions that young Chinese men and women feel comfortable turning to Xiaoice to talk about their issues, heartbreaks, and daily stresses. Many send it love letters, and some have even invited it to dinner in the hope that it (or she) would show up. According to Microsoft, part of the chatbot's popularity stems from the way she exhibits a high emotional quotient (EQ) by remembering parts of a conversation and following up in later conversations. The chatbot is also a poet: it has published a book of poetry and helps Chinese people write poems. To date, it has penned hundreds of millions of poems and is currently hosting a TV morning news show with viewership approaching ONE BILLION viewers. Yes, that's a billion with a B.

In this context, we should also make it clear that AI does not refer to one system or one type of technology – such as a talking or a working robot. Rather, depending on the AI capability that your organization needs, AI refers to a number of systems and technologies that must be assembled and integrated depending on your organization’s business case. So, if you are trying to process large amounts of data and looking for specific patterns and insights in that data or to derive certain types of sentiments to gauge public opinions, you will need a different set of technologies and systems than let’s say if you were putting together a shop floor automation system involving computer vision and robotics or if you were trying to build an intelligent chatbot such as Xiaoice. Therefore, it’s important to always develop the right AI strategy before jumping in. In other episodes, I will accordingly delve deeper into the need to have an enterprise wide AI strategy that must be driven by your organization’s overall business goals, strategies, and objectives.

Broadly speaking, here are some of the key capabilities that make AI attractive to organizations:

  • Accelerates automation
  • Enables derivation of insights not possible through traditional technologies
  • Systems are self-learning – they keep the context of previous insights and keep improving
  • Able to handle and process large amounts of data
  • Accelerates decision making due to the availability of the right insights at the right time
  • Ability to process complex datasets integrated from various enterprise datasets

How big is AI?

To get an idea of how big AI is going to be in the future and its impact on the global economy and organizations, consider the following:

  • A PwC Global Artificial Intelligence Study forecasts that AI will add $15.7 trillion to the global economy by 2030.
  • The US government’s National Science Foundation invests more than $100M annually in AI research.
  • The number of enterprise customers and government agencies deploying AI to dramatically transform their businesses is increasing manifold. Both organizations and governments have started to take notice of AI and how it can be used for their transformation. AWS's public platform, which offers ready-made AI-based services, boasts more than ten thousand customers alone.
  • At a government level, the UAE, for example, sees the adoption of AI as so urgent and important that it has created the top-level position of Minister of State for Artificial Intelligence. Without getting too deep into how the UAE government is structured, a Minister position is part of the Council of Ministers reporting to the Prime Minister of the UAE, which would be comparable to a Cabinet Secretary-level position in the US. We also see many other countries creating high-level government positions overseeing the adoption of AI in both the government and the public sector.
  • At a defense level, we see governments of all advanced countries such as US, China, Russia, and others investing a lot in AI.
  • And the list goes on.

So, we can see that organizations of all types are seeing many benefits in AI. It should be part of every organization's strategy, and CIOs and technology executives should have a clear roadmap for its adoption and for bringing the relevant benefits to their businesses. Although, as we just heard, interest in AI has picked up dramatically, we are still at a stage where organizations have the opportunity to catapult themselves to great success and use their data to bring major transformations to their operations and overall performance. If they wait longer, however, it may be too late.

In this Episode

So, in this episode, we will answer 3 questions.

  1. First, what is AI? This may be quite basic for some, but before we get into more advanced topics in subsequent episodes, it's important that we spend a few minutes getting back to the basics.
  2. We will then review some of the technologies that are associated with AI or mentioned in the context of AI. These terms include Machine Learning, Deep Learning and some others. We will look at those and what they have to offer.
  3. After that we will get into the specific use cases that apply to each of those technologies and see how they are benefitting organizations. This is not a small topic. So, although we will start it in this episode, we will cover more of this in subsequent episodes where we will cover the various strategies that organizations can pursue to bring AI in their organizations.

After we review the answers to these questions, we will be ready in the next episode to delve into the type of overall AI strategy that organizations need to start adopting AI related technologies in their enterprises and to boost their performance.

So, let’s get started and address the first question, which is ‘What is Artificial Intelligence?’

What is Artificial Intelligence?

There are lots of definitions of AI, or Artificial Intelligence, each providing a unique dimension of what AI can do. To put it simply, an AI program is able to learn and do more things over time without being explicitly programmed. That's where the term ML, or Machine Learning, comes from: it's the ability of machines to learn from data and, in doing so, achieve intelligence, or as we like to call it, 'Artificial Intelligence'. That's what distinguishes AI software from regular software.
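To ground this idea of "learning without being explicitly programmed," here is a minimal example in plain Python: a least-squares line fit that infers the relationship between inputs and outputs from example data rather than from a hand-written rule. The numbers are made up for illustration.

```python
def fit_line(xs, ys):
    """Learn a slope and intercept from examples by least squares.
    No hand-written rule says how x maps to y; the program infers
    the relationship from data -- the essence of machine learning."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Training examples where y is roughly 2x + 1 (with some noise)
xs, ys = [1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8]
slope, intercept = fit_line(xs, ys)
print(round(slope, 2), round(intercept, 2))

# The learned model generalizes to an input it never saw:
predicted = slope * 5 + intercept
```

Modern ML systems use far richer models (neural networks with millions of parameters), but the principle is the same: fit parameters to data, then use the fitted model on new inputs.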

AI also refers to a set of technologies in the computer science domain that make computers and machines perform human like activities such as those related to visual perception, recognizing patterns, speech recognition, continuous learning from experience, problem solving, reasoning, making predictions, and others. In some cases, these machines end up doing those tasks even better than humans. For example, using these technologies computers can identify patterns in vast stores of knowledge and information very quickly something that a human mind isn’t able to do. But in other cases, these technologies are still quite far behind, especially when it comes to visual perception, reasoning, learning, and so on.

But regardless, when applied and used adequately, AI and related technologies can help organizations achieve a manifold improvement in their business processes and overall performance.

AI technologies work by using a combination of AI algorithms, lots of data, programming models, and hardware acceleration. Since AI works with a lot of data and needs sophisticated neural network algorithms running in the background executing various models, it increases computation and processing needs, thus necessitating advanced hardware as well. So, for example, if the given AI problem involves a lot of image processing, then GPUs can be used to accelerate the processing. As we know, GPU stands for Graphics Processing Unit; these are hardware units originally designed for graphics functions whose highly parallel architecture has also made them well suited to AI workloads.

So, with the development of these human-like functions in software and hardware, we have seen full-fledged intelligent solutions and machines surface over the past few years, helping organizations solve complex problems. For example, we see the implementation of virtual chatbots, which automatically and seamlessly respond to customer and user inquiries, freeing up agents to focus on answering more difficult questions. We also see manufacturing plants collect large amounts of data about their ongoing operations and then use AI technologies such as ML and DL to look for specific patterns in that data and derive useful business insights, which are then used to improve overall operations by reducing wasteful steps, optimizing processes, cutting costs, and so on.

A quick note about chatbots, as their use is constantly increasing. Chatbots are software agents that let users hold an intelligent conversation with a system, helping them answer questions, fill in forms, place orders, order food, and so on. In many cases, the person may not even realize they are talking to a machine or a piece of software. A number of organizations, including UPS, Macy's, and others, have installed chatbots on their websites to help users with their queries and to let them carry out more advanced tasks. Chatbots can interact intelligently with users who ask questions such as "Where can I order dinner?", "Where can I drop off my package?", or "Where can I buy this perfume?"

What are the different technologies of AI?

Now, we will move on to the second topic, which has to do with the different technologies of AI. AI is usually associated with a number of key technologies. These include, but are not limited to, Machine Learning (ML), Natural Language Processing (NLP), Deep Learning (DL), machine perception, and even robotics. Again, although these technologies have been available for some time, they have become more accessible and practical now thanks to the world's newfound ability to process large volumes of data and the availability of massive computing power through cloud platforms.

Machine Learning – Let’s look at ML first. ML is one of the most important and foundational technologies underlying the field of Artificial Intelligence. It refers to the ability of a machine to behave not through specific programmed instructions (the way computers and software usually work) but through its own learning. Normally, the program starts with a generic algorithm and then builds its logic from the data it is provided. The basic algorithm is programmed as a mathematical representation of the problem one is trying to solve and then uses the available data to train the overall logic of the program. Classical ML typically works on structured data: the algorithm is fed data that is structured and labeled, and it uses that data to learn and improve its performance.
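To make the idea concrete, here is a toy sketch in Python of a classifier that derives its logic entirely from labeled training data rather than hand-coded rules. The data, features, and labels are invented for illustration; a real project would use a library such as scikit-learn.

```python
# Toy supervised learning: the program's "logic" (the centroids) is
# built from labeled data, not from explicitly programmed rules.

def train_centroids(samples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid lies closest to the input."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labeled training data: (feature vector, label) — made up for this sketch.
training = [
    ([1.0, 1.2], "cat"), ([0.9, 1.0], "cat"),
    ([3.0, 3.3], "dog"), ([3.2, 2.9], "dog"),
]
model = train_centroids(training)
print(predict(model, [1.1, 1.1]))  # → cat
print(predict(model, [3.1, 3.0]))  # → dog
```

Feeding the same code different labeled data produces a different classifier, which is exactly the "learning rather than programming" point made above.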

Typical applications of ML include image recognition, where a program becomes better at detecting certain objects such as faces. ML can also be embedded in existing applications such as e-mail; AI-enabled plugins exist today, for example, that filter e-mail based on learned criteria. ML is also widely used in medical diagnosis, where a program scans numerous X-rays or CT scans and assists physicians in diagnosing a number of ailments. And there are numerous such examples.

Deep Learning is another technology related to AI, and DL can be considered a specialized form of ML. DL algorithms are usually applied to the more sophisticated and complex cases in AI. They work by loosely mimicking the human brain, which is made of neurons and in which thinking and reasoning emerge across layers of neural networks. Similarly, in DL, a set of algorithms arranged in multiple layers, known as an Artificial Neural Network (ANN), interprets the data it is fed. With DL, a software program improves itself through these ANNs: data is processed through each layer of artificial neurons, and at each successive layer the network discovers more features and patterns in the data. In summary, DL is inspired by the brain's neural networks, which represent concepts and relationships, perceive and understand context, and make sense of the vast amounts of data presented to them.
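To illustrate the layered processing described above, here is a minimal forward pass through a two-layer artificial neural network in Python. The weights and biases are made up for illustration; in practice they are learned from data, and real systems use frameworks such as TensorFlow or PyTorch.

```python
# A minimal sketch of layered neural processing: each layer transforms
# its input before passing it on to the next layer.

def layer(inputs, weights, biases):
    """One dense layer of artificial neurons with a ReLU activation."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        activation = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(max(0.0, activation))  # ReLU: keep only positive signals
    return outputs

def forward(network, inputs):
    """Propagate an input vector through every layer in turn."""
    for weights, biases in network:
        inputs = layer(inputs, weights, biases)
    return inputs

# A 2-layer network: 2 inputs -> 3 hidden neurons -> 1 output neuron.
# These weights are arbitrary; training would adjust them.
network = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, 0.0]),
    ([[1.0, -1.0, 0.5]], [0.0]),
]
print(forward(network, [1.0, 2.0]))
```

Deep networks simply stack many more such layers, which is what lets them discover progressively richer features in the data.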

To illustrate Deep Learning with an example, some of us may have heard how Google's AlphaGo system was able to beat humans at the game of Go. Go is an abstract strategy game in which two players compete to capture more territory on a board. Using DL techniques, the machine learned by first playing many games against many players and, much like a human, kept learning from its experience, eventually beating many Go masters.

Natural Language Processing is another AI-related technology. NLP takes human-generated text and renders it into a form machines can work with. For example, it detects entities in the text such as people, places, and things, identifies the relationships between those entities, detects sentiments and emotions, extracts keywords, categorizes information, and so on. Such capabilities can be built either by programming all the rules into software or by using machine learning.
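As a toy illustration of two of the NLP capabilities just mentioned, entity detection and sentiment analysis, here is a rule-based sketch in Python. The word lists are hand-picked purely for illustration; production systems learn such capabilities from large corpora using libraries such as spaCy or NLTK.

```python
# Rule-based NLP sketch: find known entities and score sentiment
# by simple word matching.

KNOWN_PLACES = {"paris", "london", "tokyo"}
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"poor", "slow", "terrible"}

def analyze(text):
    words = [w.strip(".,!?").lower() for w in text.split()]
    entities = [w for w in words if w in KNOWN_PLACES]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"entities": entities, "sentiment": sentiment}

print(analyze("The service in Paris was excellent, I love it!"))
# entities: ['paris'], sentiment: 'positive'
```

Programming every rule this way quickly becomes unmanageable for real language, which is why the machine-learning route mentioned above dominates in practice.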

These are only some technologies related to AI. So, depending on your specific use cases, your applications may use ML, DL, along with other technologies such as image recognition, Natural Language Processing, Robotics, and others to give you a full-fledged AI system.

 

What are the specific use cases for AI?

 

Next, we will address the third and final topic of this episode, which is to review the specific use cases of AI to give us an indication of how this technology can benefit organizations. So, here are some of the ways that AI can benefit organizations.

  • AI is used to automate mundane and processing-intensive tasks – Although software has already helped us automate a number of tasks, AI technologies let us automate tasks that are more complex and processing intensive. So, we see AI helping organizations, and society at large, with anything that is data intensive or time and processing intensive.
  • AI is used to perform advanced analytics: analyzing large volumes of data, detecting patterns and trends, and extracting predictive insights. Using ML and DL, for example, AI-driven analytics provides insights that improve a business's operations, uncover the root causes of problems, and much more. This technology is being used in the medical imaging field, where AI-trained systems can scan the data of millions of CT scans and help provide diagnoses.
  • AI programs are also being used to understand natural language and to extract meaning from it. Beyond language, AI programs mimic human capabilities in areas such as image recognition, informed decision-making, deductive reasoning, and inference.
  • One of the most common examples of an AI use case is customer service. Numerous organizations have started to employ AI-assisted systems to improve their customer service operations. Dealing with hundreds of thousands of inquiries can be overwhelming for any business: not only does the organization need more agents to handle the inquiries, the agents also have to be trained on the various types of calls that flow in through online channels or call centers. As the business grows, this problem gets steadily worse, making it an ideal problem for AI systems to solve. An organization can, for example, train AI systems on various customer inquiries, whether in e-mail, chat, or other formats. As the systems learn, they gradually get better at handling a large percentage of those calls, especially the ones with simple, straightforward answers, freeing agents from repetitive rote tasks so they can focus on more complex customer inquiries and problems. Customer service agents can also use these AI systems to research answers drawn from the large body of historical inquiries and other data the systems have access to, giving a major boost to the process of responding to even the most complex inquiries. And when customers get quick, relevant answers, customer satisfaction increases.
  • As another use case, manufacturing organizations use AI systems to help them improve their manufacturing operations. Data fuels the use and success of AI systems, and manufacturing organizations have plenty of it: they can take data from their production history, couple it with ongoing data feeds, and then use various AI technologies to anticipate production problems and prevent them, ensuring a smooth flow of production.
  • We also see AI use cases in banks and financial institutions. For example, some of these organizations are making use of speech recognition through Amazon's Alexa to let customers conduct various banking transactions by voice. Alexa understands the customer's voice and requests, and this natural dialog between a human and a machine allows the customer to carry out various banking transactions. A bank can also provide personalized financial services to each of its customers based on their individual profiles, transaction histories, and other relevant details.
  • Amazon Go is another great example of AI. Amazon Go stores are physical stores recently opened by Amazon and equipped with state-of-the-art AI technologies, which enable customers to shop and check out without an in-store cashier or self-checkout station. Customers shopping at Amazon Go stores download an app on their mobile phones, and this app, along with computer vision, sensors, and deep learning algorithms and software, lets customers navigate the store, pick up items, and essentially walk out of the store without checking out.
  • We are also seeing AI used in the area of new product design. For example, when an AI-enabled system is given enough inputs and models, it can generate numerous simulations and recommend multiple product designs from which engineers can choose. Watson, an AI system from IBM, provides inspiration for new songs and even works of art when fed millions of pieces of music and art. From this alone we can see that product innovation can accelerate in organizations that employ AI systems.
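The customer-service use case above can be sketched as a toy intent matcher in Python: inquiries that match a known intent get a canned answer, and everything else is escalated to a human agent. The intents, keywords, and answers here are invented for illustration; production chatbots use trained NLP models rather than keyword lists.

```python
# Toy inquiry router: answer the simple, repetitive questions
# automatically and escalate the rest to a human agent.

INTENTS = {
    "track_order": (["where", "order"], "You can track your order on the Orders page."),
    "store_hours": (["hours", "open"], "We are open 9am-9pm daily."),
}

def route_inquiry(message):
    words = {w.strip("?.!,") for w in message.lower().split()}
    for intent, (keywords, answer) in INTENTS.items():
        if all(k in words for k in keywords):
            return intent, answer
    return "escalate", "Routing you to a human agent."

print(route_inquiry("Where is my order?"))     # matched intent, canned answer
print(route_inquiry("Can I return this?"))     # no match, escalated to a human
```

Even this crude version shows the economics: every matched intent is a call an agent never has to handle.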

These were only some of the use cases related to AI, but the point was to give an idea of how dramatically AI can help an organization advance its goals. Deciding where to start applying AI is a major strategic decision, and we will review those strategic decisions in another episode of the CIOtechCentral podcast.

Summary and conclusion

So, here are the key points that I want you to take away from today's episode.

  • First, as we discussed, we are already at a point where organizations are experiencing manifold increases in productivity and cost savings in their business operations using AI and its technologies. Many are also using AI in the creation of new products and services giving them a major boost in increasing their revenues. So, if your organization is not actively pursuing an AI strategy, it should get on it right away.
  • Second, as we have seen, with enough data, processing power, and sophistication in AI algorithms, AI will be able to completely transform the way business is conducted. Organizations therefore need to start organizing their data and building scalable digital platforms that can make all of this work together. Every organization, large or small, must have a sound data and technology strategy. A sound data strategy is also needed because the digital world, with its sensors, cameras, and other IoT devices, its APIs interfacing with other computing ecosystems, and more, is creating enormous amounts of data, and all of that data must eventually be harnessed, processed, and used to advance the business and its goals.
  • Finally, as AI can have different types of uses for different organizations, it’s important to have some type of an AI strategy before jumping in blindly. While it may be tempting to jump in to grab some low hanging fruit, it’s essential to develop a longer-term strategy. We will cover that in one of our next episodes in the days to come.

 

 

CIOtechCentral – Intro to the New Podcast on Digital and Information Technologies


Welcome to CIOtechCentral, a podcast that focuses on digital and information technologies. This is the first and introductory episode meant to give you an idea of the topics that will be covered on this podcast.

On this podcast, we will cover the latest digital and Information Technology trends, and their applications within organizations and enterprises of all types and how CIOs and CTOs can advance their organizations’ agendas and strategies using these technologies to position themselves competitively in the marketplace.

Some of the topics that we will cover in this podcast will include the following:

  • Foundations of digital business and best practices required to maximize business outcomes.
  • Business use cases related to digital technologies such as blockchain, IoT, cloud, social media, Artificial Intelligence, and other technologies
  • AI, or Artificial Intelligence, its related technologies and applicability to enterprise performance
  • Developing an AI strategy for the enterprise – What’s needed to get an AI system in the enterprise?
  • How are CIOs' jobs changing in the digital era?
  • Why adopt an open source development model within the enterprise?
  • How can one transform their organization into a smart and intelligent organization?
  • How can organizations adopt design thinking and what benefits can it deliver?
  • Moving ERP systems to the cloud – Dos and Don’ts
  • What is a hybrid cloud? Uses and strategies
  • The strategic value of APIs and microservices
  • Digital currencies – What do they mean for businesses?
  • And other topics like these

I encourage you to subscribe to this podcast to ensure that you are always notified of future episodes as they are published. Also, if you like the content, please take a moment to rate the podcast on iTunes or whatever channel delivers it to you. We highly appreciate your feedback. If you would like to get in touch with me, please find me on LinkedIn and feel free to connect and provide feedback through that channel as well.

Thanks and hope to see you on the future episodes of this show.

 

 

What is DevOps? A Tutorial and Training. DevOps Explained



In this session today, we will review DevOps: its key practices and tools, and the steps one can take to transition to a DevOps environment. Before I begin, please subscribe to my YouTube channel so that you are notified of more learning videos on the latest topics and trends in digital and cloud computing.

Definition of DevOps

So, first let’s get to the definition of DevOps. What is DevOps? Let’s review the concept from a number of dimensions.

  • DevOps refers to a collection of practices and a general philosophy in the area of software development, the overall goal of which is to constantly deliver and deploy high quality software at high velocities. So, it’s not one practice or a methodology but usually a collection of various practices and methodologies.
  • DevOps refers to the concept of software developers and operations staff collaborating throughout the software development and deployment lifecycle to ensure the delivery of quality code to production. It eliminates the silo mentality and the finger pointing that have existed in IT environments for many years. Eliminating the silos between software development and operations allows software developers to understand the complications inherent in running the software they develop in an operations environment, making them sensitive to the stability and reliability concerns that matter in production. Likewise, operations staff, engineers, and system administrators come to understand the complexities of the software build process, making them less critical of "those software developers." A DevOps culture thus instills teamwork to solve issues and empowers teams to make critical decisions, all while keeping everyone focused on the ultimate business outcome: deploying and running quality software that delights customers. And because DevOps brings development and operations teams together, its scope usually encompasses both software development and infrastructure management processes.
  • In an ideal environment, DevOps practices integrate with Agile Software Development Methodologies. Here, we need to understand the relationship between Agile software development and DevOps. To put it simply, Agile software development enables production of quality software quickly. However, even if that’s done there is no way to ensure the integration, testing and deployment of that code rapidly. That’s where DevOps practices take over. So, Agile Software Development approaches together with DevOps enable a fast and rapid software design, development, testing, and deployment of quality software products to production that in turn can have a direct bearing on customer experience and satisfaction.
  • DevOps is also synonymous with the many tools that are used in software delivery and deployment because a key tenet of DevOps has to do with the automation of the various pipelines that integrate software development, testing, deployment in production and monitoring. We will cover some of these tools and their functionality a little later in this presentation.

 

Business Benefits of DevOps

Next, let’s review the overall business benefits of instituting DevOps principles and practices. Most organizations who have successfully implemented DevOps report the following:

  • An accelerated delivery and deployment process – As DevOps brings down the silos between organizations by getting teams in software development and production or operations to collaborate closely throughout the software development and deployment lifecycle, organizations observe a considerable increase in the velocity of quality software development to production. This helps the organization to serve its customers faster and to innovate and test its innovations at a rapid pace. This is a departure from the traditional practices where production staff would become a bottleneck in the deployment of developed software to production to ensure stability of operations. However, as teams in DevOps work as part of one team throughout the overall process, any issues related to software development or operations are addressed during the overall development lifecycle facilitating a fast delivery to production and operations.
  • Higher frequencies of software releases – DevOps practices such as CI and CD, along with automation of the pipelines from software development to deployment, enable organizations to release software to production constantly, or at a minimum keep it ready for deployment. Depending on the size of the organization and the scale of its software development, many report that their release counts have gone up by 50 to 100 times. For example, at one of the recent DevOps conferences, Netflix reported doing thousands of releases on a daily basis, an astounding increase over earlier practices that allowed far fewer releases per week.
  • Automation of repetitive tasks – As we discussed in the earlier point, such a high increase in the frequency of software releases isn’t possible unless various facets of the overall software delivery and deployment pipeline are fully automated. The various steps in this whole pipeline can include code development, integration, testing, security, validation, deployment, monitoring, etc. a number of which can be automated in an organization that institutes DevOps.
  • Stability, security and reliability of deployed software – Automation can ensure that various policies and best practices are reflected in code or scripts minimizing human errors and thus increasing compliance with an organization’s policies thus ensuring stability, security and reliability of a production and operations environment.
  • Better predictability of software release cycles – Due to the automation of the various facets of the overall lifecycle, the organization expects to get better predictability on when certain business functionality can be deployed into production and thus can plan accordingly.
  • Fewer errors in delivered and deployed code – Automation and the constant practices of testing and integration ensure that developed code has fewer errors when run in production.

 

Popular DevOps Practices

Now we will cover some of the popular DevOps practices. Although there are a number of DevOps practices that help organizations rapidly deliver quality software to production and enable its automated monitoring, in the following we will cover four popular practices that are at the foundation of any organization using DevOps.

Continuous integration refers to the practice of developers continuously integrating their newly developed or modified code with the code baseline checked in by others. The keyword here is 'continuous': the goal is to surface any integration defects as early as possible rather than waiting to integrate later in the process. So, as multiple developers produce and update code, they constantly integrate with the main baseline to avoid discovering larger integration problems later. This practice removes potential hurdles from the process and speeds the delivery of software to the production environment.

Continuous delivery ensures that software is constantly readied for release by automating the steps highlighted in continuous integration along with further steps such as unit testing, load testing, integration testing, and API reliability testing. This helps developers discover any issues pre-emptively rather than at a later stage. Whether the software actually gets released to production depends on other factors, including prioritization of the various business functions by the product owner; the deployment to production therefore waits for a manual approval trigger. Through the practice of continuous delivery, high-quality software is ready to be deployed quickly to production, reducing the risk of suboptimal code being released while ensuring speed and fast time to market.

Continuous deployment is similar to the continuous delivery process except that delivery is automated all the way to production, not merely to a staging environment. In general, unless you are confident in fully automated deployment, this practice is not recommended: most organizations prefer to take the process as far as continuous delivery and then have someone perform a final manual check of other dependencies before code is deployed to production. However, depending on your business situation and the maturity of your processes and environment, you can consider instituting this practice with care.
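The distinction between continuous delivery and continuous deployment can be sketched as follows: both run the same automated stages, but delivery stops at a manual approval gate before production, while deployment goes all the way automatically. The stage names here are illustrative only.

```python
# Toy pipeline contrasting continuous delivery with continuous deployment.

STAGES = ["build", "unit_tests", "integration_tests", "stage_release"]

def run_pipeline(auto_deploy=False, approved=False):
    completed = list(STAGES)           # the automated stages always run
    if auto_deploy or approved:        # continuous deployment, or a human said go
        completed.append("deploy_to_production")
    return completed

# Continuous delivery: production-ready, waiting on a manual approval.
print(run_pipeline())                  # stops after stage_release
# Continuous deployment: every passing build reaches production.
print(run_pipeline(auto_deploy=True))  # includes deploy_to_production
```

The only difference between the two models is whether that last step fires automatically, which is exactly why the choice is a maturity and risk decision rather than a tooling one.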

Infrastructure as code facilitates the configuration of infrastructure components such as servers through code. In traditional environments, engineers manually provision and configure servers and apply patches across the various environments such as dev, test, pre-production, and production. As advances in cloud computing now allow engineers to interface with infrastructure through APIs and code, they can provision and configure servers using software, simplifying and accelerating the entire process.
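The core idea of infrastructure as code, a declarative desired state that a tool converges the real environment toward, can be sketched as follows. Real tools such as Terraform, Chef, or Ansible work against live infrastructure APIs; this toy version just diffs two dictionaries of server configurations.

```python
# Toy infrastructure-as-code: the desired state is data, and a tool
# computes the actions needed to make the real environment match it.

desired = {"web-1": {"size": "large", "port": 443},
           "web-2": {"size": "large", "port": 443}}

def converge(actual, desired):
    """Return the actions needed to move `actual` to `desired`."""
    actions = []
    for name, config in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != config:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("destroy", name))
    return actions

actual = {"web-1": {"size": "small", "port": 443}, "db-old": {"size": "large"}}
print(converge(actual, desired))
# → [('update', 'web-1'), ('create', 'web-2'), ('destroy', 'db-old')]
```

Because the desired state lives in version-controlled code, re-running the convergence is idempotent: once the environment matches, the action list is empty.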

 

So, when we look at these practices of team collaboration and lean processes that fuse development, testing, deployment, and monitoring, we can see that they grew out of the Agile and Lean approaches that started a few years ago. So, if you are an IT or technology executive, take this as a cue: regardless of your IT maturity, make sure your teams understand the fundamentals of Agile and Lean, as that knowledge can help them formulate your organization's processes.

DevOps Tools

Now we will look at DevOps tools. These tools span a number of areas, including the building and compiling of software, testing, configuration management, application deployment, monitoring, and version control. Other tools are used in the areas of continuous integration, continuous delivery, and continuous deployment. Together with the emergence of virtualization, these tools allow organizations to deploy digital services to the business quickly.

Basically, within DevOps, the tools are needed for a wide variety of activities and functions some of which are the following:

  1. Building and provisioning of servers
  2. Virtual infrastructure provisioning – This refers to using APIs and other tools to help you provision other parts of the infrastructure either in your cloud environment or public cloud environments such as Amazon’s AWS and others.
  3. Building code – These tools compile code into executable components.
  4. Maintain source code repositories
  5. Configuration management – These tools facilitate configuration of development environment and servers.
  6. Testing and automation – These tools automate unit, integration, and other tests so they run on every change in the pipeline.
  7. Version control – This ensures that all code history is maintained in the repositories using version numbers. With numerous developers checking code in and out, tools can track previous histories using automated versioning. Also, if there is ever a need to revert to a previous version of functioning software in a production environment, these tools make that very easy.
  8. Pipeline orchestration – These tools orchestrate the entire process from the time software is ready for deployment all the way to deployment. There are other tools that provide complete visibility from the beginning to the end.
  9. And so on.

 

Over the past few years, a number of tools have surfaced with multiple features that are too numerous to mention but here we will cover some with their primary features. With time, these tools mature and incorporate additional functionality. Here is an overview of some of those tools.

  • Jenkins – One of the most common and popular tools; it addresses various facets of both continuous integration and continuous delivery.
  • Vagrant helps DevOps teams create and configure lightweight, development environments. This falls under the Infrastructure as Code and essentially lets developers create a single file for projects where they can describe the type of machine they want, the software that needs to be installed, and access rules for the machine. Vagrant then uses that to provision development environments.
  • Splunk – This tool provides operational intelligence to the teams and is based on data analytics.
  • Nagios – Monitors the infrastructure components such as applications, services, operating systems, network protocols, system metrics, and network infrastructure.
  • Chef is another popular tool that turns infrastructure into code so that users easily and quickly can adapt to changing business needs.
  • Docker – An open platform that allows DevOps teams to build, ship, and run distributed containerized applications.
  • Artifactory is a universal code repository manager that supports software packages created in any language or technology.
  • JIRA – This is one of the very popular tools used by Agile teams. This tool is used by DevOps teams for issue and project tracking.
  • ProductionMap is another popular tool with advanced orchestrator and development features. This tool enables teams to develop and execute complex automation on a large scale of servers and hybrid technologies.
  • Ansible is a DevOps tool for automating your entire application lifecycle. Ansible is designed for collaboration and makes it much easier for DevOps teams to scale automation, manage complex deployments, and speed productivity.

 

If you work with public cloud frameworks such as Amazon's AWS or Microsoft's Azure, then you will have to integrate with their specific tools and solutions. For example, in the AWS world, you have access to the following (there are others as well, but we will cover the key ones here):

  • AWS CodePipeline – This service addresses both the continuous integration and continuous delivery practices and, when configured properly according to your workflows, enables a complete, smooth DevOps pipeline.
  • AWS CodeBuild – As the name suggests, this service builds software from your repositories and performs testing. It also means you don't have to worry about provisioning build servers and the like, as that is taken care of in the background.
  • AWS CodeDeploy – This AWS service automates the deployment of code to AWS server instances, including production.

 

DevOps Transitioning

Finally, we will discuss some of the steps that organizations can take to institute DevOps in their environments.

  • Start Small – First start small. Start with a small project and get it through a continuous integration and delivery pipeline and then to deployment. So, essentially get your teams to understand the technicalities of instituting a DevOps pipeline from planning and development of code to deployment, monitoring and collecting feedback.
  • Focus on the cultural aspects – Also, in parallel, start to focus on the cultural aspects. That's very important. DevOps is not merely about getting a bunch of tools and making them work. If the cultural aspects that we discussed earlier, such as collaboration and bringing down silos, are not taken care of, the effort won't yield fruitful results.
  • Define a workflow specific to your environment – Next, as you start to mature gradually and start to piece together various tools to support the development and deployment processes in your environment, define a specific workflow or workflows that would be appropriate for your software environment. So, for example, you may have multiple and hybrid development environments related to Docker, Kubernetes, legacy applications, working in public cloud environments, and more. Ensure that your defined workflows will work to support all those scenarios.
  • Select the right tools to define your workflow – Depending on the workflows that you define, you will need to then ensure that you pick the right tools that integrate tightly to form an integrated DevOps pipeline. That is essential to ensure maximum automation that at the end will help you achieve high velocity development, delivery, and deployment.
  • Establish business-level metrics and measure maturity over time – Finally, institute the right metrics so that you can measure your organization's maturity over time in terms of delivering more software, the quality of deployed software, and so on.
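As a sketch of the kind of business-level metrics this last step suggests, here is a small Python example computing deployment frequency and change failure rate from a release log. The log format and the numbers are invented for illustration.

```python
# Two common DevOps maturity metrics computed from a simple release log.
from datetime import date

releases = [
    {"day": date(2024, 1, 2),  "failed": False},
    {"day": date(2024, 1, 9),  "failed": True},
    {"day": date(2024, 1, 16), "failed": False},
    {"day": date(2024, 1, 23), "failed": False},
]

def deployment_frequency(releases, period_days=28):
    """Average number of releases per week over the period."""
    return len(releases) / (period_days / 7)

def change_failure_rate(releases):
    """Fraction of releases that caused a failure in production."""
    return sum(r["failed"] for r in releases) / len(releases)

print(deployment_frequency(releases))  # → 1.0 release per week
print(change_failure_rate(releases))   # → 0.25
```

Tracking such numbers release over release is what turns "are we maturing?" from a feeling into a measurable trend.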

 

With this we come to the conclusion of this presentation. To learn more on other topics, ensure to subscribe to my YouTube channel where I post a number of best practices related to the area of digital transformation.


Six Cloud Migration Strategies (Based on Gartner and Amazon Methodologies)


This post discusses the six cloud migration strategies (Based on Gartner and Amazon Methodologies).

In today’s episode I will cover the six fundamental migration strategies that organizations have at their disposal when migrating to the cloud. These strategies are based on Gartner’s research and on the work Amazon has done helping its customers migrate to the cloud; both discuss them extensively on their blogs and websites. As a technology executive, if you are still in the early phases of your cloud migration journey, a review of these strategies can help you develop the right mental models, which in turn can guide your own migration and digital transformation journey.

So, let’s get started.

One of the key phases that every organization goes through when migrating its legacy systems to the cloud is the discovery process. In this phase, the organization takes a detailed inventory of its systems and then decides, one by one, on the effort and cost required for the migration. This step is usually done keeping the overall business case and objectives of the migration in perspective. For each application and system in its inventory, the organization may decide on a specific migration strategy or approach. We will discuss those strategies next.

Re-hosting – The first strategy is re-hosting. Also referred to as lift and shift, it involves migrating a system or application as is to the new cloud environment. The focus is to make as few changes to the underlying system as possible. During the discovery process, systems that qualify for such a migration are usually considered quick wins, as they can be migrated with minimal cost and effort. However, because the migration is a simple lift and shift, such a system isn’t expected to utilize cloud-native features and thus isn’t optimized to run in a cloud environment. Depending on the system, it may even be more expensive to run the migrated system on the cloud. These issues should be considered before categorizing a system for this type of migration.

Refactoring – Refactoring is the second migration strategy and falls on the other extreme of the migration effort, because it requires a complete reengineering of the system or application logic. When complete, however, the application is fully optimized to utilize cloud-native features. So, even though the cost and effort required for this migration can be quite high, in the long run this approach can be efficient and cost effective. A typical example of refactoring is changing a mainframe-based monolithic application into a microservices-based architecture. When categorizing an application for refactoring, the business should perform a detailed business case analysis to justify the cost, effort, and potential business impact, and to ensure that other alternatives are considered as well.

Replatforming – This type of migration is similar to re-hosting but requires a few changes to the application. Amazon’s AWS team refers to this approach as lift-tinker-and-shift. Even though it closely resembles re-hosting, it’s categorized differently simply because it requires some changes. For example, in such a migration an organization may plug its application into a new database system on the cloud, or change its web server from a proprietary product such as WebLogic to Apache Tomcat, an open-source alternative. For planning purposes it’s important to categorize it as such: if a system is going to be changed, even slightly, it may need to go through more thorough re-testing.

Repurchasing – This migration strategy essentially entails replacing the legacy application with a new but similar application on the cloud. Migrating to a SaaS-based system is a typical example, such as when an organization decides to move from its legacy financial system to a SaaS-based financial ERP system.

Retire – The fifth strategy is about retiring systems and applications that an organization no longer needs. During the discovery process, an organization may find applications as part of its inventory that are no longer actively used or have limited use. In such cases, those types of applications may be considered for retirement and users of those systems (if any) can be provided other alternatives.

Retain – In some cases, the organization may decide not to touch certain applications and systems and to postpone their migration to a later date. This may be because the applications are too critical to be touched at that point in time, or because they require a more thorough business case analysis. Either way, it’s normal for organizations to leave some applications and systems untouched during their cloud migration efforts. However, in certain cases such as a data center migration, organizations may not have a choice and will have to apply one of the earlier described strategies.
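To make the discovery exercise concrete, here is a minimal Python sketch that assigns one of the six strategies to each application in an inventory based on a few discovery attributes. The application names, attribute keys, and decision order are hypothetical illustrations, not a prescribed methodology.

```python
def categorize(app):
    """Assign a migration strategy from a few illustrative discovery attributes."""
    if not app.get("actively_used", True):
        return "retire"          # no longer needed
    if app.get("too_critical_to_touch"):
        return "retain"          # postpone migration
    if app.get("saas_alternative"):
        return "repurchase"      # switch to a similar cloud product
    if app.get("rewrite_justified"):
        return "refactor"        # full reengineering for cloud-native features
    if app.get("minor_changes_needed"):
        return "replatform"      # lift-tinker-and-shift
    return "rehost"              # plain lift and shift

inventory = [
    {"name": "old-reporting-tool", "actively_used": False},
    {"name": "core-billing", "too_critical_to_touch": True},
    {"name": "finance-erp", "saas_alternative": True},
    {"name": "mainframe-orders", "rewrite_justified": True},
    {"name": "intranet-app", "minor_changes_needed": True},
    {"name": "static-website"},
]

for app in inventory:
    print(app["name"], "->", categorize(app))
```

In practice each branch would be backed by the cost, effort, and business-case analysis described above rather than a single boolean flag.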

To conclude, although the strategies that I have covered address most of the common cloud migration scenarios, as a technology executive you can devise other categories based on your business needs. Defining these migration categories and their criteria upfront can be a major and helpful step to aid in the migration of one’s legacy systems to the cloud.

Hope this session was useful. Again, to ensure that you don’t miss any future episodes, do subscribe to this channel.


Digital Transformation Roadmap and Customer Experience Management


A great webinar that discusses the ways your customers touch your business, along with an example of a company’s digital business transformation and how they began their process.

— Transcript of the video

We have all experienced the ways in which we become frustrated in our interactions with the businesses that we do business with and, as you see with this slide, it talks about how customer experience management is high on the list of many top executives. We all know from history that the cost of sales is high, whereas if we can keep a client with us, in other words retention, the profitability from that customer increases dramatically. And so, how do we continue to cause that individual to be loyal, right, that stickiness… and then it’s about quality of data, right.

So, using and understanding a master data management strategy so that we have that golden record that really identifies the fact that Jim Marrsolla is the same as Jay Mars Ola, the same as JH Mars Ola, the same as James Mars Ola. Instead of being four different people within your system, with maybe two or three different email addresses, I’m identified as one individual. And my attributes regarding how I (or customers) want to be communicated with in interacting with your business are also consolidated. And then last is the fact that with this direction around customer experience management, those that… I wouldn’t say have cracked the code, because there is always the element of continuous improvement… the leaders really have greater loyalty. With that loyalty comes more buying, right; your share of the wallet is increasing, your ability to go deeper and wider within that customer’s ability to purchase additional products or services from you, and, as importantly, the willingness to forgive, right. Everybody makes mistakes, and if I’ve got a trusted relationship with the business, or the business has with me, chances are I’m going to forgive them if they have a miscue. And so, these are the things that we’re talking about as we start to dive into this area of digital transformation and digital business transformation.
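The golden-record idea above can be sketched with simple fuzzy string matching. The sketch below uses Python's difflib to score the name variants against an assumed master record; the master name, the normalization, and the 0.7 threshold are illustrative assumptions only, and real master data management tools use far more robust matching.

```python
from difflib import SequenceMatcher

# The four variants from the talk, as they might appear across siloed systems.
variants = ["Jim Marrsolla", "Jay Mars Ola", "JH Mars Ola", "James Mars Ola"]
golden = "James Marsola"  # hypothetical master ("golden") record

def similarity(a, b):
    """Rough string similarity, ignoring case and spaces."""
    norm = lambda s: s.lower().replace(" ", "")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

for v in variants:
    match = similarity(v, golden) > 0.7  # illustrative threshold
    print(f"{v!r} matches golden record: {match}")
```

All four variants score above the threshold, while an unrelated name such as "John Smith" does not; consolidating them under one customer id is what lets their preferences and attributes be merged.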

Now, for maybe the last 18 to 24 months we have started hearing a lot about digital transformation, digital business transformation, new digital business models, and the like. So, the question that always comes up is, what is digital transformation? As you poll different people, different things come up. When you ask somebody, ‘hey, is your business digital? Is your company going through a digital transformation metamorphosis?’, you’ll get responses back like: well, yeah, absolutely, we’ve got a responsive website so that whatever device you go on, you’re going to have an experience associated with that device. And we really have done a lot in the last 12 months, or a couple of years, around social CRM, or the way that we’re doing true tag management on our website, optimizing our search engine optimization around copywriting, and we’re really believers in content marketing.

And so, depending on who you’re talking to, you start to hear these different, isolated areas of digital transformation. I would challenge the thinking with something a little bit different: to us, digital transformation is really about understanding the customer and becoming a customer advocate. Our view is to really flip the model and think from the customer out, aligning your digital technologies, all of your systems, and your projects around the customer’s experience, and not the other way around. For instance, putting a system in and then coming to realize, you know, we didn’t count on how the customer is going to interact with this eCommerce system, or how they’re going to make a payment, and now we have to modify it and, in our own minds, justify why this is a good thing for the customer to go through; as opposed to engineering from the customer’s involvement with the business and rearranging the organization to support that. And so, the reason why we believe that the customer experience is king is because most people in their business haven’t connected the dots on a regular basis on the goldmine that comes from your customer. Because, as this slide shows, your customer really is in a position to help with customer service; there are ways that they understand how your product or service works, maybe better than you do. You could have an open community where you’re starting to see others inside your customer base answer questions and solve problems.

You also have the way of looking at them, or interacting with them, to have them give insights into what additions might be necessary, so they can be a forerunner of your research and development. Of course, advertisement, right: the Nike swoosh, or whatever the Fitbit might be; all of a sudden those items become an advertisement. At the same time somebody is out blogging, somebody is out socializing what it is that you made or the service you provided. That’s the one that everybody always seems to latch on to, because it’s the most obvious. The next one is maybe from a designer perspective: how are they modifying your product or service, what are they adding to it, that’s making it unique in their own way.

And then finally, there is really the connection of the various additional products that might link together from different sectors or different areas to create a brand new category around your product and service. And so, these are areas where, if we’ve got a strong customer engagement and experience process, we really start to see that our customers truly are capital, and an investment for our business.

So, with that said, let’s talk about the ways that your customers are touching your business; these areas that, if taken holistically, can really change the way that you’re interacting with your customer and creating a customer experience. If dealt with in isolation, which is how most of us are dealing with them today, we end up with siloed information and disconnected customers, and they’re on the verge of staying with us only until they find somebody that is going to holistically work with them and engage them on that basis. So, the first thing that comes to mind, again, is our website. The web is really our initial interaction with our customer. Depending on what report or story you want to read, anywhere between 40 to 80 percent of us, I’ve heard it as high as 90, but let’s keep it in that spread, go online and start to investigate, evaluate, and begin our sales and information process about what we want to purchase, who we want to do business with, and what service we want, from somebody’s website or a web search. And so, from that, you can see here the number of ways that we can capture data and have an interaction with our prospects or our customers on that basis. Every interaction leaves a data trail; so be mindful that with the website there’s a rich amount of information that, depending on how we’re integrating it into our organization, can be quite useful.

The second one, which I mentioned earlier, is mobile. We talked about how everyone from ages 8 to 88 in North America is using mobile devices and interactions. And so, from that, what is available is really location analytics and how somebody is interacting. So, where do mobile apps play within our ability to create a unique customer experience for our customers and our prospects? Mobility becomes a very strong opportunity to create a competitive advantage for us, and also a loyalty opportunity for our customers. Again, social we talked about a little earlier: there’s the listening side of it, the engagement; more and more organizations are looking at ways to use it for customer support, and so social becomes another mechanism for the interactions and the touch points that a customer is going to have with our business. I mentioned earlier the supply chain: our suppliers, how are they engaging with us? Are we allowing them ways to interact with our business that are convenient for them, or is that on our terms because that’s what’s helpful for us in the way that we’ve constructed our systems? Again, we have the ability to look at mobile or other apps that can be extended to support our partners as part of our customer experience management direction. Then look at organizations that have locations: if you’re, let’s say, a financial institution, a retail location, a hospitality business, a hospital, a clinic; any area that has locations where you’re doing physical interactions with your customer really becomes an opportunity to start looking at location analytics, and at ways that you can engage and interact with a customer as they’re coming into your location or even as they’re moving around it.

When I’m out, I’m no different than anybody else; even though I’m not one that specifies a lot of different types of Starbucks coffees, I like it straight up and black, I will get a text on my phone of Starbucks that are in the area. So, location analytics are ways that we can have a different experience with our customer base. And to that extent, when we look at our locations, there are these opportunities once again for the interactions and the capturing of the data trails that are associated with those interactions at that location. And you may notice, if you’re catching it, that a lot of these are mobile apps, or have a mobile capability associated with them. From this location interaction we can start to drive loyalty, we can sift information from point-of-sale interactions if we are actually taking credit cards or payments, and start to enrich the data on how we’re associating the customer with the actions and reactions that they’re taking with our business.

The next one is traditional marketing; so marketing in the sense of how we’re doing advertising, media buying, and our email interaction. There’s still evidence coming out on almost a daily, if not weekly, basis talking about the value of email and how to interact and exchange information with it. And, I’m sure you’re like me: once you submit your information to one list, funny how it seems to multiply like rabbits, and all of a sudden I’ve got half a dozen or a dozen more emails coming in just because of one download of one white paper. So, email is still a very big connection in regard to marketing, but there are also so many other facets of how marketing is working to engage with the customer. More times than not, marketing has been isolated as the group that needs to own the customer experience. And yet, as we look, digital transformation far exceeds just what marketing is doing, and they really need some help from the organization to make effective what they’re promoting about the business, the products, and the services.

Along with that is sales, right: there is the opportunity to generate interest, and then there’s the opportunity to close, or to win the business; that’s where sales comes in. All of those interactions, whether on an e-commerce basis, direct selling, or indirect selling through channel partners, are nuggets of information that are all creating an experience for the customer or the prospect, shaping how they look at the way your organization engages with them. Sort of like a courtship, right: how I’m being treated as a prospect can give me a glimpse of how I’m going to be treated as a customer after the fact. And so, that’s always what’s running through our minds on why we may or may not want to do business with an organization. An area that a lot of us forget is accounting, and how we interact with the invoices, the bills, and the payments that come about when we’re interacting with an organization.

One of the big ones that I put on here is fraud prevention, and the use of big data in that area. I travel a lot, and if I haven’t remembered to let Chase, or B of A, or Wells Fargo know that I’m going to be in a particular location for a period of time, or I forget because their window is about three months, all of a sudden I swipe a credit card and the next thing I know I’ve got a text coming through, an email coming through, and a phone call coming through from one of the banks in regard to fraud detection, cancelling or denying a transaction. So, all of that creates an experience of how I need to react or interact with the accounting department, or with the invoicing area; in this case, my banking relationships.
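As a toy illustration of that location-based fraud check, the sketch below flags a card swipe that happens far from any location the bank has on file for the customer. The coordinates, the 500 km threshold, and the single-location profile are made-up assumptions; real fraud systems combine many more signals.

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km: mean Earth radius

# Hypothetical locations the customer has told the bank about.
known_locations = [(41.88, -87.63)]  # e.g. home city on file (Chicago)

def is_suspicious(swipe, threshold_km=500):
    """Flag a swipe that is far from every known location."""
    return all(distance_km(swipe, loc) > threshold_km for loc in known_locations)

print(is_suspicious((41.9, -87.6)))   # swipe near home
print(is_suspicious((48.86, 2.35)))   # swipe in Paris, not on file
```

Notifying the bank of travel effectively appends a temporary entry to `known_locations`, which is why the texts and calls stop once you do.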

The next one that seems to have a life of its own, but is so involved with all of our day-to-day actions, is the call center. Whether it’s through social, chat, or telephone, if I go to the website there is the opportunity to get service help and the opportunity for the business to do an upsell or cross-sell. How many times now, at the close of 2015, with Fortune 500 companies that you know of specifically, in certain service industries, whether it’s banks, your mobile phone company, or your internet service provider, do you make a phone call, key in your data, and then get transferred to a live operator? And what’s the first thing that they want to do? Maybe they’ll ask for your PIN or identifier, and in those few cases your information is pulled up and they’ve identified you. But more times than not, I’m still finding that I have to go through the routine of giving information back to them that I just plugged into their IVR system. And I’ve learned by now to no longer ask the question, ‘why did I have to put this into your system only for you to have to ask me that question again?’, because the answers come back exactly the same; the scripts that a call center is given are exactly the same. You know, ‘I’m sorry, Mr. Mars Ola, for the inconvenience’, and I realize that they’re walking through their scripts, and the information that I’m trying to share won’t get passed on beyond their apologies and their scripts. But, again, it’s an opportunity where, as a business, we’ve got the capability of aligning and linking this information together, if we start to think about the customer experience and start to work on knitting together all of this information from the customer advocate’s perspective out.
And then the last one is the investors, or maybe donors. If you’re a publicly traded company, you’ve got… we talked about mailing lists; maybe you’re a not-for-profit organization and you want to have communications out to your donors. There’s a lot of rich information on the ways that they’re interacting with your organization, and on how you want to be able to capture that information and communicate back out to them.

So, really the whole point behind all of this is centralizing the data. Because at the center of it, the way that this data is brought together allows your business to create an environment so that when the customer engages with your organization, that information is centralized, and those that are interacting with the customer have access to the data, so that they can have an intelligent reaction, conversation, or interaction with the customer on that basis. Again, the key is around customer experience management, the customer experience, and loyalty.
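A minimal sketch of that centralization idea: fold records from each siloed touch point into one view per customer, keyed by a shared customer id. The silo names and fields below are hypothetical; in practice the shared id would come from the master data golden record discussed earlier.

```python
from collections import defaultdict

# Hypothetical records held by three separate silos.
web = [{"customer_id": 1, "pages_viewed": 12}]
call_center = [{"customer_id": 1, "open_tickets": 2}]
sales = [{"customer_id": 1, "lifetime_value": 5400.0}]

def centralize(*sources):
    """Fold every silo's records into a single view per customer id."""
    view = defaultdict(dict)
    for source in sources:
        for record in source:
            cid = record["customer_id"]
            view[cid].update({k: v for k, v in record.items() if k != "customer_id"})
    return dict(view)

profile = centralize(web, call_center, sales)
print(profile[1])
```

With that single view, the call center agent, the marketer, and the accountant are all looking at the same customer rather than three disconnected fragments.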

We talked about early on that the more loyal the customer, the more money, the more profit, and the more opportunity we have to expand the services with that particular customer. So, let’s transition now and talk about the framework for what we think is a good model and a good place to start, and you’ll notice here I put in parentheses ‘use a scrum based model’. If you’re not familiar with scrum, you may know the term agile; the model around that is very simple. We’re not going to talk about specifics around scrum; we can set that aside, and if you’re interested I’m happy to go into another discussion and we can actually do a webinar specifically on how to use scrum for digital transformation, which might be a good topic. But the concept around scrum is to chunk things out: to do things in a 1 to 4 week model and get something useful out into the users’, or in this case the customers’, hands, so that there’s continuous progress and continuous improvement being made on how you’re going to do your transformation.

So, where’s the beginning? The beginning is really organizational, and what many businesses need to understand is that this concept of digital transformation, if you’re looking at it from a customer-centric perspective, has to be strategic. So, from the executives all the way on down, like any project, it needs to be embraced and inculcated within and throughout the organization. Strategically, the organization has to start to understand that this is what’s right, good, or best for the business; and there’s enough data to prove that this should be the direction for your organization, which I would imagine is the reason why you’re on this workshop. Secondly is the business itself, right: starting to align the business’s thinking that we’re a holistic organization, that we’re building this around a 360 degree view of the customer and how the customer interacts with us, walking through the journey maps of that customer. That leads to the next pillar: the organizations inside the business that are being touched, the departments being affected by the journey of the customer. And then how are those systems, which is the last pillar, the technical side?

So, where does the technology of our organization fit? Where do these systems play? How are we dealing with them today? Where do we need to be tomorrow to start bringing that truly 360 degree view, as well as the key performance indicators that will model our business, so that technically and organizationally the business starts to reflect a strategy on which we want to drive our organization in this digital transformation? And so, those three, what we call pillars of transformation, really revolve around the strategy. You have to know what you want to do, where you want to go, and start to look at what the roadmap or the journey map is for that.

And so, the second part is something that a lot of us are very good at as businesses. We’re very good at projects; we’re very good at, ‘Hey, we’ve got a problem, here’s what we’re going to do to solve it…’, and we have a project, we wrap services and energy around it, and we go. And then the last is the operation: how do we keep this up and running, how do we make this work for the business? And I think that that is a solid idea; what we recommend is that you’ve got to start with the strategy and the roadmap before you really look at the projects and the operational sides that reflect how you want to get to where you want to go.

So, step number one: if you remember when we talked about the touch points, the very last slide was about the data. And it really is all about the data, because if the data is off and there’s misalignment, then we start to really waste money, and possibly irritate our customers with bad data, with the wrong information being sent to them. I received a call yesterday from an organization that is in the systems integration business, happens to have three letters in its name, and somebody from their team called up asking if we would be interested in doing business with them and having them come in and do services for us. And I asked the young man, do you know what business we’re in, what kind of work our business does?

And, quite frankly, he didn’t; he made something up, and I explained to him, in a friendly way, just to help him realize that he may want to know a little bit more about the customer or the prospect that he’s calling on. Then, almost the very next day, I received a call from somebody at a real estate company wanting to do a valuation of my home, and wanting to do it for free. And when I asked the young lady, ‘In this day and age with zillow.com, why would I need your organization to do a valuation of my home if I ever do want to sell it?’, she said, ‘Well, quite frankly, I don’t know what zillow.com is.’ And I said, ‘You are in the real estate business and you don’t know what Zillow is?’ And she said, ‘Well, I was hired to do a specific task and a specific job, so I really don’t know anything about that.’ I share those stories because within all these organizations we have a ton of data, we have a lot of information, but we’re not translating it in ways that are useful; there’s so much disconnected data.

Remember the wheel, and remember all those touch points? More times than not, those are all isolated databases with information that is associated with the accounting department, or with the marketing department, or maybe the call center; or there’s another group that’s responsible for sales and is isolating this information. It gets used by the teams within that area, but when marketing sends something out in a promotion, it’s not necessarily connected with the call center, or it’s not necessarily made available to the partners downstream, or maybe accounting isn’t aware that this promotion has been sent out. So, there’s a disconnectedness when the organization isn’t sharing and linking this information together.

So, that’s why our very first place to start is with the data, and to look at centralization of the information. The second thing is to really look at the external data. Now that we have our internal data and know where that comes into play, how are we dealing with the external data? We have, as I mentioned, information that may be on our website, but what about our social approach? Where are we, and how are we using that information and bringing it in? What about the areas of enrichment? In other words, if there is demographic information that we can pull in, there are other sources of data with which we can enrich the information, the knowledge, or the preferences that we have on our customers within our organization.

– End


Bitcoin Cryptocurrency Outlook and Trends for 2018, 2019 and Beyond


This video reviews the positive and negative forces that are pulling on the Bitcoin cryptocurrency and how those forces could potentially shape its future. As the market for cryptos in general, and Bitcoin specifically, is extremely volatile and subject to enormous speculation and other unknowns, you should watch other, more recent videos and read recent articles before making any critical decisions. This video also reviews the pros and cons of Bitcoin as it relates to the blockchain technology.

Transcript follows:

Can Bitcoin and Cryptocurrencies Survive the year 2018 and Beyond?

This is another session on Blockchain and Cryptocurrencies. If you like this video, please subscribe to this channel and connect with me on LinkedIn at the link at the end of this presentation.

The focus of this video session is more on Bitcoin, because it has been around much longer than the other cryptos, has a much larger market capitalization, and is thus under increased scrutiny by people and governments alike.

In this video, I will be reviewing the positive and negative forces that are pulling on the Bitcoin cryptocurrency and how those forces could potentially shape its future. As the market for cryptos in general, and Bitcoin specifically, is extremely volatile and subject to enormous speculation and other unknowns, you should watch other, more recent videos and read recent articles before making any critical decisions.

As cryptos in general are inserting themselves into the middle of a relatively stable financial system, they are facing concerned and strange looks from regulators, bankers, governments, and other entities worried about any potential impact on the world’s financial systems.

Let’s now review some of the positive and negative forces that are impacting Bitcoin and that could shape its future. For more information on Bitcoin and the blockchain technology behind it, you can watch other videos on CIOTechCentral.com and subscribe to this video channel on YouTube.

Positive Forces

Let’s first review the positive forces that are shaping Bitcoin.

High Market Cap: First, if we look at the market capitalization of cryptocurrencies in general and Bitcoin specifically, it has reached staggering levels. Bitcoin, for example, lately surpassed a market cap of more than a quarter of a trillion dollars, and even after the recent corrections it exceeds $150 billion. In fact, the market cap for cryptos has reached levels where governments have started to take notice and are concerned about how their future success or failure could impact the existing financial systems and markets. This high market cap is largely due to the increasing number of merchants and customers that have started to trade in Bitcoin, and is also influenced by the upbeat sentiment of a large number of investors.

Investor Trust: The second positive force propelling Bitcoin and other cryptocurrencies has to do with investor trust. A number of investors and cryptocurrency believers are of the opinion that once institutional investors get into the game, the price of Bitcoin will scale to new heights in the high five and even six figures. Many of us have heard predictions of Bitcoin hitting price levels of $100,000 in the next two years. However, that’s a big if, and while the start of futures trading by a number of Wall Street firms such as Morgan Stanley and Goldman Sachs shows potential on that front, its overall effect largely remains to be seen. Another indication of high investor trust is that despite a stream of bad news surfacing in various corners of the globe, most of it having to do with governments coming down hard on Bitcoin and other cryptocurrencies, the price of Bitcoin as of this date continues to hover around $2,000. This is indicative of the high trust that a certain segment of the population continues to place in the future viability of cryptos.

Utility: Another positive force related to the popularity of Bitcoin has to do with its utility. Bitcoin has been accepted as a digital medium of exchange in commercial transactions for some time now. Many merchants have enabled the ability to accept Bitcoin as a currency, and the trend continues to rise. Merchants that accept Bitcoin include Overstock.com, Expedia, Dish Network, Microsoft, and many others. Lately, Mark Cuban, the owner of the Dallas Mavericks basketball team, also announced that the team will start accepting Bitcoin for ticket sales.

Valuation: Another factor related to the popularity of Bitcoin has to do with its valuation. Although many skeptics question the potential value of any digital asset, others make the flip argument that the scarcity and utility of an asset give it value, and that includes digital assets such as Bitcoin. Given that Bitcoin has a fixed limit of 21 million coins, and its utility is on the rise, supporters contend that this will not only prevent Bitcoin from being devalued in the future but will continue to increase its value.
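The fixed limit of 21 million coins follows directly from Bitcoin's published issuance rules: the block subsidy starts at 50 BTC and halves every 210,000 blocks. A quick sketch (the constants are the protocol's published values; the function name is my own) shows why the total supply can never reach 21 million:

```python
SATOSHI = 100_000_000          # satoshis per bitcoin
HALVING_INTERVAL = 210_000     # blocks between subsidy halvings

def total_supply():
    subsidy = 50 * SATOSHI     # initial per-block subsidy, in satoshis
    total = 0
    while subsidy > 0:
        total += subsidy * HALVING_INTERVAL
        subsidy //= 2          # integer halving, as in the Bitcoin protocol
    return total / SATOSHI     # back to whole bitcoins

print(total_supply())          # 20999999.9769 — just under 21 million
```

The integer division mirrors the protocol's behavior: the subsidy eventually rounds down to zero, capping issuance slightly below 21 million coins.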

Negative Forces

Now, let’s review some of the negative forces and influences bearing down on cryptos in general and Bitcoin more specifically. These forces are being discussed even more these days as all cryptos seem to be going through another phase of market correction.

First use case of the blockchain technology: One criticism specific to Bitcoin is that, as one of the first use cases of blockchain technology, it carries a number of technical drawbacks. These include the long time it takes to process transaction blocks and add them to the blockchain, the high energy consumed in processing those transactions, and so on. As more technological breakthroughs surface over time, newer cryptocurrencies are coming up with better and more efficient ways of implementing blockchain technology. This raises the question that if cryptocurrencies are going to survive for the long run, which specific crypto will make it ahead of the others. For example, the Ethereum blockchain, which issues the Ether cryptocurrency, is known to implement blockchain technology more efficiently, and investors seem quite bullish on Ethereum.

Increased Regulation: Another negative force bearing down on cryptos is encroaching regulation. Lately, talk of regulating cryptocurrencies has gained considerable momentum, and we have seen crackdowns on cryptocurrencies in a number of countries. Increased regulation will obviously make it more difficult for investors and users to exchange cryptocurrencies for their fiat counterparts and vice versa, and that will affect the acceptance of cryptocurrencies.

South Korea, for example, has been taking a number of steps to stem the use of cryptocurrencies. In 2017, it moved to stop further ICOs, or Initial Coin Offerings. Then, starting in 2018, the South Korean government began taking steps to stop cryptocurrency trading altogether. Although the legislation is still in the works, it seems the government may follow through on its commitment to ban the use of cryptocurrencies. Such measures taken by any government can lead to a loss of trust in cryptocurrency as a potential asset, and as more countries move to take similar measures, investor trust in such currencies could erode further.

China and Russia, too, have been taking similar steps. China, for example, has started to stem the proliferation of cryptocurrency miners and, like South Korea, is in the process of taking steps to stop cryptocurrency trading.

As for the US, regulators such as the SEC and other government agencies are increasingly concerned about unseasoned investors pushing the prices of Bitcoin and other cryptocurrencies to new highs. Having lived through the housing and Internet bubbles, these regulators are keen to prevent similar bubbles from forming again. What adds to their concern is the perceived lack of utility of such cryptocurrencies as an asset. Even more worrying, according to many reports, investors are said to be using credit cards and even taking out mortgages to invest in cryptocurrencies. So, the key concern for regulators is that a serious cryptocurrency meltdown could spill over into the existing financial markets.

No perceived intrinsic value: Another attribute of Bitcoin that skeptics like to point out is that cryptocurrencies don’t have any intrinsic value. An asset usually attains value because of the trust that people place in it. Gold, for example, is trusted for its scarcity and long history as a store of value, while fiat currencies are trusted because governments back them and guarantee their value. Bitcoin and other cryptocurrencies lack that backing to date, and thus there are questions about whether any crypto asset can hold significant value without such global backing from governments.

High mining costs: Another negative force preventing Bitcoin’s wider acceptance is that mining Bitcoin is known to be quite expensive, as it uses a lot of energy. According to many estimates, it takes more than $10,000 to process a single transaction block, where each block holds roughly 2,500 transactions. Given that miners compete to process blocks and only the first to validate a block earns the reward, rising costs could discourage miners from participating in the future and drive a number of them out of the market, impacting the overall value of Bitcoin.

Decreasing incentives for mining Bitcoin: For cryptos built on blockchain technology, mining is an important function, as miners validate the authenticity and integrity of transaction blocks before they are added to the blockchain. Currently, miners are incentivized by being compensated in Bitcoin. The concern, however, is that once all Bitcoins are issued, processing transactions will have to rely on transaction fees, which may need to be set high to keep the overall business model profitable for miners. A model relying solely on transaction fees may not be as attractive or profitable and could eventually drive many miners out of the market.
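The timeline for this transition can be sketched from Bitcoin's halving schedule. Assuming the protocol's published constants and roughly ten-minute blocks (the era-to-year arithmetic here is my own back-of-the-envelope estimate), the subsidy rounds down to zero after 33 halvings, commonly placed around the year 2140:

```python
SATOSHI = 100_000_000
HALVING_INTERVAL = 210_000
# Roughly four years per era, at ~10 minutes per block.
YEARS_PER_ERA = HALVING_INTERVAL * 10 / (60 * 24 * 365)

def subsidy_btc(era):
    """Per-block subsidy (in BTC) during the given halving era (0-based)."""
    return (50 * SATOSHI >> era) / SATOSHI   # right-shift = integer halving

era = 0
while subsidy_btc(era) > 0:
    era += 1
print(era)                                   # 33 — after 33 halvings the subsidy is zero
print(round(2009 + era * YEARS_PER_ERA))     # 2141 under these rough assumptions
```

Once that point is reached, transaction fees become the only compensation left for miners, which is exactly the business-model question raised above.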

Summary

So, to summarize, the answer to the question of whether Bitcoin and other cryptocurrencies can survive this year and beyond will greatly depend on the following factors:

  • The first factor is the extent to which governments around the globe crack down on cryptocurrencies, including crackdowns on mining operations, exchanges, and trading. The extent of a crackdown could vary from one government to another; in the extreme case of a global crackdown on all activities, that would put a damper on the rise of cryptos.
  • The second factor is the extent to which governments regulate cryptocurrencies, because too much regulation could make them no different from fiat currencies transacted in the digital world.
  • Specific to Bitcoin, its future success will depend on its ability to process transactions faster and more efficiently and add them to the Bitcoin blockchain.
  • Also specific to Bitcoin, it will have to address the rising energy demands of its mining operations.
  • Another factor driving the future success of Bitcoin is the new business model that will have to be settled on once all Bitcoins are issued and miners start looking to transaction fees as their form of compensation.

With this we have come to the end of this video session. Be sure to subscribe to this channel for more videos on cryptos and blockchain, and don’t hesitate to connect with me on LinkedIn at the link that you see on the screen.

— End

 

What is Blockchain and its Business Benefits? An Introduction on the Basics

learn solutions architecture

Here is a brief video introduction on the topic of blockchain and its business benefits to the enterprise.


Transcript —

Hello and welcome to an introductory session on Blockchain Technology. In this brief session, I will introduce the blockchain technology, and provide an overview of its business benefits. In the other episodes on CIOTechCentral.com, you can find more sessions on Blockchain and other Digital and Internet Technologies.

Introduction

Blockchain is a software-based technology that provides a secure and trusted distributed ledger over the Internet, enabling a number of sophisticated applications related to transacting assets and value. Blockchain initially became popular with the launch of Bitcoin, which is one of the most widely used cryptocurrencies. Since then, interest in blockchain technologies has skyrocketed as a potential means to power distributed applications in various industries. Although this technology is still in its infancy, it’s already being touted as one that will revolutionize and disrupt businesses at a scale much larger than any previous technological disruption.

What is Blockchain?

So, the next question is “What is a blockchain?” Although the inner workings of blockchain are beyond the scope of this session, I will provide a brief overview of what a blockchain actually is. Basically, a blockchain is a distributed ledger made up of transaction blocks, where each block contains a number of transactions that may have occurred anywhere on the distributed network. Each node on the network holds a mirror copy of the ledger. That means every time a transaction block is added to the ledger, all nodes on the blockchain network have to agree, or arrive at a consensus, to add that block to the ledger. Once they agree, all nodes update their respective copies of the ledger.
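The idea of mirrored copies kept in sync by agreement can be illustrated with a toy model. This is a deliberate simplification of my own, using a unanimous vote and a trivial validation rule, whereas real networks rely on more sophisticated, fault-tolerant consensus mechanisms:

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.ledger = []               # this node's mirror copy of the ledger

    def validate(self, block):
        # Stand-in validation rule: accept any non-empty block.
        return bool(block)

class Network:
    def __init__(self, nodes):
        self.nodes = nodes

    def append_block(self, block):
        # The block joins the ledger only if every node agrees to accept it.
        if all(node.validate(block) for node in self.nodes):
            for node in self.nodes:
                node.ledger.append(block)   # every mirror copy stays in sync
            return True
        return False

net = Network([Node("a"), Node("b"), Node("c")])
net.append_block(["alice pays bob 5"])
print([len(n.ledger) for n in net.nodes])   # [1, 1, 1] — all mirrors updated
```

The key property the sketch captures is that no single node updates its copy alone: either all copies change together, or none do.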

Blocks are added to the blockchain securely using cryptographic techniques and hash functions, along with other security mechanisms, that guarantee the integrity of the information in each block and ensure that none of the previous transaction blocks in the blockchain have been compromised. This mechanism for securely adding blocks to the distributed ledger is what has made blockchain technology attractive for various types of business use cases.
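The hash-linking idea behind this can be shown in a few lines. This is a minimal sketch of the general technique, not Bitcoin's actual data structures, which add Merkle trees, digital signatures, and proof-of-work on top:

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash of a block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    # Each new block stores the hash of its predecessor.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_is_valid(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False               # a link is broken: tampering detected
    return True

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
print(chain_is_valid(chain))           # True

chain[0]["transactions"] = ["alice pays mallory 500"]  # tamper with history
print(chain_is_valid(chain))           # False — downstream hashes no longer match
```

Because every block commits to the hash of the one before it, altering any earlier block invalidates every link after it, which is what makes tampering detectable.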

As I mentioned earlier, the first implementation of blockchain technology was the cryptocurrency Bitcoin. Since then, however, more advanced blockchain frameworks have surfaced to make up for the deficiencies of the earlier implementations. One of the primary features of the newer frameworks is the inclusion of business logic in the blockchain. For example, the latest innovations, such as those from the Ethereum Foundation, allow business logic to be developed and incorporated as part of the distributed blockchain. These distributed applications are referred to as smart contracts and execute in a secure fashion on each of the blockchain nodes. Similar to the ledger, smart contracts are also mirrored across nodes to ensure the integrity of the overall blockchain database and the business logic that drives it.
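The smart-contract idea can be illustrated with a small sketch. The escrow scenario and class below are my own invention for illustration; real Ethereum contracts are written in languages such as Solidity, but the key property is the same: deterministic business logic that every node executes identically, arriving at the same state:

```python
class EscrowContract:
    """Toy contract: funds go to the seller only once delivery is confirmed."""

    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.balances = {buyer: -amount, seller: 0}  # buyer's funds locked

    def confirm_delivery(self, caller):
        # Only the buyer may confirm; the rule is enforced by code,
        # not by a trusted intermediary.
        if caller != self.buyer:
            raise PermissionError("only the buyer can confirm delivery")
        self.delivered = True
        self.balances[self.seller] += self.amount

# Every node runs the same deterministic logic and reaches the same state.
nodes = [EscrowContract("alice", "bob", 10) for _ in range(3)]
for contract in nodes:
    contract.confirm_delivery("alice")
print({c.balances["bob"] for c in nodes})   # {10} — identical state on all nodes
```

Because the logic is deterministic and every node executes it on the same inputs, all mirrored copies of the contract end up in the same state, which is what lets the network trust the outcome without an intermediary.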

You can listen to other videos or read articles on CIOTechCentral.com covering various topics on Bitcoin and Blockchain.

Business Benefits of Blockchain

Next, let’s review some of the business benefits of blockchain. As alluded to earlier, blockchain is suited to various use cases that involve asset ownership and the secure transacting of assets of value over a distributed network. Let’s review some of those benefits:

  1. The distributed yet secure nature of blockchain is seen as a great opportunity to do away with the central authorities that many applications and business processes rely on to establish trust between parties. This is one of the main reasons for the emerging popularity of blockchain systems. The idea is that users engaging with each other in a peer-to-peer fashion over a secure network don’t require central authorities to validate and authenticate transactions. Users on a Bitcoin network, for example, can send payments to each other without needing banks to validate those transactions. The underlying secure and distributed architecture provides that foundation to all parties.
  2. As the need for intermediaries goes away, with all parties having direct access to the blockchain ledger and platform, the cost and time to reconcile and settle transactions are reduced. The cost reduction comes from lower processing costs across the larger ecosystem, and the time reduction comes from eliminating intermediaries and the time they need to carry out their part in the overall business process.
  3. Since blockchain builds a distributed ledger that resides on many nodes of the Internet, the platform’s downtime is almost zero. That’s because many nodes hold an updated copy of the ledger, which remains accessible to its participants.
  4. Transactions in a blockchain system become part of the ledger through a very secure and trusted mechanism of validating digital signatures based on public key cryptography. This makes blockchain systems well suited for use cases that require establishing digital identities over an open and decentralized network, especially where multiple parties need to establish the identity of individuals.
  5. Due to its potential to bring various parties onto one shared digital platform, blockchain is ideally suited for large business ventures looking to create new markets or extend their enterprise ecosystems. For example, all organizations in the car supply chain, from the manufacturer to the government agencies that register vehicles to the customers who buy them, can be part of an extended blockchain that gives all parties complete visibility into each vehicle’s history.

So, in a nutshell, the ability of a blockchain-enabled platform to offer a decentralized yet secure and trusted foundation enables businesses and individuals to securely collaborate with one another, carry out transactions, and exchange assets of value, all without the need for middlemen and intermediaries. Because of this, many organizations across multiple industries have started to test as well as deploy blockchain applications.

Look for other videos on CIOTechCentral.com to learn about the various business use cases where blockchain technology is being used.

— End