
Monthly Archives: May 2020

Innovation: It’s More Than Just A Nice-To-Have

May 31, 2020   SAP

Our day-to-day lifestyle has significantly changed in this new reality created by the COVID-19 pandemic. Regardless of how tech-savvy you were before, you – like everyone else – now depend on digital tools to manage both your personal and professional life.

A lot of these products, services, and platforms did not exist or were only available to a select audience before the pandemic. Many were considered too innovative or “nice to have” but not necessary. But think about all of the elderly people staying connected to their loved ones through video platforms, the average families using delivery apps for groceries, or the companies hosting large conferences in virtual setups – the list goes on and on. In most cases, there was no (or little) need for those tools before, so they were scarcely used. And yet organizations still invested in and brought them to market. Looking at the demand now, we can see some organizations were a step ahead of the others. What’s their secret to knowing what will be needed in the future? The answer: their drive for innovation.

Innovation: what does it mean?

How many times have you heard or seen the word “innovation” just today? Well, at least four times in this article so far. And beyond? Yes, it’s everywhere, but it is still an abstract idea for many people – just a buzzword. Let’s dig deeper into what innovation is. Here is how several online sources (linked in the References below) define it:

  • Innovation is the process of doing things differently and discovering new ways of doing things.
  • Innovation is adapting to change to better meet demand for products or services.
  • Innovation is improving business processes and models, developing new products or services, adding value to existing products, services, or markets.
  • Innovation’s aim is to provide something original or unique that can have an impact on society.

Understanding and living innovation, especially in times of change

One point that is missing from this list: Innovation spares no one. It is essential for individuals and organizations, for the CEO of a company just as much as for the entrepreneur who is just getting started. That said, you do not need to aim to be the next Amazon or Airbnb. Small changes can help you foster an innovation mindset to proactively respond to potential disruptions.

Imagine the current pandemic a decade ago: no virtual office meetings, no video calls with family and friends, no 24/7 food delivery to your doorstep. Think about the economic and emotional impact it would have had. By all means, the economic impact today is enormous, but imagine how much worse it could have been in the past. Companies would have stopped operating with no alternatives; there would be no e-commerce, no IT infrastructure, and no availability. If there hadn’t been an innovative mindset and driven teams that created these products, services, and platforms, we would be in an even less fortunate scenario now.

Write (innovation) history!

“It is only the farmer who faithfully plants seeds in the Spring, who reaps a harvest in the Autumn.”
— B.C. Forbes

At this moment, we are writing history. We are living through the most disruptive period we have ever seen, a time when innovations are needed more than ever. We need to think one step ahead and take this opportunity to reinvent ourselves. We also need to make this an ongoing practice – all of us, from small and midsize businesses to big corporations. Whether you are producing something like face masks and need to rethink your supply chain management due to high demand, or you’re an events company that needs to go fully virtual in a single day, this applies to you.

I strongly believe that we will rise from this crisis with a new appreciation for innovation and change. Innovation should be a mandatory component of your daily experience and strategy instead of being regarded as a luxury – especially now.

References

For more on thriving through today’s disruption and uncertainty, see the “Navigating Disruption Today, Planning for Tomorrow” series.

Digitalist Magazine

A Practical Guide To DevOps For ERP

May 31, 2020   BI News and Info

Part of the “DevOps for ERP” series

Whether or not you believe in the value that DevOps can offer to a business – and there’s already plenty of evidence to show that it can deliver major benefits – there’s no doubt that more and more companies are starting to wonder why they haven’t extended this approach to their ERP systems.

Not so long ago, I regularly had to explain what agile and DevOps meant, but nowadays, people come to us asking how we can help them adopt these approaches.

So why the change? Transformation is the key. It’s a word that’s a bit overused by my colleagues in the marketing world, in my opinion. But with the move to cloud, the constant emergence of new technologies, and growing pressure on businesses to innovate and increase competitiveness, real changes are happening that IT teams simply have to respond to.

Perhaps unlike in years gone by, ERP teams are not immune to this trend. As “systems of engagement” like websites and mobile apps change faster than ever, the “systems of record” that often power them need to keep pace. Otherwise the whole business slows down.

Unfortunately, the ERP development processes most people have been familiar with throughout their careers – the “waterfall” method most often still in use today – tend to suffer from a slow pace of change. This can be explained by the concern that changing things in ERP systems has traditionally come with a high chance of failure (an unacceptable outcome for business-critical systems).

DevOps, on the other hand, supports application delivery in shorter, more frequent cycles where quality is embedded from the start of the process, and risk is substantially reduced.

Great, I hear you say; let’s do it! However, even the most enthusiastic organizations cannot implement DevOps in ERP systems in exactly the same way as they’ve done for other applications. The fundamental requirements for DevOps are the same – I covered some of them here – but the practicalities are different, not least because standard DevOps tools aren’t capable of doing the job. What’s more, the DevOps experts don’t necessarily understand what’s needed in ERP, while the ERP experts may never have heard of DevOps!

What is the practical reality if companies do adopt DevOps for ERP?

Higher-quality development

Delivering software at high speed requires a robust development process that combines clear business requirements and constant feedback. DevOps mandates that ownership of quality “shifts left” and is embedded from the very start of the process. This way, most (and ideally all) problems can be identified long before they get to live production systems (where the disruption caused and associated cost to fix are much greater).

In practice, this means we need to ensure that nothing leaves development without being fully quality-checked. Working practices like daily stand-up sessions, mandatory peer reviews of code, and a set of universal coding standards might not seem revolutionary for some IT teams, but they are new ideas for many ERP professionals. They’re only part of the solution, though, going along with technical elements like automated unit testing and templated lock-down of high-risk objects.

One other practical outcome of DevOps from the very first stage of development is that ERP and business teams must be more closely aligned to ensure that customer requirements are clearly understood. Integration between the development team and other IT functions like QA and operations also establishes an early validation step.

Low-risk, high-cadence delivery

Continuous integration is an aspect of DevOps that means that – unlike in many ERP landscapes – changes can be successfully deployed to QA or any other downstream system at any time without risk. The big change here is the ability to deploy based on business priorities, rather than just having to wait for the next release window.

Automation gives you the means to achieve this new high-frequency delivery cadence in ERP systems by providing a way to better manage risk (spreadsheets definitely do not form a core part of a DevOps-based software delivery process!). It enables you to check every change for technical issues like completeness, sequencing, dependencies, risk, impact, and more, ensuring that nothing is promoted prematurely.

This more rigorous, agile approach means QA teams, in particular, can focus their attention on real issues rather than technical “noise,” which accelerates the delivery of functionality that business users or customers are waiting for. Changes can be selectively and automatically deployed with confidence, rather than waiting for the next full release.

Minimal production impact

“Stability is king” has long been an unofficial mantra in ERP environments, given their importance to day-to-day business operations. With DevOps, the required system stability is maintained even though live production systems can be updated far more often. Rigorous controls – built on both technical solutions and new collaborative workflows – ensure that deployments are safely delivered to end users as soon as possible.

But there is always a risk, however small, that a change to live ERP systems can cause problems that stop the business. That’s why Mean Time To Recover (as opposed to the more traditional Mean Time To Failure) is a key DevOps metric. The most effective ERP DevOps processes feature a back-out plan that allows changes to be reversed as quickly as possible so, even if disaster strikes, the impact of change-related downtime is minimal, and business continuity can be maintained.

The culture question

As I’ve explained, when implemented correctly, DevOps fundamentally changes traditional ERP development processes. However, the manner in which DevOps impacts the roles and approach of staff can be just as important. In DevOps, effective collaboration is key. Traditional silos based on job function are replaced by multi-skilled, cross-functional teams that work together to deliver agreed-upon business outcomes. This may require a significant shift in how teams are organized.

It’s normal for some people to find this new way of working challenging, but creating a successful DevOps culture empowers team members to take responsibility at every stage of the development lifecycle. It enables them to collaborate with their colleagues and focus on a common goal of rapidly delivering the high-quality features and functionality the business needs to remain competitive.

DevOps benefits and outcomes

Change happens fast, and companies need to respond quickly. IT systems must, therefore, have the flexibility to rapidly change, expand, extend, and adapt.

But accelerating delivery cannot be done at the expense of business continuity. Successfully adopting DevOps for ERP combines speed, quality improvements, and risk reduction. That provides the flexibility to change ERP environments at the speed the business needs with confidence that it can be achieved without compromising stability.

For more on this topic, please read “How to Build a Business Case for DevOps” and “Self-Assessment: Are You Already Doing ERP DevOps?”

For a practical guide on how to introduce DevOps to your ERP software development and delivery processes, download our e-book.

A version of this article originally appeared on the Basis Technologies blog. This adapted version is republished by permission. Basis Technologies is an SAP silver partner.

Digitalist Magazine

Gears

May 31, 2020   Humor

Posted by Krisgo


About Krisgo

I’m a mom who has worn many different hats in this life; from scout leader, camp craft teacher, parents group president, colorguard coach, member of the community band, and stay-at-home mom to full-time worker, I’ve done it all – almost! I still love learning new things, especially creating and cooking. Most of all I love to laugh! Thanks for visiting – come back soon.


Deep Fried Bits

Pico Neo 2 and Tobii-powered eye-tracking variant now available worldwide

May 31, 2020   Big Data

Pico Interactive is making its Neo 2 line of standalone headsets available worldwide. The base model is $700, while an eye-tracking variant powered by Tobii is $900.

We tried both models at CES in January, and while the eye-tracking wasn’t perfect in the early demo, it also worked without calibration, and the electromagnetic controller tracking technology was very interesting. The controllers were able to track even when they were behind my back, unlike the kind of tracking used with Facebook’s Oculus Quest.

The Neo 2 headsets run on Qualcomm’s Snapdragon 845 chips, feature an SD expansion slot and are supposed to be able to stream content from a VR Ready PC “over wireless 2X2 MIMO 802.11ac 5G link with a common MIMO 5G router.”

The headsets are primarily pitched toward businesses but may offer an intriguing alternative for some folks looking to step outside the Facebook ecosystem for VR hardware. HTC also offers the Vive Focus Plus priced starting around $800 while Facebook’s Quest starts at $400 but is priced around $1,000 when bundled with features and support tailored toward businesses.


Pico’s Neo 2 Eye version is meant to allow “businesses to gain a deeper understanding of customer behavior, enhance training efficiency, improve productivity and increase overall safety at work,” according to the company. The eye-tracking variant is also said to include dynamic foveated rendering to reduce “shading load in some applications” while increasing frame rate. The headsets are 4K resolution with a 101-degree field of view and weigh 340 grams without the headband. Those specifications are as stated by Pico, and comparing things like resolution and field of view in VR can be especially tricky because there’s no industry-standard method for these measurements. Likewise, streaming VR content from a PC to a standalone headset can lead to comfort issues in certain situations, depending on a range of conditions, including the amount of traffic on your local area network.

This story originally appeared on Uploadvr.com. Copyright 2020

Big Data – VentureBeat

How to Create an Ubuntu PowerShell Development Environment – Part 3

May 31, 2020   BI News and Info

The series so far:

  1. How to Create an Ubuntu PowerShell Development Environment – Part 1
  2. How to Create an Ubuntu PowerShell Development Environment – Part 2
  3. How to Create an Ubuntu PowerShell Development Environment – Part 3

Over the last few years, Microsoft has made great strides in making their software products available on a wider range of platforms beyond Windows. Many of their products will now run on a variety of Linux distributions (often referred to as “distros”), as well as Apple’s macOS platform. This includes their database product, SQL Server.

One way in which Microsoft achieved cross-platform compatibility is through containers. If you aren’t familiar with containers, you can think of them as a stripped-down virtual machine. Only the components necessary to run the application, in this case, SQL Server, are included. The leading tool to manage containers is called Docker. Docker is an application that will allow you to download, create, start and stop, and run containers. If you want a more detailed explanation of containers, please see the article What is a Container on Docker’s website.

Assumptions

For this article, you should understand the concepts of a container, although no experience is required. See the article from Docker referenced in the previous section if you desire more enlightenment on containers. Additionally, this article assumes you are familiar with the SQL language, as well as some basics of PowerShell. Note that throughout this article, when referencing PowerShell, it’s referring to the PowerShell Core product.

The Platform

The previous articles, How to Create an Ubuntu PowerShell Development Environment Part 1 and Part 2, walked through the steps of creating a virtual machine for Linux development and learning. That VM is the basis for this article. All the code demos in this article were created and run in that specific virtual computer. For best results, you should first follow the steps in those articles to create a VM. From there, you will be in a good place to follow along with this article. However, the demos have also been tested on other variations of Ubuntu and CentOS, as well as on macOS.

In those articles, I showed not just the creation of the virtual machine, but the steps to install PowerShell and Visual Studio Code (VSCode), tools you will need in order to complete the demos in this article should you wish to follow along.

For the demo, I am assuming you have downloaded the demo files and opened them in Visual Studio Code within the virtual machine, and are executing individual samples by highlighting the code sample and using the F8 key, or by right-clicking on the selected text and picking Run.

The Demo

The code samples in this article are part of a bigger sample I provide on my GitHub site. You’ll find the entire project here. There is a zip file included that contains everything in one easy download, or you can look through GitHub and pick and choose the files you want. GitHub also displays Markdown correctly, so you may find it easier to view the project documentation via GitHub rather than in VSCode.

This article uses two specific files, located in the Demo folder: m11-cool-things-1-docker.ps1 and m11-install-docker.sh. While this article will extract the relevant pieces and explain them, you will find it helpful to review the entire script in order to understand the overall flow of the code.

The Beginning

The first thing the PowerShell script does is use the Set-Location cmdlet to set the current location to the folder where you extracted the demo code. This location should have the Demo, Notes, and Extras folders under it.

Next, make sure Docker is installed, and if not, install it. The command to do this is rather interesting.

bash ./Demo/m11-install-docker.sh

bash is very similar to PowerShell; it is both a shell and a scripting language. It is native to many Linux distros, including the Ubuntu-based ones. This code uses PowerShell to start a bash session and then executes the bash script m11-install-docker.sh. When the script finishes executing, the bash session ends.

Take a look inside that bash script.

if [ -x "$(command -v docker)" ]; then
    echo "Docker is already installed"
else
    echo "Installing Docker"
    sudo snap install docker
fi

The first line checks whether the docker command exists and is executable. If so, the script simply displays that information to the screen via the echo command.

If Docker is not installed, then the script will attempt to install Docker using the snap utility. Snap is a package manager introduced in the Ubuntu line of distros; other distros use a manager known as Flatpak. On macOS, brew (Homebrew) is the package manager of choice. This is one part of the demo you may need to alter depending on your distro. See the documentation for your specific Linux install for more details.

Of course, there are other ways to install Docker. The point of these few lines was to demonstrate how easy it is to run bash scripts from inside your PowerShell script.
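
If you prefer to stay entirely in PowerShell, the same check can be sketched with Get-Command and the built-in $IsLinux and $IsMacOS variables. This is only an illustration: the macOS branch assumes Homebrew is installed, and the package manager command will vary by distro.

# Minimal sketch of the install check written natively in PowerShell Core.
# The brew command assumes Homebrew; swap in apt, dnf, etc. for other distros.
if (Get-Command docker -ErrorAction SilentlyContinue) {
    Write-Host 'Docker is already installed'
}
elseif ($IsLinux) {
    Write-Host 'Installing Docker'
    sudo snap install docker
}
elseif ($IsMacOS) {
    Write-Host 'Installing Docker'
    brew install --cask docker
}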

Pulling Your Image

A Docker image is like an ISO. Just as you would use an ISO image to create a virtual machine, a Docker image file can be used to generate one or more containers. Docker has a vast library of images, built by itself and by many companies, such as Microsoft. These images are available to download and use in your own environments.

For this demo, you are going to pull the image for SQL Server 2017 using the following command.

sudo docker pull mcr.microsoft.com/mssql/server:2017-latest

The sudo command executes the following docker program with administrative privileges. Docker, as stated earlier, is the application which manages the containers. Then you give the instruction to Docker, pull. Pull is the directive to download an image from Docker’s repositories.

The final piece is the image to pull. The first part, mcr.microsoft.com, indicates this image is stored in the Microsoft area of the Docker repositories. As you might guess, mssql indicates the subfolders containing SQL Server images, and server:2017-latest indicates the version of SQL Server to pull, 2017. The -latest indicates this should be the most currently patched version; however, it is possible to specify a specific version.

Once downloaded, it is a good idea to query your local image cache to ensure the download was successful. You can do so using this simple command.

sudo docker image ls

image tells Docker you want to work with images, and ls is a simple listing command, similar to using ls to list files in the bash shell.


Running the Container

Now that the image is in place, you need to create a container to run the SQL Server. Unlike traditional SQL Server configuration, this turns out to be quite simple. The following command is used to not only create the container but run it. Note the backslash at the end of each line is the line continuation character for bash, the interpreter that will run this command (even though you’re in PowerShell). You could also choose to remove the backslashes and just type the command all on one line.

sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=passW0rd!' \
     -p 1433:1433 --name arcanesql \
     -d mcr.microsoft.com/mssql/server:2017-latest

The first part of the line starts by passing the run command into Docker, telling it to create and run a new container. In the first -e parameter, you are accepting the end user license agreement. In the second -e parameter, you create the SA (system administrator) password. As you can see, I’ve used a rather simple password; you should definitely use something much more secure.

Next, we need to map a port number for the container using the -p parameter. The first port number will be used to listen on the local computer; the second port number is used in the container. SQL Server listens on port 1433 by default, so we’ll use that for both parts of the mapping.

The next parameter, --name, provides the name for the container; here I’m calling it arcanesql.

In the final parameter, -d, you need to indicate what image file should be used to generate the container. As you can see, the command is using the SQL Server image downloaded in the previous step.


You can verify the container is indeed running using the following command.

sudo docker container ls

As with the other commands, the third parameter indicates what type of Docker object to work with, here containers. Like with image, the ls will produce a list of running containers.


Installing the SQL Server Module

Now that SQL Server is up and running, it’s time to start interacting with it from PowerShell Core. First, though, install the PowerShell Core SQL Server module.

Install-Module SqlServer

It won’t hurt to run this if the SQL Server module is already installed. If it is, PowerShell will simply provide a warning message to that effect.

If you’ve already installed it, and simply want to make sure it is up to date, you can use the cmdlet to update an already installed module.

Update-Module SqlServer

Do note that normally you would not want to include these in every script you write. You would just need to ensure the computer you are running on has the SQL Server module installed, and that you update it on a regular basis, testing your scripts of course after an update. (For more about testing PowerShell code, see my three-part article on Pester, the PowerShell testing framework, beginning with Introduction to Testing Your PowerShell Code with Pester here on SimpleTalk.)
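
One common pattern, shown here only as a sketch, is to install the module when it is missing rather than calling Install-Module on every run:

# Install the SqlServer module only if it is not already present.
# -Scope CurrentUser avoids needing administrative rights; adjust as needed.
if (-not (Get-Module -ListAvailable -Name SqlServer)) {
    Install-Module SqlServer -Scope CurrentUser
}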

Running Your First Query

The first query will be very simple; it will return a listing of all tables in the master database so you can see how easy it is to interact with SQL Server and learn the basic set of parameters.

The basic cmdlet to work with SQL Server is Invoke-SqlCmd. It requires a set of parameters, so you’ll place those in variables for easy reference.

$serverName = 'localhost,1433'
$dbName = 'master'
$userName = 'sa'
$pw = 'passW0rd!'
$queryTimeout = 50000
$sql = 'SELECT * FROM master.INFORMATION_SCHEMA.Tables'

For this exercise, you are running the Docker container on the same computer as your PowerShell session, so you can just use localhost for the server name. Obviously, you’ll replace this with the name of your server when working in other environments. Note that you must append the port number after the server name.

Next, you have the database name you’ll be working with, and for this example, it will be master.

The next two parameters are the username and password. In a real-world environment, you’d be setting up real usernames and passwords, but this demo will be simple and just use the SA (system administrator) account built into SQL Server. The password is the same one used when you created and ran the container using the docker run command.

Next up is the query timeout. How long should PowerShell wait before deciding no one is answering and giving up? The timeout is measured in seconds.

The last parameter is the query to run. Here you are running a simple SELECT statement to list the tables in the master database.

Now that the parameters are established in variables, you are ready to call the Invoke-SqlCmd cmdlet to run the query.

Invoke-Sqlcmd -Query $sql `
              -ServerInstance $serverName `
              -Database $dbName `
              -Username $userName `
              -Password $pw `
              -QueryTimeout $queryTimeout

Here you pass in the variables to each named parameter. Note the backtick symbol at the end of each line except the last. This is the line continuation character; it allows you to spread out lengthy commands across multiple lines to make them easier to read.

In the output, you see a list of each table in the master database.


Splatting

As you can see, Invoke-SqlCmd has a fairly lengthy parameter set. It will get tiresome to have to repeat this over and over each time you call Invoke-SqlCmd, especially as the bulk of these will not change between calls.

To handle this, PowerShell includes a technique called splatting. With splatting, you create a hash table, using the names of the parameters for the hash table keys, and the values for each parameter as the hash table values.

$sqlParams = @{ "ServerInstance" = $serverName
                "Database"       = $dbName
                "Username"       = $userName
                "Password"       = $pw
                "QueryTimeout"   = $queryTimeout
              }

If you look at the syntax in the previous code example, you’ll see that the key values on the left of the hash table above match the parameter names. For this example, load the values from the variables you created, but you could also have hardcoded the values.

So how do you use splatting when calling a cmdlet? Well, that’s pretty simple. In this next example, you’ll load the $sql variable with a query to create a new database named TeenyTinyDB, and then execute Invoke-SqlCmd.

$sql = 'CREATE DATABASE TeenyTinyDB'
Invoke-Sqlcmd -Query $sql @sqlParams

Here you call Invoke-SqlCmd, then pass in the query as a named parameter. After that, you pass in the hash table variable sqlParams, but with an important distinction. To make splatting work, you use the @ symbol instead of the normal $ for a variable. When PowerShell sees the @ symbol, it knows to deconstruct the hash table and use the key/values as named parameters and their corresponding values.

There are two things to note. I could have included the $sql as another value in the hash table. It would have looked like "Query" = $sql (or the actual query as a hard-coded value). In the demo, I made them separate to demonstrate that it is possible to mix named parameters with splatting. On a personal note, I also think it makes the code cleaner if the values that change on each call are passed as named parameters and the values that remain fairly static become part of the splat.

Second, the technique of splatting applies to all cmdlets in PowerShell, not just Invoke-SqlCmd. Feel free to implement this technique in your own projects.

When you execute the command, you don’t get anything in return. On the SQL Server, the new database was created, but because you didn’t request anything back, PowerShell simply returns to the command line.
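
If you want to confirm the database really was created, one quick check (not part of the original demo script) is to query sys.databases with the same splatted parameters:

# Sanity check: list the databases on the server; TeenyTinyDB should appear.
$sql = 'SELECT name FROM sys.databases'
Invoke-Sqlcmd -Query $sql @sqlParams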

Creating Tables

For the next task, create a table to store the names and URLs of some favorite YouTube channels. Because you’ll be working with the new TeenyTinyDB instead of master, you will need to update the Database key/value pair in the hash table.

$dbName = 'TeenyTinyDB'
$sqlParams["Database"] = $dbName

Technically I could have assigned the database name without the need for the $dbName variable. However, I often find myself using these values in other places, such as an informational message. Perhaps a Write-Debug “Populating $dbName” message in my code. Placing items like the database name in a variable makes these tasks easy.

With the database value updated, you can now craft a SQL statement to create a table then execute the command by once again using Invoke-SqlCmd.

$sql = @'
CREATE TABLE [dbo].[FavoriteYouTubers]
(
    [FYTID]       INT            NOT NULL PRIMARY KEY
  , [YouTubeName] NVARCHAR(200)  NOT NULL
  , [YouTubeURL]  NVARCHAR(1000) NOT NULL
)
'@

Invoke-Sqlcmd -Query $sql @sqlParams

In this script, you take advantage of PowerShell’s here string capability to spread the create statement over multiple lines. If you are not familiar with here strings, it is the ability to assign a multi-line string to a variable. To start a here string, you declare the variable then make @ followed by a quotation mark, either single quote or double quote, the last thing on the line. Do note it has to be last; you cannot have anything after it such as a comment.

The next one or more lines are what you want the variable to contain. As you can see, here strings make it easy to paste in SQL statements of all types.

To close out a here string, simply put the closing quotation mark followed by the @ sign in the first two positions of a line. This has to be in the first two characters; if you attempt to indent, the here string won’t work.

With the here string set up, call Invoke-SqlCmd to create the table. As with the previous statement, it doesn’t produce any output, and it simply returns us to the command line.

Loading Data

In this example, create a variable with a SQL query to load multiple rows via an INSERT statement and execute it.


$sql = @'
INSERT INTO [dbo].[FavoriteYouTubers]
  ([FYTID], [YouTubeName], [YouTubeURL])
VALUES
  (1, 'AnnaKatMeow', 'https://www.youtube.com/channel/UCmErtDPkJe3cjPPhOw6wPGw')
, (2, 'AdultsOnlyMinecraft', 'https://www.youtube.com/user/AdultsOnlyMinecraft')
, (3, 'Arcane Training and Consulting', 'https://www.youtube.com/channel/UCTH58i-Gl1bZeATOeC4f25g')
, (4, 'Arcane Tube', 'https://www.youtube.com/channel/UCkR0kwYjQ_gngZ8jE3ki7xw')
, (5, 'PowerShell Virtual Chapter', 'https://www.youtube.com/channel/UCFX97evt_7Akx_R9ovfiSwQ')
'@

Invoke-Sqlcmd -Query $sql @sqlParams

For simplicity, I’ve used a single statement. There are, in fact, many options you could employ. Reading data from a file in a foreach loop and inserting rows as needed, for example.
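
As a rough sketch of that file-based approach, assuming a hypothetical favorites.csv with FYTID, YouTubeName, and YouTubeURL columns, you could combine Import-Csv with the same splatted call. Note that this sketch does no quoting or escaping of the values, so it is for illustration only.

# Read rows from a CSV file and insert each one.
# The file name and column names are assumptions for illustration.
foreach ($row in Import-Csv ./favorites.csv)
{
    $sql = @"
INSERT INTO [dbo].[FavoriteYouTubers] ([FYTID], [YouTubeName], [YouTubeURL])
VALUES ($($row.FYTID), '$($row.YouTubeName)', '$($row.YouTubeURL)')
"@
    Invoke-Sqlcmd -Query $sql @sqlParams
}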

Like the previous statements, nothing is returned after the query executes, and you are returned to the command prompt.

Reading Data

People are funny. They love putting their data into databases. But then they actually expect to get it back! Pesky humans.

Fortunately, PowerShell makes it easy to return data from SQL Server. Follow the same pattern as before: set up a query and store it in a variable, then use Invoke-SqlCmd to execute it.

$sql = @'
SELECT [FYTID]
     , [YouTubeName]
     , [YouTubeURL]
  FROM dbo.FavoriteYouTubers
'@

Invoke-Sqlcmd -Query $sql @sqlParams

Unlike the previous queries, this actually generates output.


Here you can see each row of data, and the values for each column. I want to be very precise about what PowerShell returns.

This is a collection of data row objects. Each data row has properties and methods. The sqlserver module converts each column into a property of the data row object.

The majority of the time, you will want to work with the data returned to PowerShell, not just display it to the screen. To do so, first assign the output of Invoke-SqlCmd to a variable.

$data = Invoke-Sqlcmd -Query $sql @sqlParams

If you want to see the contents of the variable, simply run just the variable name.

This will display the contents of the collection variable $data, showing each row object and the properties for each row.


You can also iterate over the $data collection; here’s a simple example.

foreach ($rowObject in $data)
{
  "$($rowObject.YouTubeName) is a favorite YouTuber!"
}

This sample produces the following output:


In this code, I just display a formatted text string, but you could do anything you want to with it, such as writing to an output file.
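
For example, exporting the collection to a CSV file takes only a couple of lines (the output path here is arbitrary; Select-Object keeps just the three data columns):

# Write the query results out to a CSV file.
$data | Select-Object FYTID, YouTubeName, YouTubeURL |
    Export-Csv -Path ./FavoriteYouTubers.csv -NoTypeInformation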

Cleanup

When I was a kid, mom always taught me to put my toys away. There are many reasons why you would want to remove containers you are no longer using. Testing is one, and you may wish to write a script to spin up a new container, load it with data, then let the testers do their thing. When done, you may wish to stop the container or delete it altogether.

Stopping and Starting Containers

For a first step, use Docker to see what containers are currently running.

sudo docker container ls

The output shows that on the system only one container is running.


Take the scenario of wanting to shut down the container, but not removing it. Perhaps you want to turn it off when the testers aren’t using it to save money and resources. To do this, simply use the stop command.

After issuing a stop, you should do another listing to ensure it is, in fact, stopped. You might think you could do another container ls, but note I said it lists currently running containers. If you want to see all containers, running or not, you must use a slightly different Docker command.

sudo docker stop arcanesql

sudo docker ps -a

The stop command will stop the container with the name passed in. The ps -a command will list all containers running or not.


If you look at the STATUS column, on the very right side of the output, you’ll see the word Exited, along with how long in the past it exited. This is the indicator the container is stopped.

In this example, say it is the next morning. The testers are ready to get to work, so start the container back up.

sudo docker container start arcanesql

All that is needed is to issue the start command, specifying container, and provide the name of the container to start.


As you can see, the status column now begins with Up and indicates the length of time this container has been running.

Deleting a Container

At some point, you will be done with a container. Perhaps testing is completed, or you want to recreate the container, resetting it for the next round of testing.

Removing a container is even easier than creating it. First, you’ll need to reissue the stop command, then follow it with the Docker command to remove (rm) the named container.

sudo docker stop arcanesql

sudo docker rm arcanesql

If you want to be conservative with your keystrokes, you can do this with a single command.

sudo docker rm --force arcanesql

The --force switch will make Docker stop the container if it is still running, then remove it.

You can verify it is gone by running our Docker listing command.

sudo docker ps -a


As you can see, nothing is returned. Of course, if you had other containers, they would be listed, but the arcanesql container would be gone.

Removing the Image

Removing the container does not remove the image the container was based on. Keeping the image can be useful for when you are ready to spin up a new container based on the image. Re-run the Docker listing command to see what images are on the system.

sudo docker image ls

The output shows the image downloaded earlier in this article.


As you can see, 1.3 GB is quite a bit of space to take up. In addition, you can see that the image was created 2 months ago. Perhaps a new one has come out, and you want to update to the latest—all valid reasons for removing the image.

To do so, use a similar pattern as the one for the container. You’ll again use rm, but specify it is an image to remove and specify the exact name of the image.

sudo docker image rm mcr.microsoft.com/mssql/server:2017-latest

When you do so, Docker will show us what it is deleting.


With that done, you can run another image listing using the image ls command to verify it is gone.


The image no longer appears. Of course, if you had other images you had downloaded, they would appear here, but the one for the latest version of SQL Server would be absent.

Conclusion

In this article, you saw how to use Docker, from within PowerShell Core, to download an image holding SQL Server 2017. You then created a container from that image.

For the next step, you installed the PowerShell SqlServer module, ran some queries to create a table and populate it with data. You then read the data back out of the database so you could work with it. Along the way, you learned the valuable concept of splatting.

Once finishing the work, you learned how to start and stop a container as well as remove it and the image on which it was based.

This article just scratched the surface of what you can do when you combine Docker, SQL Server, and PowerShell Core. As you continue to learn, you’ll find even more ways to combine PowerShell Core, SQL Server, and Docker.

SQL – Simple Talk

Get on the Bambi Bus!

May 30, 2020   Humor

Posted by Krisgo



Deep Fried Bits

Moving to the Cloud – What You Need to Know

May 30, 2020   Microsoft Dynamics CRM

Perhaps you’re contemplating moving your data and business processes to the Cloud, or maybe you’d like to improve your organization’s cloud strategy. Either way, you probably have questions about costs, the migration processes, or even the terminology surrounding cloud technology. As a long-time Cloud Solution Provider (CSP) for Microsoft, BroadPoint has supported many organizations that have moved their operations to the Cloud. We’ve heard all the questions, and we have answers.

A successful cloud migration begins with a plan

Every successful project starts with a plan. Migrating to the cloud is no different. We’ve identified the steps that make up a successful cloud migration plan:

Step 1: Involve all the key players

For most businesses, migrating your data and processes to the Cloud is a significant change, and it will require broad organizational input and support. The key people in your organization need to be fully on board and willing to actively champion the project. Representatives from owners, users, and IT working together for the project’s success will lead to a smoother, faster cloud migration process that meets everyone’s goals.

Step 2: Inventory the physical and virtual servers you already use.

While your current management tools may provide a good representation of the hundreds—maybe thousands—of applications your organization is running, you need an inventory mechanism that can feed data into subsequent steps. With cloud migration assessment tools from Azure, you’ll have a complete inventory of servers with metadata for each—including profile information and performance metrics—allowing you to build your cloud migration strategy. Using this knowledge, map your servers to represent your on-premises applications. This will help identify dependencies or communication between servers so you can include all necessary application components in your cloud migration plan—helping reduce risks and ensuring a smooth migration. Then group your servers logically to represent the applications and select the best cloud migration strategy for each application based on its requirements and migration objectives.

Step 3: Evaluate your on-premises applications.

Once you have mapped your application groups, evaluate how best to move each on-premises application. Again, use the cloud migration assessment tools for resource recommendations and migration strategies for your application servers.

Automated cloud migration tools will also provide insight into your environment and dependencies to build out your cloud migration project plans. Assess your situation now to build a template for future use that aligns with individual applications, locations, or groups within your organization. Start with apps that have fewer dependencies so you can get your migration off to a quick start. Microsoft’s Azure Database Migration Guide provides step-by-step guidance.

If the process seems daunting, our experts at BroadPoint can help. Our team is gold certified in Microsoft Dynamics 365. We can lead you through your inventory assessment or perform it for you.

With an effective plan in place, you’re prepared to migrate to the Cloud.



Cloud migration approaches

What’s involved in the migration? That depends on the approach you take. There are four common approaches:

  • Rehost allows you to move your existing applications to the Azure Cloud quickly. All applications are moved as-is without code changes.
  • Refactor may involve changes to the application design but no wholesale coding change.
  • Rearchitect allows you to modify or extend your application’s codebase to scale and optimize it for the cloud. You can modernize your app into a resilient, highly scalable, independently deployable architecture and use Azure to accelerate the process.
  • Rebuild has you rebuilding an application from scratch using cloud-native technologies. With this migration strategy, you’ll manage the apps and services you develop, and Azure will manage everything else.

If you’re unsure which migration approach is best for your company, work with our experts during the planning process. Together we can create a multifaceted application strategy to determine when rehosting, refactoring, rebuilding, or replacing applications will deliver the most value. Additional applications can be built using cloud-optimized and cloud-native design principles. Request that we contact you.

Migration complete; what’s next?

Congratulations! You are ready to operate in a secure and well-managed cloud environment using Azure security and management services to govern and monitor your cloud applications. If you begin using these services during your migration, you can continue full speed ahead to ensure a consistent experience across your hybrid cloud. You can also monitor and manage your cloud expenditure with tools like Azure Cost Management. This solution allows you to track resource usage and control costs across all your clouds with a single, unified view. Take advantage of rich operational and financial insights to make informed decisions.

What does it all cost?

Moving to the Cloud will likely reduce costs over time. Microsoft’s handy Azure calculator can help you calculate costs. Before cloud computing, organizations would purchase computing hardware and software and locate them in on-premises data centers. An IT staff was required to maintain those data centers. Computers were like other capital expenses: usually, large one-time purchases followed by several years of depreciation. Once you move to the Cloud, you are not responsible for those costs, and you pay only for usage. Your capital expense is now an operating expense with a monthly charge. Your BroadPoint partner will be able to give you a firm price for operating your cloud solution.

There’s a lot to consider when it comes to cloud migrations. Our BroadPoint experts are here to guide you and answer your questions. Contact us at BroadPoint today.

By BroadPoint, www.broadpoint.net

CRM Software Blog | Dynamics 365

Resilience And Reinvention

May 30, 2020   SAP

The recent pandemic has disrupted almost every sector on a global scale. We are suddenly in a world where auto makers are building healthcare equipment and luxury goods makers are producing hand sanitizer. Most organizations have been forced into sudden survival mode and challenged to adjust to continually unpredictable dynamics. Mass disruption has led to rapid abandonment of established sales targets, marketing strategies, and predictions about how quarterly numbers might pan out.

The current situation has had inconsistent cascading effects on nearly every industry. While some businesses have shut down completely, others are trying to keep up with rapidly changing demands, and those left somewhere in the middle are attempting major adjustments to operations to adapt to uncertain and shifting environments.

Organizations with robust systems in place have a better chance of weathering the storm, and those that are intelligent enterprises are not only more resilient but able to come out of this crisis seizing opportunities that others either could not see or could not execute on.

Resilient leaders make the right decisions quickly, and adapt

Streamlined leadership is crucial during this turbulent time. Business leaders need to ensure that decision makers have the right data, at the right time, to quickly identify shifting priorities and tackle business continuity risks across value chains, such as supply chain disruptions, inconsistent customer demand, employee productivity challenges, and systems resilience.

Systems resilience is based on a system’s ability to operate during a major disruption or crisis with minimal impact on critical business processes and operations. This means preventing, mitigating, or recovering from technology issues within system architecture, networks, software applications, data, cloud connections, and infrastructure. That’s why CIOs and IT leaders play a key role in ensuring businesses can continue to operate during a crisis.

Resilience of business systems and processes

As the impact of the pandemic intensifies, business systems are tested more than ever before. Regardless of what industry – travel, retail, healthcare, manufacturing, technology, or any of the other global industries – all sectors have been impacted in some way, and all businesses must adapt. This means adapting to the current challenges, which could last weeks or months, as well as adjusting to what comes next: a period of recovery that could last months, or even years.

You need to ensure that your systems are resilient enough to maintain your business in this unpredictable environment and support the reinvention of your organization as we move beyond the current crisis.

It is as important as ever to have insight, agility, and control over your operations in order to understand what changes are necessary and make the right decisions on how to apply resources, and how to take action to get the best possible outcomes during challenging times. To help you do this, key technology solutions can be used tactically to help keep supply chains and products moving, control spending, and help your business navigate a path toward recovery.

Emerging stronger

There are lessons to be learned and applied in the current disruption to emerge stronger. How well you navigate this crisis will determine how resilient your organization is when we move beyond the pandemic and define the next normal.

Lay a strong, yet agile, foundation for your organization with the insight and practical strategies shared on this exclusive LinkedIn Live event. Tune in to the SAP Technology LinkedIn channel at 11:00 a.m. EDT / 8:00 a.m. PDT on Wednesday, June 3, to hear David Robinson, senior vice president of Customer Success at SAP, chat with Nathaniel Crook, vice president of Global and Strategic Accounts at Microsoft, and Emma McGuigan, senior managing director at Accenture Technology.

Digitalist Magazine

Private Equity Firm Replaces Salesforce w/ Dynamics 365 CRM

May 30, 2020   Microsoft Dynamics CRM

Limitations with Salesforce Prompted this Financial Services Firm to Switch to Microsoft Dynamics 365 CRM and Power Apps

Out-of-the-box CRM software like Salesforce cannot always address the unique needs of every business. At AKA, we design and implement technology solutions for financial services firms. While recently working with a global private equity firm looking for a better CRM software solution, we quickly realized that their unique business requirements did not fit the “one size fits all” approach most CRM technology provides.

This particular firm was in need of a customized solution for recruiting and augmenting management teams within their portfolio.

Re-tooling leadership teams requires complex talent search capabilities

This firm increases the value of their investments by restructuring leadership teams. Talent recruiters within the firm looking to re-tool management teams need deep visibility into their relationships and talent pool across all of the companies in their portfolio. To ensure a good match, this requires searching across their extensive portfolio of executives’ qualifications, combined with LinkedIn for deeper analysis.

For this client, there were three business challenges we needed to address:

  1. The ability to search through resumes and large amounts of data in their CRM system or transactional database to locate the most appropriate candidates for specific roles.
  2. The firm’s recruiters needed to conduct research and analyze results on LinkedIn without tipping off the candidate.
  3. They needed to report on highly customized candidate search results within the organization—with printability.

After researching best-of-breed systems for tracking and hiring, and knowing that their current Salesforce and home-grown system were not able to provide the custom query and resume parsing they required, the firm hired AKA to replace Salesforce with a better CRM technology solution for financial services: Microsoft Dynamics 365 Sales.

One major selling point? Dynamics 365 offers out-of-the-box functionality for keyword searches and complex queries. But the firm’s recruiters needed a more robust querying functionality and an easy-to-use interface to efficiently search on candidate records with specific criteria. For example, a recruiter might need to search for a C-level executive with an advanced degree in economics, and experience working in commodities trading. The results of their search are then saved, and the recruiter can then build a comprehensive list of targeted candidates. Adding a custom Power App was just the solution.

Complex queries addressed through Microsoft Power Apps

As planned, AKA implemented Dynamics 365 as the client’s CRM solution with some special features, including a Power App to handle their complex queries on requested data. This Power App will also search and retrieve the data regardless of the criteria set or the number of criteria in the query.

The Power App solution was quickly and cost effectively deployed. We eliminated the need for custom coded solutions that would have required more time, money and resources to support over time.

As for LinkedIn, the standard LinkedIn integration with Dynamics does not provide the “true” integration needed for this particular financial services firm. To address this, we used a 3rd party parsing tool to automatically download LinkedIn candidate resumes in PDF format and import them into Dynamics as contacts. Once the resume is parsed, the data elements are automatically loaded into Dynamics and become part of the firm’s repository of talent data. The recruiters can also receive and enter typical Word and PDF resumes that candidates send via email.

Power Apps is a Competitive Game Changer

By eliminating two systems, recruiters now have one comprehensive way to execute quick and customized queries on potential candidates. For more specific results, search queries can now include any tag about a candidate, such as title, years held in positions, geography, education, certification history, and more.

Additional benefits of the Power App solution include:

  • Simplification of a once challenging task now made almost frictionless
  • Competitive advantage with improvements in productivity and speed for finding the best candidates
  • Significant reduction in training time with an intuitive user interface

Is Power Apps Right for Your Business?

We at AKA are excited about Power Apps, one of the tools in the Microsoft Power Platform. As a Microsoft Gold-certified Dynamics partner, we have seen proven results from this simple, cost-effective, and powerful solution. We would love to discuss how this technology can transform your business.

How can AKA and Power Apps help you? Let’s talk.

To see more real world examples of how Power Apps is being used by five other financial services companies, Watch our on-demand webcast.


ABOUT AKA ENTERPRISE SOLUTIONS
AKA specializes in making it easier to do business, simplifying processes and reducing risks. With agility, expertise, and original industry solutions, we embrace projects other technology firms avoid—regardless of their complexity. As a true strategic partner, we help organizations slay the dragons that are keeping them from innovating their way to greatness. Call us at 212-502-3900!


Article by: Tom Berger | 212-502-3900

With 20+ years of field experience, Tom Berger is Vice President of Financial Services for AKA Enterprise Solutions.

CRM Software Blog | Dynamics 365

OpenAI and Uber create Virtual Petri Dish to find the best AI model for a task

May 29, 2020   Big Data

Researchers affiliated with Uber AI and OpenAI have proposed a new approach to neural architecture search (NAS), a technique that involves evaluating hundreds or thousands of AI models to identify the top performers. In a preprint paper, they claim their technique, called Synthetic Petri Dish, accelerates the most computationally intensive NAS steps while predicting model performance with higher accuracy than previous methods.

NAS teases out top model architectures for tasks by testing candidate models’ overall performance, dispensing with manual fine-tuning. But it requires large amounts of computation and data, the implication being that the best architectures train near the bounds of available resources. Synthetic Petri Dish takes an idea from biology to address this dilemma: It uses candidate architectures to create small models and evaluate them with generated data samples, such that this relative performance stands in for the overall performance.

“The overall motivation behind ‘in vitro’ (test-tube) experiments in biology is to investigate in a simpler and controlled environment the key factors that explain a phenomenon of interest in a messier and more complex system,” the researchers explained. “This paper explores whether the computational efficiency of NAS can be improved by creating a new kind of surrogate, one that can benefit from miniaturized training and still generalize beyond the observed distribution of ground-truth evaluations … [W]e can use machine learning to learn data such that training an [architecture] on the learned data results in performance indicative of the [architecture’s] ground-truth performance.”

Synthetic Petri Dish needs only a few performance evaluations of architectures and, once trained, enables “extremely rapid” testing of new architectures. The initial evaluations are used to train a Petri dish model while generating a set of architectures through an off-the-shelf NAS method. The trained Petri dish model then predicts the relative performance of the new architectures and selects a subset of architectures for performance evaluation.


The process repeats until the NAS method identifies the best architecture.

In experiments run on a PC with 20 Nvidia 1080 Ti graphics cards (for ground-truth training and evaluation) and a MacBook (for inference), the researchers sought to determine how Synthetic Petri Dish performs on the Penn Tree Bank (PTB) data set, a popular language modeling and NAS benchmark. Beginning from a ground-truth model containing 27 million parameters (variables), Synthetic Petri Dish generated 100 new architectures and evaluated the top 20 architectures.

The researchers say that at the end of the search, their technique found a model “competitive” in its performance with one found through conventional NAS while reducing the complexity of the seed model from 27 million parameters (variables) to 140 parameters. They also report that Synthetic Petri Dish required only a tenth of the original NAS’ compute and exceeded the performance of the original NAS when both were given equivalent compute.

“By approaching architecture search in this way as a kind of question-answering problem on how certain motifs or factors impact final results, we gain the intriguing advantage that the prediction model is no longer a black box. Instead, it actually contains within it a critical piece of the larger world that it seeks to predict,” the coauthors wrote. “[B]ecause the tiny model contains a piece of the real network (and hence enables testing various hypothesis as to its capabilities), the predictions are built on highly relevant priors that lend more accuracy to their results than blank-slate black box mode.”

Big Data – VentureBeat