Category Archives: BI News and Info

Plane fitting for arbitrary number of points


I want to come up with a function which, when given a set of arbitrarily many points, will return the plane that best fits my data. I have tried using FindFit for 5-dimensional data, but I get the following error:

FindFit::fitc: Number of coordinates (1) is not equal to the number of variables (3). 
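For context, the FindFit::fitc message typically means that the shape of each data point does not match the declared independent variables. Here is a minimal sketch of one way a plane fit could be set up, assuming the data are {x, y, z} triples and the plane is modeled as z = a + b x + c y; the sample points below are made up for illustration:

pts = RandomReal[{-1, 1}, {50, 3}]; (* hypothetical sample data: rows of {x, y, z} *)
FindFit[pts, a + b x + c y, {a, b, c}, {x, y}] (* two independent variables, one response per row *)
Fit[pts, {1, x, y}, {x, y}] (* equivalent ordinary least-squares plane *)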


Recent Questions – Mathematica Stack Exchange

Cumulative Update #2 for SQL Server 2016 SP1

The 2nd cumulative update release for SQL Server 2016 SP1 is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.

To learn more about the release or servicing model, please visit:


SQL Server Release Services

Under Armour Transforms Into World’s Largest Digital Fitness Brand


Achieving quantum leaps through disruption and using data in new contexts, in ways designed for more than just Generation Y — indeed, the digital transformation affects us all. It’s time for a detailed look at its key aspects.

Data finding its way into new settings

Archiving all of a company’s internal information until the end of time is generally a good idea, as it gives the boss the security that nothing will be lost. Meanwhile, enabling him or her to create bar graphs and pie charts based on sales trends – preferably in real time, of course – is even better.

But the best scenario of all is when the boss can incorporate data from external sources. All of a sudden, information on factors as seemingly mundane as the weather start helping to improve interpretations of fluctuations in sales and to make precise modifications to the company’s offerings. When the gusts of autumn begin to blow, for example, energy providers scale back solar production and crank up their windmills. Here, external data provides a foundation for processes and decisions that were previously unattainable.

Quantum leaps possible through disruption

While these advancements involve changes in existing workflows, there are also much more radical approaches that eschew conventional structures entirely.

“The aggressive use of data is transforming business models, facilitating new products and services, creating new processes, generating greater utility, and ushering in a new culture of management,” states Professor Walter Brenner of the University of St. Gallen in Switzerland, regarding the effects of digitalization.

Harnessing these benefits requires the application of innovative information and communication technology, especially the kind termed “disruptive.” A complete departure from existing structures may not necessarily be the actual goal, but it can occur as a consequence of this process.

Having had to contend with “only” one new technology at a time in the past, be it PCs, SAP software, SQL databases, or the Internet itself, companies are now facing an array of concurrent topics, such as the Internet of Things, social media, third-generation e-business, and tablets and smartphones. Professor Brenner thus believes that every good — and perhaps disruptive — idea can result in a “quantum leap in terms of data.”

Products and services shaped by customers

It has already been nearly seven years since the release of an app that enables customers to order and pay for taxis. Initially introduced in Berlin, Germany, mytaxi makes it possible to avoid waiting on hold for the next phone representative and pay by credit card while giving drivers greater independence from taxi dispatch centers. In addition, analyses of user data can lead to the creation of new services, such as for people who consistently order taxis at around the same time of day.

“Successful models focus on providing utility to the customer,” Professor Brenner explains. “In the beginning, at least, everything else is secondary.”

In this regard, the private taxi agency Uber is a fair bit more radical. It bypasses the entire taxi industry and hires private individuals interested in making themselves and their vehicles available for rides on the Uber platform. Similarly, Airbnb runs a platform travelers can use to book private accommodations instead of hotel rooms.

Long-established companies are also undergoing profound changes. The German publishing house Axel Springer SE, for instance, has acquired a number of startups, launched an online dating platform, and released an app with which users can collect points at retail. Chairman and CEO Matthias Döpfner also has an interest in getting the company’s newspapers and other periodicals back into the black based on payment models, of course, but these endeavors are somewhat at odds with the traditional notion of publishing houses being involved solely in publishing.

The impact of digitalization transcends Generation Y

Digitalization is effecting changes in nearly every industry. Retailers will likely have no choice but to integrate their sales channels into an omnichannel approach. Seeking to make their data services as attractive as possible, BMW, Mercedes, and Audi have joined forces to purchase the digital map service HERE. Mechanical engineering companies are outfitting their equipment with sensors to reduce downtime and achieve further product improvements.

“The specific potential and risks at hand determine how and by what means each individual company approaches the subject of digitalization,” Professor Brenner reveals. The resulting services will ultimately benefit every customer – not just those belonging to Generation Y, who present a certain basic affinity for digital methods.

“Think of cars that notify the service center when their brakes or drive belts need to be replaced, offer parking assistance, or even handle parking for you,” Brenner offers. “This can be a big help to elderly people in particular.”

Chief digital officers: team members, not miracle workers

Making the transition to the digital future is something that involves not only a CEO or a head of marketing or IT, but the entire company. Though these individuals do play an important role as proponents of digital models, it takes more than just a chief digital officer.

For Professor Brenner, appointing a single person to the board of a DAX company to oversee digitalization is basically absurd. “Unless you’re talking about Da Vinci or Leibniz born again, nobody could handle such a task,” he states.

In Brenner’s view, this is a topic for each and every department, and responsibilities should be assigned much like on a soccer field: “You’ve got a coach and the players – and the fans, as well, who are more or less what it’s all about.”

Here, the CIO neither competes with the CDO nor assumes an elevated position in the process of digital transformation. Implementing new databases like SAP HANA or Hadoop and leveraging sensor data in ways that are both technically and commercially viable – these are the tasks CIOs will face going forward.

“There are some fantastic jobs out there,” Brenner affirms.

Want more insight on managing digital transformation? See Three Keys To Winning In A World Of Disruption.




Digitalist Magazine

The latest in Oracle Partitioning – Part 2: Multi Column List Partitioning

This is the second blog about new partitioning functionality in Oracle Database 12c Release 2, available on-premise for Linux x86-64, Solaris Sparc64, and Solaris x86-64, and for everybody else in the Oracle Cloud.

This one will talk about multi-column list partitioning, a new partitioning methodology in the family of list partitioning. There will be more on this method in a future blog post (how about that for a teaser?).

Just like read-only partitions, this functionality is rather self-explanatory. Unlike in earlier releases, you can now specify more than one column as the partition key for list partitioned tables, enabling you to model even more business use cases natively with Oracle Partitioning.

So let’s start off with a very simple example:

CREATE TABLE mc
PARTITION BY LIST (col1, col2)
(PARTITION p1 VALUES ((1,2),(3,4)),
 PARTITION p2 VALUES ((4,5)),
 PARTITION p3 VALUES (DEFAULT))
AS SELECT rownum col1, rownum+1 col2
FROM DUAL CONNECT BY LEVEL <= 10;

Yes, you can have a partitioned table with only ten records, although I highly recommend NOT treating this as a best practice for real-world environments. Just because you can create partitions – and many of them – you should always bear in mind that partitions come with a “cost” in terms of additional metadata in the data dictionary (and row cache, library cache) and additional work for parsing statements, and so forth. Ten records per partition doesn’t cut it; you should always aim for a reasonable amount of data per partition, but that’s a topic for another day.

When we now look at the metadata of this newly created table, you will see the partition value pairs listed in the HIGH_VALUE column of the partitioning metadata:

SQL> SELECT partition_name, high_value FROM user_tab_partitions WHERE table_name='MC';

PARTITION_NAME                 HIGH_VALUE
------------------------------ ------------------------------
P1                             ( 1, 2 ), ( 3, 4 )
P2                             ( 4, 5 )
P3                             DEFAULT

Now, while I talked about a “new partitioning strategy” a bit earlier, from a metadata perspective it isn’t one. For the database metadata it is “only” a functional enhancement for list partitioning, where the number of partition key columns is greater than one:

SQL> SELECT table_name, partitioning_type, partitioning_key_count FROM user_part_tables WHERE table_name='MC';

TABLE_NAME                     PARTITION PARTITIONING_KEY_COUNT
------------------------------ --------- ----------------------
MC                             LIST                           2

Let’s now look into the data placement using the partition extended syntax and query our newly created table. Using the extended partition syntax is equivalent to specifying a filter predicate that exactly matches the partitioning criteria, and it is an easy way to save some typing. Note that both variants of the partition extended syntax – specifying a partition by name or pointing to a specific record within a partition – can be used for any partition maintenance operation and also in conjunction with DML.

SQL> SELECT * FROM mc PARTITION (p1);

      COL1       COL2
---------- ----------
         1          2
         3          4

I can get exactly the same result when I am using the other variant of the partition extended syntax:

SQL> SELECT * FROM mc PARTITION FOR (1,2);

      COL1       COL2
---------- ----------
         1          2
         3          4
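As a side note (this query is not in the original post), the extended syntax above is just shorthand for a filter predicate that exactly matches the partitioning criteria of P1, so the following returns the same two rows:

SQL> SELECT * FROM mc WHERE (col1, col2) IN ((1,2), (3,4));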

After having built a simple multi-column list partitioned table with some data, let’s do one basic partition maintenance operation, namely a split of partition P1 that we just looked at. You might remember that this partition has two sets of key pairs in its partition key definition, namely (1,2) and (3,4). We use the new functionality of doing this split in online mode:

SQL> ALTER TABLE mc SPLIT PARTITION p1 INTO (PARTITION p1a VALUES (1,2), PARTITION p1b) ONLINE;

Table MC altered.

Unlike offline partition maintenance operations (PMOP), which take an exclusive DML lock on the partitions the database is working on (prohibiting any DML changes while the PMOP is in flight), an online PMOP does not take any exclusive locks and allows not only queries (like offline operations) but also continuous DML operations while the operation is ongoing.

After we have now done this split, let’s check the data containment in our newly created partition P1A:

SQL> SELECT * FROM mc PARTITION (p1a);

      COL1       COL2
---------- ----------
         1          2
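As a quick cross-check (again, not part of the original post), the second partition from the split, P1B, should now hold the remaining key pair (3,4):

SQL> SELECT * FROM mc PARTITION (p1b);
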
That’s about it for now for multi-column list partitioned tables. I am sure I have forgotten some little details here and there, and this short blog post probably does not answer all the questions you may have. So please stay tuned, and if you have any comments about this specific post or suggestions for future blog posts, please let me know. You can always reach me at hermann.baer@oracle.com.

Another one down, many more to go.


The Data Warehouse Insider

5 Ways To Make Performance Management Human Again

The September issue of the Harvard Business Review features a cover story on design thinking’s coming of age. We have been applying design thinking within SAP for the past 10 years, and I’ve witnessed the growth of this human-centered approach to innovation first hand.

Design thinking is, as the HBR piece points out, “the best tool we have for … developing a responsive, flexible organizational culture.”

This means businesses are doing more to learn about their customers by interacting directly with them. We’re seeing this change in our work on d.forum — a community of design thinking champions and “disruptors” from across industries.

Meanwhile, technology is making it possible to know exponentially more about a customer. Businesses can now make increasingly accurate predictions about customers’ needs well into the future. The businesses best able to access and pull insights from this growing volume of data will win. That requires a fundamental change for our own industry; it necessitates a digital transformation.

So, how do we design this digital transformation?

It starts with the customer and an application of design thinking throughout an organization – blending business, technology and human values to generate innovation. Business is already incorporating design thinking, as the HBR cover story shows. We in technology need to do the same.


Design thinking plays an important role because it helps articulate what the end customer’s experience is going to be like. It helps focus all aspects of the business on understanding and articulating that future experience.

Once an organization is able to do that, the insights from that consumer experience need to be drawn down into the business, with the central question becoming: What does this future customer experience mean for us as an organization? What barriers do we need to remove? Do we need to organize ourselves differently? Does our process need to change – if it does, how? What kind of new technology do we need?

Then an organization must look carefully at roles within itself. What does this knowledge of the end customer’s future experience mean for an individual in human resources, for example, or finance? Those roles can then be viewed as end experiences unto themselves, with organizations applying design thinking to learn about the needs inherent to those roles. They can then change roles to better meet the end customer’s future needs. This end customer-centered approach is what drives change.

This also means design thinking is more important than ever for IT organizations.

We, in the IT industry, have been charged with being responsive to business, using technology to solve the problems business presents. Unfortunately, business sometimes views IT as the organization keeping the lights on. If we make the analogy of a store: business is responsible for the front office, focused on growing the business where consumers directly interact with products and marketing; while the perception is that IT focuses on the back office, keeping servers running and the distribution system humming. The key is to have business and IT align to meet the needs of the front office together.

Remember what I said about the growing availability of consumer data? The business best able to access and learn from that data will win. Those of us in IT organizations have the technology to make that win possible, but the way we are seen and our very nature needs to change if we want to remain relevant to business and participate in crafting the winning strategy.

We need to become more front office and less back office, proving to business that we are innovation partners in technology.

This means, in order to communicate with businesses today, we need to take a design thinking approach. We in IT need to show we have an understanding of the end consumer’s needs and experience, and we must align that knowledge and understanding with technological solutions. When this works — when the front office and back office come together in this way — it can lead to solutions that a company could otherwise never have realized.

There are, of course, differences between front office and back office requirements. The back office is the foundation of a company and requires robustness, stability, and reliability. The front office, on the other hand, moves much more quickly; it is always changing with new product offerings and marketing campaigns, so its technology must show agility, flexibility, and speed. The business needs both functions to survive. This is a challenge for IT organizations, but it is not an impossible shift for us to make.

Here’s the breakdown of our challenge.

1. We need to better understand the real needs of the business.

This means learning more about the experience and needs of the end customer and then translating that information into technological solutions.

2. We need to be involved in more of the strategic discussions of the business.

Use the regular invitations to meetings with business as an opportunity to surface the deeper learning about the end consumer and the technology solutions that business may otherwise not know to ask for or how to implement.

The IT industry overall may not have a track record of operating in this way, but if we are not involved in the strategic direction of companies and shedding light on the future path, we risk not being considered innovation partners for the business.

We must collaborate with business, understand the strategic direction and highlight the technical challenges and opportunities. When we do, IT will become a hybrid organization – able to maintain the back office while capitalizing on the front office’s growing technical needs. We will highlight solutions that business could otherwise have missed, ushering in a digital transformation.

Digital transformation goes beyond just technology; it requires a mindset. See What It Really Means To Be A Digital Organization.

This story originally appeared on SAP Business Trends.




Digitalist Magazine

Why am I getting so many checkpoint files when I have In-Memory OLTP enabled?

Recently, I looked into an In-Memory OLTP issue with Principal Software Engineer Bob Dorr, who is still my office neighbor.  After restoring a database that had just one memory-optimized table, we dropped the table. Even without any memory-optimized tables, the number of checkpoint files kept going up every time we issued a checkpoint.  For a while, I thought we had a bug where our checkpoint files don’t get cleaned up properly.

But after looking closely at sys.dm_db_xtp_checkpoint_files, I noticed that the state_desc was “WAITING FOR LOG TRUNCATION” for most of the checkpoint files.  Then I realized that we hadn’t backed up the transaction log.  Checkpoint files go through various stages before they can be deleted and removed; as long as a file is “WAITING FOR LOG TRUNCATION”, it can’t be removed.  You will need to ensure the log is backed up.

So this speaks to the importance of log backups.  Not doing log backups can cause log growth plus checkpoint file growth.

Here is a simple repro:

  1. Create a database.
  2. Create a memory-optimized table.
  3. Insert some data.
  4. Back up the database (full backup) but do not do a log backup.
  5. Then issue checkpoints on the database repeatedly in a loop, like while 1 = 1 checkpoint.
  6. Observe the folder that has the checkpoint files (use dir /s) to see the number of files keep growing.
  7. Stop the above while loop and back up your log; you will then observe that most of the checkpoint files are gone.

What are checkpoint files?

They are the data and delta files documented in Durability for Memory-Optimized Tables. When you use disk-based tables, the data is written to data files.  Even though data is stored in memory for memory-optimized tables, SQL Server still needs to persist the data for disaster recovery.  Data for memory-optimized tables is stored in what we call checkpoint files.  A data file contains rows from insert and update operations; a delta file contains deleted rows.  Over time, these files can be ‘merged’ to increase efficiency, and unneeded files can eventually be removed after the merge (but this can only happen after a log backup).

Demo script

--create database and set up tables
CREATE DATABASE imoltp
GO

--------------------------------------
-- add a memory-optimized filegroup and a container
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
go
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='c:\sqldata\imoltp_mod1') TO FILEGROUP imoltp_mod

go
use imoltp
go
CREATE TABLE dbo.ShoppingCart (
ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
CreatedDate DATETIME2 NOT NULL,
TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED=ON)
GO

go
insert into dbo.ShoppingCart (Userid, CreatedDate, TotalPrice) values ( 1, getdate(), 1)
go

-- backup database
backup database imoltp to disk = 'c:\temp\imoltp.bak' with init
go

--run checkpoint repeatedly
while 1 = 1 checkpoint

--from a command prompt, issue the following periodically to see the number of checkpoint files grow
dir /s c:\sqldata\imoltp_mod1

-- stop the checkpoint while loop above and issue a log backup; then observe that the files eventually go away
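-- (not part of the original demo) a minimal sketch of that log backup plus a check of the
-- checkpoint file states in sys.dm_db_xtp_checkpoint_files; the backup path is just an example
backup log imoltp to disk = 'c:\temp\imoltp_log.trn' with init
go
select state_desc, count(*) as file_count
from sys.dm_db_xtp_checkpoint_files
group by state_desc
go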

--if you have a small disk that holds both the checkpoint files and the log on one drive, you can eventually run out of disk space and get an error like the one below
Msg 3930, Level 16, State 1, Line 31
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.

Jack Li |Senior Escalation Engineer | Microsoft SQL Server

twitter| pssdiag |Sql Nexus


CSS SQL Server Engineers

An Industry First: Teradata Debuts Open Source Kylo™ to Quickly Build, Manage Data Pipelines

Companies will benefit from simple, economical, accelerated data lake development; can focus talent on delivering high-impact business outcomes

Teradata (NYSE: TDC), a leading analytics solutions company, today announced a new and important contribution to the open source community that will deliver unprecedented efficiencies for companies creating data lakes. Teradata is introducing Kylo™, a data lake management software platform built using the latest open source capabilities such as Apache® Hadoop®, Apache Spark™ and Apache NiFi™. Kylo is a Teradata sponsored, open-source project that is offered under the Apache 2.0 license. Kylo evolved from code harvested from proven data lake engagements led by Think Big Analytics, a Teradata company, which will provide services and support for Kylo™.

With substantive experience creating business value from data lakes, Teradata is contributing Kylo™ to help organizations address the most common challenges they face in data lake implementation efforts. These include the central problem that data lakes simply take too long to build, and in the average 6-12 month build cycle, users find that use cases can become out of date and less relevant to quickly evolving businesses. Second, despite the lower cost of software, engineering costs quickly mount. Finally, a data lake, once created, may fail to attract users who find it difficult to explore, and so little value is realized.

Derived and developed from data lake deployments across industries, Kylo can easily help resolve these challenges, because it integrates and simplifies pipeline development and common data management tasks, resulting in faster time to value, greater user adoption and developer productivity. With Kylo, no coding is required, and its intuitive user interface for self-service data ingest and wrangling helps accelerate the development process. Kylo also leverages reusable templates to increase productivity.

“Many organizations find that implementing big data solutions on the Hadoop stack is a complex endeavor. Big data technologies are heavily oriented to software engineering, developers and system administrators,” said Nik Rouda, senior analyst with Enterprise Strategy Group (ESG). “Our research found 28 percent of organizations still struggle to staff teams with enough BI and analytics talent, much less big data and open source solution expertise. 77 percent of those surveyed say new big data initiatives will take between seven months and three years to show significant business value. It doesn’t have to be that way. I commend Teradata for open sourcing Kylo™ – an innovative and meaningful contribution.”

Encapsulating extensive experience from over 150 data lake engagements, Kylo™ helps organizations address the most common challenges they face in data lake implementation efforts, including:

- Skill shortage for experienced software engineers and administrators.
- Learning and implementing best practices around data lake governance.
- Driving data lake adoption beyond engineers.

When these challenges are overcome, high impact business outcomes are realized. In fact, Teradata has already helped many organizations save money and create new revenue streams from data lakes:

- A semiconductor manufacturer increased the yield quality of wafers; reducing waste, saving time, boosting output and thus increasing value to the business.
- An industrial equipment manufacturer enabled new service models, service-level agreements, intervention processes, and, notably, new revenue streams.
- A world-renowned research hospital reduced patient prep times, allowing doctors to treat more patients.

“Kylo is an exciting first in open source data lake management, and perfectly represents Teradata’s vision around big data, analytics, and open source software,” said Oliver Ratzesberger, Executive Vice President and Chief Product Officer, Teradata. “Teradata has a rich history in the development of many open source projects, including Presto and Covalent. We know how commercial and open source should work together. So we engineer the best of both worlds, and we pioneer new approaches to open source software as part of our customer-choice strategy, improving the commercial and open source landscape for everyone.”

Teradata’s vision for the blend of commercial and open source is recognized by customers, who continue to use Teradata to unleash their potential.

“At Discover® Financial Services, we are focused on leveraging leading-edge technology that helps us quickly bring products to market while providing exceptional customer service. Kylo™ has a unique framework that has the potential to accelerate development and value on new data sources that leverage Apache NiFi,” said Ka Tang, Director, Enterprise Data Architecture, Discover. “Kylo™ may provide an opportunity to leverage open source innovations while allowing the opportunity to give back to the open source community.”

“Open source software has an appeal to users seeking independence, cooperative learning, experimentation, and flexibility for customized deployments,” said Rick Farnell, President of Think Big, a Teradata company. “Our contribution is all about helping companies build a scalable data lake foundation that can continuously evolve with their business, technology data and analytical goals. We are removing impediments to use data to solve complex business problems and encouraging analytical users to contribute to the growing Kylo™ community. Going forward, our primary focus as a company is to help our customers create business value through analytics, rather than commodity capabilities. Kylo, along with our Teradata Everywhere approach to software and services, is a great example of our innovative strategy for the future.”

To this point, a major telecommunications company implemented Kylo™ after a large team of 30 data engineers spent months hand-coding data ingestion pipelines. Using Kylo, one single individual was able to ingest, cleanse, profile, and validate the same data in less than a week. Kylo™ not only improved data process efficiencies, it allowed those additional engineers to focus on multiple major business priorities.

Kylo software, documentation, and tutorials are available now via the Kylo™ project website (www.kylo.io) or on GitHub (https://github.com/Teradata/kylo).

Think Big offers these optional services on request:

- Kylo™ support
- Kylo™ implementation services
- Kylo™ training
- Kylo™ managed services

“Kylo provides tooling on top of Apache NiFi to make it faster and easier to get data into your data lake,” said Scott Gnau, Chief Technology Officer, Hortonworks. “Hortonworks is pleased to announce Kylo’s certification with Hortonworks DataFlow and our expanded joint support relationship for NiFi.”

Teradata will play a leadership role in the governance, stewardship and community-building around open-source Kylo™.

Relevant Links
• Join the community and download Kylo today: www.kylo.io
• Obtain Kylo services and support: The Think Big Analytics Kylo web page.
• Enterprise Strategy Group research document, by Nik Rouda: Teradata open source technologies – an overview of strategy and history
• Think Big Expands Capabilities for Building Data Lakes with Apache Spark
• Teradata Acquires Big Data Partnership Consultancy, Expands Open Source Analytics Services
• Insights and Outcomes: How Teradata is Helping its Customers Capitalize on Data and Analytics


Teradata United States

Graph reduction

Let us consider graph1:

g1 = {4612 <-> 4613, 4613 <-> 4614, 4642 <-> 4612, 4614 <-> 4522, 4798 <-> 4642, 4522 <-> 4376, 4536 <-> 4798, 4798 <-> 4996, 4376 <-> 4201, 4338 <-> 4536, 4813 <-> 4996, 4201 <-> 4043, 4074 <-> 4338, 4813 <-> 4735, 4043 <-> 3813, 3796 <-> 4074, 4646 <-> 4735, 3711 <-> 3813, 3665 <-> 3796, 4646 <-> 4585, 3711 <-> 3450, 3509 <-> 3665, 4584 <-> 4585, 3119 <-> 3450, 3177 <-> 3509, 4662 <-> 4584, 3119 <-> 2911, 2890 <-> 3177,4729 <-> 4662, 2911 <-> 2714, 2642 <-> 2890, 4729 <-> 4753, 2551 <-> 2714, 2641 <-> 2642, 4875 <-> 4753, 2518 <-> 2551, 4972 <-> 4875, 2481 <-> 2518, 5081 <-> 4972, 2365 <-> 2481, 4967 <-> 5081, 2320 <-> 2365, 4938 <-> 4967, 2310 <-> 2320, 4937 <-> 4938, 2215 <-> 2310, 2310 <-> 2317, 4942 <-> 4937, 2053 <-> 2215, 2315 <-> 2317, 4923 <-> 4942, 1943 <-> 2053, 2315 <-> 2316, 4922 <-> 4923, 1942 <-> 1943, 2329 <-> 2316, 4880 <-> 4922, 2329 <-> 2248, 4721 <-> 4880, 2248 <-> 2249, 4673 <-> 4721, 4683 <-> 4721, 2249 <-> 2246, 4672 <-> 4673, 4508 <-> 4683, 2246 <-> 2191, 4831 <-> 4672, 4507 <-> 4508, 2191 <-> 2093, 4779 <-> 4831, 2093 <-> 2052, 4551 <-> 4779, 4717 <-> 4779, 2052 <-> 2000, 4551 <-> 4409, 4489 <-> 4717, 2000 <-> 1961, 4274 <-> 4409, 4323 <-> 4489, 1961 <-> 1950, 4224 <-> 4274, 4084 <-> 4323, 1950 <-> 1951, 4223 <-> 4224, 3876 <-> 4084, 1951 <-> 1957, 4336 <-> 4223, 3769 <-> 3876, 1957 <-> 1948, 4336 <-> 4069, 4232 <-> 4336, 3704 <-> 3769, 1948 <-> 1949, 3767 <-> 4069, 4103 <-> 4232, 3545 <-> 3704, 2054 <-> 1949, 3561 <-> 3767, 4055 <-> 4103, 3409 <-> 3545, 2054 <-> 1996, 3415 <-> 3561, 3899 <-> 4055, 3408 <-> 3409, 1996 <-> 1997, 3415 <-> 3377, 3898 <-> 3899, 3425 <-> 3408, 2043 <-> 1997, 3345 <-> 3377, 3905 <-> 3898, 3461 <-> 3425, 2043 <-> 2128, 3277 <-> 3345, 3689 <-> 3905, 3410 <-> 3461, 2091 <-> 2128, 3277 <-> 3105, 3459 <-> 3689, 3360 <-> 3410, 2091 <-> 1946, 2923 <-> 3105, 3458 <-> 3459, 3254 <-> 3360, 1946 <-> 1838, 2822 <-> 2923, 2923 <-> 2894, 3458 <-> 3460, 3239 <-> 3254, 1725 <-> 1838, 2772 <-> 2822, 2894 <-> 2788, 3407 <-> 3460, 3238 <-> 3239, 1725 <-> 1531, 2771 <-> 2772, 2788 <-> 2598, 3406 <-> 3407, 1531 <-> 1342, 2480 <-> 2598, 3514 <-> 3406, 1342 <-> 1276, 2480 <-> 2402, 3321 <-> 3514, 3514 <-> 3504, 1219 <-> 1276, 2402 <-> 2400, 3153 <-> 3321, 3504 <-> 3272, 1219 <-> 1090, 2400 <-> 2401, 3042 <-> 3153,3023 <-> 3272, 1090 <-> 1035, 2793 <-> 3042, 3084 <-> 3042,2850 <-> 3023, 997 <-> 1035, 2424 <-> 2793, 3008 <-> 3084, 2739 <-> 2850, 997 <-> 960, 2134 <-> 2424, 3007 <-> 3008, 2578 <-> 2739, 2739 <-> 2645, 960 <-> 961, 1914 <-> 2134, 2488 <-> 2578, 2645 <-> 2356, 1656 <-> 1914, 2278 <-> 2488, 2195 <-> 2356, 1655 <-> 1656, 2277 <-> 2278, 2195 <->2023, 1896 <-> 2023, 1895 <-> 1896};

gx = Graph[g1];
we1 = Select[SortBy[Transpose[{VertexList[g1], DegreeCentrality[g1]}], Last], #[[2]] == 1 &][[All, 1]];
we11 = Table[we1[[i]] -> i, {i, 1, Length[we1]}];
we2 = Select[SortBy[Transpose[{VertexList[g1], DegreeCentrality[g1]}], Last], #[[2]] == 3 &][[All, 1]];
we22 = Table[we2[[i]] -> i + 11, {i, 1, Length[we2]}];
gx = Graph[g1 (*, GraphLayout -> "SpringEmbedding" *), VertexLabels -> Join[we11, we22]];
graph1 = HighlightGraph[gx, {Style[we2, Red], Style[we1, Yellow]}, VertexSize -> 2]

How can I reduce graph1 to graph2? Graph2: x = {1 <-> 12, 12 <-> 4, 12 <-> 20, 20 <-> 7, 20 <-> 18, 18 <-> 11, 18 <-> 19, 19 <-> 10, 19 <-> 17, 17 <-> 14, 14 <-> 8, 14 <-> 6, 17 <-> 16, 16 <-> 13, 13 <-> 3, 13 <-> 5, 16 <-> 15, 15 <-> 2, 15 <-> 9};

graph2 = Graph[Reverse[x], GraphLayout -> "SpringEmbedding", VertexLabels -> "Name"]


I have such an algorithm, but it works relatively slowly (I need it for very large networks).
The labels of the nodes are arbitrary and do not matter.
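
For reference, here is a minimal sketch (not the asker’s algorithm, and not tuned for very large networks) of one way to do the reduction: repeatedly splice out every degree-2 vertex by deleting it and joining its two neighbors directly, so that only the leaves and branch points remain. The result keeps the original vertex names and should be isomorphic to graph2 up to relabeling.

reduce[g_Graph] := Module[{h = g, v},
  While[(v = SelectFirst[VertexList[h], VertexDegree[h, #] == 2 &]) =!= Missing["NotFound"],
   (* delete v and connect its two former neighbors directly; assumes no self-loops or parallel edges *)
   h = EdgeAdd[VertexDelete[h, v], UndirectedEdge @@ AdjacencyList[h, v]]];
  h]

reduced = reduce[Graph[g1]]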


Recent Questions – Mathematica Stack Exchange

What Does Digital Transformation Really Mean To Your Business?

Digital transformation. Multichannel customer engagement. New cloud mindset.

If you’re playing Buzzword Bingo and scoring from home, your board should light right up. But what does it all actually mean? Is there anything behind the barrage of jargon that vendors and pundits foist on us that’s worth spending our precious hard-earned time and resources on?

Let’s consider the first one: Digital transformation (which I do agree has been overused almost to the point of being a cliché). We have now entered the territory where we hear a term so often that we stop listening before we allow ourselves to think about it—sort of like hanging up on a poor telemarketer before she finishes her pitch.

Here is my challenge for you: Think about what “digital transformation” means for you without rolling your eyes and diving into the next clickbait story that catches your eye. Part of the problem, I believe, is how the term has been explained. A lot of people immediately launch into an explanation of why you should totally transform yourself digitally, and what products or services you should be getting in order to get yourself digitally transformed so that you are not left standing on the platform as the 21st-century way of doing business pulls out of the station, and if you act now we will throw in a free (digital) toaster and a free waffle maker, etc., etc…

What? Oh, yes, sorry—that was also my natural reflexive reaction to hearing pitches like these.

But what does digital transformation mean? Here’s the simplest way I can explain it: Digital transformation means doing everything in your organisation with as few manual steps as possible, taking advantage of technology as much as possible to make work easier, do things faster, do new things, engage with your customers digitally, and gain a lot more understanding about what you’re doing while you’re at it.

OK, now that we’ve covered the what, let’s talk about the why. The easiest way to explain this is simply: The world has changed.

Customers now want to do everything faster, easier, and simpler.

They are no longer happy with getting information the traditional way. Everything must be done with immediacy, and information needs to be ubiquitous and instantly available. This is true for both business-to-consumer (B2C) and business-to-business (B2B) models.

It is also true for the public sector. And if you think this trend is going to slow down, have a chat with an average teenager. They have never lived in a world without the Internet, and they will be entering the workforce and consuming goods very soon.

Can your business afford not to accelerate and provide digital services and contact points to this generation?

One more important aspect of these trends is that, more and more, they point to the cloud as the optimal solution. As the great exodus of data and information to the cloud continues, it is now just common sense that in most cases a cloud system is the best choice, as no system is an island. There will come a day when having your own server room will seem as antiquated as punch cards and mainframes. I’ll be 47 this year, and I am convinced it will happen in my lifetime. Scary.

But wait a minute, you say. You’re savvy to all this. You have a state-of-the-art website, and you allow customers to shop, order, and transact online. You are hip to all this digitalisation of business.

If you’ve done this, excellent—you’ve gone a long way toward digitising your business. However, even if customers are the lifeblood of any company, they are not the only thing that needs to be digitised. You also need to make the other parties in your business—your suppliers and your own employees—enjoy the same digital benefits. Just as you have made it easier for your customers to engage with you, you should also make it easier for your suppliers to transact with you, and you should also make life easier for your employees.

Once the three pillars of customer, suppliers, and employees are digitalised, we can use analytical tools to analyse, mine, and report on the data to present the information in consumable form.

At this point, it is very important to remember that digitalisation should never take the human element out of your processes. Rather, it should take the drudgery out of your workflow to allow the humans to do what humans do best: make better decisions.

This is in line with my earliest blog post, “Becoming a Mindful Organisation,” and it also builds on concepts such as why a cloud system increasingly makes sense. But allow me to inject a dose of reality here: This is no magic bullet. (e.g., how will you migrate your 20-year-old bespoke order-entry system into the cloud system that IT has fallen in love with, and not lose any business?)

I’ll leave you with three highly unofficial pieces of advice, based on my own experience:

When considering solutions, consider the short-, medium-, and long-term view, and consider the three pillars: customers, suppliers, and employees. Find a solution that will have a positive (or at least the minimally negative) impact on all pillars, not just one. Think road map. Does your vendor have integrated solutions that will benefit multiple pillars?

Go past the marketing hype and accept that there will be some complexity. Simplicity does not simply happen just because you installed a new cloud-based system. Business is complex—competing issues and priorities will still fight for your time and attention, and there will be technical issues in the roadmap ahead.

There will also be resistance from internal and external parties. That’s OK, just take it in stride and find solutions to them, and be flexible enough to make adjustments. There is a technical term for this: C’est la vie. The history of IT has always been smart people like you finding ways to solve complex problems. This is just another bend in the river, folks.

Embrace change. Just as I have gotten over being a cloud skeptic, be willing to have an open mind and look at new ways of doing things. Perhaps the way you’ve been dealing with your suppliers for the past 15 years can be rethought. Maybe the way your project engineers enter time can be streamlined. Maybe the online catalog that went up when Netscape was still a thing should have a refresh.

It is a brave new world out there, and although there are no certainties in life, all signs point to the future being cloudy. Get equipped, get informed, and get ready to digitally transform your organisation, because the rest of the world will not wait for you.

For more on digital transformation strategies, see Who’s In Charge Of Digital Transformation? You Are!



Digitalist Magazine

Released: System Center Management Pack for SQL Server and Dashboards (6.7.20.0)

We are happy to announce that updates to SQL Server Management Packs have been released!

Downloads available:

Microsoft System Center Management Pack for SQL Server 2016

Microsoft System Center Management Pack for SQL Server 2014

Microsoft System Center Management Pack for SQL Server (2008-2012)

Microsoft System Center Management Pack for SQL Server Dashboards

A Note About Dashboard Performance

Sometimes, you may find that dashboards open rather slowly—it may take quite a lot of time for them to get ready. The core of the issue lies in the large amounts of data written into the Data Warehouse throughout the day. All of this data has to be processed when you open any dashboard, which may lead to dashboards freezing. The issue is most frequent when you open the dashboards after a certain period of inactivity. To neutralize this issue, it is recommended to enable the special “DW data early aggregation” rule. The rule distributes the Data Warehouse processing load during the day, which results in a quicker start of the dashboards.

By default, the rule has a 4-hour launch interval which works for most environments. In case the dashboards have not reached the desired performance level in your environment, decrease the interval. The more frequently the rule launches, the quicker the dashboards behave. However, do not decrease the interval below 15 minutes and do not forget to override the rule timeout value to keep it always lower than the interval value.

Please see below for the new features and improvements. Most of them are based on your feedback. More detailed information can be found in guides that can be downloaded from the links above.

New SQL Server 2008-2012 MP Features and Fixes

  • Implemented some enhancements to data source scripts
  • Fixed issue: The SQL Server 2012 Database Files and Filegroups get undiscovered upon Database discovery script failure
  • Fixed issue: DatabaseReplicaAlwaysOnDiscovery.ps1 connects to a cluster instance using node name instead of client access name and crashes
  • Fixed issue: CPUUsagePercentDataSource.ps1 crashes with “Cannot process argument because the value of argument “obj” is null” error
  • Fixed issue: Description field of custom user policy cannot be discovered
  • Fixed issue: SPN Status monitor throws errors for servers not joined to the domain
  • Fixed issue: SQL Server policy discovery does not ignore policies targeted to system databases in some cases
  • Fixed issue: GetSQL20XXSPNState.vbs fails when domain controller is Read-Only
  • Fixed issue: SQL ADODB “IsServiceRunning” function always uses localhost instead of server name
  • Increased the length restriction for some policy properties in order to make them match the policy fields
  • Actualized Service Pack Compliance monitor according to the latest published Service Packs for SQL Server

New SQL Server 2014 and 2016 MP Features and Fixes

  • Implemented some enhancements to data source scripts
  • Fixed issue: DatabaseReplicaAlwaysOnDiscovery.ps1 connects to a cluster instance using node name instead of client access name and crashes
  • Fixed issue: CPUUsagePercentDataSource.ps1 crashes with “Cannot process argument because the value of argument “obj” is null” error
  • Fixed issue: Description field of custom user policy cannot be discovered
  • Fixed issue: SPN Status monitor throws errors for servers not joined to the domain
  • Fixed issue: SQL Server policy discovery does not ignore policies targeted to system databases in some cases
  • Fixed issue: Garbage Collection monitor gets generic PropertyBag instead of performance PropertyBag
  • Fixed issue: GetSQL20XXSPNState.vbs fails when domain controller is Read-Only
  • Fixed issue: SQL ADODB “IsServiceRunning” function always uses localhost instead of server name
  • Increased the length restriction for some policy properties in order to make them match the policy fields
  • Actualized Service Pack Compliance monitor according to the latest published Service Packs for SQL Server

SQL Server Dashboards MP

  • No changes since 6.7.15.0. The version number has been bumped to 6.7.20.0 to match the current version of SQL Server MPs.

We are looking forward to hearing your feedback.


SQL Server Release Services