
SQL Server 2014: New Datatype

Apr 1, 2014   //   by Karen Lopez   //   Blog, Data Modeling, Database Design, Fun, Parody, Snark, Space, SQL Server, WTF  //  14 Comments


Today is the general availability release date for the newest version of SQL Server, aptly named SQL Server 2014.  I’m excited about many of the new features being rolled out today, but the ones that will impact data architects, modelers and database designers are the new datatypes being introduced.  But first, for those of you who have your heads stuck in the deep piping and spit-managing of databases, some background about datatypes:

A datatype is a categorization of data items, based on the range and types of data that it can contain and a set of actions that can be validly taken against that data. 

As such, applying a datatype to a column in a database makes it work as another type of constraint.  A tinyint column can’t hold my Starbucks name (Kitty), for example, because it constrains the values to integers, and to only a subset of all integers.
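Here’s a quick sketch of that constraint in action (the temp table and values are mine, invented for illustration):

```sql
-- tinyint only accepts whole numbers from 0 to 255, so the datatype
-- itself rejects anything outside that range, no CHECK constraint needed.
CREATE TABLE #TypeDemo ( SmallNumber tinyint );

INSERT INTO #TypeDemo VALUES (42);      -- succeeds
INSERT INTO #TypeDemo VALUES (300);     -- fails: arithmetic overflow for tinyint
INSERT INTO #TypeDemo VALUES ('Kitty'); -- fails: conversion from varchar to tinyint
```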

The number and type of datatypes (yes, I’m being meta there) varies depending on the strength and quality of the tequila the DBMS product management teams were drinking at their last Vegas Blow Out team building retreat, as called for in the ISO Standards for databases, AKA

ISO/IEC JTC 1/SC 32 - Data management and interchange.  

One of the things that developers and DBAs will tell you is that choosing the right datatype is important for performance reasons.  And by that, they mean the smallest datatype you can fit most of the data in. And maybe a bit smaller.  Soooo much bad info out there, I know.  When Knowledge Conquers Fear, we can love our data.  Thank the Cosmos you have me here to help you out.

What’s new in SQL Server 2014: A New Datatype

This new datatype is exciting for me as a data & space enthusiast.  The new feature finally allows modern database designers to properly specify the constraints for tracking time and location data in the same column. Yes, this means that your developers and DBAs no longer have to use comma-delimited values in their relational database designs when they need to track how much time and personal space they need to get up to speed on professional database design.  And it’s big enough to store that many man-hours.  Yeah. I said that.

BTW, it seems that Stack Overflow is *the* place to find info on how to implement comma-delimited values in database columns.  Kids, don’t get your database design knowledge from random forums on the Internet.
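For the record, here’s a minimal sketch of that anti-pattern next to a properly normalized design (all table and column names invented for illustration):

```sql
-- The comma-delimited anti-pattern: many values crammed into one column,
-- where the DBMS can't validate, join, or index them sensibly.
CREATE TABLE MissionBad ( MissionID int PRIMARY KEY,
                          CrewNames varchar(8000) );  -- e.g. 'Chris,Karen,Neil'

-- The relational alternative: one value per row, with real constraints.
CREATE TABLE Mission     ( MissionID int PRIMARY KEY );
CREATE TABLE MissionCrew ( MissionID int          NOT NULL
                             REFERENCES Mission (MissionID),
                           CrewName  varchar(100) NOT NULL,
                           PRIMARY KEY (MissionID, CrewName) );
```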

Anyway, back to the news!

The new feature makes so much sense with Microsoft’s push to the Cloud, its embrace of NoSQL technologies and all.  It’s AWESOME.

 

spacetime (Transact-SQL)

Defines a time and location in a universe.

SQL Server 2014

spacetime Description

Syntax: spacetime [(fractional seconds precision)], (universe, 5DGeometry)

Usage:

DECLARE @MySpacetime spacetime (1000, 2014.12.0.2000.8, [5DGeometry image])

CREATE TABLE Table1 ( Column1 spacetime (1000, 2014.12.0.2000.8, [5DGeometry image]) )

Time Range: -∞ to +∞ and beyond (I hope you have lots and lots of memory and storage)

Space Ranger: @cmdr_hadfield

Universe Range: Please check data.NASA.gov for the up-to-date listing of known Universes and Multiverses, as this changes beyond Microsoft’s control. There is no control. There is no center.

5DGeometry Range: [you’ll need a 5D monitor to view this range.]

Timezone offset range: Thank Venus, no, nope, never. We are scientists here. Use Multiuniversal Universal Time Coordinates (UTMC).

Daylight saving aware: Oh, for Carl’s sake. Do you really think something like spacetime needs to be sullied by DST?

Storage size: If you have to ask, you don’t ever need to use this datatype. Seriously.

Accuracy: +/- 10 Plancks, depending on how far your server is from the Sun. Earth’s Sun, that is.

Default value: 1989-05-15 12:00:00.1000  2014.12.0.2000.8

Calendar: Hubble

SQL Azure Windows Azure Dammit! Microsoft Azure DB Support: Yes, of course.  But only in Premium plans and higher.

 

Special Considerations and Gotchas

Some gotchas with this new datatype:

  • Due to the highly multi-dimensional, multiuniversal nature of this datatype, there isn’t any backwards compatibility.  Unless, of course, you can fold spacetime and go back and change earlier versions of SQL Server. But if you could do that, you wouldn’t be reading my blog, would you?
  • Just like the confusion over timestamps, you can’t really treat this like a date or time datatype.  It’s special.  And spatial. 
  • This means you can’t convert it to date, time, datetime, timestamp or spatial datatypes, either.
  • The 5D geometry thing is way too complex to explain in a single blog post.  But for those of you that managed to stick it out through some college level math, it involves parsecs (the correct usage of the term) and the double declining balance method of space depreciation.  In this first rollout of spacetime, the geometry completely ignores most OctoDeca Bands.  Except for Miller tracks.
  • You can’t use normal date and geometrical math on data in these columns. You can bend or fold the values, but since space has no center, and time has no beginning or end, spacetime has no beginning or end. It is infinite.  So the usual infinity rules apply.
  • This datatype is only available via O365, but that makes sense since as announced today, SQL Server 2014 is also only available via O365 subscriptions.
  • This datatype is only available at O365 plans at U3 and higher.  Wait, I don’t think I should have said anything about the new Universe O365 plans.  Forget I said anything.  That’s probably not going to be a rule in our universe.  Seriously.  No NDA broken.  I think.

 

Note

Some of this post may have been inspired by some bad veggie April Fish (poisson d’avril) I had last night.   If you want to get some real information about the new features of SQL Server 2014, you probably shouldn’t read random blogs on the internet on launch day.  Especially when it’s 1 April.

Did you catch all the special references in this post?  Let me know.

What about XP, Agile, SCRUM, Lean…? #EDW2014

Mar 31, 2014   //   by Karen Lopez   //   Blog, Data Modeling, Events, Speaking  //  No Comments

At the upcoming Enterprise Data World 2014, I’ll be doing a half-day presentation on Driving Development Projects with Enterprise Data Models

Here are a few teases for what we will be talking about:

[teaser slide images]

 

The Abstract:

Monday, April 28, 2014
08:30 AM – 11:45 AM

Level: 
Intermediate

Join this session to see how data fits in real-world enterprise development projects. We’ll answer such questions as:

  • "Who does what?"
  • "Why are we doing this?"
  • "Will it slow things down?”
  • “Will it work with agile development?”
  • "Will I have to actually talk to a data architect?"
  • “What about the Cloud?”
  • "What are the biggest mistakes teams make?"
  • "Will I still have a job?"

The session will feature demos of common data modeling-to-database processes, including reverse engineering, forward engineering, generating DDL, alter scripts, and more. You will leave with 10 tips for making model-driven database development successful in your organization’s culture and environment.

Getting Cosmos Data From the Source

Mar 21, 2014   //   by Karen Lopez   //   Awesome, Blog, Fun, Space  //  No Comments

[photo]

Me: Hi, Neil, I’m Karen

Neil: Hi, Karen….

5 Classic Data Modeling Mistakes…And How to Avoid Them

Mar 12, 2014   //   by Karen Lopez   //   Blog, Data, Data Modeling, Speaking  //  No Comments

Slides from my frequent DAMA / Enterprise Data World presentation on Data Modeling mistakes.  You can click on the stopwatch in the player to auto-advance the slides.

There’s no sound; these are just the slides. If you’d like to attend a presentation on this topic, ask your local user group (DAMA, ERwin, PASS, etc.) to invite me.

Data Modeling is Iterative. It’s not Waterfall

Mar 7, 2014   //   by Karen Lopez   //   Blog, Data, Data Governance, Data Modeling, Database Design  //  3 Comments

Sure, data modeling is taught in many training classes as a linear process for building software.  It usually goes something like this:

  1. Build a Conceptual Data Model.
  2. Review that with users
  3. Build a Logical Data Model
  4. Review that with users
  5. Build a Physical Data Model
  6. Give it to the DBA
  7. GOTO step one on another project.

And most team members think it looks like this:

[image]

Training classes work this way because it’s a good way to learn notations, tools and methods.  But that’s not how data modeling works when the professionals do it on a real project.

Data modeling is an iterative effort. Those iterations can be sprints (typical for my projects) or have longer intervals. Sometimes the iterations exist just between efforts to complete the data models, prior to generating a database.  But it’s highly iterative, just like the software development part of the project. 

In reality, data modeling looks more like this:

Data Model Driven Development - Karen Lopez

This is Data Model-Driven Development.  The high-level steps work like:

  1. Discuss requirements.
  2. Develop data models (all of them, some of them, one of them).
  3. Generate Databases, XML schemas, file structures, whatever you might want to physically build. Or nothing physical, if that’s not what the team is ready for. 
  4. Refine
  5. Repeat.

These, again, are small intervals, not the waterfall steps of an entire project.  In fact, I might do this several times even in the same sprint. Not all modeling efforts lead to databases or physical implementations.  That’s okay.  We still follow an iterative approach.  And while the steps here look like the same waterfall list, they aren’t the same.

  • There isn’t really a first step.  For instance, I could start with an in-production database and move around the circle from there.
  • We could start with existing data models. In fact, that’s the ideal starting point in a well-managed data model-driven development shop.
  • The data models add value because they are kept in sync with what’s happening elsewhere – as a natural part of the process, not as a separate deliverable.
  • The modeling doesn’t stop.  We don’t do a logical model, then derive a physical model, throwing away the logical model.
  • Data modelers are involved in the project throughout its lifecycle, not just some arbitrary phase. 
  • Modeling responsibilities may be shared among more roles.  In a strong data model-driven process, it is easier for DBAs and BAs to be hands-on with the data models.  Sometimes even users.  Really.

By the way, this iterative modeling approach isn’t unique to data models.  All the models we might work on for a project should follow it.  Class diagrams, sequence diagrams, use cases, flow charts, etc. should all follow this process to deliver the value that has been invested in them.  That’s what Agile means by “the right amount of [modeling] documentation”. Data model-driven development means that models are “alive”.  

If you are a modeler reinforcing the wrong perceptions of needing a waterfall-like approach to data modeling, you are doing it wrong.  You might be causing more pain for yourself than for anyone else on your project.

Data Models aren’t just documentation checklist items.  They model the reality of a living, breathing system at all points in its life.  They deliver value because they are accurate, not because they are “done”.

Data Management Career Success…in Turbulent Times

Slides from my frequent DAMA and Enterprise Data World presentation on data management career success.
