Monday, July 27, 2009

A point of view on Scrum

Having worked in the industry for over 11 years, I have seen projects implemented in a number of different ways. In my current role I am working with an extremely detailed and rigorous project methodology based on Waterfall. While I am a certified Scrum Master, I believe that by marrying the best of agile and waterfall you arrive at a methodology that satisfies the development teams, our clients, and project and programme management.

For me the fundamental difference between waterfall and Scrum is the difference between selling projects and selling products. While I am not promoting either methodology, it is interesting to consider and discuss the merits of each.

The following points describe the key areas that will help establish a high-performance Scrum team:

- Estimates are provided by the team. Estimates are based on the team's ability to deliver.
- Team velocity is based on the current actual ability of the team.
- High-performance teams understand each other and work from project to project as a delivery team.
- Software is an art more than a science. Ensuring software quality will inevitably take time. Working on software until it is right, trading off features along the way, is the right approach, provided your client accepts that they may not get everything they want.
- There is no single point of leadership. No one person is responsible for driving the project to completion; the team is.
- Assessing the delivery approach during a sprint retrospective helps the team re-organize and optimize the delivery process.
- Sprints help organize the focus of the team for the next period of the project.
- Having a Scrum Master remove impediments allows the team to focus on delivery and not have to deal with issues that don't add value to the project.
- Things change over time, and on large projects it is difficult to estimate whether a six-month target date will be met. Scrum does not impose end dates; it lets the project take its course.
- Problems are detected earlier rather than later. The client gets to see the product regularly and does not test the product at the end of the project.
- Having the team report directly to the Product Owner means that they are no longer shielded by the project manager. This enforces accountability.
- Teams are self-organizing, which lets team members focus on the areas that interest them.
- Using a backlog to identify features and requirements allows the team to have visibility on what is going to be delivered in the lifecycle of the project. Scrum promotes visibility to all stakeholders.
- Daily scrum meetings allow the team to refocus their effort daily: each team member must report on what they achieved, what they are going to achieve, and what impediments are currently holding them back.
- Producing code early on in the project allows the client to start getting the benefits of the product much sooner than in a traditional waterfall methodology.
- There is an inherent increase in the ability of the team to cross skill and grow their knowledge through osmosis and the collaborative nature of the Scrum project.
- When a problem is found, the entire team stops to focus on resolving the issue. While this halts the production line, the team learns together and the problem is resolved much more quickly than when one person tackles it alone.

The following observations help clarify the role of Scrum:

- There must be inherent trust between the product owner and the team.
- Scrum teams are typically between four and nine people. On a large project you will need multiple teams, with multiple Product Owners and Scrum Masters.
- Scrum aims to keep the development team happy and productive by removing the unnecessary structure and rigor that stops them from producing code, which is what ultimately adds the real business value.
- Scrum implies that software requirements are emergent, meaning that you will discover new requirements as you go. Your client must understand that you are going through a discovery process. Discovery takes place for the duration of the project, not up front as in waterfall.
- A Scrum Master is a servant leader. They need good people skills and do not take a leadership position within the team.
- Scrum aims to make the project more visible, notifying all of the stakeholders of the team's velocity (ability to deliver). Clients must understand that if the team mix is not right, the velocity will be low. New teams will also have a low velocity. Teams that work together on multiple projects will have a much higher velocity.
- Teams must be groomed over time.


The following points are where I see challenges in using Scrum on some of the projects that I have worked on:

- Teams must be co-located. Scrum relies heavily on human interactions, collaboration and knowledge sharing. This will not suit the nature of distributed projects.
- Relying on extensive human interaction means that information is not always documented.
- Organizing work in sprints and starting development right away removes the team's ability to focus on research: the team goes directly to code before understanding the available design options. This approach assumes that you have an experienced and mature team that understands the application domain. It works well in in-house IT departments where people work in the same environment for a prolonged period of time.
- Re-factoring comes at a cost. Waterfall reduces the amount of refactoring by doing all of the planning and design upfront.
- The project is not outcomes-based: the end product is not known upfront, and the client only declares it done when they are satisfied that they have what they want (which may or may not be entirely what they had initially envisioned). The client can change the scope during the project, and on fixed-price contracts this is detrimental to the project.
- The small team size (four to nine members) means additional resources are needed to facilitate multiple teams: additional Product Owners and Scrum Masters.
- Each team member is expected to know what they are doing from day one. Coding starts as soon as possible.
- There is a huge emphasis on face-to-face communication. While this is good for team bonding, the knowledge is not recorded; if people leave, that knowledge leaves with them. The low focus on creating documentation means that you don't have anything to go back to when there is a dispute with the client, hence the inherent trust with the client that is a requirement.
- Scrum implies that the team will be happy and that attrition is naturally low. Attrition is a reality, and the knowledge that would otherwise be lost during a project must be recorded somewhere, especially on large, complex implementations.
- There are huge contractual implications if the understanding of the team and the client are vastly different. Scrum will work in an organization where the team understands the application domain and requirements. Typically these type of resources belong to in-house IT departments.
- Scrum assumes that the team will do what is required to deliver the product, including business analysis, design, testing and coding from day one. Until the analysts and designers have uncovered the actual design, the coders cannot meaningfully start, and if they do there will be inherent rework.
- There is a lot of bargaining and negotiation with the client as to which functionality is delivered first; typically our clients expect everything to be completed. Scrum projects must have the ability to push back, and the client must understand that this is part of the process.

Tuesday, July 7, 2009

Getting started with PerformancePoint Server Monitoring Installation

Microsoft announced earlier this year that PerformancePoint Server was not going to be released as a stand-alone version; instead it would be bundled with Microsoft Office SharePoint Server Enterprise Edition in future releases. They confirmed that Service Pack 3 is going to be made available for the current version of the PerformancePoint software.

If you are planning on installing PerformancePoint Server 2007 on Windows Server 2008, SQL Server 2008 and MOSS 2007, you should read the following, as it will save you loads of time when you need to troubleshoot the installation.

The following redistributable files are required to pass the prerequisite check on installation. Ensure that you complete all of the installations below before running the Monitoring Server Configuration Manager.

The following files are part of the Feature Pack for Microsoft SQL Server 2005 and can be found here.


  • sqlncli.msi

  • SQLServer2005_ADOMD.msi

  • SQLServer2005_ASOLEDB9.msi

  • SQLServer2005_XMO.msi


Next you will need to install the AJAX Extensions, found here: ASPAJAXExtSetup.msi

    Finally, you will need to install Service Pack 2 for PerformancePoint Server, found here.
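Before running the installer it can help to confirm that every prerequisite package is staged. The sketch below is my own addition (the pre-check idea and the staging directory are assumptions); only the installer file names come from the list above:

```python
import os

# Prerequisite installers, in the order listed above.
# The directory to scan is an assumption; point it at wherever you downloaded the files.
PREREQS = [
    "sqlncli.msi",
    "SQLServer2005_ADOMD.msi",
    "SQLServer2005_ASOLEDB9.msi",
    "SQLServer2005_XMO.msi",
    "ASPAJAXExtSetup.msi",
]

def missing_prereqs(directory="."):
    """Return the installers from PREREQS that are not present in `directory`."""
    return [name for name in PREREQS
            if not os.path.exists(os.path.join(directory, name))]

if __name__ == "__main__":
    for name in missing_prereqs():
        print("Missing installer:", name)
```

Running it before the Monitoring Server setup saves a failed prerequisite check halfway through the installation.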

One of the indications that the installation did not complete correctly the first time was an error message that appeared when creating a new data source in the Dashboard Designer. Even though I knew the SQL credentials were correct, the test failed with the following message:

Unable to connect to the specified server. Make sure the address is correct.
    If you are unable to connect to the external data source, try calling the web service directly using http://localhost:40000/WebService/PmService.asmx. This failed with the following error message:

    The page must have a <%@ WebService class="MyNamespace.MyClass" ... %> directive.


    Wednesday, June 24, 2009

    An easy technique for inverting a SQL table


    The following is an easy technique for inverting a table structure in T-SQL. I will explain the mechanics behind the technique and provide the sample code that we used on a recent project. The original code was more than 200 lines and was extremely complex. While the code was more than sufficient for our environment, there may be other ways to accomplish the same goal; this post presents one of the available options.

    The first step is to create a temp table that holds the table structure containing the rows of data that are going to be inverted into columns. You need to add an additional integer column that contains the row number for each of the rows in the table. This is automatically populated using the Row_Number() function.

    Declare @Count int, @i int, @SQL nVarchar(3000)
    Create Table #tmp(ModuleHierarchyID Int, WebPartCode Varchar(10), SVGFilename Varchar(255), XSLTFilename Varchar(255), RowNumber Int)

    The next step is to populate the temp table with the data that you want to invert into a column structure.

    Insert into #tmp(ModuleHierarchyID, WebPartCode, SVGFilename, XSLTFilename, RowNumber)
    Select ModuleHierarchyID, W.WebPartCode, SVGFilename, XSLTFilename,
        Row_Number() Over (Order By ModuleHierarchyId Desc) As RowNumber
    from SVG_SVGMapping S, WEP_WebPart W
    where S.ModuleHierarchyId = @ModuleHierarchyID
    and S.WebPartId = W.WebPartId
    The data from the query above looks as follows:
    The following section dynamically builds a SQL query that extracts the tabular data and presents it in a column format. The @Count variable determines the depth of the table, and the loop iterates through each row, using the row number to build each column. The dynamic SQL below contains an outer select statement, with a sub-select statement for each column.
    Set @Count = (Select Count(1) from #tmp)
    Set @i = 0
    Set @SQL = N'Select ' + Ltrim(Str(@ModuleHierarchyID)) + ' As ModuleHierarchyId'

    While @i < @Count
    Begin
        Set @SQL = @SQL + N', '
        Set @SQL = @SQL + N'(Select WebPartCode from #tmp where RowNumber = ' + Ltrim(Str(@i + 1)) + N') As WebPartCode' + Ltrim(Str(@i + 1)) + N','
        Set @SQL = @SQL + N'(Select SVGFilename from #tmp where RowNumber = ' + Ltrim(Str(@i + 1)) + N') As SVGFilename' + Ltrim(Str(@i + 1)) + N','
        Set @SQL = @SQL + N'(Select XSLTFilename from #tmp where RowNumber = ' + Ltrim(Str(@i + 1)) + N') As XSLTFilename' + Ltrim(Str(@i + 1))
        Set @i = @i + 1
    End

    EXECUTE sp_executesql @SQL

    The dynamic SQL should look as follows:

    Select 4 As ModuleHierarchyId,
    (Select WebPartCode from #tmp where RowNumber = 1) As WebPartCode1,
    (Select SVGFilename from #tmp where RowNumber = 1) As SVGFilename1,
    (Select XSLTFilename from #tmp where RowNumber = 1) As XSLTFilename1,
    (Select WebPartCode from #tmp where RowNumber = 2) As WebPartCode2,
    (Select SVGFilename from #tmp where RowNumber = 2) As SVGFilename2,
    (Select XSLTFilename from #tmp where RowNumber = 2) As XSLTFilename2




    Once the query is run, the following results are returned:
    The complete code looks as follows:


    Declare @Count int, @i int, @SQL nVarchar(3000)

    Create Table #tmp(ModuleHierarchyID Int, WebPartCode Varchar(10), SVGFilename Varchar(255), XSLTFilename Varchar(255), RowNumber Int)

    Insert into #tmp(ModuleHierarchyID, WebPartCode, SVGFilename, XSLTFilename, RowNumber)
    Select ModuleHierarchyID, W.WebPartCode, SVGFilename, XSLTFilename,
        Row_Number() Over (Order By ModuleHierarchyId Desc) As RowNumber
    from SVG_SVGMapping S, WEP_WebPart W
    where S.ModuleHierarchyId = @ModuleHierarchyID
    and S.WebPartId = W.WebPartId

    Set @Count = (Select Count(1) from #tmp)
    Set @i = 0
    Set @SQL = N'Select ' + Ltrim(Str(@ModuleHierarchyID)) + ' As ModuleHierarchyId'

    While @i < @Count
    Begin
        Set @SQL = @SQL + N', '
        Set @SQL = @SQL + N'(Select WebPartCode from #tmp where RowNumber = ' + Ltrim(Str(@i + 1)) + N') As WebPartCode' + Ltrim(Str(@i + 1)) + N','
        Set @SQL = @SQL + N'(Select SVGFilename from #tmp where RowNumber = ' + Ltrim(Str(@i + 1)) + N') As SVGFilename' + Ltrim(Str(@i + 1)) + N','
        Set @SQL = @SQL + N'(Select XSLTFilename from #tmp where RowNumber = ' + Ltrim(Str(@i + 1)) + N') As XSLTFilename' + Ltrim(Str(@i + 1))
        Set @i = @i + 1
    End

    EXECUTE sp_executesql @SQL
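    To make the mechanics clearer outside of T-SQL, here is a minimal Python sketch of the same row-to-column inversion. The sample rows and their values are hypothetical stand-ins for the #tmp table contents; only the column names mirror the code above:

```python
# Hypothetical stand-ins for the rows held in the #tmp table.
rows = [
    {"WebPartCode": "WP1", "SVGFilename": "a.svg", "XSLTFilename": "a.xslt"},
    {"WebPartCode": "WP2", "SVGFilename": "b.svg", "XSLTFilename": "b.xslt"},
]

def invert(rows, module_hierarchy_id):
    # Start with the shared key, then emit one suffixed column per source row,
    # exactly as the dynamic SQL's numbered sub-selects do.
    result = {"ModuleHierarchyId": module_hierarchy_id}
    for i, row in enumerate(rows, start=1):
        for col in ("WebPartCode", "SVGFilename", "XSLTFilename"):
            result[f"{col}{i}"] = row[col]
    return result

print(invert(rows, 4))
```

    Each source row contributes one suffixed copy of each column (WebPartCode1, WebPartCode2, ...), which is exactly what the generated sub-selects produce in the single result row.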

    Monday, June 22, 2009

    Who decides when to "Virtualise"

    There are no hard and fast rules that determine when an organisation should look at virtualisation as a component of their technology platform. The decision is a difficult one driven by many agendas. Fortunately the virtualisation platforms have evolved and matured to the level where it is now acceptable to run your production environment on a virtual infrastructure.

    For some the decision is driven by cost, for others by the inherent flexibility it provides, and for some CIOs by an agenda centred on "Green IT" initiatives. There is no official documentation that explains the impact that running a virtual environment will have on the performance of your system. Each virtual environment is unique, be it SQL Server, IIS, SAP, Oracle, etc.

    The decision to go virtual should follow the same project implementation methodology used to introduce new software into the organisation. The first step is to understand the requirements. Once the requirements have been defined you should understand the forces driving the organisation to consider a virtual hardware platform. The infrastructure should be designed to meet the business needs.

    Only once a feasible design has been completed should you implement the design in your development environments. It's key to understand how the organisation's software behaves on virtual hardware before a significant investment is made in rolling out a full-blown implementation. Once the development goals are met, the infrastructure should be deployed in a test environment. Testing should include not only the software, but specific tests around non-functional requirements, like the impact on manageability, flexibility and usability goals.

    One of the greatest concerns around virtualisation is the effect on performance. If the system is designed correctly, performance should not be a concern, since virtualisation brings with it the ability to scale instantly. In an interesting post, the author of a virtualisation column commented on the effect that code quality has on performance in a virtual infrastructure. His assessment is correct: poor-quality code in a virtualised world won't have the resources available when things go wrong. These types of issues are easily hidden when an application has a dedicated piece of hardware all to itself.

    The point here is how you design your infrastructure, specifically disk, network, and how resources such as CPU are allocated. If you need performance, you scale the farm. For a web application you should be more concerned with the number of concurrent transactions the system is able to support.

    The following areas will help you decide if virtualisation is the correct course for your design. You need to decide which forces apply and whether they feature strongly in your solution / client requirements.

    Manageability

    One of the key selling points for virtualisation is the manageability it brings. You can provision new servers quickly, without needing long procurement cycles to buy and implement new hardware, effectively increasing capacity in minutes. The converse is also true: you can de-provision servers, so you won't have hardware standing around and depreciating over time.

    High Availability

    Virtualisation software nowadays has HA and redundancy by design. Virtual machines can be moved without losing network connectivity, so your resource capacity itself becomes "highly available" and the pool of available processing resources increases. If you lose a physical server, that capacity is no longer available and neither are the services that were running on it, but you can now transfer those services seamlessly to spare capacity in the pool.

    Licensing

    Licensing is cheaper, since you license the physical hardware and can run multiple virtual servers on the same infrastructure. Licensing generally works per physical CPU socket, driving down the cost of the licenses compared with hosting the same platform on dedicated physical hardware.

    Flexibility

    Running environments with different operating systems becomes easier: you can run Windows Server 2008, Linux and Windows Server 2003 on the same physical hardware. You can provision multiple environments, such as QA and Development, on the same physical tin, yet still have isolation from a server perspective. You can move virtual servers to other locations immediately while work is done on a physical machine.

    If a new project is established due to legislative changes and you need to modify your current production system, you can create a separate test environment quickly, without having to try to manage the changes on your current test infrastructure.

    Scalability

    You can scale a solution on demand. Call centers expecting a high volume of calls due to a recent ad campaign, can increase their processing capacity for a short period of time.

    Better Hardware Utilisation

    Virtual servers will load-balance across the allocated physical servers. You could run both your SQL and web infrastructure on the same physical tin. If your processing requirements increase beyond your current capacity, you just add servers.

    Better Resource utilisation

    On 64-bit hardware you can scale the memory beyond the 32-bit limitations. This allows you to tile 32-bit virtual machines on a single physical machine, effectively using the available memory on the 64-bit physical machine. An example where you would use this is running Terminal Services (which is 32-bit, last time I checked, and can only use 4 GB of RAM) on a server capable of running 32 GB or 64 GB of memory. This allows you to organically grow your server farm.

    Green IT


    The reduction in physical hardware means that there is no need to manufacture a server dedicated to a single function, reducing the electricity and coal consumption involved in the manufacturing process. Heat emissions from multiple machines in a single rack are reduced, lowering the overall data centre air-conditioning requirements, and the overall power consumption and power requirements for multiple physical machines are reduced.

    Isolation and encapsulation

    Virtualisation allows you to run multiple operating systems and product platforms on the same physical infrastructure, reducing the cost of running each OS on its own dedicated hardware.

    Re-use

    When upgrading your physical hardware, it is possible to port your existing virtual images to a new hardware platform, reducing the need to re-install the application servers.

    Friday, April 17, 2009

    IT Environment Management

    The role of the IT environment manager has in the past been seconded from different areas of the organization. Decisions about how servers are provisioned and who manages the environment have predominantly been divided between the infrastructure team and the development team. The role has typically been shared with that of infrastructure manager or development manager, both of whom carry dual responsibility and cannot provide a dedicated focus for environment activities.

    Business demands change quickly, and with business change there is usually an inherent technology change. New technology means that the business now needs to manage both new and legacy environments. As virtualization finds its way into the fabric of our IT infrastructure, we are now able to provision environments that run concurrently with legacy environments without needing to procure new hardware. This means new business capabilities can be provided in a short space of time, and available resources can be used more efficiently.

    Keeping track of development and test requirements and ensuring that the projects have the necessary infrastructure on time is a difficult task. Having one person who understands the capabilities of the different development and test environments across all projects is another challenge. The environment manager is the role that coordinates all activities across Data Management, Configuration Management, Release Management, Infrastructure Management, and Test Management.

    The following roles are important for environment management and provide input into the environment manager:

    Data Management

  • Provisions converted test data per test scenario per environment
  • Provides consistent processes for data population into test environments
  • Enables data refreshes prior to the beginning of each test pass
  • Controls the acquisition/administration of project data
  • Handles data from storage and retrieval systems
  • Provides guidelines for the handling of classified data
  • Handles the planning, scheduling, and delivery of data into a test environment

    Configuration Management
  • Configurable item versions are managed such that the right version is always used in a development activity
  • Historical versions can be captured and deployed to test environments
  • Source code can be grouped to deploy application releases in a structured manner, maintaining security according to environment requirements
  • Defects can be traced in updated source code files
  • Defects can be traced per test environment
  • SCM impacts test environments by controlling configurable item version usage as well as the structured movement of compiled software from one environment to another.
  • Build Management

    Release Management
  • Provides a structured approach for deployment of system changes to Production, which reduces quality-related risk
  • Changes to software are bundled together in releases which occur on a limited basis, minimizing the impact of changes to the users and the likelihood of introducing defects to Production.

    Infrastructure Management
  • Procures hardware in terms of Networks, Servers and Disk.
  • Provisions physical servers for the purpose of hosting virtual servers.
  • Deploys virtual servers up to OS and network level.
  • Provisions disk.
  • Regular backups of environment components, including servers and databases.
  • Restoration of base server infrastructure in the event of a failure.
  • Ensures that the infrastructure services Mail, Directory Services and monitoring are functioning correctly.

    Test Management
  • Monitor the environment after the deployment of bug fixes.
  • Track core functionality within an environment.
  • Co-ordinate test activities and teams.
  • Includes processes to support the test environment.
  • Allows for the consistent tracking of defects associated with a test environment.
  • Captures core functionality within an environment, such as test execution, defect management and metrics reporting.
  • Executes test cases to verify an environment is ready for use.
  • Captures the required components needed for a test environment.

    Environment Management
  • Monitor the availability of the environment according to plan.
  • Track releases, upgrades and changes to the development and test environments.
  • Log, coordinate and resolve environment defects.
  • Plan the reuse or decommission of environment when testing is complete.
  • Coordinate test environment configuration, deployment and test activities required to provide a stable test platform.
  • Ensure monitoring of environments is conducted.
  • Communicate hardware requirements to Infrastructure Management.
  • Allocate project infrastructure based on the project requirements. Environment management may allow environment sharing where the technologies are similar and where there is sufficient capacity.

    Supporting Processes

  • Demand Management
    Manage the process of allocating test environments in an efficient way. Verify enough environments are available across projects.
  • Security Testing
    Verify an environment meets the security requirements prior to testing.
  • Smoke Tests
    Execute a selected test script with critical functionality to verify an application is ready for testing.
  • Health Checks and Monitoring
    Verify the environment is ready for testing. Manage the various mechanisms that provide monitoring capabilities.

    Each and every IT organization and project has its own unique requirements. While the information above is an accurate account of the types of work performed around environment management, each project can tailor the roles and ensure that the underlying responsibilities are managed and met within their organization.

