The Importance of Creating a Secure Password for Your System or Network

Creating a secure password for your system or network is the cornerstone of an effective security strategy. I continue to be amazed, however, by how many people who have access to content critical to their organization’s success fail to take the time to create more complex passwords that will protect this content.

Many users continue to place priority on convenience over security, and as a result they choose passwords that are simple for hackers to decipher. Recently, SplashData, which develops password management applications, released its annual “Worst Passwords for 2012” list, compiled from common passwords posted by hackers. The top three passwords – the actual word “password,” “123456,” and “12345678” – have not changed since last year. The worst passwords for 2012 also demonstrate that people are not even changing default passwords – still choosing convenience over security.

Furthermore, although a simple, common password offers some benefits, such as being easy to recall when it is time to log on to your network or system, those benefits are miniscule compared to the harm a hacker can do to your content and network.

Source: Washington State Office of the Attorney General

Below are some tips on what makes a password “strong” (a simple screening sketch in Python follows the list):

  • It is at least eight characters long
  • It does not contain your username, real name, or company name
  • It does not contain a complete word
  • It is significantly different from previous passwords
  • It includes numbers and symbols, as well as a mixture of uppercase and lowercase letters
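
To make these tips concrete, here is a minimal Python sketch of how an IT team might screen candidate passwords against the criteria above. The function name and the specific checks are illustrative assumptions only; a real policy engine would also compare candidates against dictionaries, breach lists and the user’s previous passwords.

    import re

    def is_strong_password(password, username="", real_name="", company=""):
        """Rough screen of a candidate password against the tips above.

        Illustrative only: it does not check for complete dictionary words
        or for similarity to previous passwords.
        """
        if len(password) < 8:
            return False
        lowered = password.lower()
        # Reject passwords containing the username, real name, or company name.
        for forbidden in (username, real_name, company):
            if forbidden and forbidden.lower() in lowered:
                return False
        # Require a mix of uppercase, lowercase, digits and symbols.
        has_upper = re.search(r"[A-Z]", password)
        has_lower = re.search(r"[a-z]", password)
        has_digit = re.search(r"\d", password)
        has_symbol = re.search(r"[^A-Za-z0-9]", password)
        return all((has_upper, has_lower, has_digit, has_symbol))

    print(is_strong_password("password"))        # False – one of 2012's worst
    print(is_strong_password("Tr4ck!ng-B1rds"))  # True – long, mixed character classes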

It is important for IT managers and system administrators to ensure that users are aware of the necessity of maintaining secure passwords. IT teams should educate users about the importance of strong passwords and implement measures that will ensure users’ passwords are effective. IT managers and system administrators also need to stay alert and always one step ahead, for example by discovering potentially weak passwords before hackers do.

IT teams can deploy firewalls, endpoint security, malware analysis and a complete arsenal of solutions to thwart cyberthreats. But something as basic as a more complex password is critical to strengthening the first line of defense.

I’d like to hear from IT teams that have had success in encouraging users to adopt more sophisticated passwords.  Share your secrets for success!

The Top Ten Hot Spots Of Performance Bottlenecks

What are performance bottlenecks and where can you detect them? 

Performance bottlenecks are places within an application that prevent it from running as fast as it should. Finding performance bottlenecks is becoming a critical aspect of any enterprise-level load testing exercise. The trouble with performance bottlenecks, however, is that they can be tough to identify. When you hear the words “performance bottleneck,” the typical culprits that come to mind are CPU, memory, disk and network. Although these are good places to start, they aren’t the only places where problems can lurk. While there isn’t a magic bullet for detecting performance bottlenecks, knowing where to look can improve your aim.

Source: Knowledge Sharing

Below are the top ten hot spots IT managers and administrators should look into when detecting a performance bottleneck:

1) CPU – CPUs can handle millions of calculations and instructions, but performance suffers when the number of operations exceeds capacity. A CPU that stays busier than 75 percent will slow the entire system, so leave enough headroom that loads can reach 100 percent for short periods without becoming a sustained bottleneck (a monitoring sketch follows this list).

2) Memory – Performance bottlenecks that seem to implicate a lack of memory are often the result of poorly designed software that manifests itself as memory pressure. The key to solving memory performance problems is to find the root cause of the symptom before adding more RAM.

3) Storage – There are practical and physical limits to performance even with the best contemporary disk technology, so it is important to think carefully about which workloads to combine on the same disks and which to separate. It is also worth noting that local disks are still faster than the fastest NAS or SAN.

4) Network – The network is the most commonly blamed source of performance bottlenecks, but in reality it is rarely the culprit unless there is a hardware failure or misconfiguration, such as a damaged switch port, a bad cable or a router configuration problem.

5) Applications – Poorly coded applications sometimes masquerade as hardware problems. One telltale symptom is a system that slows down while a given application is running and speeds up again once the application is closed.

6) Malware – Viruses, trojan horses and spyware account for a large percentage of perceived performance bottlenecks, and can reside on one or more servers, the user’s workstation, or a combination of the two. Antivirus, antispyware, local firewalls, network firewalls and a regular patching regimen will help protect systems and prevent resultant bottlenecks.

7) Workload – IT teams should measure network capacity and performance regularly and increase capacity when activity approaches performance limits. It is also vital for users to monitor their own computers and other devices and to inform IT teams when their equipment is reaching performance limits.

8) Outdated Hardware – The older the hardware, the more likely it is to fail. The best way to prevent tragedies such as the sudden disappearance of data is to always back up critical files to a server, or at least to an external hard drive, and to monitor system performance regularly.

9) Filesystem – Each filesystem, such as JFS, XFS and NTFS, has a specific purpose, and using the incorrect one for an application can have disastrous results. It is important for users to consider filesystem choices wisely and to select the best one for the job.

10) Technology – The technology you select for your infrastructure is the foundation of your network’s performance. Always survey key members of your organization in advance to estimate anticipated network needs two to three years down the road, and then purchase hardware accordingly. Given the explosion of content today, make sure all hardware features scalability. Also study hardware price trends; it sometimes pays to buy more capacity than you need initially, and sometimes it does not. Storage, for example, continues to decrease in price per megabyte, so as long as your storage architecture is flexible, you can save money by purchasing just what you need for the short term and then adding to it.
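
To make the CPU hot spot in item 1 concrete, here is a minimal Python sketch of how an administrator might watch for sustained CPU saturation. It assumes the third-party psutil package is installed; the 75 percent threshold and the one-minute window are illustrative values drawn from the list above, not hard rules.

    import psutil  # third-party package: pip install psutil

    BUSY_THRESHOLD = 75.0   # percent; sustained load above this slows the whole system
    SAMPLE_SECONDS = 5      # length of each utilization sample
    SAMPLES_TO_FLAG = 12    # roughly one minute of sustained saturation before warning

    def watch_cpu():
        """Warn when CPU utilization stays above the threshold for a sustained period.

        Short spikes to 100 percent are expected and ignored; only sustained
        saturation is flagged as a likely bottleneck.
        """
        busy_streak = 0
        while True:
            utilization = psutil.cpu_percent(interval=SAMPLE_SECONDS)
            busy_streak = busy_streak + 1 if utilization > BUSY_THRESHOLD else 0
            if busy_streak >= SAMPLES_TO_FLAG:
                print(f"Sustained CPU saturation: {utilization:.0f}% busy for "
                      f"about {busy_streak * SAMPLE_SECONDS} seconds")
                busy_streak = 0

    if __name__ == "__main__":
        watch_cpu()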

I hope this has been helpful and look forward to comments and strategies from you!

Vector Resources exhibits at NASCENT, CETPA and CASBO

It’s been a busy few days here at Vector Resources, with a lot of exciting activity surrounding our exhibits at NASCENT, CETPA and CASBO!

In summary:

Vector Resources exhibited and presented this year at NASCENT Technology’s ConneXions 2012 conference in Long Beach, CA, October 17-18. NASCENT Technology’s ConneXions 2012 is an annual conference that showcases the latest technology solutions from NASCENT, a leading technology solutions provider to the intermodal transportation industry. Vector Resources exhibited and demonstrated state-of-the-art, IP-based video surveillance solutions.

In addition, Vector Resources exhibited at the 52nd Annual CETPA conference on October 18 in Monterey, CA. This year’s CETPA conference theme was “Rethink, Reshape, Redesign,” and with Vector Resources’ experience and expertise in providing vendor-neutral network solutions to school districts, charter organizations and private schools across the West, Vector offered effective solutions and valuable insights during both days for attendees interested in successfully addressing California schools’ technology challenges, and also presented on the topic “Network Optimization and Security” on October 18.

Last but not least, Vector also exhibited at CASBO’s 44th Annual vendor show on October 24 in Pomona, CA. CASBO’s conference theme this year was “Vendor Odyssey 2012,” and Vector discussed hot topics with several of the school business officials who attended, such as how today’s K-12 technology solutions are driving teaching and learning in the 21st century classroom.

All of these conferences were a major success! For all who were able to join us this year at any one of the conferences mentioned above – thank you and we hope to see you all again next year!

Should Application Virtualization Be in Your Network’s Future? – Part 2

In my previous post, I discussed 10 very important benefits that IT managers should analyze when considering application virtualization.

In today’s post, I will continue to shed some more insight on the top five limitations and downsides to this network architecture:

1) Bandwidth Requirements – Streaming apps to hundreds of users concurrently can require massive amounts of bandwidth. IT teams should make sure the cost savings from moving to virtualized apps aren’t swallowed up by extra bandwidth costs (see the back-of-envelope sketch after this list).

2) No Tolerance for Network Outages – If there is any disruption between the servers hosting the apps and the user – whether within the host, in the internet connection or in the organization’s network – users are dead in the water, with very limited resources to stay productive.

3) Balky Apps – Some apps, particularly those that require the installation of specific system drivers, do not work as well in a virtual environment. Some apps are simply better suited to virtualization than others.

4) Added Complexity – Apps from some vendors, most notably Microsoft, were not developed to run in a virtual environment. When problems arise, IT teams may not be able to trace whether the issue lies with the app or with the virtual environment.

5) More Work – If there are some apps essential to the organization that don’t function in a virtual environment, IT teams may find themselves managing two architectures with some applications running virtually and others running on users’ desktops. This scenario defeats much of the purpose of moving to a virtual environment.
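
To put rough numbers behind the bandwidth concern in item 1, here is a back-of-envelope Python sketch. The per-session bandwidth figure and the headroom factor are hypothetical assumptions for illustration only; actual streaming-app bandwidth varies widely by vendor and workload.

    def required_bandwidth_mbps(concurrent_users, mbps_per_session=1.5, headroom=1.3):
        """Rough estimate of the link capacity needed to stream apps to concurrent users.

        mbps_per_session and the 30 percent headroom factor are illustrative
        assumptions, not vendor figures.
        """
        return concurrent_users * mbps_per_session * headroom

    for users in (50, 200, 500):
        print(f"{users:4d} concurrent users -> ~{required_bandwidth_mbps(users):,.0f} Mbps")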

Application virtualization can potentially benefit employees and boost productivity for the organization overall. However, IT teams must weigh the benefits carefully against the potential downsides and analyze application virtualization within the context of their comprehensive network plan.

Only with a careful analysis of the organization’s network architecture and the computing needs of different types of users throughout the organization, accompanied by a detailed cost/benefit analysis, should IT teams embark on a project as impactful to a network as application virtualization.

I hope you learned a few things from this post as well as my previous post and I look forward to hearing everyone’s feedback and comments!

Should Application Virtualization Be in Your Network’s Future? – Part 1

Application virtualization is the concept of running application software from a remote server, rather than a user’s computer. With application virtualization, each application brings down its own set of configurations on-demand and allows computing resources to be distributed dynamically in real time.

Application virtualization can provide significant flexibility and convenience, offering IT teams and users easier network maintenance and great application portability. Before IT teams consider implementing application virtualization for the organization’s network, however, it is vital to understand the major benefits and also the downsides this architecture will provide.

With that said, below are the top 10 benefits that IT managers should analyze when considering application virtualization:

1) No Installation Required – Application virtualization simplifies software deployment: IT teams download an app once and do not have to install it on hundreds of different computers. It also means that once the team knows the app works in the virtual environment, it does not need to verify that the app works on every desktop variation in the network, because configuration differences on desktops do not usually affect virtualized apps.

2) Easy Application Updates – This goes hand-in-hand with benefit number one, but with application virtualization, the host vendor ensures the newest version of the apps are available.

3) Easy Resets – If for some reason an app is no longer working properly on a user’s computer, due to changed settings or incompatible add-ons, the IT team has the option to reset the app to its original state.

4) Application Management – Similar to the whitelisting/blacklisting concept in security software, IT teams can set rules for which workgroups can download which apps (see the sketch after this list). This can save a significant amount of budget and avoids taking the path of least resistance, which is sometimes to simply load every app on every machine.

5) Improved Security – By isolating applications from the operating system and from other apps, application virtualization provides increased security by preventing malware from spreading from one app to others.

6) Reduction in Costs – Application virtualization is less expensive than either purchasing software licenses for all machines or moving to full desktop virtualization. For organizations with up-to-date operating system licenses, application virtualization provides the benefit of “virtualness” without replacing and paying for current OS software.

7) Helpdesk Support – Host vendor helpdesk personnel are available to provide support for IT teams if users run into any issues, questions, or even conflicts. The helpdesk personnel can easily access all available apps in the network and can test the app in the same environment as users, if necessary.

8) Improved Roaming – Some applications allow users to store settings in the virtual environment, so that when they access their apps from any computer anywhere, they will immediately see their personalized settings.

9) Multiple Software Versions – IT teams can make available multiple versions of applications concurrently, such as Word 2003 and Word 2007, which can save users time when working with files that may not convert easily between older and new versions of an app.

10) Operating System Independence – Since virtualized apps are typically OS independent, users who work in Windows, MacOS and Linux can potentially use their apps on any machine within their organization.
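
To illustrate the application management benefit in item 4, here is a minimal Python sketch of workgroup-based whitelisting. The group names and app lists are made-up examples; real application virtualization suites expose this kind of control through their own management consoles.

    # Hypothetical workgroup-to-application whitelist; names are illustrative only.
    APP_WHITELIST = {
        "finance":     {"Excel", "ERP Client", "Word"},
        "engineering": {"Visual Studio", "AutoCAD", "Word"},
        "all_staff":   {"Word", "Outlook"},
    }

    def can_download(workgroup, app_name):
        """Return True if the workgroup is allowed to pull the virtualized app."""
        allowed = APP_WHITELIST.get(workgroup, set()) | APP_WHITELIST["all_staff"]
        return app_name in allowed

    print(can_download("finance", "AutoCAD"))   # False – not whitelisted for finance
    print(can_download("engineering", "Word"))  # True – allowed for the group and all staff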

Please stay tuned for my next blog post, where I will provide everyone with the top five limitations and downsides to this kind of network architecture.

Until then, I look forward to everyone’s feedback and comments!

Vector Resources Ranks #4 in this Year’s Inland Empire Business Journal’s Interconnect/Telecommunications Firms List

Vector Resources ranked number four in the Inland Empire Business Journal’s “Interconnect/Telecommunications Firms Serving the Inland Empire” list, which appeared earlier this week.

As the general manager of Vector Resources’ Inland Empire office for the past seven years, I consider it a great honor for Vector Resources to have been recognized on this list in the most recent September 2012 issue. It is especially rewarding for me because I founded the office in 2005 as its only employee, and today we have grown to more than 25 full-time staff.

I am excited to see this list showcase and highlight Vector Resources’ strength as a leading in-house turnkey integrator in the Inland Empire, and I look forward to working with the entire Vector team to grow our presence in the region and contribute to the ongoing success of the company.

Vector Resources Exhibits at Fourth Annual Arizona Technology Summit

Earlier this week, Vector Resources exhibited at the Fourth Annual Arizona Technology Summit, held Wednesday, September 19 at the Phoenix Convention Center.

This year’s conference theme was in a prime Vector sweet spot – IT Transformation. As expected, the Summit lived up to its billing, attracting an outstanding mix of CIOs, CTOs, CEOs and government agency decision makers eager for real, proven solutions for their telecommunications and networking needs. Approximately 1,500 people attended the conference, a sizeable proportion of whom visited our booth to see our demonstrations. We featured the integration of Microsoft Lync and Polycom systems to facilitate video communications from nearly any device. There was also a lot of interest in BYOD and in securing the personal devices people want to link to corporate networks.

Vector is on the move in Arizona: in just over 12 months, we have won contracts from the Mohave Educational Services Cooperative, Pima County, and the State of Arizona.

Overall, this conference was a major success and definitely exceeded my expectations. It will contribute to positive awareness about Vector and hopefully assist with securing invitations to compete for future contracts.

For all who were able to join us this year at the conference – thank you, and we look forward to meeting you all again at next year’s Arizona Technology Summit.