Whether you’re an office worker or a busy person at home managing bills and scheduling, you likely rely on a computer to get things done. Even in the "age of mobile", computers are essential, and many people are put in a bind if their computer suddenly dies.
To protect these devices, it’s important to understand some of the common (yet surprising) ways you can fry your computer:
Lightning strike surges.
A lightning bolt contains a billion joules of energy, which creates a powerful surge that can easily overpower electronic devices. And direct hits happen more often than you might think. A man in Rochester, New York was recently struck by lightning while working at his office, and the bolt damaged multiple computers and systems.
When lightning hits a building, it travels through the entire structure; in the best-case scenario it passes through a grounding rod, which sends the energy safely into the ground. As the current rushes through the building, it passes through walls, sockets, and any device that is plugged in. Even if a computer is protected by a high-end surge protector, a direct lightning strike can destroy it in an instant. The only way to prevent such damage is to unplug the computer and run it on battery power, so it's separated entirely from the building's wiring.
Keeping the computer "on" all the time. Many busy professionals leave work for the day and keep their computer on so it's readily accessible in the morning. Or home users keep it on all day and night so it's convenient for everyone in the family. Despite saving a few minutes, leaving the computer on 24/7 can shorten its lifespan. A computer in sleep mode is still idling, and the cooling system still needs to operate to avoid overheating. These parts wear out over time, and a user who turns off the computer for twelve or more hours a day can greatly extend the usable life of the components.
Rebooting the machine also gives the computer the chance to clear memory fragments and process updates. Ignoring such updates can expose the computer to malware and/or prevent it from receiving an important patch that might save the computer from the "blue screen of death."
Diving into advanced settings. Attempting to boost or alter the machine's performance is a good idea, but only if you take the right actions. Running defragmentation or virus scans is simple and worthwhile, but you have to tread carefully when it comes to "advanced settings." Some users might try to adjust the computer's BIOS settings, which tell the computer what steps to take when it powers on. The problem with making these types of changes is that they can introduce undesired consequences; for example, the OS might not load properly. And with the typical cost of computers today, fixing the problem with professional help can come close to the cost of a new machine.
Dust and ventilation issues. Desktop and laptop computers need free space around them so they can properly vent heat. Avoid using your laptop on top of a fluffy blanket, as this traps heat and can easily clog the vents. Laptops are especially prone to heat problems because they're often used away from a desk, so always provide some space around the laptop to prevent a crash. There are several laptop cooling pads on the market that help prevent overheating so you can use the laptop for hours at a stretch.
Dust is another issue, as it can easily clog the exhaust fan. Dust acts as an insulating layer on components that need to shed heat, so it's worth opening up the machine's case every few months to clear away dust, especially if the computer is used in a dust-prone environment. Use a can of compressed air to safely clean the machine.
When treated carefully, desktop computers and laptops can last for several years. Extending the life of these machines helps companies and individuals get the most "cost per year" value and reduces the risk of a crash that wipes out valuable data.
By David Zimmerman
For information and assessments, contact us.
Keeping your network in good shape can be a headache, especially after you decide to allow Voice-over-IP (VoIP) calls on your network. Here's how to prepare your network for VoIP.
If your small to midsize business (SMB) has decided to make the shift from landline phones to a business Voice-over-IP (VoIP) service, then you'll want to be aware of several key networking challenges that VoIP newbies face. In some cases, switching to VoIP requires an entire office restructuring, a different approach to using wireless internet, or a trip to the store to purchase more Ethernet cables.
To help you anticipate and prepare for these networking issues, I spoke with Curtis Peterson, Senior Vice President of Cloud Operations at cloud-based business phone system provider RingCentral. We discussed some of the obstacles Peterson witnesses when helping companies move to RingCentral products. Keep in mind: Some of the terminology and phrasing you'll read in this article may sound confusing, which is why companies such as RingCentral offer guided installation services to smaller organizations. If you've got networking expertise in-house, then you'll be able to manage most of these issues on your own. However, if you don't know the difference between Wi-Fi and dial-up service, well, then your vendor will work with you to get you set up pronto.
1. Determine Your Devices
Before we get into networking specifics, you'll have to determine the devices on which you'll let your employees make VoIP calls. You can purchase dedicated VoIP phones that let employees make and receive calls from their desk. You can also make VoIP calls directly from a computer without ever touching an actual phone. To piggyback off that technique, you can also make VoIP calls from smartphones. Determine which, if not all, of these endpoints you'll be using immediately. "Before the network requires more thought, determine that," advised Peterson.
2. Buy Wires
This is a no-brainer but, now that you're making the switch to VoIP, you'll need enough Ethernet cables to connect your devices to the internet. Additionally, you'll need to purchase the right Ethernet cables. Peterson recommends buying Cat 6 cables if you can afford them. These cables are rated at 250 MHz and can support 10 Gigabit Ethernet (10GbE), though only over runs of roughly 180 feet; for Gigabit Ethernet they reach the full 328 feet. You can get 1,000 feet for anywhere from $90 to $170. If you can't afford Cat 6, then Peterson recommends you use Cat 5e cables, which support up to 100 MHz of bandwidth. Peterson discourages his clients from using older Cat 3 cables, which he said present a "troubleshooting nightmare."
3. Choose a Power SupplyThe easiest way to ensure that you're getting power to your VoIP phones is by distributing Power over Ethernet (PoE) cables. PoE lets devices that aren't plugged into AC sources pull in juice from your internet. Companies use PoE for surveillance cameras, ceiling-mounted access points, and even LED lights. If your Ethernet switch doesn't allow for PoE, then you can order a PoE injector, which is an additional power source that can be used alongside non-PoE switches.
4. Manage Internet Traffic With a Dedicated VLAN
Building your network via a dedicated Virtual Local Area Network (VLAN) lets you better distribute network traffic to ensure that voice and video calls don't get dropped when someone starts downloading a large file onto their computer. If you dedicate your VLAN only to phone and video traffic, then you'll be able to isolate and manage VoIP traffic without having to worry about tertiary traffic.
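If your switches are managed programmatically, carving out a voice VLAN can take just a few lines of code. Below is a minimal sketch using the netmiko library against a Cisco IOS switch; the management IP, credentials, VLAN ID, and port range are all placeholders for illustration, not settings from the article, so adapt them to your own gear.

```python
# A minimal sketch (not vendor guidance) of pushing a dedicated voice VLAN
# to a managed switch with the netmiko library. The IP address, credentials,
# VLAN ID, and interface range below are placeholders for illustration only.
from netmiko import ConnectHandler

switch = ConnectHandler(
    device_type="cisco_ios",      # assumes a Cisco IOS switch
    host="192.0.2.10",            # placeholder management IP
    username="admin",
    password="example-password",
)

# Create VLAN 110 for voice traffic and tag it on the phone-facing ports.
config = [
    "vlan 110",
    "name VOICE",
    "interface range GigabitEthernet1/0/1 - 24",
    "switchport voice vlan 110",
]
switch.send_config_set(config)
switch.disconnect()
```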
5. Manage Wireless Traffic With Access Point Handoff
"Traditional Wi-Fi networks are usually a small managed system designed for laptops and tablets, and not for voice and video," said Peterson. Because of this discrepancy, it's important that you analyze your network to determine how many simultaneous calls your wireless connection can manage. Peterson recommends managed Wi-Fi that supports access point (AP) handoff for when one network becomes overburdened. He also suggests a system that is set for smaller packet sizes, as well as an on-premises or cloud-based controller that can manually control access points when necessary.
6. Test Your Firewalls
Peterson suggests taking a vendor's maximum published throughput with a grain of salt. "This is not enough of a benchmark for how much media you can drive through a firewall," he explained. If you don't have someone in your organization who can help you determine the difference between media and data traffic, then contact a professional. Peterson recommends using software-defined firewalls, which are designed to filter media traffic and packets rather than just data traffic.
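One rough way to see how a firewall copes with media traffic is to blast it with a stream of small, voice-sized packets and watch the packet rate, since media streams stress a firewall differently than bulk data does. The sketch below is a homegrown probe, not a substitute for a proper benchmark tool; the target address, port, and 172-byte payload (roughly the size of a G.711 RTP packet) are illustrative assumptions.

```python
# A rough sketch for probing how a path through a firewall handles a stream
# of small, voice-sized UDP packets. Real benchmarks should use dedicated
# tools; the target host, port, and payload size here are assumptions.
import socket
import time

TARGET = ("192.0.2.50", 5004)   # placeholder host behind the firewall
PAYLOAD = b"\x00" * 172          # approximate G.711 RTP packet size
DURATION = 5                     # seconds to transmit

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 0
start = time.time()
while time.time() - start < DURATION:
    sock.sendto(PAYLOAD, TARGET)
    sent += 1

elapsed = time.time() - start
print(f"sent {sent} packets ({sent / elapsed:,.0f} pps, "
      f"{sent * len(PAYLOAD) * 8 / elapsed / 1e6:.1f} Mbps)")
```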
7. Double-Check Your Router
Determine if your router has Packets Per Second (PPS) capability. This functionality provides traffic shaping and policing, which lets you prioritize voice and video data on your network. "What we look for is basically assuming one out of every five people will be on a 1-megabit-per-second [Mbps] voice call, and one out of every 7 will be on a [video] conference at 100 megabits per second," he said. Add up the bandwidth of the voice and video calls your users will be on at any given moment, and then multiply that number by a minimum of five. That's how many Mbps of traffic your router should be able to manage without any issue; a worked example follows below.
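To make the rule of thumb concrete, here is a short worked example in Python. The 70-person headcount is invented for illustration; the per-call rates (1 Mbps voice, 100 Mbps video conference) are taken from the quote above exactly as stated.

```python
# Worked example of the capacity rule of thumb quoted above. The headcount
# is made up; the per-call rates come from the article (1 Mbps voice,
# 100 Mbps video conference, taken as the article states them).
employees = 70

voice_users = employees / 5          # 1 in 5 on a 1 Mbps voice call
video_users = employees / 7          # 1 in 7 on a 100 Mbps conference

peak_mbps = voice_users * 1 + video_users * 100
headroom_mbps = peak_mbps * 5        # the recommended 5x safety margin

print(f"peak concurrent load: {peak_mbps:.0f} Mbps")
print(f"router should handle: {headroom_mbps:.0f} Mbps")
```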
By Juan Martinez
For more information on how to set up a VoIP system and for voice service quotes, contact us.
VoIP (or Voice over IP) is a service that allows you to make phone calls and get in touch with one another over the internet, without the hassle that can be involved in using a traditional telephone line. Instead of traveling over traditional telephony networks, your voice is sent across the internet to the other user, giving you a much more convenient calling experience while also delivering great savings on your communication.
How do I pick one?
The problem is that there are so many VoIP services available on the market, and a whole host of providers trying to peddle deals, all claiming that theirs is the best. It can be really difficult and confusing trying to wade through all of the sales talk and pick one that is going to suit your needs. That is where review sites come in: they offer an in-depth lowdown on everything from pricing, usability, quality, support, and features to, of course, both the pros and the cons. Doing your homework on these sites before making a purchase is really important and should be a vital step in the purchasing process.
Business or personal?
Although VoIP can be useful for personal use, it really shines when it is incorporated into a business and its communication needs. It can be cheaper, as calls travel over your regular internet connection and so won't rack up extra charges when calling internationally. It is a great asset for companies that want to stay interconnected even though their offices may be spread across the globe, as there are no issues with connecting calls internationally. A manager in the New York office can talk to a manager in the Shanghai office, and it won't cost any more than sending an email. That interconnectivity is great when taking into account any future business needs. When travelling, communicating with the home base isn't a headache: it can be as easy as opening up the computer and calling in, instead of trying to find and finance a mobile phone and calling plan while abroad. This is extremely useful when setting up a new office or business, or just staying in touch to keep updated.
Any business that wants updates as quickly as possible benefits from VoIP services, and you should consider making the switch now. You don't even need to use a computer: you can keep things traditional by using a VoIP-enabled telephone that is connected to the internet. It's great for keeping in touch with clients, as you can call normal phones even when using a broadband phone network. Overall, VoIP systems give a business a greater range of flexibility and allow it to work in a way that gets the best out of its employees. Being able to communicate effectively gives that extra edge when getting work done.
For information about dozens of VoIP providers, contact us. Information is free!
We live in a choice-driven society powered by technology. Whether it’s cutting the cable cord and moving to à la carte television or ditching the concept of ownership and opting to take part in the sharing economy, people want to pick and choose what works best for their individual needs instead of relying on a single provider or product.
In the case of cloud computing, a plethora of options is available to IT network teams for how they use infrastructure in private clouds, public clouds, and on-premises data centers, and for how they connect the enterprise applications running in these environments.
With cloud adoption no longer an “if” but a “how,” companies are focusing on creating the right mix of public and private clouds to maximize efficiency and minimize spending. According to RightScale's State of the Cloud Survey, 85% of enterprises in 2017 are now creating cloud strategies that include multiple clouds, of which 58% are planning to implement hybrid cloud environments.
While many benefits exist to utilizing both private and public clouds, there are some challenges to the hybrid cloud environment that need to be addressed to ensure security and efficiency.
Handling Encrypted Traffic
Understanding the types of security threats (many of which revolve around access) helps address them properly. The rise in hybrid clouds has also resulted in an increase in the amount of encrypted traffic that application networking solutions need to handle. Load balancers, which are often tasked with decrypting incoming traffic, need to support the latest encryption protocols and ciphers, such as elliptic curve cryptography (ECC). They also need to scale horizontally and support granular per-app services to address the unique needs of each application.
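As a concrete illustration, the snippet below uses Python's standard ssl module to express the kind of TLS policy a decrypting load balancer should enforce: modern protocol versions only, with elliptic-curve key exchange preferred. The certificate and key file names are placeholders, and an ECDSA (elliptic-curve) certificate is assumed to have been issued separately; this is a sketch of the policy, not any particular load balancer's configuration.

```python
# A minimal sketch of the TLS policy a decrypting load balancer needs to
# support, using Python's standard ssl module. The certificate and key
# paths are placeholders; an ECDSA (elliptic-curve) certificate is assumed.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2          # refuse legacy protocols
ctx.set_ciphers("ECDHE+AESGCM")                       # prefer EC key exchange
ctx.load_cert_chain("example-ecdsa-cert.pem",         # placeholder ECC cert
                    "example-ecdsa-key.pem")

print(ssl.OPENSSL_VERSION)
```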
The Risk Of Retrofitting Network Design
Building a hybrid cloud network requires a strategic plan to ensure successful integration between public and private cloud services, as well as any on-premises applications and data. In today's reality, many companies find themselves creating that plan after they've already started using different providers and clouds. Though that's not unusual, there are efficiency factors and constraints to consider; for example, it's important to know which applications can be successfully migrated to and run in which environments, and where a particular set of data is allowed to reside.
Capacity Planning In The Cloud Era
According to the State of the Cloud Survey, the top initiative in 2017 for the majority of all cloud users is optimizing costs. While public clouds can appear to be a low-cost option, the price can jump quickly with heavy usage, especially when utilizing a dynamic cloud that continues to scale as needed. On the flip side, setting up a private cloud is not an insignificant endeavor, given the physical hardware environments needed. One method to curtail costs, which comes up in my conversations with Avi Networks customers, is to look at ways to minimize investment in hardware load balancers where software alternatives are available.
Over the past few years, I have had conversations with network administrators and architects at several large enterprises and found that hardware load balancers are significantly over-provisioned, to the tune of about 80%. One hapless network administrator at a large online retailer confided in me by saying, “I would rather run at 20% capacity for the better part of the year than be caught without being able to handle sudden traffic bursts. The potential loss of business and lack of peace of mind is just not worth it.” Software load balancers architected for cloud-native applications are finally making it possible to expand or shrink load-balancing capacity dynamically in response to real-time traffic needs.
To make the shift to a hybrid cloud computing environment, a few key best practices will help ensure success:
1. Automate Services
Hybrid clouds are designed to thrive on automation. For example, using a next-generation load-balancing solution allows predictive app auto-scaling. Such systems are analytics-driven and can automatically recognize changing traffic patterns in real time and spin up additional instances without human intervention. This end-to-end automation across the environment is made possible when a hybrid cloud traffic management system is in place.
IT services teams can build this self-service infrastructure not only to optimize computing resources and provision them on the fly but also to shift workloads as needed. These types of capabilities provide the agility that hybrid clouds promise, with built-in elasticity, responsiveness and efficiency. A simplified sketch of the scaling logic follows.
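The sketch below reduces the idea to its simplest form: a reactive loop that sizes load-balancer capacity from an observed request rate. Real products are predictive and analytics-driven rather than threshold-based; the metric source, the orchestration call, and the 500-requests-per-second capacity figure are all stand-in assumptions.

```python
# A deliberately simplified sketch of auto-scaling logic. Real products use
# predictive analytics; this reactive loop only illustrates the idea. The
# metric source, orchestration call, and capacity figure are assumptions.
import math
import random
import time

CAPACITY_PER_INSTANCE = 500   # assumed requests/sec one instance can serve
MIN_INSTANCES = 2

def get_requests_per_second() -> float:
    """Placeholder metric source; replace with a real monitoring query."""
    return random.uniform(200, 4000)

def set_instance_count(n: int) -> None:
    """Placeholder orchestration call; replace with your platform's API."""
    print(f"scaling to {n} load-balancer instance(s)")

for _ in range(5):                # a few demo iterations
    rps = get_requests_per_second()
    needed = max(MIN_INSTANCES, math.ceil(rps / CAPACITY_PER_INSTANCE))
    set_instance_count(needed)
    time.sleep(1)                 # a real loop might re-evaluate each minute
```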
2. Centralize Management
Managing cloud services with multiple providers or across environments doesn't need to be challenging; network teams simply need a single, central point of management across all environments, no matter where applications are running. Because public and private cloud infrastructures operate independently, it's critical to use technology that provides portability of data and applications between clouds.
For example, when it comes to application networking services, software load balancers that combine central management together with per-app delivery services enable a high degree of customization and flexibility. The alternative of deploying an expensive, monolithic hardware load balancer in front of multiple applications creates problems when each application needs to be maintained or updated, causing downtime for others.
3. Use Vendor-Agnostic Services
Now is the time to take advantage of the healthy competition that is brewing between cloud providers to avoid getting locked into a single cloud provider. Because not all cloud providers deliver consistent services, it behooves companies to remain nimble and test out different services to find the ones that work best.
By keeping the marketplace open and utilizing different providers, companies can take advantage of the myriad options to lower costs and increase performance, especially when they build a hybrid cloud that utilizes the best-of-breed from private and public clouds.
Hybrid cloud computing is growing in popularity because it offers companies flexibility, scalability and agility. To capitalize on this environment, IT teams must spend time creating a strategy that matches their organizational requirements. Putting private and public clouds together requires automation tools and management capabilities to make a system efficient and cost-effective over the long haul.
By Ranga Rajagopalan
For more information and quotes, contact us.
The cloud is increasingly a part of business, and any failure in distributed infrastructures could result in a potentially costly downtime.
Cloud computing is a reality that most businesses today are facing. While there are still holdouts, especially businesses with security and data sovereignty concerns, the cloud will be prevalent across practically all businesses in the medium term. In fact, if the early nineties and aughts were all about having an online presence as the minimum requirement for brands, then the next five years are all about businesses completing their cloud migration.
Gartner estimates that by 2022, businesses will have shunned their corporate "no cloud" policies and embraced the benefits of cloud platforms, despite some potential risks.
Of course, the benefits outweigh the potential risks: shorter time-to-market, lower infrastructure and storage costs, greater agility in using IT resources, and the ability to optimize the use of infrastructure.
However, there is also a potential downside. Given that your business does not have 100 percent control over the infrastructure when you deploy apps and services through a cloud provider, you might be worried about leaving your business assets and reliability in the hands of a third party.
Significant infrastructure downtime is among a business’ worst nightmares, as it can mean losses in terms of sales, productivity, and customer trust. Other concerns include security breaches, software issues, or even human errors — all of these can lead to tangible costs with monetary value.
What’s important is for a business to ensure it has adequate redundancies and safeguards in place, which can help mitigate the potentially damaging effects of such risks and threats.
In this article, we will discuss the best practices that can help ensure the reliability of your cloud-based systems and the integrity of your service in the event of a downtime. These practices center on Disaster Recovery (DR) and Business Continuity (BC) solutions. Together, BCDR means your system can bounce back from any eventuality, including downtime, data loss, data breaches, and similar cloud catastrophes.
Disaster Recovery as a Service
With the emergence of the cloud as the preferred infrastructure for businesses, the need for services that assure data integrity has also risen. This has brought Disaster Recovery as a Service (DRaaS) to light, and providers of all sizes are now offering their own DRaaS solutions.
Both AWS and Azure, for example, offer DRaaS on their respective cloud infrastructures, ensuring that businesses running their systems in the cloud can have faster disaster recovery capabilities without the expense of deploying systems at second, third, or additional sites.
Independent providers such as IBM, Idealstor, and nScape offer similar services. Some of these solutions specifically target cloud users, although they can also provide an added layer of assurance for businesses that run their systems on-premises.
Not all DRaaS options are equal, however. As a business, you will need to take the following into consideration to ensure your DR capabilities are on par with today's standards.
Automated backups and redundancy
One disadvantage of the legacy approach to disaster management is that it is mostly manual. Think of the tape backups of old, or even regular manual off-site backups: these are labor-intensive and require some lead time before business continuity systems kick in.
The advantage of modern BCDR solutions is that they make regular backups and maintain redundancies of your system without added human intervention. And when a disaster or downtime strikes, the redundancies in place will automatically bring the system back up to speed, likewise without human intervention.
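The heart of that automation is simply a job that runs on a schedule and never waits for an operator. The sketch below shows the idea in miniature with a naive local snapshot; the directory names and six-hour interval are invented for illustration, and a real BCDR product would replicate continuously, off-site, and with deltas rather than full copies.

```python
# A minimal sketch of the "no human intervention" idea: a scheduled job that
# snapshots data without an operator. The paths and interval are
# placeholders; a real BCDR product replicates continuously and off-site.
import shutil
import time
from datetime import datetime
from pathlib import Path

SOURCE = Path("data")                 # placeholder directory to protect
BACKUP_ROOT = Path("backups")         # placeholder backup destination

def run_backup() -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = BACKUP_ROOT / f"snapshot-{stamp}"
    shutil.copytree(SOURCE, dest)     # naive full copy; real tools do deltas
    return dest

while True:                           # runs forever, like a daemon
    snapshot = run_backup()
    print(f"backup written to {snapshot}")
    time.sleep(6 * 60 * 60)           # every six hours, with no operator
```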
Unified management
One area where many IT managers have concerns is the ease with which they can manage their BCDR deployments. While this can be done fairly easily in pure-play cloud settings, it can be a different matter altogether when it comes to hybrid cloud deployments, or even on-premises deployments that utilize cloud-based DRaaS.
For this purpose, a good solution will involve unified management across both cloud and on-prem deployments, so that IT management has better visibility over the backups, redundancies, and protocols in place. Solutions like Azure DRaaS promise just this kind of efficiency, given the platform's legacy capabilities with Windows servers and with virtualization in hybrid cloud environments.
Regular failover testing
Another thing IT managers should watch out for is whether their DRaaS provider offers the ability to test the system on a regular basis. This means having the ability to simulate failures in a controlled environment, so that you know how well you can bounce back, how short the time-to-recovery is, and whether any manual intervention is required when such an eventuality arises.
You can expect legacy solutions to require some manpower for such tests, but a modern DRaaS solution should provide some level of automation, so that you can keep poking and prodding your system for weak points. A simple drill might look like the sketch below.
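This sketch times a controlled failover from the outside: trigger a test failure, then poll a health endpoint until the service answers again. The URL is a placeholder, and simulate_primary_failure() merely stands in for whatever test hook your DRaaS provider exposes; treat the result as a rough measured recovery time, not a formal RTO figure.

```python
# A sketch of an automated failover drill: take the primary down (in a
# controlled test environment), then time how long the service takes to
# answer again. The URL and the failure trigger are placeholders.
import time
import urllib.request

SERVICE_URL = "https://example.com/health"   # placeholder health endpoint

def simulate_primary_failure() -> None:
    """Placeholder: trigger a test failover via your provider's tooling."""

def service_is_up() -> bool:
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

simulate_primary_failure()
start = time.time()
while not service_is_up():
    time.sleep(1)
print(f"recovered in {time.time() - start:.0f} seconds (measured recovery time)")
```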
Actual post-failure capabilities
Now, this is the biggest test of your DRaaS deployment. Understandably, no business wants any infrastructure failure, but in the event that a disaster hits, it pays to be protected, or at least capable of bouncing back. When such a disaster occurs, you will need to evaluate your BCDR provider: whether they deliver as promised, whether your system can run fully on backups, and how quick the actual time-to-recovery is. Your BCDR provider should have the agility and flexibility to address any extended downtime and ensure the fastest possible recovery times.
A final word
Businesses should not live in constant fear of system failures, but failure is a reality that IT managers should be aware of. What's important is that you not sit around wondering when an outage will occur. Instead, with BCDR solutions in place, you can anticipate potential system issues, which lets you shift your time and resources to core business activities.
By Daan Pepijn
For more information and quotes, contact us.
When you shop for a new computer or laptop, one important feature to look for is the type and number of ports. Ports are docking points that connect external devices, wired connections and more. With the many types of ports and versions of each port available, it can be hard to know what to look for.
To help you find the right computer for your business, here are some of the most common types of ports and what they do.
USB Type A
This is the most common USB connector found on computers. Often simply referred to as a USB port, it is a universal port that can connect everything from external drives to peripherals.
USB Type B
A less common type of USB, a USB Type B port connects docking stations and printers.
USB Type C
The newest type of USB port, the USB Type C is predicted to replace other types of USBs. It is the slimmest version – thus fitting in slimmer laptops and smaller computers – and is reversible, so the connector fits both ways. The USB Type C supports different types of connections, including displays and chargers.
USB 3
A USB 3 port offers high-speed transfers, such as between external drives and computers. It has a maximum transfer rate of 5 Gbps, making it a decent option for transferring files over a wired connection.
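For a sense of what 5 Gbps means in practice, here is a quick back-of-the-envelope calculation; the 50 GB file size is an arbitrary example, and real transfers run slower because of protocol overhead and drive speed.

```python
# Back-of-the-envelope transfer time at USB 3's 5 Gbps ceiling. Real-world
# throughput is lower due to protocol overhead and drive speed, so treat
# this as an upper bound.
file_gb = 50                        # example file size in gigabytes
usb3_gbps = 5                       # USB 3 maximum signaling rate

seconds = file_gb * 8 / usb3_gbps   # gigabytes -> gigabits, then divide
print(f"{file_gb} GB in no less than {seconds:.0f} seconds "
      f"({seconds / 60:.1f} minutes)")
```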
MicroSD slot
A microSD slot reads microSD memory cards that contain external storage.
SD card reader
This slot reads SD cards from digital cameras. It is also known as a 3-in-1, 4-in-1 and 5-in-1 card reader.
Audio jack
Also known as a headphone jack, the audio jack connects your headphones and microphones to your computer. The 3.5 mm audio jack is the most common type of audio jack found in computers.
Ethernet
The Ethernet port connects your computer directly to local networks and the internet using a wired connection. Ethernet is the alternative when Wi-Fi is not available or when the Wi-Fi signal is poor.
HDMI
The HDMI port connects your computer to TVs, projectors and other external monitors. The output resolution depends on your computer's graphics card. It also includes audio with video, so you don't need a separate audio connection.
DisplayPort
Similar to HDMI, a DisplayPort connects your computer to an external monitor. It is the most advanced type of display connection, able to output video at resolutions up to 4K and to drive multiple monitors in HD. The DisplayPort appears as its own connector or through a USB Type C port.
DVI
DVI is a more budget-friendly alternative to DisplayPort. Also known as Dual Link, DVI is limited to an output resolution of 1920 x 1200 and needs a second connection to support a 4K monitor. DVI only appears on desktop computers, not laptops.
Thunderbolt 3
Thunderbolt 3 is the fastest connection for data transfers, at up to 40 Gbps. It can also connect multiple external monitors at 4K resolutions.
VGA
The VGA port connects your monitor to a computer's video card. It is one of the oldest and least powerful display ports, so you won't find it on many current computers.
By Sara Angeles
For more technical questions and quotes, contact us.