3/15/07 01:05-02:00 Router problem.
2/21/07 00:15-04:15 An upstream fiber backbone failure. The upstream provider
says a main backbone router died and had to be replaced.
2/1/07 24:30-05:00 Backbone upgrade to alleviate peak dsl slowness.
Upstream provider had a routing issue they couldn't figure out, so
many users were affected.
1/23/07 23:00-01:30 Mail proxy server hard drive failure.
1/2/07 16:29-17:20 An upstream fiber backbone failure. Problem identified as cards at XO and they believe it is fully resolved.
1/2/07 04:00-04:50 A fiber backbone failed within the nighttime maintenance window. Perhaps upstream maintenance.
10/27/06 08:00+ Due to a massive recent increase in spam, some email
is being delivered late, including some duplicates and general slowness
sending email. We are adding additional servers to resolve the issue.
09/12/06 21:55-22:35 Router problem affected dsl-7/8/9/10 accounts
06/22/06 10:00-10:30,
11:30-12:00 dsl-13,dsl-14,dsl-15,dsl-16 accounts. Bad ATM card replaced.
13:40-15:00 dsl-13,dsl-14,dsl-15,dsl-16 configuration issues.
05/28/06 20:30-21:00 Heat-related power outage in the area. While our generator
was running fine, a fiber provider's backup power died, causing an outage.
05/12/06 21:00-22:10 Qwest had an issue with dsl-7.
05/12/06 11:00-11:20 dsl-13,dsl-14,dsl-15 router problem.
02/23/06 17:30-18:15 Fiber outage. Affected most service.
02/23/06 20:30-22:45 Switch died. Affected most service.
02/24/06 02:30-09:30 Slowness. Affected about half of DSL and dialup. Dropped packets on the new switch due to negotiation settings.
01/17/06 19:45-20:30 dsl-7 and dsl-8 accounts affected by an ATM circuit outage.
01/15/06 13:00-14:50 backbone outage in Duluth, MN. Affected Duluth dialup.
12/29/05 00:00-04:25 fiber outage affecting about half of all circuits and dialup.
12/24/05 07:00-11:00 ATM outage affecting all dsl-7 and dsl-8 accounts.
12/24/05 10:30-11:00 dsl-9 and dsl-10 account outage while fixing the dsl-7/8 ATM.
09/21/05 19:50-22:45 Storm power outage in Hudson, WI.
09/04/05 07:30-10:30 dsl-5/dsl-6 router died. Replaced processor and memory. Affected only dsl-5 and dsl-6 accounts.
07/24/05 02:00-10:00 Hudson backbone issue.
07/23/05 12:00-16:00 Hudson, WI power loss (storm).
07/23/05 14:00-18:00 Duluth backbone issues.
07/22/05 14:00-17:00 Minneapolis power outage (road construction). Some problems with UPS working with backup power generator.
01/27/05 5:00-08:20 server crash. Affected some dialup
and our website.
10/30/04 4:00-11:00 server crash. Affected some dialup
and our website.
9/5/04 13:30-14:30 RADIUS server problem; affected most dialup users attempting to connect.
7/26/04 13:00-13:30 fiber outage on 651-393-2600 and some outstate numbers.
5/20/04 19:00-21:00 Denial-of-service attack causing timeouts, etc.
Took a while to track down and get it blocked upstream.
3/7/04 23:55-03:00 dsl and 612-236-1101 pool down due to hardware switch failure - replaced.
1/28/04 00:30-02:30 dsl down due to hardware switch failure - replaced.
1/28/04 11:15-12:30 Qwest backbone problem. Little effect on dialup or dsl.
1/26/04 15:00-16:30 Backbone down on 612-435-0100 due to a problem with
an upstream upgrade.
12/26/03 14:09-15:09 Hudson, WI backbone problem.
12/18/03 13:30 One of the St. Cloud numbers out of service for 24 hours. The other St. Cloud number was still fine. Phone company problem. Same number down again 12/20-12/22 over the weekend. Phone company again; they say they won't do it again.
11/28/03 00:00-02:30 backbone problem.
11/18/03 08:30-09:15 Fiber problem.
10/08/03 01:00-02:30am Upstream backbone problem on 612-435-0100 pool only.
9/16/03 15:30-16:00 RADIUS server problem.
9/11/03 10:30 Random email server connection failures for several days
on one modem pool.
The flaky component was finally located and replaced.
8/20/03 ?-9:30 Server problem. Users could log in, but our website
didn't pop up.
8/19/03-8/23/03 Major plague of the Blaster and Welchia viruses, among others.
These are denial-of-service viruses that attempt to slow and block
all Internet services.
Developed and implemented specific new defenses and notification systems.
4/5/03 05:30-16:00 Billing/support system down. Caused the initial login
screen not to work for dial-in users and trouble getting help desk info.
Some Linux web server repair ongoing.
2/28/03 08:40-09:30 XO backbone down again.
2/27/03 23:20-03:30 DS3 Internet backbone through XO down. 612-435-0100 pool
down.
651-393-2600 and dsl ok after some minor routing changes.
2/3/03 05:00-07:00 Power out in Hudson Wi.
1/18/03 17:00-18:30 Weird routing issue on the 208.50.45 network. Affected only dsl users with 208.50.45 external IPs.
1/8/03 18:30-22:00 Routing resolution wasn't quite right. Erroneous routes
that were not fully deleted caused packet timeouts/loss.
1/8/03 21:00-21:45 Unrelated massive denial-of-service attack from outside.
Essentially stopped traffic on dsl and the 393-2600 pool for 45 minutes.
1/8/03 12:00-17:00 Another denial-of-service virus/hackers on the same T1 - mostly
unnoticed by our users.
1/7/03 9:00-17:00 Denial-of-service virus/hackers on one T1 - mostly
unnoticed by our users.
12/18/02 15:00-17:00 Technical problems on one DS3. Affected DSL and
the 651-393-2600 pool with several 5-10 minute outages.
12/2/02 13:30-13:59 Technical problem; had to reboot the mail servers.
9/16/02 An upstream DS3 router in Chicago went down, causing a web outage on 612-435-0100. Back up again about 8:20am. 651-393-2600 was unaffected.
9/04/02 mail server issue. Switched to backup mail servers. A few users may see some duplicate old emails if they were not deleted off our servers properly by
their email clients or if they have the leave-on-server setting selected.
9/03/02 09:00-10:30 mail server issue.
9/03/02 13:00-16:30 another related mail server issue. Proxy server replaced.
9/02/02 00:00-12:00 news.usfamily.net news server taken down to debug a network issue.
7/19/02 mail.usfamily.net email server problem 6-9am
7/10 - 7/15/02 10-minute frame relay drops once or twice a day.
Turned out to be a Linux denial-of-service virus.
7/4/02 10-minute backbone outages at 7:45, 8:45, 9:15, 11:00, and 13:45;
finally found and replaced a flaky router at 15:20.
6/25/02 14:30-16:30 Power supply fried on the 612-435-0100 pool.
6/20/02 15:01-15:40 The mail servers were moved to our new Golden Valley location.
Unfortunately it didn't go exactly as planned; lost DNS for 30 minutes.
5/2/02 14:01-19:42 The RADIUS servers for the national pool failed.
These are supported by the national pool subcontractor.
Did not affect any local users.
5/3/02 08:00-? Similar national pool issues again.
4/11/02 03:00-18:00 A mail server hard disk failure. Some timeouts as
mail was redirected to backup servers, and some ongoing slowdowns or delays
until the new disk fully reloads.
2/9/02 17:30-18:00 Planned maintenance on the mail server responding to usfamily.net.
2/9/02 20:00-21:00 Unplanned maintenance on the mail server responding to mail.usfamily.net.
2/10/02 20:00-21:30 Up and down - more unplanned maintenance on both mail servers responding to mail.usfamily.net and usfamily.net. The problem turned out to be a Unix file system issue.
1/29/02 08:00-13:00 DSL down. Our main DSL router died and had to be replaced.
Not a walk in the park.
1/19/02 20:00-21:40 DSL router crashed and had to be reloaded.
12/17-18/01 We have noticed some significant email delays and are working to
alleviate them. Since 9/11, and with the Christmas season, spam has increased
tenfold and email has increased fourfold. Where we would expect about a million
emails a month, we are now seeing a million emails every 5 days. Some of the
spam levels are becoming effectively denial-of-service attacks, bringing normal
email to a halt. Our spam filter is preventing the issue from reaching our
users, and we expect things will settle down after 12/24, but we are working to
increase our processing speed to improve the timeliness of normal email. We have
already improved the situation significantly but will continue to monitor
and enhance.
12/10/01 09:45-16:45 A major Twin Cities Internet backbone outage occurred today
and affected us along with much of the Twin Cities. One engineer said
a large switch was replaced, literally burnt up, and had to be replaced a second time; a DSX panel was also replaced. It is not believed that there has been a backbone outage this long in the last 5 years.
11/28/01 21:45-23:50 Internet backbone on the 612-435-0100 pool is down upstream in Chicago; engineers are on it. All they say at the moment is that it is an OC12.
DSL users and the 651-393-2600 pool are unaffected.
Fixed at 23:50. It was a power outage on an OC12 from Santa Clara, CA to Chicago,
at the CA end. It affected a significant part of the nation but only one of our pools.
11/22/01 7:00-10:30 Internet outage upstream from us on one of the backbones.
Affected most DSL users and the 393-2600 pool. Affected many Internet providers in this part of the country.
11/8/01 8:30-9:30 One mail server crashed, so we moved traffic to another server.
Then we were hit with a denial-of-service attack by a spammer sending about 500,000 emails. These were all blocked by our spam filter, and none of this caused any real issues, but it slowed things down for quite a while.
10/22/01 12:00-13:00 ATM router crash - DSL users down.
10/18-20/01 Some mail server delivery delays.
Problem server replaced.
10/2/01 20:00 - 22:30 Qwest Market Street switch failure.
(That is their main switch for the entire Twin Cities.)
612-435-0100 down about 95%. Other lines didn't seem to be noticeably affected.
7/13/01 17:00 - 23:00 USFamily.Net website down for hard disk swap out. Didn't affect customers.
6/15/01-6/30/01 Recurring DSL-4 ATM line issues. Only affected DSL-4 users.
5/29/01 21:11 - 24:00 A telephone T1 flaked out in the main modem pool. Took a while to find which one and get it offline. Caused lots of disconnections, fast busy signals, etc. Qwest line repair was dispatched. Line repaired and back in service at 08:00 on 5/30.
5/7/01 9:00 - 11:00 Upstream Internet outages.
4/29/01 12:00 - 18:00 Due to technical difficulties we were unable to receive and take any support calls on Saturday.
4/26/01 18:00 - 20:00 Qwest was repairing a line and their test equipment kept locking up the entire modem pool. We finally got them to disable that T1 before continuing that repair. Symptoms were no answer or a recording.
1/31/01 19:00 - 21:32 About a third of all our lines went down in the phone company's switch, causing lots of interruptions and busy signals. Fixed at 9:32pm.
1/24/01 21:27 Had to reboot some switches - busy 3 minutes.
12/21/00 15:00-16:30 Qwest repairing lines and replacing faulty equipment
in our building caused some various and confounding disruptions.
12/7/00 05:00 Email delays have been resolved.
12/6/00 An upstream bottleneck related to DNS is slowing down email delivery.
Our upstream provider is working on identifying the problem, and we are attempting some workarounds to improve the situation. This appears to have started 12/5/00. While no email is lost, it is quite delayed in some instances.
11/9/00 Qwest repaired some mouse-damaged cables in this area. Caused a little
instability for a few ISDN users. Most users were unaffected.
11/7/00 2:30pm-8:00pm A major backbone outage upstream resulted in 50% packet loss, causing painfully slow Internet access and timeouts until it was repaired.
11/7/00 8:00am-11:00am An ATM line failure created a DSL outage for some DSL users. Both issues were within Qwest.
10/24/00 8:00pm-11:30am A major frame relay outage at Qwest resulted in 70% packet loss, causing painfully slow Internet access and timeouts until it was repaired.
9/18/00 4:00pm-6:10pm Been a long time since we had a problem, but today a power line broke off the pole out on the street. (Watch the 10pm news.) Pretty spectacular. Had to shut everything down safely for a while.
5/04/00 9:10pm-9:45pm Some line problems resulting in a few busy signals. Fixed at 9:45.
4/26/00 ?-10:00am Metro-wide phone line problems causing frequent disconnections.
4/11/00 12:00-9:00pm USWest had an outage in the northern suburbs to 393-2600 due
to programming of their new fiber between COs. Other pools were accessible.
Apparently a programmer forgot about 10-digit phone numbers.
3/31/00 7:30-8:30 More OS errors; swapped to the backup system again.
The reason for the recent failures was finally isolated and resolved.
3/29/00 7:30-9:00 OS errors; swapped to the backup system.
3/10/00 1:00-9:00 After another hard disk crashed we replaced the entire server.
It takes a long time to load all that data.
3/9/00 8:00-18:00 Hard disk crashed forcing a complete reload. Some users may see a few duplicate email messages from the reload. We soon will have new "parallel server technology" that we hope will make this particular disaster pretty transparent to our users.
3/2/00 19:00-21:00 Some lines on 393-2600 were answering but not connecting.
3/1/00 11:00-13:30 Mail server disk errors.
2/20/00 20:00-20:30 Server crashed.
2/11/00 06:15-7:30 upgrading mail servers.
1/29/00 23:30-24:00 Our upstream backbone provider had an Internet outage of some kind - everything got very slooow.
1/26/00 20:30 rebooted a hung switch on 651-393-2600 pool.
1/8/00 DSL NAT problems 1pm-11:30pm. Some DSL users had access or severe speed issues.
12/27/99 Server down 8:30-9pm.
10/26/99 A presumed power glitch at USWest brought down some telephone lines
from 7:30pm until 10pm, resulting in more interruptions than typical.
9/27/99 DSL failure for many DHCP DSL users from 7pm until resolved
at 10am on 9/28. USWest set some VCIs in the ATM to loop back all data,
resulting in the service outage.
8/30/99-9/1/99 USWest line trouble causing connection failures
on the 651-628-8803 pool. USWest ultimately found a DS3 that was down somewhere, reducing bandwidth to the CO and resulting in an "all circuits are busy" recording.
3/16/99 from 2pm on: USWest line trouble causing slow speeds and difficulty
connecting on 628-8803/628-8607. Resolved by USWest overnight.
2/22/99 3pm UPS Power failure. Resolved within 30 minutes.
2/1/99 The USWest "Ring no answer" problem appears to be resolved.
1/31/99 The USWest "Ring no answer" problems have abated. We are not sure if it is 100% resolved, but we are hopeful and continuing to monitor at all times.
There were no problems on 1/30.
1/27/99-1/29/99 "Ring no answer" problems on 651-628-8803. USWest has identified
that they have a severe overload condition at the Market CO. This is
resulting in ringing without answer during the peak hours of 8-11pm, even while
the USFamily.Net modem pool remains over one-third available.
There are some increased interruptions associated with this problem.
USWest is attempting
to find both long- and short-term solutions, and we are looking at adding
another modem pool in a different part of the Twin Cities sooner than
planned.
Thanks for your understanding and patience; we have USWest working on
this 24 hours per day.
Jim
10/2/98 10:30-11:00 a.m. USWest
"accidentally disabled" our main access number. Thanks to Pat Prat at USWest, it was
restored within 15 minutes of being reported. Users were still able to log on to the
other modem pool.
7/4/98 00:00 - 8am Some users unable to
connect due to a Unix server problem. (Murphy's Law of Holiday Service was in effect.)
This problem was actually our fault - sorry.