
List:       qubes-users
Subject:    Re: [qubes-users] Random sudden reboots
From:       cooloutac <raahelps () gmail ! com>
Date:       2017-11-28 13:15:40
Message-ID: d202d9ad-fbfd-4bcc-bcc2-b3faa6bb0a2e () googlegroups ! com


On Monday, November 27, 2017 at 2:09:12 PM UTC-5, David Hobach wrote:
> On 11/27/2017 07:57 PM, David Hobach wrote:
> > On 11/27/2017 07:47 AM, Wael Nasreddine wrote:
> > > I'm running 4.0-RC2 on Asrock Z170 pro4/i7-6700k and I got two hard 
> > > reboots in the last few hours, often around the time I start a VM. I 
> > > do not see anything in the log.
> > > 
> > > P.S: I've been running Citrix XenServer for two years on this machine 
> > > with no issues.
> > > 
> > > Nov 26 22:36:38 dom0 sudo[911]: pam_unix(sudo:session): session closed 
> > > for user root
> > > Nov 26 22:36:38 dom0 audit[911]: USER_END pid=911 uid=0 auid=1000 
> > > ses=2 msg='op=PAM:session_close 
> > > grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_
> > > Nov 26 22:36:38 dom0 audit[911]: CRED_DISP pid=911 uid=0 auid=1000 
> > > ses=2 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" 
> > > exe="/usr/bin/sudo" hostname=? addr=?
> > > Nov 26 22:36:38 dom0 kernel: audit: type=1106 
> > > audit(1511764598.490:447): pid=911 uid=0 auid=1000 ses=2 
> > > msg='op=PAM:session_close grantors=pam_keyinit,pam_limits,pam_keyi
> > > Nov 26 22:36:38 dom0 kernel: audit: type=1104 
> > > audit(1511764598.490:448): pid=911 uid=0 auid=1000 ses=2 
> > > msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" exe="/us
> > > Nov 26 22:36:38 dom0 qmemman.daemon.algo[2304]: 
> > > balance_when_enough_memory(xen_free_memory=97794733, 
> > > total_mem_pref=5101760998.4, total_available_memory=60546306417.6)
> > > Nov 26 22:36:38 dom0 qmemman.daemon.algo[2304]: 
> > > left_memory=24233992215 acceptors_count=1
> > > -- Reboot --
> > > Nov 26 22:38:00 dom0 systemd-journald[240]: Runtime journal 
> > > (/run/log/journal/) is 8.0M, max 196.7M, 188.7M free.
> > > Nov 26 22:38:00 dom0 kernel: Linux version 
> > > 4.9.56-21.pvops.qubes.x86_64 (user@build-fedora4) (gcc version 6.4.1 
> > > 20170727 (Red Hat 6.4.1-1) (GCC) ) #1 SMP Wed Oct 18 00:2
> > > Nov 26 22:38:00 dom0 kernel: Command line: placeholder 
> > > root=UUID=9b846465-f59a-4f83-adfa-5468c915defd ro 
> > > rootflags=subvol=root rd.luks.uuid=luks-1b3c3eda-7836-443a-bc07-
> > > Nov 26 22:38:00 dom0 kernel: x86/fpu: Supporting XSAVE feature 0x001: 
> > > 'x87 floating point registers'
> > > Nov 26 22:38:00 dom0 kernel: x86/fpu: Supporting XSAVE feature 0x002: 
> > > 'SSE registers'
> > > Nov 26 22:38:00 dom0 kernel: x86/fpu: Supporting XSAVE feature 0x004: 
> > > 'AVX registers'
> > > Nov 26 22:38:00 dom0 kernel: x86/fpu: xstate_offset[2]:   576, 
> > > xstate_sizes[2]:   256
> > > Nov 26 22:38:00 dom0 kernel: x86/fpu: Enabled xstate features 0x7, 
> > > context size is 832 bytes, using 'standard' format.
> > > Nov 26 22:38:00 dom0 kernel: x86/fpu: Using 'eager' FPU context switches.
> > > Nov 26 22:38:00 dom0 kernel: Released 0 page(s)
> > > Nov 26 22:38:00 dom0 kernel: e820: BIOS-provided physical RAM map:
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000000000000-0x000000000009bfff] usable
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x000000000009c800-0x00000000000fffff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000000100000-0x0000000067e1ffff] usable
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000067e20000-0x0000000067e20fff] ACPI NVS
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000067e21000-0x0000000067e6afff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000067e6b000-0x0000000067ebcfff] usable
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000067ebd000-0x0000000068bedfff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000068bee000-0x000000006ee48fff] usable
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x000000006ee49000-0x000000006f7adfff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x000000006f7ae000-0x000000006ff99fff] ACPI NVS
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x000000006ff9a000-0x000000006fffefff] ACPI data
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x000000006ffff000-0x000000006fffffff] usable
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000070000000-0x0000000077ffffff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x00000000e0000000-0x00000000efffffff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x00000000fe000000-0x00000000fe010fff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x00000000fec00000-0x00000000fec00fff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x00000000fed90000-0x00000000fed91fff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x00000000fee00000-0x00000000feefffff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x00000000ff000000-0x00000000ffffffff] reserved
> > > Nov 26 22:38:00 dom0 kernel: Xen: [mem 
> > > 0x0000000100000000-0x0000000191f95fff] usable
> > 
> > 
> > Yes, I can confirm that issue (except for the "around the time I start a 
> > VM" part). I haven't found anything yet either; my log looks similar. A 
> > memory-balancing entry tends to be the last one before the reboot.
> 
> P.S.: I run a T530 with coreboot, some i5, ME cleaned.
> 
> I'm not sure whether it's a heat issue, but I'd guess not, as I had 
> tested it up to ~95 °C and it tends to run at 80 °C max under Qubes. I 
> think it would only reboot at ~100 °C.
> 
> But yes, it tends to happen at relatively high load.
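
A side note on the quoted qmemman line: those values are plain byte counts, so a quick throwaway conversion (my own sketch, not Qubes code) makes them easier to read. The "xen_free_memory" figure is only what Xen had left unassigned; the "available" figure includes memory that balancing could reclaim from VMs:

```python
# Throwaway sketch: convert the byte counts from the quoted qmemman log
# entry into human-readable units. Values are copied verbatim from the log.
GIB = 2 ** 30
MIB = 2 ** 20

xen_free_memory = 97_794_733             # bytes Xen reported free
total_mem_pref = 5_101_760_998.4         # summed VM memory preferences
total_available_memory = 60_546_306_417.6
left_memory = 24_233_992_215             # reported after balancing

print(f"xen free:  {xen_free_memory / MIB:.1f} MiB")       # 93.3 MiB
print(f"pref sum:  {total_mem_pref / GIB:.2f} GiB")        # 4.75 GiB
print(f"available: {total_available_memory / GIB:.2f} GiB")  # 56.39 GiB
print(f"left:      {left_memory / GIB:.2f} GiB")           # 22.57 GiB
```

With roughly 56 GiB still available for balancing against a ~4.75 GiB preference sum, memory exhaustion looks unlikely as the trigger here.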

PSU issue?
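
Also, for anyone correlating these entries with other logs: the number inside the kernel's audit(...) records is a Unix epoch timestamp. Converting the quoted 1511764598.490 (assuming the box is on US Pacific time, UTC-8) lines up with the syslog wall-clock time:

```python
from datetime import datetime, timedelta, timezone

# The quoted kernel line reads: audit(1511764598.490:447)
epoch = 1511764598.490

utc = datetime.fromtimestamp(epoch, tz=timezone.utc)
# Assumption: the machine's local timezone is UTC-8 (US Pacific, no DST in Nov).
local = utc.astimezone(timezone(timedelta(hours=-8)))

print(utc.strftime("%Y-%m-%d %H:%M:%S UTC"))  # 2017-11-27 06:36:38 UTC
print(local.strftime("%b %d %H:%M:%S"))       # Nov 26 22:36:38
```

That matches the "Nov 26 22:36:38" syslog prefix on the same lines, so the audit records and the journal agree on when the last activity happened before the reboot.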

-- 
You received this message because you are subscribed to the Google Groups "qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to qubes-users+unsubscribe@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/qubes-users/d202d9ad-fbfd-4bcc-bcc2-b3faa6bb0a2e%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


