
List:       mesos-issues
Subject:    [jira] [Comment Edited] (MESOS-8483) ExampleTests PythonFramework fails with sigabort.
From:       "Till Toenshoff (JIRA)" <jira () apache ! org>
Date:       2018-01-30 22:42:00
Message-ID: JIRA.13133385.1516803903000.80124.1517352120112 () Atlassian ! JIRA


    [ https://issues.apache.org/jira/browse/MESOS-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345935#comment-16345935 ]

Till Toenshoff edited comment on MESOS-8483 at 1/30/18 10:41 PM:
-----------------------------------------------------------------

One more note: this problem is only visible when using the Apple LLVM clang from Xcode 9 and newer - the versions found within Xcode 8.2 and earlier do not show this linkage problem.
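
For anyone trying to confirm which toolchain is in play, the standard Xcode commands suffice (a quick sketch, nothing Mesos-specific):

{noformat}
# Print the selected Xcode release, e.g. 9.x vs. 8.2.
$ xcodebuild -version

# Print the Apple LLVM (clang) version string that Xcode will invoke.
$ xcrun clang --version
{noformat}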


> ExampleTests PythonFramework fails with sigabort.
> -------------------------------------------------
> 
> Key: MESOS-8483
> URL: https://issues.apache.org/jira/browse/MESOS-8483
> Project: Mesos
> Issue Type: Bug
> Affects Versions: 1.5.0
> Environment: macOS 10.13.2 (17C88)
> Python 2.7.10 (Apple's default - not homebrew)
> Reporter: Till Toenshoff
> Assignee: Till Toenshoff
> Priority: Blocker
> 
> Starting the {{PythonFramework}} manually results in a sigabort:
> {noformat}
> $ ./src/examples/python/test-framework local
> [..]
> I0124 15:22:46.637238 65925120 master.cpp:563] Using default 'crammd5' authenticator
> W0124 15:22:46.637269 65925120 authenticator.cpp:513] No credentials provided, authentication requests will be refused
> I0124 15:22:46.637284 65925120 authenticator.cpp:520] Initializing server SASL
> I0124 15:22:46.659503 2385417024 resolver.cpp:69] Creating default secret resolver
> I0124 15:22:46.659624 2385417024 containerizer.cpp:304] Using isolation { environment_secret, filesystem/posix, posix/mem, posix/cpu }
> I0124 15:22:46.659951 2385417024 provisioner.cpp:299] Using default backend 'copy'
> I0124 15:22:46.661628 67534848 slave.cpp:262] Mesos agent started on (1)@192.168.178.20:49682
> I0124 15:22:46.661669 67534848 slave.cpp:263] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/var/folders/_t/rdp354gx7j5fjww270kbk6_r0000gn/T/mesos/store/appc" --authenticate_http_executors="false" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --container_disk_watch_interval="15secs" --containerizers="mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/var/folders/_t/rdp354gx7j5fjww270kbk6_r0000gn/T/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_reregistration_timeout="2secs" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/var/folders/_t/rdp354gx7j5fjww270kbk6_r0000gn/T/mesos/work/agents/0/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_command_executor="false" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="posix" --launcher_dir="/usr/local/libexec/mesos" --logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --oversubscribed_resources_interval="15secs" --port="5051" --qos_correction_interval_min="0ns" --quiet="false" --reconfiguration_policy="equal" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --runtime_dir="/var/folders/_t/rdp354gx7j5fjww270kbk6_r0000gn/T/mesos/work/agents/0/run" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --version="false" --work_dir="/var/folders/_t/rdp354gx7j5fjww270kbk6_r0000gn/T/mesos/work/agents/0/work" --zk_session_timeout="10secs"
> python(1780,0x700004068000) malloc: *** error for object 0x106ac07c8: pointer being freed was not allocated
> *** set a breakpoint in malloc_error_break to debug
> {noformat}
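> The log's final hint can be followed literally; a minimal lldb session for that is sketched below. The paths are assumptions - {{test-framework}} is typically a wrapper script, so the debugger has to be pointed at the interpreter directly:
> {noformat}
> # Assumed invocation; PYTHONPATH has to be set up the same way the
> # wrapper script does it before the extension module can be imported.
> $ lldb -- /usr/bin/python src/examples/python/test_framework.py local
> (lldb) breakpoint set --name malloc_error_break
> (lldb) run
> (lldb) bt
> {noformat}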
> When running the {{PythonFramework}} via lldb, I get the following stacktrace:
> {noformat}
> * thread #7, stop reason = signal SIGABRT
> * frame #0: 0x00007fff55321e3e libsystem_kernel.dylib`__pthread_kill + 10
> frame #1: 0x00007fff55460150 libsystem_pthread.dylib`pthread_kill + 333
> frame #2: 0x00007fff5527e312 libsystem_c.dylib`abort + 127
> frame #3: 0x00007fff5537b866 libsystem_malloc.dylib`free + 521
> frame #4: 0x000000010d24daac _scheduler.so`google::protobuf::internal::ArenaStringPtr::DestroyNoArena(this=0x000070000ac355b0, default_value="") at arenastring.h:264
> frame #5: 0x000000010d2fe1aa _scheduler.so`mesos::Resource::SharedDtor(this=0x000070000ac35580) at mesos.pb.cc:31016
> frame #6: 0x000000010d2fe063 _scheduler.so`mesos::Resource::~Resource(this=0x000070000ac35580) at mesos.pb.cc:31011
> frame #7: 0x000000010d2fe485 _scheduler.so`mesos::Resource::~Resource(this=0x000070000ac35580) at mesos.pb.cc:31009
> frame #8: 0x000000010b0257c7 _scheduler.so`mesos::Resources::parse(name="cpus", value="8", role="*") at resources.cpp:702
> frame #9: 0x000000010c7ae4c9 _scheduler.so`mesos::internal::slave::Containerizer::resources(flags=0x000000010202bac0) at containerizer.cpp:118
> frame #10: 0x000000010c3a93e1 _scheduler.so`mesos::internal::slave::Slave::initialize(this=0x000000010202ba00) at slave.cpp:472
> frame #11: 0x000000010c3d7cb2 _scheduler.so`virtual thunk to mesos::internal::slave::Slave::initialize(this=0x000000010202ba00) at slave.cpp:0
> frame #12: 0x000000010e459c39 _scheduler.so`process::ProcessManager::resume(this=0x00000001005790f0, process=0x000000010202ba00) at process.cpp:2819
> frame #13: 0x000000010e57ef75 _scheduler.so`process::ProcessManager::init_threads(this=0x00000001001a9fa8)::$_2::operator()() const at process.cpp:2443
> frame #14: 0x000000010e57eb30 _scheduler.so`void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, process::ProcessManager::init_threads()::$_2> >(void*) [inlined] decltype(__f=0x00000001001a9fa8)::$_2>(fp)(std::__1::forward<>(fp0))) std::__1::__invoke<process::ProcessManager::init_threads()::$_2>(process::ProcessManager::init_threads()::$_2&&) at type_traits:4291
> frame #15: 0x000000010e57eb1f _scheduler.so`void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, process::ProcessManager::init_threads()::$_2> >(void*) [inlined] void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, process::ProcessManager::init_threads()::$_2>(__t=0x00000001001a9fa0)::$_2>&, std::__1::__tuple_indices<>) at thread:336
> frame #16: 0x000000010e57eafb _scheduler.so`void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, process::ProcessManager::init_threads()::$_2> >(__vp=0x00000001001a9fa0) at thread:346
> frame #17: 0x00007fff5545d6c1 libsystem_pthread.dylib`_pthread_body + 340
> frame #18: 0x00007fff5545d56d libsystem_pthread.dylib`_pthread_start + 377
> frame #19: 0x00007fff5545cc5d libsystem_pthread.dylib`thread_start + 13
> {noformat}
> Given that this example works (most of the time) on other macOS systems, I am assuming this is a problem specific to my system.
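> The trace shows {{free()}} reached from {{ArenaStringPtr::DestroyNoArena(default_value="")}}, i.e. protobuf tearing down a string that still points at the global empty-string default. That pattern is a typical symptom of two copies of the protobuf runtime (and hence two empty-string globals) being loaded into one process, which would fit a linkage problem. A sketch for checking that hypothesis - the path to {{_scheduler.so}} depends on the build tree:
> {noformat}
> # List the dylibs the Python extension module pulls in; a dynamic
> # protobuf next to a statically linked copy would be suspicious.
> $ otool -L _scheduler.so
>
> # Check whether the module itself exports protobuf internals.
> $ nm -gU _scheduler.so | c++filt | grep 'google::protobuf::internal' | head
> {noformat}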



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

