
Detect supported audio encoders on Android to prevent the crash "The given audio encoder 2 is not found"

When an Android device does not support a supposedly mandatory audio encoder, you get the following (X = numeric index of the encoder):

    E/MediaProfiles(4048): The given audio encoder X is not found
    A/AudioSource(4048): frameworks/base/media/libstagefright/AudioSource.cpp:58 CHECK(channels == 1 || channels == 2) failed.
    A/libc(4048): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1)

This happens in native code, so there is no Exception to react to; the app is simply force closed.

Is there any way to query an Android 3.x+ device whether AAC, AMR-NB, and AMR-WB are actually supported? The documentation (http://developer.android.com/guide/appendix/media-formats.html) says that these are core media formats and thus always supported, yet some current and common phones of major brands don't support them.

MediaCodec.createByCodecName(String name) and the approach from "Get supported Codec for Android device" only work from API 16 (Android 4.1) onward, but the devices in question run 4.0.x. That API also does not list AMR-NB and AAC.

Answer

On your Android device, the media profiles/capabilities are stored in a config file called media_profiles.xml (on ICS/4.0).

Normally, this file is located in the /etc folder on the device.

So what you could do is:

  1. Connect your device to a PC and pull the /etc/media_profiles.xml file using the adb pull command.
  2. Examine the AudioEncoderCap entries. There will be an entry for each capability that is advertised, for example:

    <AudioEncoderCap name="aac" enabled="true" minBitRate="8192" maxBitRate="96000" minSampleRate="8000" maxSampleRate="16000" minChannels="1" maxChannels="1" />

  3. If the 'enabled' flag is set to "true" as above, then that capability should be supported by the device. If it is not supported, the 'enabled' flag will be set to "false".
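The check in step 3 can also be automated. Below is a minimal plain-Java sketch (not part of the original answer; the element and attribute names are taken from the snippet above, and real device files may differ, e.g. by referencing a DTD) that parses a media_profiles.xml stream and reports whether a given encoder is enabled:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class MediaProfilesCheck {

    /**
     * Returns true if an <AudioEncoderCap name="..."> entry exists
     * and carries enabled="true"; false otherwise.
     */
    public static boolean isEncoderEnabled(InputStream profilesXml, String encoderName) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(profilesXml);
            NodeList caps = doc.getElementsByTagName("AudioEncoderCap");
            for (int i = 0; i < caps.getLength(); i++) {
                Element cap = (Element) caps.item(i);
                if (encoderName.equals(cap.getAttribute("name"))) {
                    return "true".equals(cap.getAttribute("enabled"));
                }
            }
        } catch (Exception e) {
            // Malformed or unreadable file -> treat as not supported.
        }
        return false; // no matching entry at all -> treat as unsupported
    }

    public static void main(String[] args) {
        // Hypothetical snippet mirroring the entry shown above.
        String xml = "<MediaSettings>"
                + "<AudioEncoderCap name=\"aac\" enabled=\"true\" minBitRate=\"8192\""
                + " maxBitRate=\"96000\" minSampleRate=\"8000\" maxSampleRate=\"16000\""
                + " minChannels=\"1\" maxChannels=\"1\" />"
                + "<AudioEncoderCap name=\"amrwb\" enabled=\"false\" />"
                + "</MediaSettings>";
        System.out.println(isEncoderEnabled(new ByteArrayInputStream(xml.getBytes()), "aac"));   // true
        System.out.println(isEncoderEnabled(new ByteArrayInputStream(xml.getBytes()), "amrwb")); // false
    }
}
```

Whether an app can read /etc/media_profiles.xml at runtime depends on the device's file permissions, so the adb-pull inspection above remains the reliable route.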

AFAIK, the media codec framework introduced in Jelly Bean (4.1) reads this or a similar config file to expose the device capabilities to the application layer.
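For completeness, on API 16+ those capabilities can be queried directly through MediaCodecList; a minimal sketch (Android framework API, so it runs only on-device and won't help on the 4.0.x devices in question):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

public final class CodecQuery {

    /** Returns true if any installed encoder advertises the given MIME type (API 16+). */
    public static boolean hasEncoderForMime(String mime) {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) {
                continue;
            }
            for (String type : info.getSupportedTypes()) {
                if (type.equalsIgnoreCase(mime)) {
                    return true;
                }
            }
        }
        return false;
    }
}

// Usage:
//   CodecQuery.hasEncoderForMime("audio/mp4a-latm"); // AAC
//   CodecQuery.hasEncoderForMime("audio/3gpp");      // AMR-NB
//   CodecQuery.hasEncoderForMime("audio/amr-wb");    // AMR-WB
```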

Hope this helps.

Thanks!
