Channel: Media Foundation Development for Windows Desktop forum
Viewing all 1079 articles
Browse latest View live

video-re-encoding introducing an audio stream offset disrupting sync


Hello, 

                   

I'm attempting to write a simple Windows Media Foundation command-line tool that uses IMFSourceReader and IMFSinkWriter to load a video, read the video and audio as uncompressed streams, and re-encode them to H.264/AAC with some specific hard-coded settings.

Gist of the full simple program: https://gist.github.com/m1keall1son/33ebaf1271a5234a4ed1d8ba765eafd6

A test video: https://www.videvo.net/video/alpaca-on-green-screen/3442/

(Note: the videos I've been testing with are all stereo, with a 48000 Hz sample rate.)

The program works; however, in some cases, when comparing the newly output video to the original in an editing program, I see that the copied video streams match, but the audio stream of the copy is prefixed with some amount of silence and the audio is offset, which is unacceptable in my situation.

audio samples:
original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy     - |[silence] [silence] [silence] [audio1] [audio2] [audio3] ... etc

In these cases the first video frames coming in have a **non-zero** timestamp, but the first audio frames have a timestamp of 0.

I would like to produce a copied video whose first video and audio frames start at 0, so I first attempted to subtract that initial timestamp (`videoOffset`) from all subsequent video frames. That produced the video I wanted, but resulted in this situation with the audio:

original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy     - |[audio4] [audio5] [audio6] [audio7] [audio8] ... etc

The audio track is now shifted in the other direction by a small amount and still doesn't align.

I've been able to fix this sync alignment and offset the video stream to start at 0 with the following code inserted at the point of passing the audio sample data to the IMFSinkWriter:

//inside the read-sample while loop
...

// LONGLONG llDuration holds the currently read sample's duration
// LONGLONG audioOffset holds the global audio offset, starts at 0
// LONGLONG audioFrameTimeStamp holds the currently read sample's timestamp

//add some amount of silence in intervals of 1024 samples
static bool runOnce{ false };
if (!runOnce)
{
    size_t numberOfSilenceBlocks = 1; //how to derive how many I need!?  It's arbitrary
    size_t samples = 1024 * numberOfSilenceBlocks;
    audioOffset = samples * 10000000 / audioSamplesPerSecond; //samples -> 100-ns units
    std::vector<uint8_t> silence(samples * audioChannels * bytesPerSample, 0);
    WriteAudioBuffer(silence.data(), silence.size(), audioFrameTimeStamp, audioOffset);

    runOnce = true;
}

LONGLONG audioTime = audioFrameTimeStamp + audioOffset;
WriteAudioBuffer(dataPtr, dataSize, audioTime, llDuration);

Oddly, this creates an output video file that matches the original.

original - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc
copy     - |[audio1] [audio2] [audio3] [audio4] [audio5] ... etc

The solution was to insert extra silence, in block sizes of 1024 samples, at the beginning of the audio stream. It doesn't matter what audio chunk sizes IMFSourceReader provides; the padding is in multiples of 1024.

(Note: the linked video only requires 1 extra 1024 size block to sync.)

A screen shot of the audio track offsets of the different attempts : https://i.stack.imgur.com/PP29K.png

My problem is that there seems to be no reason for a silence offset of this size to exist. Why do I need it? How do I know how much I need? I stumbled across the 1024-sample silence block solution after days of fighting this problem.

Some videos seem to only need 1 padding block, some need 2 or more, and some need no extra padding at all!

My questions here are:

 - Does anyone know why this is happening?  

 - Am I using Media Foundation incorrectly in this situation to cause this?

 - If I am correct, how can I use the video metadata to determine how many 1024-sample blocks of silence I need to pad the audio stream with, for a video whose video stream starts later than its audio stream?

Other random things I have tried:

- Increasing the duration of the first video frame to account for the offset: Produces no effect.
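The 1024-sample granularity matches the AAC frame size, so one plausible way to derive the block count is to convert the video stream's start offset from 100-ns units into audio samples and round to whole 1024-sample frames. A minimal portable sketch (the helper name and the rounding choice are my own assumptions, not a documented Media Foundation rule):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical helper: estimate how many 1024-sample silence blocks to
// prepend, by converting the video stream's start offset (in 100-ns units)
// into audio samples and rounding to whole 1024-sample AAC frames.
int64_t SilenceBlocksForOffset(int64_t videoStartHns, uint32_t sampleRate,
                               uint32_t samplesPerBlock = 1024)
{
    double samples = static_cast<double>(videoStartHns) * sampleRate / 10000000.0;
    return static_cast<int64_t>(std::llround(samples / samplesPerBlock));
}
```

For the linked test video this would need to come out to 1 block; I have not verified this derivation against more than the files described above.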


MFTranscodeContainerType_FMPEG4 iOS/OSX playback


Hi there, wondering if anyone can provide assistance on creating a fragmented MPEG-4 video with the Media Foundation APIs that can be played back on Apple devices such as an iPhone (iOS) or an iMac (OSX).

Creating an IMFSinkWriter with the encoding attribute MF_TRANSCODE_CONTAINERTYPE set to MFTranscodeContainerType_MPEG4 creates a standard MP4 video with no playback issues. But when I change the transcode container type to MFTranscodeContainerType_FMPEG4, the recording will only play back on Windows 10 (which you would expect), or through VLC or some other third-party media players on any other OS (iOS through Infuse works as well).

Checking the media file details in MP4Box for the fragmented MP4 file shows:

        Computed Duration 00:00:00.000 - Indicated Duration 00:00:33.447
        Fragmented File: yes - duration 00:00:00.000

with no sync sample found.

A non-fragmented standard MP4 created with the Media Foundation APIs shows a correctly computed duration and lots of sync samples.

I have tried every combination of encoder, codec and media type attributes, along with using a fragmented media sink (MFCreateFMPEG4MediaSink), and all of them can only be played natively on Windows 10.

None of the samples created have MF_MT_MPEG_SEQUENCE_HEADER attached when using the fragmented options either, so I assume this needs to be added during each WriteSample to the SinkWriter, but I don't know how to calculate the SPS/PPS data required.

I've been stuck on this for months and desperate for some help.

 

Media Foundation Access violation c0000005 exception


We are using Media Foundation via the SharpDX .NET wrapper, and we are randomly getting this error from unmanaged code. It is important to say that this happens at the point when we are stopping a couple of videos.

This is complete output of error from windows dump file:

-------------------------------------------------------------------------------------------------------

FAULTING_IP: 
+0
7300c9f1 654c            dec     esp

EXCEPTION_RECORD:  ffffffff -- (.exr 0xffffffffffffffff)
ExceptionAddress: 7300c9f1
   ExceptionCode: c0000005 (Access violation)
  ExceptionFlags: 00000000
NumberParameters: 2
   Parameter[0]: 00000008
   Parameter[1]: 00000000
Attempt to execute non-executable address 00000000

CONTEXT:  00000000 -- (.cxr 0x0;r)
eax=0c9e9328 ebx=00000000 ecx=0c7e005e edx=12e2fb60 esi=0c9f07a8 edi=12e2fbc4
eip=00000000 esp=12e2fb48 ebp=12e2fbac iopl=0         nv up ei pl nz ac po nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010212
00000000 ??              ???

DEFAULT_BUCKET_ID:  WRONG_SYMBOLS

PROCESS_NAME:  wrapper.exe

ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

EXCEPTION_PARAMETER1:  00000008

EXCEPTION_PARAMETER2:  00000000

WRITE_ADDRESS:  00000000 

FOLLOWUP_IP: 
mfplat!CMFByteStreamOnStream::GetLength+39
5e975f75 85c0            test    eax,eax

FAILED_INSTRUCTION_ADDRESS: 
+39
7300c9f1 654c            dec     esp

NTGLOBALFLAG:  0

APPLICATION_VERIFIER_FLAGS:  0

APP:  wrapper.exe

ANALYSIS_VERSION: 6.3.9600.17336 (debuggers(dbg).150226-1500) x86fre

MANAGED_STACK: !dumpstack -EE
OS Thread Id: 0xa50 (34)
Current frame: 
ChildEBP RetAddr  Caller, Callee

PRIMARY_PROBLEM_CLASS:  WRONG_SYMBOLS

BUGCHECK_STR:  APPLICATION_FAULT_WRONG_SYMBOLS

LAST_CONTROL_TRANSFER:  from 5e975f75 to 00000000

STACK_TEXT:  
WARNING: Frame IP not in any known module. Following frames may be wrong.
12e2fb44 5e975f75 0c9e9328 12e2fb60 00000001 0x0
12e2fbac 5e892c2c 0c9f07a8 12e2fbc4 19588c98 mfplat!CMFByteStreamOnStream::GetLength+0x39
12e2fbd0 5e893e6a 12e2fbf0 00000000 0c9f05c8 mf!MFCreateUrlmonSchemePlugin+0xaea74
12e2fbfc 5e893f98 00000000 12e2fc18 5e961f7b mf!MFCreateUrlmonSchemePlugin+0xafcb2
12e2fc08 5e961f7b 054b4a70 0c9f0d48 12e2fd20 mf!MFCreateUrlmonSchemePlugin+0xafde0
12e2fc18 5e961b3c 0c9f0d48 00000000 00000000 mfplat!CCompletionPort::InvokeCallback+0x12
12e2fd20 5e968cab 12e2fd60 76a61287 0c9f05c8 mfplat!CWorkQueue::CThread::ThreadMain+0xa5
12e2fd28 76a61287 0c9f05c8 f0fa0967 00000000 mfplat!CWorkQueue::CThread::ThreadFunc+0xd
12e2fd60 76a61328 12e2fd74 755733ca 0522a060 msvcrt!_endthreadex+0x44
12e2fd68 755733ca 0522a060 12e2fdb4 776c9ed2 msvcrt!_endthreadex+0xce
12e2fd74 776c9ed2 0522a060 6598815d 00000000 kernel32!BaseThreadInitThunk+0xe
12e2fdb4 776c9ea5 76a612e5 0522a060 00000000 ntdll!__RtlUserThreadStart+0x70
12e2fdcc 00000000 76a612e5 0522a060 00000000 ntdll!_RtlUserThreadStart+0x1b


SYMBOL_STACK_INDEX:  1

SYMBOL_NAME:  mfplat!CMFByteStreamOnStream::GetLength+39

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: mfplat

IMAGE_NAME:  mfplat.dll

DEBUG_FLR_IMAGE_TIMESTAMP:  4a5bda38

STACK_COMMAND:  ~34s; .ecxr ; kb

FAILURE_BUCKET_ID:  WRONG_SYMBOLS_c0000005_mfplat.dll!CMFByteStreamOnStream::GetLength

BUCKET_ID:  APPLICATION_FAULT_WRONG_SYMBOLS_BAD_IP_mfplat!CMFByteStreamOnStream::GetLength+39

ANALYSIS_SOURCE:  UM

FAILURE_ID_HASH_STRING:  um:wrong_symbols_c0000005_mfplat.dll!cmfbytestreamonstream::getlength

FAILURE_ID_HASH:  {437113f2-d01f-43e3-70a4-d2128193b3d1}

Followup: MachineOwner

-----------------------------------------------------------------------------------------------------------------------------

Does anybody have a clue what could be the cause of this? It looks like mfplat!CMFByteStreamOnStream::GetLength+0x39 throws a null-reference exception.


"MFTEnumEx" can get HEVC decoder but "ActivateObject" returns ERROR


I have installed the HEVC extension, so the count is 1, but the ActivateObject function returns "E_ACCESSDENIED General access denied error."

i7-8565U, Visual Studio 2019, Windows build 17763.379

thanks

#include <cstdio>
#include <iostream>
#include <Windows.h>

#include <mfapi.h>
#include <mfidl.h>


int main()
{
	HRESULT hr = CoInitializeEx(0, COINIT_MULTITHREADED);
	if (SUCCEEDED(hr))
	{
		hr = MFStartup(MF_VERSION);
		if (SUCCEEDED(hr))
		{
			MFT_REGISTER_TYPE_INFO info = { MFMediaType_Video, MFVideoFormat_HEVC };
			UINT32 count = 0;
			IMFActivate** ppActivate = NULL;
			hr = MFTEnumEx(MFT_CATEGORY_VIDEO_DECODER, MFT_ENUM_FLAG_ALL, &info, NULL, &ppActivate, &count);

			IMFTransform* dec = NULL;
			if (SUCCEEDED(hr) && count > 0)
			{
				// here: fails with E_ACCESSDENIED (General access denied error)
				hr = ppActivate[0]->ActivateObject(__uuidof(IMFTransform), (void**)&dec);
			}

			printf("%x", hr);

			for (UINT32 i = 0; i < count; i++)
			{
				ppActivate[i]->Release();
			}
			CoTaskMemFree(ppActivate);

			MFShutdown();
		}
		CoUninitialize();
	}
	int a;
	std::cin >> a;
	return 0;
}

Where do I get the camera driver for the Packard Bell EasyNote TE69KB series? I use Windows 7. The official Packard Bell website does not have it.

Where do I get the camera driver for the Packard Bell EasyNote TE69KB series? I use Windows 7. The official Packard Bell website does not have it.

OPM HDCP KSV


Hello,

I am writing a video player for which I am using DirectX 11 OPM to enable HDCP on the connected display device. I am using an Nvidia 750 Ti graphics card for output. To authenticate the connected display device, I am reading the HDCP KSV of the display using OPM_GET_CONNECTED_HDCP_DEVICE_INFORMATION with COPPCompatibleGetInformation on OpmVideoOutput, and verifying it against my list of authorised devices. It was working fine until one day I replaced the Nvidia 750 Ti with a 1030 GT.

On the Nvidia 1030 GT, COPPCompatibleGetInformation on OpmVideoOutput does not return the HDCP KSV.

I am now clueless on how to proceed on this. Any help regarding this is appreciated.

Regards,

Vivek



Media Foundation record audio

When I feed audio data to an input stream on this Media Foundation transform, I always get "MF_E_NOTACCEPTING".
What can I do to avoid this issue?

Lazy loading MFCreateAVIMediaSink


Hello,

I would like to lazy-load MFCreateAVIMediaSink(), which only exists starting from Windows 8.1. That is easy using LoadLibrary() and GetProcAddress().

However, the documentation says the symbol should be found in mf.dll, while in reality it must be looked up in mfsrcsnk.dll.

I wonder if I can just assume the documentation is wrong and that it will always work with mfsrcsnk.dll in the future, or if there is a way to treat mf.dll as an "umbrella DLL" and automatically locate the DLL that actually contains MFCreateAVIMediaSink(), which might not always be mfsrcsnk.dll in the future.
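One defensive option is simply to try both module names in order at runtime. A sketch of that fallback, with the module loading abstracted behind a callback so the logic itself is portable (on Windows the callback would wrap LoadLibrary/GetProcAddress; the helper names are my own):

```cpp
#include <functional>
#include <string>
#include <vector>

// Generic "try each module in order" resolver. The loader callback returns
// the symbol's address from a given module, or nullptr if it is absent.
using SymbolLoader = std::function<void*(const std::string& module,
                                         const std::string& symbol)>;

void* ResolveFromAny(const std::vector<std::string>& modules,
                     const std::string& symbol, const SymbolLoader& load)
{
    for (const auto& m : modules)
        if (void* p = load(m, symbol))
            return p;  // first module that exports the symbol wins
    return nullptr;
}
```

On Windows the call would look like `ResolveFromAny({"mfsrcsnk.dll", "mf.dll"}, "MFCreateAVIMediaSink", winLoader)`, so the code keeps working if the export ever moves between the two DLLs.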



How do I control image quality when encoding (using IMFSinkWriter)?


I noticed a major difference in image quality between Media Foundation and FFmpeg. It seems to have nothing to do with bitrate, and is evident even in the first image of the video, so probably a lot of detail is already being lost at the I-frame level.

How do I set up the IMFSinkWriter to encode with more detail?

Windows 10 notification listener for desktop app


Hi.

I have already read "Notification listener: Access all notifications" (https://docs.microsoft.com/en-us/windows/uwp/design/shell/tiles-and-notifications/notification-listener), which describes how to use the notification listener in a UWP app.

I would like to do the same thing from a desktop (not bridged) app (C#, WPF). Is it possible?

Thanks

Changing of different audio tracks within Movies and TV / Media Player

Hello, my goal is to create some videos with clips I recorded from my PC. I decided to try splitting some of the audio when I first record, for more freedom in the post-production process; however, it's causing a lot of hassle. When I open each clip, it defaults to the second track, which is the raw microphone audio that I don't want in the video. I understand I can swap it back to track 1 in the small popup at the bottom, but as soon as I go to trim the clip, it defaults back to the second audio track again. I've tried a lot of different things and none of them worked. I'd appreciate some assistance.

Different results with Media Foundation H264 decoding on AMD hardware


Hello!

I am developing a video playback feature for a game engine. All I need is to get raw IYUV data from H264 video to make a texture. It works properly on most platforms, but I have trouble with AMD graphics (the image is corrupted).

HRESULT hr = S_OK;
IMFSample *pSample = NULL;
IMFMediaBuffer *pBuf = NULL;
IMFMediaType *pVideoType = NULL;
size_t  cSamples = 0;
DWORD streamIndex, flags;
LONGLONG llTimeStamp;

hr = m_pReader->ReadSample(
	MF_SOURCE_READER_FIRST_VIDEO_STREAM,    // Stream index.
	0,                                      // Flags.
	&streamIndex,                           // Receives the actual stream index.
	&flags,                                 // Receives status flags.
	&llTimeStamp,                           // Receives the time stamp.
	&pSample                                // Receives the sample or NULL.
);

if (FAILED(hr))
{
	return false;
}

wprintf(L"Stream %d (%I64d)\n", streamIndex, llTimeStamp);
if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
{
	wprintf(L"\tEnd of stream\n");
}
if (flags & MF_SOURCE_READERF_NEWSTREAM)
{
	wprintf(L"\tNew stream\n");
}
if (flags & MF_SOURCE_READERF_NATIVEMEDIATYPECHANGED)
{
	wprintf(L"\tNative type changed\n");
}
if (flags & MF_SOURCE_READERF_CURRENTMEDIATYPECHANGED)
{
	wprintf(L"\tCurrent type changed\n");

	m_pReader->GetCurrentMediaType(
		(DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
		&pVideoType);

	// Get the frame dimensions and stride
	UINT32 w, h;
	MFGetAttributeSize(pVideoType, MF_MT_FRAME_SIZE, &w, &h);
	m_width = w;
	m_height = h;
	m_aspectRatio = (float)(w) / (float)(h);
}
if (flags & MF_SOURCE_READERF_STREAMTICK)
{
	wprintf(L"\tStream tick\n");
}

if (pSample)
{
	++cSamples;
}
else {
	return false;
}

if (FAILED(hr))
{
	wprintf(L"ProcessSamples FAILED, hr = 0x%x\n", hr);
}
else
{
	wprintf(L"Processed %d samples\n", cSamples);
}

pSample->ConvertToContiguousBuffer(&pBuf);

DWORD nCurrLen = 0;
pBuf->GetCurrentLength(&nCurrLen);

uint8_t *imgBuff;
DWORD buffCurrLen = 0;
DWORD buffMaxLen = 0;
pBuf->Lock(&imgBuff, &buffMaxLen, &buffCurrLen);

uint32_t luminanceStride = sizeof(uint8_t) * m_width;
uint32_t chrominanceStride = sizeof(uint16_t) * (m_width / 2);

// two buffers for luminance and chrominance.
void *luminanceSource = imgBuff;
void *chrominanceSource = offset_by_bytes(luminanceSource, luminanceStride * m_height);
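One thing worth checking on AMD hardware: the decoder may return buffers whose row stride is wider than the frame width (the MF_MT_DEFAULT_STRIDE attribute on the current media type reports it), and assuming stride == width corrupts the image in exactly this way. A minimal stride-aware plane copy (my own helper, shown as a sketch):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Row-by-row copy that honors a source stride wider than the image width --
// a common cause of "corrupted" output when decoders return padded surfaces.
void CopyPlane(uint8_t* dst, size_t dstStride,
               const uint8_t* src, size_t srcStride,
               size_t rowBytes, size_t rows)
{
    for (size_t y = 0; y < rows; ++y)
        std::memcpy(dst + y * dstStride, src + y * srcStride, rowBytes);
}
```

Each plane (Y, then the chroma data) would be copied separately with its own stride rather than read as one contiguous block.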

Thanks in advance!

How to enable hardware h264 decoding without media session


Hello,

I am trying to write an application which decodes H264 video frames and displays them. Because I have to do some processing before rendering, I decided not to use a media session, and instead to provide frames manually: reading them with a SourceReader and then feeding them to the decoder IMFTransform. I am able to do this in software mode, but performance is too slow, so I want to enable hardware-accelerated decoding. What steps do I have to take to achieve that?

As far as I understand from the documentation and samples, all I have to do is:

- Create Direct3DDeviceManager9

- Create Direct3D device

- Set device in Direct3DDeviceManager9

- Set CODECAPI_AVDecVideoAcceleration_H264 to True in decoder IMFTransform

- Send MFT_MESSAGE_SET_D3D_MANAGER with pointer to Direct3DDeviceManager9 to decoder IMFTransform

But it still runs in software mode. Am I missing something?

Why are my system audio drivers not working after a new Windows install?

I installed a new copy of Windows two days ago, but my audio drivers are not working. Can anyone tell me what the issue is?

How to encode video from uncompressed YUV data to a raw H264 bitstream file with Media Foundation


I'd like to encode from YUV to a raw H264 bitstream without any container (WMV, MP4).

MS has a similar example, "Tutorial: Using the Sink Writer to Encode Video":
https://docs.microsoft.com/en-us/windows/desktop/medfound/tutorial--using-the-sink-writer-to-encode-video
It encodes an RGB file to a WMV file.

I know the VIDEO_INPUT_FORMAT should be changed to MFVideoFormat_I420, but how do I change the output settings? I expect the output file to be output.264 directly, without any container. Any suggestions are appreciated in advance.




Media Foundation and Windows Explorer reporting incorrect video resolution, 2560x1440 instead of 1920x1080

Reproduce this issue using the video below, and view the frame size in Explorer, or use any app based on the Media Foundation source reader. VLC and FFmpeg correctly report the file as 1920x1080.

https://teleport.blob.core.windows.net/content/should_be_1080p.mp4

The file was produced by an IP camera and may be non-standard, but MF should be able to deal with it, given that all other software does.

Also vote here : https://aka.ms/AA4y7a2

IMFCaptureEngine Access Violation for MJPG Video format on Windows 8 32bit


I'm developing a desktop application to record .mp4 video from a USB camera using the Capture Engine sample code. My application crashes while recording the MJPG video format on the Windows 8 32-bit OS.

I have modified the capture engine sample code as per my requirements and completed the development. My application works fine on the following OSes: Windows 10 64-bit, Windows 8.1 32/64-bit and Windows 8 64-bit.

The USB camera supports two video formats: UYVY and MJPG. The access violation occurs only when the camera has the MJPG and UYVY formats. To verify the crash, I tried different cameras which have the above formats and was able to recreate the issue.

Then I tried a camera which supports the YUY2 and MJPG formats, and my application is able to record video in both formats without any crash.

Also, I updated the PC and tried it but unfortunately, the issue still persists.

The issue occurs after initializing the preview and retrieving the current media type using GetCurrentDeviceMediaType(). The code snippet below is used to configure video recording.

HRESULT RecordVideo(TCHAR *tzDestinationFile)
{
	HRESULT hr = E_FAIL;
	IMFCaptureSink *pCaptureSink = NULL;
	IMFCaptureRecordSink *pRecordSink = NULL;
	IMFCaptureSource *pCaptureSource = NULL;
	IMFMediaSource *pMediaSource = NULL;
	IMFPresentationDescriptor *pPD = NULL;
	IMFMediaType* pSrcMediaType = NULL;

	if((m_pCaptureEngine == NULL) || (m_pCaptureEngineCB == NULL))	{	return MF_E_NOT_INITIALIZED;	}

	hr = m_pCaptureEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_RECORD, &pCaptureSink);	if (FAILED(hr)){	goto done;	}

	hr = pCaptureSink->QueryInterface(IID_PPV_ARGS(&pRecordSink));	if (FAILED(hr)){	goto done;	}

	hr = m_pCaptureEngine->GetSource(&pCaptureSource);	if (FAILED(hr)){	goto done;	}

	// Clear any existing streams from previous recordings.
	hr = pRecordSink->RemoveAllStreams();	if (FAILED(hr)){	goto done;	}

	hr = pRecordSink->SetOutputFileName(tzDestinationFile);		if (FAILED(hr)){	goto done;	}

	hr = ConfigureVideoEncoding(pCaptureSource, pRecordSink, MFVideoFormat_H264);	if (FAILED(hr)){	goto done;	}

	hr = m_pCaptureEngine->StartRecord();	if (FAILED(hr)){	goto done;	}

	m_bRecording = true;		
    
done:
    SafeRelease(&pCaptureSource);
    SafeRelease(&pRecordSink);
	SafeRelease(&pPD);
    SafeRelease(&pMediaSource);
    return hr;	
}



HRESULT ConfigureVideoEncoding(IMFCaptureSource *pCaptureSource, IMFCaptureRecordSink *pRecordSink, REFGUID guidEncodingType)
{
	IMFMediaType *pMediaType = NULL;
	IMFMediaType *pH264MediaType = NULL;
	GUID guidSubType = GUID_NULL;

	if((pCaptureSource == NULL) || (pRecordSink == NULL) || (guidEncodingType == GUID_NULL))
		return E_FAIL;

	// Configure the video format for the recording sink.
	HRESULT hr = pCaptureSource->GetCurrentDeviceMediaType((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_RECORD , &pMediaType);	if (FAILED(hr)){	goto done;	}

	if(pMediaType == NULL)
		return E_FAIL;

	hr = ConfigureH264EncoderMediaType(pMediaType, guidEncodingType, &pH264MediaType);	if (FAILED(hr)){	goto done;	}

	// Connect the video stream to the recording sink.		
	DWORD dwSinkStreamIndex = 0;
	hr = pRecordSink->AddStream((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_VIDEO_RECORD, pH264MediaType, NULL, &dwSinkStreamIndex);	if (FAILED(hr)){	goto done;	}

done:
    SafeRelease(&pMediaType);
    SafeRelease(&pH264MediaType);
    return hr;
}

In addition to that, I'm posting the access violation exception and my PC configuration here. I hope this will be helpful to find the root cause.

Exception:

First-chance exception at 0x75151A65 in .exe: Microsoft C++ exception: _com_error at memory location 0x0312F538.
First-chance exception at 0x75151A65 in .exe: Microsoft C++ exception: _com_error at memory location 0x0312F538.

..........OnCaptureEvent MF_CAPTURE_ENGINE_PREVIEW_STARTED

First-chance exception at 0x53D9FF4C (mfreadwrite.dll) in .exe: 0xC0000005: Access violation reading location 0x00000000.
Unhandled exception at 0x53D9FF4C (mfreadwrite.dll) in .exe: 0xC0000005: Access violation reading location 0x00000000.

DxDiag:

------------------
System Information
------------------
Time of this report: 5/9/2018, 18:17:57
       Machine name: WindowsTeam
   Operating System: Windows 8 Pro 32-bit (6.2, Build 9200) (9200.win8_gdr.151112-0600)
           Language: English (Regional Setting: English)
System Manufacturer: Dell Inc.
       System Model: Vostro 3900  
               BIOS: BIOS Date: 03/03/15 15:17:01 Ver: 04.06.05
          Processor: Intel(R) Core(TM) i5-4460  CPU @ 3.20GHz (4 CPUs), ~3.2GHz
             Memory: 4096MB RAM
Available OS Memory: 3502MB RAM
          Page File: 1139MB used, 3003MB available
        Windows Dir: C:\Windows
    DirectX Version: DirectX 11
DX Setup Parameters: Not found
   User DPI Setting: Using System DPI
 System DPI Setting: 96 DPI (100 percent)
    DWM DPI Scaling: Disabled
     DxDiag Version: 6.02.9200.16384 32bit Unicode

I have investigated in many ways but could not find any clue to solve this issue. I have been stuck on it for the past week.

May I know why this issue is occurring only on Windows 8 32bit?

Let me know if you require any other information from my end.

Thanks in advance.


How to reuse IDirect3DSurface9 from decoder output in renderer


Hello,

I have an application without a media session, where I supply the h264 decoder manually with samples and then render the output using Direct3D9. I use hardware decoding, so I can cast the decoder output to an IDirect3DSurface9, which I want to use as input to the renderer. If I just pass a pointer to this surface to the renderer, I get a black screen as output. Is there any way to reuse this IDirect3DSurface9 in rendering? Or are there some constraints on the surface parameters when doing so? So far I have tried:

- Copying the content of the decoder output surface to the renderer input using LockRect and then memcpy. It works, but using memcpy forces a copy from GPU to CPU and then back to GPU, which hurts performance badly.

- Copying the content of the decoder output surface to the renderer input using UpdateSurface, but it fails, as it requires the source surface to have been created with D3DPOOL_SYSTEMMEM (currently it is D3DPOOL_DEFAULT).


How to detect if Media Foundation is installed


Hi,

I want to check whether the Windows feature "Media Foundation" is installed, preferably by reading registry values.

I want to check this for Windows Server 2012, 2016 and 2019. 

Can anybody please give me a hint?

Regards,
Christian

Capture Filters in WMF?

I have a video device that streams a custom GUID subtype which is not recognized in WMF. The vendor provided me with a sample capture application which uses a capture filter in DirectShow to capture the video. Is there something analogous to DirectShow capture filters in WMF? Also, is it possible to record this format as a raw stream to a binary file?

